California publishes first report on generative AI risks, potential use cases
California Gov. Gavin Newsom’s office on Tuesday announced a new report outlining the potential benefits that generative AI could bring to state government, from making state services more accessible to bolstering cybersecurity, along with an extensive description of the risks the technology could bring along for the ride.
The report is the first major product of an executive order Newsom issued in September, which directed an expansive effort to explore how the emerging technology could be used inside state government and how the state could capture the economic benefits of an industry largely built by California software companies. The 34-page document includes descriptions and examples of six potential ways California could use generative AI, but the bulk of the report is dedicated to exploring the many risks the technology presents to privacy, security, the state’s workforce, operations, transparency, safety and government accountability.
The report cites ways generative AI could amplify existing threats and create new ones. Among the new risks outlined are threats as alarming as the technology’s potential to enable “bad actors to design, synthesize, or acquire dangerous chemical, biological, radiological, or nuclear (CBRN) weapons.”
Other threats listed include generative AI’s capacity to support mis- and disinformation campaigns, generate offensive material and create “deepfakes,” materials that synthesize the likeness, speech or writing of individuals. The authors also pointed out that generative AI can “lower technical barriers” that once kept bad actors from effectively launching social media campaigns designed to harm the public’s mental health or deepen political polarization.
Officials also cited concerns about the intractable challenge of identifying how generative AI models reach their conclusions. Sourcing the information behind a model’s outputs, the report says, is expected to be a perennial problem.
Generative AI could also create new risks for California’s cybersecurity efforts, the authors wrote. The report notes a handful of examples, including the potential for generative AI to be used to remotely execute harmful code, modify access permissions, steal or delete data, or create content that impersonates officials to aid in cyberattacks.
Generative AI ‘pioneers’
Risks notwithstanding, the report’s authors, a task force created by Newsom’s order that includes statewide Chief Information Officer Liana Bailey-Crimmins, strike a sanguine tone in the state’s press materials.
Bailey-Crimmins, who told StateScoop in an interview last month that the ultimate timeline for this work is on the order of years, not months, said in a press release accompanying the announcement that the state is excited to be “at the forefront” of government’s work in generative AI.
“With streamlined services and the ability to predict needs, the deployment of GenAI can make it easier for people to access government services they rely on, saving them time and money,” she said.
And Amy Tong, California’s government operations secretary, is quoted in the materials as saying that the state has an opportunity to “pioneer” new use cases.
“Through careful use and well-designed trials, we will learn how to deploy this technology effectively to make the work of government employees easier and improve services we provide to the people of California,” Tong said.
The report describes six major ways California state agencies stand to benefit from generative AI, including summarizing and classifying unwieldy collections of data, such as meeting notes and public outreach documentation; tailoring materials to the needs of California’s diverse population, including by identifying demographic groups that currently struggle to access state services; and expanding language access, for example by converting English educational materials into additional languages and accessible formats, such as audiobooks, large-print text and Braille documents.
The report also names optimizing legacy computer code and converting it into modern programming languages, or otherwise using generative AI to streamline and “democratize” software development; surfacing insights that “empower and support” decision-makers, such as by spotting cybersecurity threats earlier; and optimizing state operations with environmental considerations in mind, such as by analyzing “traffic patterns, ride requests, and vehicle telemetry data to optimize routing and scheduling for state-managed transportation fleets like buses, waste collection trucks.”
Next steps
With the first step of Newsom’s generative AI order complete, officials are now tasked with developing training materials for state employees, establishing partnerships with regional institutions and designing tools for testing generative AI products before they can be widely deployed. The order also requires an ongoing analysis of how AI is affecting the state.
The order requires the AI task force to develop a “procurement blueprint” that lays out how California can purchase such software from private companies while supporting a “safe, ethical, and responsible innovation ecosystem inside state government.” That document is to be created with consideration of the federal government’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.
The state also plans to create formal partnerships with the University of California, Berkeley and Stanford University to better understand generative AI’s effects, and next year it plans to host a summit on how the technology is affecting the state and its workforce.