
AI is beyond government control

State governments are forming new laws and policies that set "guardrails" to limit AI's potential harms. But officials know their reach is limited.
(Giannina Vera / Scoop News Group)

State governments are moving fast to protect their agencies and the public from the potential harms of generative artificial intelligence, but, like the internet itself, the rapidly evolving technology is demonstrating a reach and influence beyond any single organization’s control.

Since ChatGPT opened to the public in November 2022, states have produced hundreds of new bills, executive orders, committees, task forces and policies aimed at preempting the many potential harms that generative AI might inflict on their communities, workforces and private data stores. State governments’ top technology officials have, loudly and often, made known their concerns about generative AI’s potential to amplify biases, spread misinformation and disrupt work and personal life.

The validity of those concerns is borne out each day as AI’s presence is felt on social media, in the software ecosystem and in the physical world. Leaders from Google and Microsoft’s public sector businesses told StateScoop they share government’s concerns and pointed to ethics policies, crafted over years of careful deliberation, as strategic bulwarks against potential misuses of AI. Representatives from both companies earnestly espoused an interest in aligning their ethical goals with government’s.

But sometimes even the best intentions can be knocked off track. Rebecca Williams, data governance program manager with the American Civil Liberties Union, pointed to the frequency with which government hires companies that wind up engaging in questionable practices, citing the data-mining company Palantir and the identity management firm ID.me.


“Government has strict guidelines that are more conservative than what the private sector has, but then they procure technologies that don’t necessarily follow government guidelines,” she said. “I think they’re talking a great talk, but I think they’re not just sort of dependent, but heavily dependent on the vendors.”

After this story was originally published, ID.me contacted StateScoop to point out that it complies with federal standards for identity authentication, including those set by the Department of Commerce’s National Institute of Standards and Technology.

“Our strict adherence to Federal data handling guidelines qualifies us as compliant with NIST requirements, as well as FedRAMP Moderate, amongst other security certifications,” an ID.me spokesperson wrote in an email.

Williams also pointed to government’s tendency to make decisions based on its “model of austerity,” a mindset captured by the rallying cry heard throughout the public sector that it must “do more with less.” Generative AI, with its human-like output and superhuman speed, promises to save agencies untold hours of costly tedium. And while the IT officials StateScoop interviewed for this story uniformly expressed caution about using generative AI, there may come a time when its value proposition becomes irresistible, or when it’s so pervasive it simply can’t be avoided.

And even more to the point in 2024, government agencies don’t need to procure AI for it to infiltrate their walls. Digital assistants like OpenAI’s ChatGPT or Anthropic’s Claude can be freely accessed online, and, by all accounts, government employees frequently use those tools. An even trickier challenge for officials tasked with governing AI is software companies’ growing habit of dropping new AI-powered functions into software already used by tens of thousands of government employees.


Owning the process

Each state is managing the unpredictable AI environment a bit differently. Josiah Raiche, Vermont’s chief data and AI officer, said state leaders created his role, along with a state AI council, because they’re taking seriously the potential for AI misuse to breach the public’s trust.

“It is important to have somebody senior in the technology organization who really does feel that it’s their job to focus on ethics,” he said. “I think it’s worth it to have a director of AI just for that.”

It’s government’s responsibility, Raiche said, to establish policies that ensure its own AI use is ethical.

But Raiche also argued that including AI in a project to redesign a clumsy paper process, for example, isn’t necessarily meaningfully different from offloading work to a human intern. In either case, government must measure its outcomes and take care to act ethically.


“It’s still got to be the owner of the program who owns the process who also owns the technical tool that’s used in that,” he said. “In Vermont, we’ve said very broadly it’s generally OK to use AI for personal productivity things for your staff, but that’s at the discretion of the supervisor based on where they think there’s risk to damaging trust in Vermont’s institutions or some other type of risk.”

Most AI challenges are arriving at government’s doorstep unbidden. One state official shared with StateScoop an email they received from the graphic design platform Canva, which noted that hundreds of registered user accounts from their organization that were using the tool were “at risk of exposing the state’s intellectual property as we begin training our AI with user content.” The proposed solution: Consolidate the accounts and regain control of the state’s data through the purchase of an enterprise license.

In an email, a Canva spokesperson told StateScoop this may have been a miscommunication or misunderstanding.

“By default, all users are opted out of AI training, and we will never train on a user’s private content without their permission,” the company’s statement read. “When it comes to AI, we’ve taken a careful and considered approach while continuing to invest heavily in trust and safety through Canva Shield, our industry-leading collection of trust, safety, and privacy tools. … Enterprise account admins can control whether data from users on their team can be used to train AI models or not, rather than leaving it up to the individual user.”

‘It’s not happening’


Utah Chief Technology Officer Chris Williamson said his biggest AI challenge is cleaning the state’s datasets, which were never maintained for use in AI models. But he said the potential uses of state government’s “wealth of data” are endless. One theoretical example: correlating tax records with driver’s license data to understand road congestion.

“I can now make a map of what a commuting network looks like for our environment within the state of Utah and I know what theoretical road congestion is going to look like,” he said of the imagined project. “That’s going to give me an idea of what I have to do for road infrastructure, architecture, if I have to do major road maintenance. What’s going to happen when those individuals now have to take new arteries to get to work? And I could either plan or re-architect my road structure to manage those individuals, just by knowing where they live and theoretically where they’re working.”

Beyond the technical challenges of undertaking such a project, Williamson said, it’s not up to him to decide whether a project constitutes an ethical use of AI — that’s the job of the state’s lawmakers.

“It’s been put in code at the legislative level, but it’s also been put in code at the computer level. And we built our systems around protecting that citizen data with some very strict barriers,” he said.

Massachusetts State Sen. Barry Finegold is among the legislators who’ve drafted bills aimed at reining in AI. Finegold garnered media attention by using ChatGPT to help him write the text of his AI bills, which include one that would impose fines on political candidates who use deepfakes to deceive the public.


“First of all, this should be done on the federal level,” Finegold said. “I’ll be the first to admit that, but it’s not happening.”

In the absence of comprehensive federal AI legislation, states have been proactively governing generative AI with a gusto unseen in some previous technology revolutions. According to the National Conference of State Legislatures, at least 40 states introduced AI bills during their 2024 legislative sessions.

“Once upon a time, we thought Facebook was really cute,” Finegold said. “It was like college kids, and we saw how powerful it was. And we should have put up guardrails like we have in place now. … I feel this time around with AI, we’re better. But I’m still concerned that AI is moving so quickly that even with our best efforts we’re going to miss things.”

‘Bigger than all of us’

Would Indiana Chief Data Officer Josh Martin strap AI on top of all of his state’s data?


“Absolutely not,” he said. “It’s just not in a place where we understand where it all lives, who’s in charge of it, what the quality is of it, what’s valuable, what’s not. … Most of the metadata in these systems wasn’t fully completed when they were developed. It just wasn’t a priority at the time.”

Helping states figure out how to get their data ready for AI, while keeping operations secure and ethical, is Keith Bauer’s job. As managing director of Microsoft Public Sector, he said one of the most common questions he hears these days is: “How do we make sure we’re using AI responsibly?”

“It’s not something that Microsoft can solve, government can solve, users can solve, the public can solve,” he said. “It really, truly is a collective effort by everybody.”

He pointed to Microsoft’s Responsible AI Standard, the policies it uses internally to ensure AI’s accountability, transparency, fairness, reliability, safety, privacy, security and inclusivity. He pointed to Microsoft’s ready public support for the Biden administration’s AI executive order. And he repeatedly declared his company’s interest in being “in lockstep with our government customers on their AI journey.”

“We don’t train the underlying models when government customers use our technologies, and if a customer were to build their own generative AI solution, some of the things to take into consideration with that is the options they have for getting the results that they want in that generative AI solution,” he said.


Google maintains a similar list of AI principles, which includes commitments to “be socially beneficial” and to “uphold high standards of scientific excellence.”

Chris Hein, director of customer engineering for Google’s public sector business, emphasized that the business he works for operates separately from the rest of Google, allowing it closer alignment with White House AI policies and the public sector at large. He pointed to comments made by Thomas Kurian, chief executive of Google Cloud, indicating an interest in building technologies explicitly with government in mind. And he noted that “a vast majority” of the company’s commercial products comply with FedRAMP, the federal government’s cloud security standard.

“You as a government agency, you can’t necessarily do anything about … the training, about the different weighting and all those different kinds of things that are happening in the background of a large language model,” he said. “So when you come to a vendor like Google, you’re trusting that vendor to have a certain amount of ethical responsibility in the training and in the weighting of those models and how they’ve been developed over time.”

Though government must outsource some of its ethical work if it wants to use AI, Hein said he thinks government can rest easy, because the large companies developing the technology, like Google, share government’s values.

He also said most government agencies aren’t interested in tuning their own models and prefer for things to work “out of the box.”


“When you’re using technology, we have this ‘shared fate’ kind of model when we think of things like security,” Hein said. “Google is going to be responsible, as a technology provider, for ensuring that there’s certain aspects of the system that you should not have to worry about as someone who is utilizing that cloud environment.”

Despite the assurances of Big Tech’s public sector businesses, Delaware Chief Information Officer Gregory Lane said he doesn’t believe the preferences of state government will have much influence on the future of AI technologies.

“It’s bigger than all of us, it’s happening around us,” Lane said. “We’re not going to control it. I just got off a meeting where it was suggested we have a list of [AI] tools people can use, and my comment was that’s like having a list of apps that are OK for your iPhone. That thing’s going to grow and spread faster than you can keep up with it.”

This story was featured in StateScoop Special Report: Artificial Intelligence 2024
