Will AI be good for democracy? That depends on the humans in charge

Taylor Armerding
7 min read · Nov 4, 2024


Artificial intelligence (AI) is changing everything, and will keep changing everything, for better and worse. Even a partial list of what it touches includes education, manufacturing, finance, advertising, sports, crime, law enforcement, transportation, utilities, entertainment, and yes, our representative democracy.

That last one is especially relevant at the moment, given that tomorrow is the U.S. presidential election. And no matter which side of the political divide you inhabit, there are likely things about AI that you find useful and intriguing, while others leave you frustrated, worried, and enraged.

Or confused. One of the most malignant uses of AI is deception, and its deceptive capabilities are improving every day. It’s used for fakery of all kinds — robocalls, social media accounts from hostile nation states pretending to be Americans, “news” stories accusing one candidate or another of misconduct or crime, and on and on.

None of this should surprise anyone who has been following AI’s evolution. Search “deepfakes” online and you’ll get an almost endless list of results. And it has been very much on the radar of the leaders of U.S. representative democracy.

It was almost exactly a year ago, on Oct. 30, that President Biden issued an executive order (EO) on “Safe, Secure, and Trustworthy Artificial Intelligence.” A press release accompanying the EO pronounced it a “landmark” that “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

Aspiration vs. reality

All worthy goals, and regulations that would implement at least some of the directives in the EO are under way. But don’t hold your breath. We’re a long way from having AI under the control of anybody.

First, keep in mind that Biden will soon be a former president.

Another reality is that even the EO and a 2023 “AI Summit” hosted by Senate Majority Leader Chuck Schumer with various tech titans haven’t moved the needle much.

Bruce Schneier, author, blogger, and chief of security architecture at Inrupt, Inc., noted in a recent blog post that the two federal agencies — the Federal Communications Commission and Federal Election Commission — that presumably have some authority over the use of AI in political campaigns, “have punted the ball, very likely until after the election.”

And Jimmy Rabon, director of product management with the software security company Black Duck, said “nothing in the EO that is material to making AI safer has been implemented. Questions like ‘What LLMs [large language models] did you use to generate software code?’ isn’t a required field anywhere. Even requiring an AI BOM [bill of materials], a listing of either proprietary or open source components, gives no indication of how the model was trained, who trained it, what is it susceptible to, etc.”
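
To make Rabon’s point concrete, here is a rough sketch of what a single AI BOM entry might look like, written as a Python dictionary. The field names are hypothetical illustrations, not taken from any published BOM schema; the point is that a plain component listing leaves the provenance questions he raises unanswered.

```python
# Hypothetical sketch of an AI BOM entry. Field names are illustrative
# only, not from any published schema.
ai_bom_entry = {
    # What a basic component listing captures:
    "component": "example-llm-7b",   # hypothetical model name
    "version": "1.2.0",
    "license": "Apache-2.0",         # proprietary or open source
    "supplier": "example.org",

    # The questions Rabon says such a listing doesn't answer:
    "training_data_sources": None,   # how the model was trained
    "trained_by": None,              # who trained it
    "known_susceptibilities": None,  # what it is susceptible to
}
```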

But even if every directive in the EO had been implemented, it’s unlikely that any single nation, even the U.S., will be able to constrain or control the use of a global technology. Pick your cliché: the train has left the station, the horse is out of the barn, the genie is out of the bottle, that ship has sailed, Pandora’s box is already open. All the nations in the world rarely agree on anything, and AI isn’t likely to be the exception.

Also, while the Biden EO addresses privacy, civil rights, cybersecurity, workers’ rights, innovation and competition, foreign policy, and responsible government use of AI, it doesn’t have a section devoted specifically to democracy.

Perhaps that was intentional, because a case is being made that a governing system established in the 18th century simply can’t function effectively amid the technology of 250 years later.

It’s a bit like privacy. A couple of generations ago, all it took to protect it was to close and lock a door. That doesn’t work in a smart home filled with smart devices that are constantly collecting data on its residents, whether the door is closed or not. Likewise, a vehicle is no longer a sanctuary in a world of surveillance cameras, GPS sensors, and license plate readers.

Hacking democracy

Schneier made that point in an address at the 2023 RSA Conference in San Francisco, declaring that “the political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.”

He noted that “James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It’s hard to organize. To be fair, these limitations are both good and bad. In any case, current technology — social media — breaks them both.”

“This is not about capitalism versus communism,” he added. “It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet.”

Schneier, who is cowriting a book about AI and its impact on democracy due out in a year, wrote in a recent blog post that the book will be “largely optimistic.” But he has written and spoken about the topic extensively over the past several years and provided abundant evidence for pessimism as well.

Among his observations are that “democracy is a socio-technical system. And all socio-technical systems can be hacked. By this I mean that the rules are either incomplete or inconsistent or outdated — they have loopholes [that] can be used to subvert the rules [like] gerrymandering, the filibuster, and must-pass legislation. Or tax loopholes, financial loopholes, regulatory loopholes.”

“Finding loopholes,” he noted, “is similar to finding vulnerabilities in software,” which can obviously lead to exploiting them. So AI “will supercharge hacking,” he wrote. “We have created a series of non-interoperable systems that actually interact, and AI will be able to figure out how to take advantage of more of those interactions: finding new tax loopholes or finding new ways to evade financial regulations. Creating ‘microlegislation’ that surreptitiously benefits a particular person or group.”

Both attacker and defender

But then, “AI can also be used to defend against hacking, finding vulnerabilities in computer code, finding tax loopholes before they become law and uncovering attempts at surreptitious micro-legislation,” he said.

It could also improve the collection of public preferences that lead to policies. “This would be more accurate than polling. And maybe even elections,” he said.

Of course, much would depend on who is controlling the collection of those preferences. The ongoing debate over free speech and content moderation on social media platforms is evidence of ferocious disagreement over who gets to decide what is misinformation or disinformation that therefore should be suppressed or deplatformed.

Rabon, who confesses to being both “fairly terrified and incredibly interested in AI,” agrees that whether it is good or bad for democracy “depends on which companies or individuals control the AIs that are the most capable.”

He’s hopeful that control will lie with the many rather than the few. “I predict that commercial LLMs will be outperformed — with much less overall cost and much more efficiency — by open source LLMs that are tuned for specific tasks. If that is a reality, then we will have thousands of them, and the real security challenge will be in determining the aptitude of the model, and how it will change over time with training.”

“If my theory is correct, then AI will likely be good for democracy,” Rabon said.
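
The aptitude problem Rabon describes lends itself to a familiar pattern from software testing: score every retrained version of a model against a fixed benchmark and flag regressions. Here is a minimal sketch of that idea; query_model() is a hypothetical stand-in for whatever API or local runtime actually serves the model, and the benchmark cases are placeholders.

```python
# Minimal sketch of tracking a task-tuned model's aptitude across
# retrainings. Everything here is illustrative; query_model() must be
# wired to a real model runtime before this runs end to end.

BENCHMARK = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "Paris"),
    # a real suite would hold many task-specific cases
]

def query_model(version: str, prompt: str) -> str:
    """Hypothetical hook into whatever serves the model."""
    raise NotImplementedError("connect this to your model runtime")

def aptitude(version: str) -> float:
    """Fraction of benchmark cases a given model version gets right."""
    hits = sum(
        expected.lower() in query_model(version, prompt).lower()
        for prompt, expected in BENCHMARK
    )
    return hits / len(BENCHMARK)

def check_for_drift(old: str, new: str, tolerance: float = 0.02) -> None:
    """Raise if a retrained version lost measurable aptitude."""
    if aptitude(new) < aptitude(old) - tolerance:
        raise RuntimeError(f"model {new} regressed relative to {old}")
```

The hard part, as Rabon suggests, is scale: with thousands of task-tuned open source models, a benchmark like this would have to be maintained per task and re-run every time training changes the model.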

Here to stay, for better and worse

Whether its effects are good or bad, AI will be ubiquitous in whatever shape democracy takes in the coming decades. In another RSA Conference keynote, this one earlier this year, Schneier said he expects AI to affect democracy in “politics, lawmaking, administration, the legal system, and finally, citizens themselves.”

He said he believes there is a good chance that impact will be beneficial. “As AI starts to look and feel more human, our human politicians will start to look and feel more like AI.”

“I think we will be OK with it, because it’s a path we’ve been walking down for a long time,” he said, noting that “any major politician today is just the public face of a complex socio-technical system” where everybody knows that speeches a president gives or campaign emails from legislators are all written by somebody else.

Few people would likely object to AI handling government administrative and bureaucratic tasks, since they tend to be rote, time-consuming, and less political. But there are other elements of government, like writing legislation or running the legal system, where the language of a bill or the enforcement of laws can depend very much on who is training an AI assistant or providing its prompts.

Which brings it all back to the human factor. Like any effective tool, AI can amplify and accelerate what humans can do, for better and worse.

“There is a role for AI in decision-making: moderating discussions, highlighting agreements and disagreements, helping people reach consensus,” Schneier said in his 2023 keynote. “But it is an independent good that we humans remain engaged in — and in charge of — the process of governance.”

Rabon takes it a bit further. “Democracies depend on the power being in the hands of the people. Absolute power corrupts absolutely, and the more concentrated a power technology is, the more likely it will be used for control by some malevolent individual or group whose concern is staying in power. If it is consolidated in a few hands or nation states or whatever, then it would be the best tool for fascism ever.”


Written by Taylor Armerding

I write mainly about software security, data security, and privacy.
