
LESSON 18
Democracy
Developed by Carl T. Bergstrom and Jevin D. West


In 2017, the FCC received 22 million comments about its proposal to repeal net neutrality consumer protections — but the vast majority of these comments were fake messages sent by bots. At the time, this sort of skullduggery was fairly easy to detect. No one was going to author 22 million different fake messages, so the same canned texts appeared thousands of times apiece.
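To see why this was easy to catch, consider a toy sketch in Python. (The comments list and the threshold of 1,000 are hypothetical illustrations, not details of the actual FCC analysis.) Exact duplicates betray a canned campaign, because organic commenters almost never repeat one another word for word at this scale.

    from collections import Counter

    def flag_duplicate_comments(comments, threshold=1000):
        # Count how often each normalized comment text appears.
        counts = Counter(text.strip().lower() for text in comments)
        # Texts repeated past the threshold are likely bot submissions.
        return {text: n for text, n in counts.items() if n >= threshold}

    # Hypothetical usage, where `comments` holds the submitted texts:
    # suspicious = flag_duplicate_comments(comments)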
Today, that wouldn’t happen. With a push of a button, generative AI can write as many unique messages as one needs.
Our representative democracy requires that voters can communicate their wants and needs to government officials. When computer systems are used to disrupt this process by diluting the true sentiments of the community amidst a tsunami of fake feedback, we suffer a man-in-the-middle attack on democracy. The 2017 net neutrality campaign was a crude example. Seven years later, LLMs facilitate much more sophisticated efforts. In addition to writing an endless volume of online comments, generative AI systems can create photorealistic images of constituents who don’t actually exist, and even place authentic-sounding voice calls to comment lines.
Democracy requires an informed electorate; disinformation campaigns threaten democracy.
Today a single bad actor can use generative AI to fabricate an entire propaganda campaign on a desktop computer for a few hundred dollars: from primary news reports and subsequent op-eds, to article comments and social media posts.
That same bad actor can employ data brokers to provide massive amounts of data about each of us—our location and occupation and salary, our hobbies, our expenditures, our relationship status, our likes and dislikes—and use an LLM to craft individually targeted propaganda messages. Online advertising platforms make it easy to target these messages with pinpoint accuracy.
Bots that amplify extreme views and instigate arguments can drive political polarization and create a sense of irreconcilable division within a society. This can be disastrous. If we can be convinced that half of our fellow citizens are stupid, irrational, or evil, we begin to lose faith in the entire project of democracy.
Rather than trying to convince people of particular lies, a contemporary propaganda approach known as the firehose of falsehood aims to overwhelm an audience with massive amounts of mutually contradictory information. Garry Kasparov explains:
“The point of modern propaganda isn't only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth.”
LLM-based bots can do this on a scale that humans could only dream of.

The internet offered the possibility of safe, anonymous communication. Political dissidents in Saudi Arabia could organize; persecuted minorities in India could tell their stories; queer teens in Tennessee could find community.
But what happens when LLMs flood the internet with phony narratives from people who don’t exist? We can no longer trust anonymous speech to be genuine.
To believe what we read, we require authentication: ways of knowing that the authors we are reading and the people we are talking to are real.
But authentication threatens anonymity. If we want to ensure that the stories we read come from real people, we will end up excluding certain stories, because without anonymity they are no longer safe to tell.
Finally, an AI world has the potential to be a less democratic world.
In Lesson 5 we saw how difficult it is to understand why AI does what it does. When AIs make policy decisions, from individual decisions about loans and parole to society-wide decisions about social programs, those decisions are opaque. And that opacity is fundamentally undemocratic.
Then there is the question of access. The leading AI models are not public projects. They are corporate ventures shrouded by trade secrecy. Before 2025, even large government agencies couldn't compete; LLMs on the scale of ChatGPT, Claude, or Gemini cost on the order of a hundred million dollars to train.
If LLMs continue to require resources on this scale, commercial interests, not public ones, will shape how they are trained. Access to the most powerful models will be limited by ability to pay—and thus it will be corporate entities, not the public, who are best positioned to use them.
But this may change. On January 20th, 2025, a Chinese company called DeepSeek released a new version of its LLM. This new model, known as R1, compares favorably with the state of the art from OpenAI, but DeepSeek made the model weights openly available to the world so that others can run it themselves. Moreover, the R1 model was trained for a cost of about six million dollars—a tiny fraction of what it cost to develop previous models.
Turbocharging disinformation.
Interfering with representative government.
Undermining anonymity.
Making opaque decisions.
Concentrating power in the hands of the few.
It's a lot.

Is democracy ready for all this?
We sure hope so.
PRINCIPLE
Representative democracy requires that an informed electorate be able to communicate with those to whom it assigns authority. By misinforming the electorate and limiting its ability to petition its representatives, LLMs threaten our system of governance.
DISCUSSION
In the US, free speech is tightly connected with the idea of democracy itself through the First Amendment to the US Constitution. Is it possible to protect democracy from the abuses that LLMs make possible while also preserving broad free speech rights?

VIDEO