
LESSON 15
Artificial Intelligence and Human Stupidity
Developed by Carl T. Bergstrom and Jevin D. West


Sometime around 2022, there was an explosion of concern that breakthroughs in AI would lead to hyper-intelligent machines that could become an existential threat to humanity.
We are not particularly worried. For the time being, artificial intelligence is a far lesser threat than human stupidity.

This is nothing new. Cybersecurity experts have known for decades that the big threat is not some sophisticated cryptographic attack on a secure system. It's humans being careless, falling prey to social engineering exploits, and letting hackers in through the front door.
Indeed, people use LLMs in all sorts of inappropriate ways, with new forms of abuse no doubt soon to be invented.
Among other things, we have:
- AI personas running for political office in the US, the UK, and Brazil;
- a proposal from Governor Gavin Newsom to use AI agents to manage the California budget;
- a New York City chatbot that provided landlords with advice about how to defraud renters;
- a Boston University dean advising faculty to replace striking graduate student teaching assistants with ChatGPT;
- a Stanford University AI expert who used ChatGPT to write his sworn testimony in a court case about the risks of generative AI, and who was rebuked by the judge when his testimony was found to be full of fake citations;
- AI agents designed to write up police reports from bodycam audio;
- an AI transcription system that hallucinates extensively being used to create millions of medical records, despite OpenAI's warnings against its use in “high-risk domains”;
- and unregulated LLM chatbots intended to provide mental health services for teens.
Why are we seeing people implement these terrible ideas?
There's an old joke about the farmer who sits at his roadside farmstand, playing checkers with his dog. A passing motorist is awestruck: "My god, I've never seen anything like it! That dog is a genius!" But the old farmer is unimpressed: "Not really. I beat him two games out of every three."
Generative AI is a bit like the checkers-playing dog. People are so impressed by what an LLM can do that they forget it's not actually very good at most things they ask of it. In every case above, people have overestimated what an LLM can do.
Some companies switch from human-based support to AI replacements simply to lower costs. They don't care whether the new systems work well or not.
Others suffer from Silicon Valley FOMO (fear of missing out). They hear that generative AI is the Next Big Thing, and don't want to be left behind.
How do we thrive in a world where this is happening?
First and foremost, don't fall for the hype yourself. We're all vulnerable to it, of course; the anthropoglossic design we discussed in Lesson 3 makes it all the more difficult to stay grounded. Ask: what are the alternatives to using a general-purpose LLM? How do they perform by comparison? What are the failure modes of this LLM application? What are the costs and consequences of failure?
Second, pay attention to others' mistakes. Watch what people use LLMs for and what goes wrong. "As a dog that returneth to his vomit, so is the fool that repeateth his folly," the Bible instructs us. Don't be that dog.
Third, don't accept the premise that intelligent computers can replace the majority of human interaction. As we discussed in Lesson 14, LLMs cannot provide the authentic connections that we crave. As anthropoglossic systems become increasingly powerful and increasingly integrated into every aspect of our world, spending time interacting face-to-face with other humans will become more essential, rather than less.
PRINCIPLE
Human stupidity is a bigger threat than artificial intelligence. People will use generative AI in inappropriate ways to create ineffective systems that will make the world a worse place. Not everything doable is worth doing.
DISCUSSION
You don't get to control what other people do with AI, so how can you limit your exposure to misuses of the technology?

VIDEO
Coming Soon.
NEXT: The First-Step Fallacy