
LESSON 2
The Nature of Bullshit
Developed by Carl T. Bergstrom and Jevin D. West



Large Language Models (LLMs) such as ChatGPT can do so many things so well that sometimes it feels as if humanity has discovered a modern-day oracle.
Yet we argue that their power comes from a superhuman ability to bullshit. This ability is their greatest strength—yes, it can be genuinely useful—and also their biggest threat.
If this sounds like a contradiction to you, you're not alone. The purpose of this course is to explore how a system that is fundamentally a bullshit machine can appear to be a powerful oracle.
To resolve that paradox, we first need to explain exactly what we mean by bullshit. In our course on Calling Bullshit, we define it as follows.
BULLSHIT involves language or other forms of communication intended to appear authoritative or persuasive without regard to its actual truth or logical consistency.
A lot of bullshit takes the form of words, but it doesn’t have to. Statistical figures, data graphics, images, videos — these can be bullshit as well. Bullshit aims to be convincing, but what makes it bullshit is the lack of allegiance to the truth.
According to philosopher Harry Frankfurt, a liar knows the truth and is trying to lead us in the opposite direction.
A bullshitter either doesn't know the truth, or doesn't care. They are just trying to be persuasive.
This is where Large Language Models come in. These systems have no ground truth, no underlying model of the world, and no rules of logic. Their words don't refer to things-in-the-world; they are just words in statistically likely orders.
When they get things wrong, they aren't trying to lead us away from the truth. They couldn't do that if they wanted to—they don't know the truth.
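To see what "words in statistically likely orders" means in practice, consider the toy sketch below, written in Python. It is a hypothetical illustration, not a real language model: the word-probability table is invented for this example, whereas real systems learn billions of such statistics from text. The point is that generation is just repeated sampling of a likely next word; no step ever checks the output against the world.

import random

# A toy illustration, not a real LLM: the probability table below is invented.
# Text is produced by repeatedly sampling a statistically likely next word.
# Notice that nothing in this process consults the truth.
NEXT_WORD = {
    "<start>":   {"the": 1.0},
    "the":       {"capital": 0.6, "largest": 0.4},
    "capital":   {"of": 1.0},
    "largest":   {"city": 1.0},
    "city":      {"of": 1.0},
    "of":        {"australia": 1.0},
    "australia": {"is": 1.0},
    "is":        {"canberra": 0.3, "sydney": 0.7},   # both continuations are fluent; only one is correct
    "canberra":  {"<end>": 1.0},
    "sydney":    {"<end>": 1.0},
}

def generate(max_words=10):
    # Walk the table, choosing each next word in proportion to its probability.
    word, output = "<start>", []
    for _ in range(max_words):
        options = NEXT_WORD[word]
        word = random.choices(list(options), weights=list(options.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())   # e.g. "the capital of australia is sydney": fluent, confident, and wrong

Real models are vastly larger and learn their probabilities from enormous amounts of text, but the basic loop, predicting a plausible next word and emitting it, is the same.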
LLMs are designed to generate plausible answers and present them in an authoritative tone. All too often, however, they make things up that aren't true.
Computer scientists have a technical term for this: hallucination. But this term is a misnomer because when a human hallucinates, they are doing something very different.
In psychiatric medicine, the term "hallucination" refers to the experience of false or misleading perceptions. LLMs are not the sorts of things that have experience, or perceptions.
Moreover, a hallucination is a pathology. It's something that happens when systems are not working properly.
When an LLM fabricates a falsehood, that is not a malfunction at all. The machine is doing exactly what it has been designed to do: guess, and sound confident while doing it.
When LLMs get things wrong, they aren't hallucinating. They are bullshitting.

As we will see, LLMs are powerful tools. But they also make it easy for people to mislead us by accident, or on purpose.
The Bullshit Asymmetry Principle — also known as Brandolini's Law after the computer programmer who proposed it — is one of the most important principles in bullshit studies.
Brandolini's Law: The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
Brandolini's law captures something essential about bullshit: it's a lot easier to create than it is to clean up, especially once it begins to spread organically.
Falsehood flies, and the truth comes limping after it.
Today, social media has increased the velocity at which bullshit spreads.
Generative AI enormously lowers the cost of producing bullshit in the first place. Yet cleaning up bullshit remains as expensive as ever.
Think about what it took to produce bullshit in the past.
Writing one bullshit post on Facebook never took much time, but posting bullshit to a hundred different social media accounts was an afternoon's work for a propagandist. Writing a disingenuous op-ed could take a day or two. Drafting a fake scientific paper could take a month, and required substantial expertise in the area to be convincing.
Today, Large Language Models can do any of these things in a matter of seconds.
Clearly we are heading into a world where there is even more bullshit to go around.
A lie will gallop halfway round the world before the truth has time to pull its breeches on.
PRINCIPLE
To understand how LLMs will change the world, we will need to understand what they are capable of, and what they are not. We will have to think about not only the benefits they provide, but also the ways that people will abuse them to produce insincere text at scale.
DISCUSSION
People are perfectly good at producing bullshit without AI assistance—but with AI, people can produce more bullshit, faster. Who might find that useful? How?

VIDEO
What happens when instructors use LLMs to write their lectures?