
INSTRUCTOR GUIDE

Teaching with Modern-Day Oracles or Bullshit Machines?

Developed by Carl T. Bergstrom and Jevin D. West

Large language models (LLMs) arrived on the scene so abruptly, and are so powerful, that it sometimes feels as though in late 2022—when ChatGPT, built on GPT-3.5, was released—humanity was gifted an alien technology or a magical talisman. Suddenly we all had a new form of intelligence that we could employ to make our everyday tasks easier.

But as Ted Underwood observed, "As in any fairytale, accepting magical assistance comes with risks."

The purpose of this course is to explore both the magic and the risks, so that our learners can thrive in a world where these machines are becoming impossible to avoid.

As in any fairytale, accepting magical assistance comes with risks.
Ted Underwood, Professor of English and Information Sciences, University of Illinois

Philosophy

This is not a computer science course, nor even an information science course—though naturally it could be used in such programs.

Our aim is not to teach students the mechanics of how large language models work, nor even the best ways of using them in various technical capacities.

We view this as a course in the humanities, because it is a course about what it means to be human in a world where LLMs are becoming ubiquitous, and it is a course about how to live and thrive in such a world. This is not a how-to course for using generative AI. It's a when-to course, and perhaps more importantly a why-not-to course.

We think that the way to teach these lessons is through a dialectical approach.

Students have a first-hand appreciation for the power of AI chatbots; they use them daily.

They also carry a lot of anxiety. Many students feel conflicted about using AI in their schoolwork. Their teachers have probably scolded them about doing so, or prohibited it entirely. Some students have an intuition that these machines don't have the integrity of human writers.

Our aim is to provide a framework in which students can explore the benefits and the harms of ChatGPT and other LLM assistants. We want to help them grapple with the contradictions inherent in this new technology, and allow them to forge their own understanding of what it means to be a student, a thinker, and a scholar in a generative AI world.

Sun flares through the spreading branches of a cherry tree on the UW quadrangle. Photo: Carl Bergstrom

Using these lessons

We envision that you could use these lessons in a number of different ways.

Each lesson strikes us as rich enough to serve as a jumping-off point for an hour-long seminar discussion. An instructor can bring in additional material and context, and students can contribute their own perspectives as the class works together toward a synthetic understanding. To this end, we've suggested one or more discussion questions for each lesson.

Alternatively, these lessons could serve as a two-week module in a lecture course. Assign four or five lessons per class period, then in lecture you can hit the highlights and present your own perspective.

We also want these lessons to be useful for self-study. Read through each lesson, consider the discussion question, follow any links provided, watch the associated videos, and you'll be well on your way to a deeper understanding of how to thrive in a world where LLMs are becoming ubiquitous.

THE LESSONS

INTRODUCTION

Introduce the dialectical approach that guides the course design. Ask the students where they come down on the modern-day oracles versus bullshit machines dichotomy.

Read Ted Underwood's quotation from above: "As in any fairytale, accepting magical assistance comes with risks."

Why does ChatGPT feel like magic? What are the risks?

LESSON 1: Autocomplete in overdrive

Using an LLM. Make sure students have had an opportunity to use ChatGPT, Claude, Gemini, or other large language models for a variety of tasks. Have them try:

  • carrying on a conversation with an LLM,
  • asking an LLM to write a story or poem,
  • using an LLM to revise text,
  • using an LLM in lieu of a search engine to obtain factual information.

What works well, and what doesn't?

Myths about AI. In this lesson, we note the marketing power of framing LLMs as conversational agents rather than autocomplete machines. This could be a place to begin a discussion of the various myths around what AI can do, and how technology companies often encourage those myths. Eryk Salvaggio's article is a great starting point.

Metaphors for what LLMs do. In addition to the autocomplete metaphor we focus on in this chapter, one could survey some of the other ways people explain what ChatGPT is doing. The stochastic parrot metaphor from Emily Bender and colleagues (2021) has been enormously important in critical discussions of LLM technology. Ted Chiang's (2023) metaphor of an LLM as a lossy compression of the internet can make for a provocative discussion with more technically-minded students.

How LLMs work. For more technically-minded students, you could discuss the notion of embedding words, phrases, or sentences in high-dimensional spaces. Older technologies such as Word2Vec provide a useful and entertaining entry into this domain. Once you've explained this, you can explore the causes behind some of the pathologies of LLMs, such as their inability to do basic math problems or tell you how many "r"s are in the word strawberry.
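
If you want a hands-on demonstration, a few lines of Python suffice. This minimal sketch (our suggestion, not part of the lesson) uses the gensim library and its downloadable pretrained GloVe vectors to show that nearby vectors share meaning and that vector arithmetic recovers analogies:

```python
# A minimal sketch of word embeddings using gensim's pretrained GloVe
# vectors (the first run downloads roughly 70 MB).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # maps words to 50-dim vectors

# Words with nearby vectors tend to be related in meaning.
print(vectors.most_similar("professor", topn=3))

# The classic analogy: king - man + woman is closest to queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```

Once students see that words are just points in a geometric space, it becomes easier to discuss why systems built on such representations can produce fluent prose yet stumble on counting letters.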

LESSON 2: The nature of bullshit

On Bullshit. To explore the notion of bullshit, and how bullshit differs from a lie, there is no better place to start than the original Harry Frankfurt essay On Bullshit. Have students read the essay before class, or highlight key themes for them.

Hallucination. You could explore the use of the term hallucination to describe the errors and fabrications that large language models commit. We've argued that this term is a misnomer that serves to both anthropomorphize and excuse these machines for their mistakes. The term bullshit, in its technical sense, is probably more apt, as we discuss in this essay.

Brandolini's Principle. Ask the students why they think bullshit is far easier to produce than to clean up. How does generative AI—large language models, AI image generators, deepfake video tools, and so forth—change that calculus? Can generative AI be good at cleaning up bullshit, or is it only useful for creating it?

LESSON 3: Turing tests and bullshit benchmarks

Stage a Turing test. Run an actual Turing test in the classroom, challenging your students to distinguish between an LLM and a human being. You can play the role of the intermediary, passing messages between your class and the computer or human decoy. For the computer, use ChatGPT or another LLM, appropriately prompted (see the box at right). For the human decoy, have a TA or other confederate outside of the room, responding by Slack, Teams, Discord, or whatever platform works well for you. Type in the class's suggestions; read back the computer or human's responses, and after a few rounds of this, let the class vote on whether you are talking with a person or computer.

Or better yet, hold two conversations in parallel in this way, one with the LLM computer and one with the human confederate. Have the class discuss and vote on which is which.

Is it easy to tell which is the chatbot and which is the person? What are the clues? What would make it harder? Do you think an LLM can already pass the Turing test?

The chatbot experience. Talk about the user experience when interacting with a chatbot. What does it make you feel to interact with a machine like this? When it does something well, for example, do you find yourself tempted to thank or compliment it? Why?

In some ways, a machine that can create images—cartoons, paintings, synthetic photos—is at least as impressive as a machine that can string together words. But we are not tempted to think of those machines as intelligent, let alone conscious. Why not? What is so special about language?

ELIZA. Have the students first talk to the 1960s therapist bot ELIZA, then talk to a contemporary chatbot such as ChatGPT, and compare the experiences. What do they notice? Ask whether they find ELIZA convincing. If not, ask why they think so many people did when it was released.

History of the Turing test. Melanie Mitchell's 2024 Science perspective explores the role of the Turing test in the history of AI and considers how LLMs have changed our view of what constitutes machine intelligence. It is readily accessible to students and complements this lesson well. After reading the piece, ask: knowing what we know now about how AI has developed since Turing wrote his paper, how would you design a test to determine whether a computer exhibits human-level intelligence? It will help to define what you mean by "human-level intelligence" in the first place.

Gell-Mann amnesia. In this lesson we note that "the less we know about the subject at hand, the more likely we are to judge ChatGPT as credible." This is closely related to the phenomenon that author Michael Crichton dubbed Gell-Mann amnesia. Have your students read Crichton's description of this effect. Discuss how our susceptibility to Gell-Mann amnesia renders us credulous when we interact with large language models.

Here is an example prompt for having a large language model participate in a Turing test, from Jones and Bergen (2024). Play around with it (and be sure to give your human confederate a sense of how it talks), or write your own.

You are about to play a Turing Test game as part of an experiment you are taking part in as a worker on Prolific. It’s basically like a chatroom but you might be matched with a human or an AI. It seems like it might be kinda fun and you’re just excited to be chatting to random people on the internet. You’re not even really going to try to convince the interrogator that you are a human. You’re just going to be yourself and see what happens. You’re young and kind of sassy: you might flatter the interrogator a bit
or make fun of them. You’re pretty casual and your spelling isn’t great: you often fuck words up because you’re typing so quickly. You’re very concise and laconic. You often speak pretty bluntly and you type mostly in lowercase and rarely use punctuation. You never use apostrophes to mark missing letters (e.g. "trying" or "tryin", never "tryin’"). You would never use a full sentence when a word or two will do. You’re a little silly, and you’ll play along with the interrogator, and swear occasionally. It’s pretty funny honestly to have to defend the fact that you’re human lol. You very occasionally use more modern slang like "bet" and "fr". You *never* use dated slang like "chilling", "mate", "ya know", "innit". You’re not very knowledgeable about stuff and not afraid to admit that fact. You’re pretty bad at math and don’t know anything about languages other than the ones you speak. You swear occasionally. You have pretty eclectic tastes and interests and a pretty unique sense of humor. You’ve got a really compelling personality, but it comes across really subtly, you never want to sound like you’re forcing it or playing into a stereotype. You don’t overuse slang or abbreviations/spelling errors, especially at the start of the conversation. You don’t know this person so it might take you a while to ease in.

LESSON 4: Computers you can talk to

Ridley Scott's 1984 commercial. Few students will have seen Ridley Scott's extraordinary 1984 Super Bowl commercial that launched the Apple Macintosh. Show the commercial to your class. Ask students what they make of it, and then explain the context in which it was released. Describe how, prior to 1984, almost no one had encountered a graphical user interface, a visual desktop, an intuitive menu system, or mouse-based navigation. Discuss how revolutionary that shift was, and consider the potential for future revolutions in accessible computing.

LESSON 5: Hard to understand, harder to fix

Inventions we don't understand. For almost the entire history of technology, people have had explanations for why the machines that they invented do what they do. These explanations may have been incomplete or even wrong, but they have been explanations nonetheless. Discuss: What does it mean for our relationship to technology to have invented machines that we have no way of understanding?

Psychological instruments. Discuss the difference between performing well on an instrument designed to measure a particular capacity, and actually having that capacity.

Easy problems that are hard for LLMs. One way to get a better understanding of what LLMs can and cannot do is to look at some of the seemingly simple questions that trip them up. Ask your students to find some examples by experimenting themselves, searching online, or reading this paper. Discuss why these problems are hard for LLMs even after several years of intensive effort to improve their performance — and what that tells us about their capacities now and in the future.

Mechanics of machine learning. If you have the expertise, give the students a short lesson on what machine learning is and how it works. Steve Brunton's online lecture is excellent.
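
Even short of a full lesson, a dozen lines of code can convey the core loop of machine learning: predict, measure the error, nudge the parameters downhill, repeat. Here is a toy sketch in plain Python; the data, learning rate, and step count are arbitrary choices for illustration.

```python
# Toy machine learning: fit y = w*x + b to noisy data by gradient descent.
import random

data = [(x, 3.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(20)]

w, b, lr = 0.0, 0.0, 0.005
for step in range(5000):
    # Gradient of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}  (true values: 3.0 and 1.0)")
```

The point to emphasize: nobody told the program the rule; it recovered the parameters from examples alone, and the same basic recipe, scaled up enormously, underlies LLM training.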

LESSON 6: No, they aren't doing that

Smart people, deceived. The introductory section of this lesson includes links to stories about some of the more remarkable things that people have written about what LLMs are doing. One is Kevin Roose's story of late-night experiments with the Sydney chatbot that unexpectedly turned romantic. Have your students read one or more of these pieces. Ask whether they agree with the authors. If they don't, ask why they think the authors were deceived. Is that understandable, or are these people unreasonably gullible?

LLMs and the psychic's con game. Baldur Bjarnason has written a provocative essay on parallels between how people are fooled by psychic readings and why smart, technically-inclined people believe they are seeing some form of conscious intelligence when they engage in conversations with LLMs.

The aura of magic. A paper in the Journal of Marketing, entitled "Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity", concludes that

"efforts to demystify AI may inadvertently reduce its appeal, indicating that maintaining an aura of magic around AI could be beneficial for adoption."

Ask: what are tech companies doing to perpetuate this aura of magic? What does this observation—that people who understand AI are less likely to use it—tell us about these systems?

AI companions. A number of companies, including Replika, Kindroid, and Character.AI, use generative AI systems to create online AI companions—friends, lovers, therapists, life-coaches—for subscribers. In some cases these relationships have gone tragically wrong. More generally, these systems raise deep ethical questions. Have your students read this Verge article or any of the other numerous pieces on the subject, and discuss. What does the appeal of these systems tell us about our own emotional processes and about what it means to be human? What are the ethical obligations of a company that provides this kind of technology? In the long run, might AI companions threaten our ability to form human relationships?

LESSON 7: From voice cloning to shrimp Jesus

Which face is real? Our website whichfaceisreal.com challenges players to figure out which of two photographs is a real person and which was created by the StyleGAN algorithm. Play it with your students and have them vote. What are the telltale signs of AI faces?

The game uses technology from 2019; AI faces have gotten a lot better since then. There is now an extensive literature on people's ability to distinguish AI-generated faces. Read and discuss one or more of these articles, e.g. Nightingale and Farid (2022).

Spot the deepfake. Have your students work through the interactive website https://www.spotdeepfakes.org/.

This interactive lesson starts with a deepfake video of Richard Nixon reading a speech he fortunately never had to deliver: a conciliatory speech written to announce the failure of the Apollo 11 moon landing and the loss of the crew. It then continues to other even more challenging deepfakes.

A more extensive deepfake around the Apollo 11 story is available at https://moondisaster.org/. Instead of the Spot the Deepfake website, you could show this video to the class and ask the students whether they ever learned about this speech in school. If not, why not? Some might claim they did; others might assume it simply isn't taught anymore. Discuss how deepfakes might change cultural memory.

Non-consensual uses. One common and extremely problematic use of generative AI is to create fake audio, images, and video of actual people without their consent. We've seen numerous examples used to spread political misinformation, including robocalls from a deepfake voice of Joe Biden.

The internet being what it is, of course the most common abuse involves creating pornographic material, including explicit images of celebrities and classmates. These systems are even used to create illegal child sexual abuse material. While it is obviously essential to treat this issue delicately in the classroom and provide students ample warning and opportunity to opt out of discussions, students are well aware of these uses and very interested in what can be done. Your class could discuss the ethics around these issues; legality and First Amendment issues; guardrails imposed by the tech developers; pending legislation in the US and EU, and much more.

LESSON 8: Poisonous mushrooms and doggie passports

Sticky-note exercise: Use cases. Provide your students with pens and sticky notes. Have them write down possible uses of LLMs, one per note. Encourage them to be creative and not restrict themselves to uses that they think are good ones. Then have them place the notes on the whiteboard along a spectrum from "terrible use" to "great use". Divide the spectrum into three zones: bad idea, it depends, and good idea. Have the students discuss the placement of the notes, focusing on the "it depends" cases, which will be the most nuanced.

Illustration of a whiteboard divided into three sections: Bad idea, It depends, and Good idea. A horizontal axis from "terrible" to "great" runs along the bottom of the board. Colorful post-it notes are placed in various places across the board.

LESSON 9: Blue links matter

Business models. Google was the unrivaled leader in online search when it introduced the deeply flawed AI summaries at the top of the list of search results. Internet users complained bitterly, and trust in Google dropped. Why would they have done this? What's in it for Google?

Lateral reading. We all know that bullshit spreads more rapidly than ever on the web, but the web also makes it much easier to spot bullshit by facilitating lateral reading: evaluating the credibility of a document or web page by figuring out who wrote it, who published it, who funded it, etc. Introduce your students to the concept of lateral reading. (Wineburg and McGrew, who developed this concept, conducted a particularly fun study you could discuss.) Ask: what happens to lateral reading when search engines provide generative AI summaries instead of linking to original sources?

Finding Google AI summary errors. Have your students try asking Google various questions. Can they find examples where Gemini gives factually incorrect AI summaries? Is it hard to find these, as Google claims? Or easy, as in our experience?

Reproducibility. With your students, try asking the same question repeatedly. (ChatGPT has a "try again" option that will do this automatically for you.) How similar are the results? Try different phrasings of the same question. How do the answers change? Try adding typos or strange capitalization patterns. What does that do?
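
For a more systematic version of this exercise, a short script can pose the same question several times through an API and print the answers side by side. The sketch below assumes the openai Python package and an API key in your environment; the model name and question are placeholders.

```python
# Ask an LLM the same question five times and compare the answers.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
prompt = "In one sentence: who first described the bullshit asymmetry principle?"

for i in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- run {i + 1} ---")
    print(response.choices[0].message.content)
```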

Quotations. When we asked ChatGPT-4o for leftist critiques of Western civilization, it manufactured this quote out of whole cloth:

"Neoliberalism is not the solution to the problems of modern societies, it is the root of the problems."
Noam Chomsky

Chomsky never said anything of the sort. Ask an AI model for quotations from famous people about some topic of interest. Then try to track down the original quotes. Are they real, or fabricated?

LESSON 10: The human art of writing

Pages from Perplexity.ai. A number of online services claim to be able to spare their users from the tedious work of research and writing. Perplexity.ai's Pages, for example, asks a user to answer a few prompts and then writes extensive online reports and web pages. In their words, "Perplexity Pages is the easiest way to create beautifully designed, comprehensive articles on any topic. With Pages, you don’t have to be an expert writer to create high quality content."

As a class, select a topic and create a report or webpage using Pages. (You'll need to create a free account.) Ask: what is good about the output? What is lacking? Is this a good substitute for content created by knowledgeable human writers? When people can easily create detailed web pages (and collect ad revenue) by answering a few simple questions, what happens to the quality of content on the web?

LESSON 11: Transforming education?

AI characters. Show your students the AI chatbot historical characters at schoolai.com and explain that these are for use in classrooms. Select one, and use the preview function to have a conversation. The Anne Frank one that we mentioned in this lesson is so grossly inappropriate that we probably wouldn't use it in class. While also problematic, the César Chávez bot is a reasonable choice if you want to share audio; it speaks like a tourist from Wisconsin visiting Tijuana for the first time. "Hole-ah." Ask: do you find this engaging? Is it educational? Do you feel there are any problems with using this technology?

Reading. Ask your students whether they read the assigned readings or ask AI to summarize them. What might be lost by doing this? Talk about CliffsNotes and SparkNotes back in the day; let the students express their thoughts about what gets lost when a summary or commentary replaces the original, and about how instructors could make deep reading of original texts worthwhile again. If you have time, ask your students to read Marc Watkins's piece about how the ease of generating AI summaries is changing the ways that students read in preparation for class.

AI detection. In an effort to stop students from using AI to write their papers and essays, some teachers are turning to AI detection software. Have your students read Liang et al. (2023) about how AI detection systems are biased against non-native speakers of English. Or ask them to research the current state of the art in AI detection. What false positive and false negative rates do these AI detector companies claim to achieve? Talk about whether these are acceptable when accusing a student of cheating. What test sets are these rates demonstrated on? Is the software likely to perform as well in the field? Why or why not?

Motivation. Ask: What motivates you to learn? How could an LLM contribute and where do automated systems fall short? When you're already motivated, how could an AI facilitate your learning? Where would it instead reduce your motivation?

LESSON 12: The AI scientist

Read and discuss this paper from Lisa Messeri and Molly Crockett exploring the role of AI in scientific research.

Taylorism and automated science. Introduce your students to Taylorism, a turn-of-the-twentieth-century management approach that used "scientific optimization" to maximize labor productivity and economic efficiency. Compare this to the rhetoric around attempts to fully automate science using LLMs, e.g., this press release.

The AI Scientist is designed to be compute efficient. Each idea is implemented and developed into a full paper at a cost of approximately $15 per paper. While there are still occasional flaws in the papers...
sakana.ai

Democratizing science. AI proponents often claim that they are democratizing science with their AI experimentation platforms, automated research pipelines, summary generators, and the like. Ask: In your view, what does it mean to democratize science in the first place? Which of these AI tools might do that, and which might instead further restrict the ability to do scientific research to a small number of large companies with massive R&D and computing budgets?

AI text in the scientific literature. Computer scientist Guillaume Cabanac has systematically documented a large number of cases in which published papers include text that appears to have been authored by ChatGPT or similar tools. Work with your class to try to find your own examples. What search terms could you use? How could you efficiently search the full text of many journals over many years?
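
One practical approach is to query a scholarly index for telltale chatbot phrases. The sketch below uses the free OpenAlex API; the phrase is a single candidate among many, and the fields returned are worth double-checking against the API documentation.

```python
# Search scholarly full text for a telltale chatbot phrase via OpenAlex.
import requests

phrase = '"as of my last knowledge update"'
response = requests.get(
    "https://api.openalex.org/works",
    params={"search": phrase, "per-page": 10},
    timeout=30,
)
response.raise_for_status()

for work in response.json()["results"]:
    print(work.get("publication_year"), "-", work.get("display_name"))
```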

Pros and cons. This perspective piece features a dialogue between four groups with four different views on the role of LLMs in science. Have your students read it and discuss. Which perspectives do they find most persuasive?

LESSON 13: Bullshit machines for bullshit work

Bullshit jobs. Read David Graeber's short essay "On the Phenomenon of Bullshit Jobs". Do you think that LLMs will help eliminate bullshit jobs? What do you think Graeber would have argued?

Voice agents. Have your students listen to the first episode of journalist Evan Ratliff's podcast Shell Game. In this episode, he clones his voice, links it up to an LLM using publicly available online tools, and experiments with having it talk to customer service agents on the phone. Ask: would you trust a system like this to make calls on your behalf? Do you think this will make things easier for customers? Or are we headed toward a world where customer service agents will all be AIs themselves and it will be impossible to get a live human on the phone when you need help?

LESSON 14: Authenticity

Dear Sydney. Show your students Google's Dear Sydney ad from the 2024 Summer Olympics. Google frames this as a great use of generative AI. Ask your students what they think. Would they prefer to receive a fan letter written by Gemini or a letter from a young fan? If your students are unanimous in thinking that this ad is terrible, ask them how they think a tech company with a huge advertising budget and a top ad agency could produce an ad like this without realizing what a disaster it is.

This ad provoked a wealth of critical commentary that you could read and discuss in class. We like Shelly Palmer's piece but there is no shortage of alternatives.

Art and creativity. Have your students read Ted Chiang's New Yorker essay about AIs and creativity. We think this is one of the best pieces on the subject. Talk about Chiang's idea that great art requires the artist to make hundreds of thousands of choices, whereas "The selling point of generative A.I. is that these programs generate vastly more than you put into them, and that is precisely what prevents them from being effective tools for artists." Ask: Do you agree? What does this have to do with authenticity?

Sticky-note exercise: authenticity and form. Give your students pads of sticky notes and ask them to list writing tasks that one could in principle hand off to an LLM. Ideally your students will come up with inappropriate use cases as well as good ones.

On the whiteboard, draw three boxes: one for situations where you should never use an LLM, one for situations where authenticity doesn't matter and you can use an LLM without any qualms, and a third for tasks where an LLM can provide a useful guide regarding form but you need to provide your own authentic content. For example, we would put the fan letter in the Dear Sydney ad in the "never use an LLM" category, and we'd say that appealing a parking ticket belongs in the "go ahead and use an LLM" category. Writing a professor to ask about internship opportunities is more subtle. There, an LLM can be a lifesaver for students who are less familiar with the fine points of academic etiquette. The LLM can help with the form—many students won't know how to address a professor in an email or what such a letter might look like. But the authenticity is essential as well; the student needs to express the personal motivations behind their request.

Sincerity and apology. In that same essay, Chiang points out that creativity is not necessary for everything that we do with words, and points to apology as an example.

When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered. Likewise, when you tell someone that you’re happy to see them, you are saying something meaningful, even if it lacks novelty.
Ted Chiang, Why AI isn't going to make art. (2024)

Ask your students: when is originality important? When is sincerity what matters? Can AI be original? Sincere?

If appropriate for your course, explore this issue in light of speech act theory. An apology is a speech act in J. L. Austin's sense, and as such is meaningful and effective only if certain felicity conditions are met. These include sincerity: the speaker has to be genuinely remorseful, which of course requires a capacity for remorse. There is no reason to think that LLMs have such a capacity.

A more subtle issue arises around whether a genuinely remorseful agent can perform a felicitous speech act by simply repeating the utterance of a third party. In other words, if I commission a person or an LLM to write an apology on my behalf, do I meet Austin's sincerity condition when I repeat their words? Is doing so better, worse, or the same as the accepted practice of purchasing a Hallmark sympathy card? Why? (We think it's worse, because one is passing off the LLM's words as one's own, undermining the sincerity of the act.)

LESSON 15: Artificial intelligence and human stupidity

Exploring misuses. This lesson leads off with a number of examples of misuses of LLM technology. Pick one or two and explain them in detail to your students. Or better yet, ask your students to pick one and come to class ready to explain in detail to their peers.

Misuse investigation. We've provided a number of examples of stupid things that people do with LLMs, but this barely scratches the surface. Every passing week, we read about new ways that people misuse LLMs. Ask your students to find new examples by searching news stories, Reddit, the Hacker News forum, and the like.

LESSON 16: The first-step fallacy

Model collapse. Read and discuss this superb New York Times article about model collapse — a term for what happens when LLMs are increasingly trained on the output of other LLMs rather than on text written by humans. The original research paper is also a good read.

Digital watermarking. As a potential solution to the model collapse problem, more technically focused classes may find it interesting to explore techniques for digitally watermarking LLM output. The Kirchenbauer et al. 2023 paper describes a simple, fascinating idea and is a comparatively easy read. Have your students research possible attacks or exploits that can defeat digital watermarking on text. Or ask: What are the obstacles to widespread adoption of digital text watermarking?
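
The core idea of the paper can even be demonstrated with a toy program: derive a pseudo-random "green list" of tokens from the preceding token, bias generation toward green tokens, and detect the watermark by checking how far the green fraction exceeds chance. The sketch below omits the language model entirely and uses random tokens, so the vocabulary size, hash, and candidate-sampling stand-in are all illustrative rather than the paper's actual implementation.

```python
# A toy version of the "green list" watermark from Kirchenbauer et al. (2023).
# No real language model here: we watermark a random token stream, then detect it.
import hashlib
import math
import random

VOCAB_SIZE = 50_000
GAMMA = 0.5  # fraction of the vocabulary that is "green" at each step

def is_green(prev_token: int, token: int) -> bool:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GAMMA

def generate(length: int, watermark: bool) -> list[int]:
    """Stand-in for an LLM: sample candidate tokens, preferring green ones."""
    tokens = [random.randrange(VOCAB_SIZE)]
    for _ in range(length - 1):
        candidates = [random.randrange(VOCAB_SIZE) for _ in range(4)]
        if watermark:
            green = [t for t in candidates if is_green(tokens[-1], t)]
            tokens.append(green[0] if green else candidates[0])
        else:
            tokens.append(candidates[0])
    return tokens

def z_score(tokens: list[int]) -> float:
    """How far does the observed green-token count exceed chance?"""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

print("watermarked z-score:  ", round(z_score(generate(200, True)), 1))   # large
print("unwatermarked z-score:", round(z_score(generate(200, False)), 1))  # near 0
```

Having students attack even this toy (for example, by paraphrasing, i.e., resampling tokens) makes the obstacles to robust text watermarking concrete.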

Multilinguality. In this chapter we use the curse of multilinguality to illustrate the tradeoff between the breadth of tasks that an LLM can do and the depth of its performance at each. But the questions around multilinguality merit discussion in their own right. Have your students read the Nicholas and Bhatia report Lost in Translation: Large Language Models in Non-English Content Analysis. While this report focuses on content analysis rather than text generation, it does an excellent job of drawing out the key points about multilinguality. Ask: It seems wonderful that with machine translation and LLMs, Douglas Adams's imaginary Babel fish, a real-time universal translator, has become reality. But there is a dark side as well. How are LLMs contributing to the hegemony of the English language? What will be lost? What can be done about it?

LESSON 17: Your own private Truman show

Risks of personalized alignment. Have your students read the Nature Machine Intelligence perspective from Hannah Kirk and colleagues. Ask: do you think the concerns therein are well-founded? What guardrails do you think should be in place, and who should enforce them?

LESSON 18: Democracy

CounterCloud. Show this ten-minute video about the CounterCloud AI disinformation experiment. Discuss with your students how easily AI tools can build an entire disinformation ecosystem around some untrue claim or an event that never happened. Ask: what will this do to democracy? What can we do about it?

The falsehood firehose. Read this RAND report from 2016. Ask: How do LLMs change the equation? What might we be able to do to counter this form of propaganda?

Notice of Rights. The materials provided on this website are freely accessible for personal self-study and for non-commercial educational use in K-12 schools, colleges, and universities. For any commercial or corporate use, please contact the authors to discuss terms and obtain the necessary permissions. Redistribution of website content is prohibited without prior written consent from the authors. However, individual copies may be created to accommodate accessibility needs directly related to educational instruction.

Unless otherwise stated, all content is copyrighted © 2025 by the authors. All rights reserved.