
CHANGELOG
The record of a course evolving


This is a living document, in two senses. First, these lessons were decidedly imperfect when released, and through the generous feedback—and pushback—of the community we are continually improving our own understanding of the domain and modifying the course accordingly. Second, the area of LLMs and AI more broadly is evolving as rapidly as any in the history of technology. Examples that are relevant today will quickly become outdated. Assertions we have made may turn out to be false in a few years or even a few months. Even some of our core principles, while chosen to capture truths we believe will stand the test of time, may turn out to be misguided.
In the interest of transparency, but even more importantly in the interest of illustrating our own fallibility and the difficulty of forecasting the future of AI technologies, we here enumerate the substantive changes that we have made to this course since launch. We will not detail the trivial fixes: typo corrections and layout changes, for example. Nor will we list additions to the course that simply flesh out the scaffold presented at first launch in February 2025: new videos added for each lesson, new exercises for the instructor guide. But we will list each place where we back off a claim, change our minds, or add new content based on new developments in the technology or in the scientific understanding of this technology that have arisen subsequent to our initial launch.
CHANGES
March 22, 2025
Lesson 9
The original version of the search chapter did not delve into retrieval-augmented generation (RAG). We've added a paragraph about how poorly sourcing works under RAG with the major commercial LLMs, based on a Columbia Journalism Review report.
More recent models can also pull information from web searches and use these search queries to provide sources. At present, however, this process suffers from the same fabrications and confident false assertions that characterize LLMs more generally. One study found that commercial LLMs cite incorrect or non-existent sources from 37% (Perplexity.ai) to 94% (Grok) of the time.
March 9, 2025
Lesson 8
Stefan Ciobaca pointed out that it's not advisable to run any code that you don't understand because it could do harm to your system or even incorporate malware. The more critical your system and the more important it is to protect data privacy, the more cautious you should be about using an LLM as a coding assistant. We've modified our coding example accordingly.
Request help with writing computer code to read the information in a file.
Read the code and make sure you understand it*; then compile the code and see if your program imports the data successfully.
*We think it's risky to run LLM-derived code that you don't understand. It could erase important data, introduce malware, or cause other forms of harm.
March 7, 2025
Lesson 3
Karen Hausdoerffer had a comment that we thought was so insightful that it deserved mention in our discussion of sci-fi visions of artificial intelligence and how they differ from current large language models.
And another key difference: think about any artificial intelligence from any science fiction world you like. 2001: A Space Odyssey, Star Wars, Wall-E — we have always imagined artificial intelligences as single sentient beings, rather than all-pervasive information systems that subtly infiltrate human efforts to make sense of the world.
February 23, 2025
Lesson 18
We were struck by Bluesky discussions of news outlets using LLMs to write their stories and then attributing quotations to people who never said those things. In light of this, we felt it was important to add some discussion of how it's not just propaganda that drives misinformation; profit motives do as well and can have a similar effect on the electorate.
Sometimes the motive may be profit rather than propaganda, but the consequences can be similar.
Low-quality news stories written by LLMs are further undermining our trust in the media. Small-time scammers and once-storied magazines alike have used LLMs to write vapid and often false stories that get published as news. They also cross lines that even disreputable outlets have traditionally avoided. For example, a Texas A&M professor and public figure recently reported that AI-authored news stories are referencing non-existent interviews and attributing quotations to her that she never said.
It is unsurprising that an LLM that generates likely strings of text and has no underlying factual model would do this. But as the internet becomes increasingly littered with fabricated content, which subsequently populates search results and is used to train future generations of AI models, we risk losing any sort of anchoring to truth. That, we feel, poses an existential risk to democracy.
February 13, 2025
Lesson 1
The original version of the paragraph at right claimed that LLMs don't engage in logical reasoning. While we believe this is accurate with regard to the basic models, a number of readers pointed out that Chain-of-Thought models engage in processes that one could argue are indeed forms of logical reasoning. We could try to carve out a definition of logical reasoning that excludes chain-of-thought models, but that's not a hill we are eager to die on. Suffice it to say that if they reason logically, they do so differently than humans do.
By thinking about what LLMs do, we can better understand what they don't do. They don't reason the way that people do. They don't have any sort of embodied understanding of the world. They don't even have a fundamental sense of truth and falsehood.
Notice of Rights. The materials provided on this website are freely accessible for personal self-study and for non-commercial educational use in K-12 schools, colleges, and universities. For any commercial or corporate use, please contact the authors to discuss terms and obtain the necessary permissions. Redistribution of website content is prohibited without prior written consent from the authors. However, individual copies may be created to accommodate accessibility needs directly related to educational instruction.
Unless otherwise stated, all content is copyrighted © 2025 by the authors. All rights reserved.