my presentation of the motivating idea for the class (my slides)
discussion of logistics
sign-ups for dates and topics
Required reading/watching:
Videos 1, 2, 3, and 5 of the 3blue1brown sequence on neural networks (it's ok if you don't follow all the math of part 2 - if you do, then you should watch part 4 as well)
Timothy B. Lee, "How computers got shockingly good at recognizing images", Ars Technica, Dec. 18, 2018
Timothy B. Lee and Sean Trott, "A jargon-free explanation of how AI large language models work", Ars Technica, July 31, 2023
At least skim Chapter 1 of Russell and Norvig's Artificial Intelligence: A Modern Approach, from 2010
Further resources:
General neural nets:
Neural network playground - a place to try out the basics of neural nets with simple datasets
The rest of the 3blue1brown sequence on neural networks
Michael Nielsen, Neural Networks and Deep Learning - a free online book that teaches you to build a neural net
Emergent Garden, Why neural networks can learn (almost) anything, March 12, 2022
Art of the Problem, How neural networks learn - backpropagation intuition, Nov. 14, 2019
Wikipedia, History of artificial neural networks
Large language models:
Vaswani et al., "Attention is all you need", June 12, 2017 - the paper that introduced the transformer architecture and launched large language models
Vox, Computers just got a lot better at writing, March 4, 2020 - a video about GPT-2 - notice the date!
Textsynth - a site to try out pure predict-the-next-word LLMs. Note that they tend to work better if you give them paragraphs of text as a prompt, though they will still attempt a completion from just part of a sentence.
Ben Levinstein's 5-part series explaining large language models from Jan/Feb 2023
Understandability:
Welch Labs, The moment we stopped understanding AI, July 1, 2024
Golden Gate Claude, May 23, 2024 - a step forward in interpretability, from Anthropic (search online for examples of its outputs)
Robotics:
Brian Potter, "Robot dexterity still seems hard", April 24, 2025
Assigned reading:
Gilbert Ryle, "Knowing how and knowing that: the presidential address" (1946)
Jason Stanley and Timothy Williamson, "Knowing How" (2001)
Supplementary material:
Gilbert Ryle, The Concept of Mind, chapter 2, "Knowing how and knowing that" (1949)
Barry Smith, "Knowing how vs. knowing that" (1988)
Lewis Carroll, "What the tortoise said to Achilles" (1895)
Jason Stanley, Know How, Oxford University Press (2011)
Stephen Hetherington, How to Know: A Practicalist Conception of Knowledge, Wiley (2011)
Ephraim Glick, "Practical modes of presentation" (2015)
Alexander Kocurek and Ethan Jerzak, "Knowing what to do" (2024)
Possible supplements:
Knowing how and motor control
Required reading:
Jason Stanley and John Krakauer, "Motor skill depends on knowledge of facts" (2013)
Banin and Yunlong will also discuss, but not everyone has to read these:
Neil Levy, "Embodied savoir-faire: knowledge-how requires motor representations" (2017)
Ellen Fridland, "Skill and motor control: intelligence all the way down" (2017)
David Papineau, "In the Zone" (2013)
Other possibly relevant readings:
Alexander Mugar Klein, Consciousness is Motor, 2025
Brian Potter, "Robot Dexterity Still Seems Hard", April 2025
Rodney Brooks, "Why Today's Humanoids Won't Learn Dexterity", Sept. 2025
Required readings:
Alvin Goldman, "Discrimination and perceptual knowledge" (1976)
Ernest Sosa, "How to defeat opposition to Moore" (1999)
J. Adam Carter and Duncan Pritchard, "Knowledge-how and epistemic luck" (2015)
More on modal properties of knowledge - safety, sensitivity, reliability, tracking, etc.
Ernest Sosa, "How must knowledge be modally related to what is known?" (1999)
Alvin Goldman and Bob Beddor, Reliabilism (in the Stanford Encyclopedia of Philosophy) (2021)
More on knowing how and luck
Katherine Hawley, "Success and knowledge-how" (2003)
Carlotta Pavese, "Know-how, action, and luck" (2018)
Required readings:
Alan Turing, "Computing Machinery and Intelligence" (1950)
Hubert Dreyfus, "From Socrates to expert systems: the limits of calculative rationality" (1987)
Supplemental readings:
Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, chapter 26
Richard Ngo, Towards a Scale-Free Theory of Intelligent Agency, (2025)
Georg Wilhelm Friedrich Hegel, "Who thinks abstractly?" (1807)
Hubert Dreyfus and Stuart Dreyfus, "What artificial experts can and cannot do" (1991)
Hubert Dreyfus, "Overcoming the Myth of the Mental: How Philosophers Can Profit from the Phenomenology of Everyday Expertise" (2005)
Dreyfus (and Heidegger and Merleau-Ponty?) on skill
Stuart Dreyfus and Hubert Dreyfus, "A five-stage model of the mental activities involved in directed skill acquisition" (1980)
Hubert Dreyfus, "Why Heideggerian AI failed and how fixing it would require making it more Heideggerian" (2007)
System 1 and System 2 thinking
Steven Sloman, "The empirical case for two systems of reasoning" (1996)
Jonathan St.B.T. Evans, "In two minds: dual-process accounts of reasoning" (2003)
Gideon Keren and Yaacov Schul, "Two is not always better than one: A critical evaluation of two-system theories" (2009)
Jonathan St. B. T. Evans and Keith Stanovich, "Dual-process theories of higher cognition: Advancing the debate" (2013)
The connection of words and know-how
Katherine Hawley, "Testimony and knowing how" (2010)
C. Thi Nguyen, "Transparency is Surveillance" (2021)
Relevant additional readings:
Ellen Fridland, "Learning our way to intelligence: Reflections on Dennett and appropriateness" (2015)
Possible supplements:
Carlotta Pavese, "Practical knowledge first" (2022)
Joshua Habgood-Coote, "What's the point of knowing how?" (2019)
Yuri Cath, "Knowing How Without Knowing That" (2011)
Knowing how and skill
Ellen Fridland, "They've lost control: reflections on skill" (2014)
Carlotta Pavese, "Skill in epistemology I: Skill and knowledge", "Skill in epistemology II: Skill and know how" (2016)
Stanley and Williamson, "Skill" (2017)
Meaning and LLMs
Emily Bender and Alexander Koller, "Climbing towards NLU: On meaning, form, and understanding in the age of big data", 2020
Harvey Lederman and Kyle Mahowald, "Are language models more like libraries or like librarians? Bibliotechnism, the reference problem, and the attitudes of LLMs", 2024
Matthew Mandelkern and Tal Linzen, "Do language models' words refer?", 2024
Logistics of the class:
Students enrolled in the class will give one in-class presentation and write a term paper, likely on a related topic. Ideally, we can schedule everyone's presentations and choose their broad topics in the first week or two. I will be happy to meet with you a few days before your presentation to make sure you're prepared, and earlier if you want help getting ready (including choosing readings, if you don't want to use the default ones I have for each topic). I'm also happy to meet with you at various points over the term, especially toward the end, to talk about term paper ideas and to give feedback on outlines or drafts.