The Dartmouth Conference

Dartmouth, 1956: When “Artificial Intelligence” Became a Research Program

Between June 18 and August 17, 1956, a small group of researchers gathered at Dartmouth College for what is now widely treated as the field-defining event for artificial intelligence: the Dartmouth Summer Research Project on Artificial Intelligence.

The phrase “artificial intelligence” wasn’t folklore or an after-the-fact label—it appears explicitly in the 1955 proposal for the project, authored by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. If there’s a “moment” the term enters the historical record, it’s here: not as a settled discipline, but as a daring heading for a summer experiment.


The Wager at the Center of the Workshop

What Dartmouth attempted wasn’t a single invention. It was a bet about tractability—a claim that intelligence could be treated as a technical object rather than a mystery.

The proposal’s core conjecture is blunt:

“every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

This line captures both the promise and vulnerability of early AI:

  • If intelligence can be described precisely, then it can be engineered.
  • But you immediately inherit the hard question: what counts as a “precise description” of learning, language, or common sense?

Why “Calculation” Was Enough

From our world of smartphones and cloud computing, it’s hard to remember what a “computer” meant in the mid-1950s: a large, scarce machine used primarily for formal calculation.

Yet calculation was never the end goal. The Dartmouth mindset treated calculation as the substrate—useful because it could be organized into:

  • Search (exploring possibilities)
  • Symbolic manipulation (representing and transforming ideas)
  • Optimization (finding the best solution)

In practice, this meant reframing “thinking” into procedures a machine could execute:

  1. Represent a problem as states and rules
  2. Generate candidate moves or inferences
  3. Evaluate outcomes using heuristics
  4. Repeat—faster than a human, without fatigue

This approach—turning mental acts into formal processes—became the intellectual bridge from “mere arithmetic” to programs that could appear to reason.
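
A minimal sketch of that loop in Python, using an invented toy puzzle (reach 50 from 1 with “+3” and “*2” moves) and best-first search with a distance heuristic; nothing here is specific to any real 1950s program:

    import heapq

    START, TARGET = 1, 50

    def moves(state):
        # Rule set: the legal transformations of a state.
        return [("+3", state + 3), ("*2", state * 2)]

    def heuristic(state):
        # Evaluate a state: smaller is better (closer to the target).
        return abs(TARGET - state)

    def best_first_search(start):
        frontier = [(heuristic(start), start, [])]    # (score, state, path)
        seen = set()
        while frontier:
            _, state, path = heapq.heappop(frontier)  # pick the most promising state
            if state == TARGET:
                return path
            if state in seen or state > 10 * TARGET:  # prune revisits and runaways
                continue
            seen.add(state)
            for name, nxt in moves(state):            # generate candidate moves
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [name]))
        return None

    print(best_first_search(START))                   # e.g. ['+3', '*2', ...]

The point is not the puzzle but the machinery: once a problem is states, rules, and a scoring function, a machine can search it tirelessly.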


Who Was in the Room—and Why That Matters

A striking detail for students: Dartmouth didn’t convene “computer scientists,” because that identity hadn’t fully formed yet.

The organizers and attendees came from:

  • Mathematics
  • Engineering
  • Psychology
  • Physics-adjacent research cultures

They were building a new discipline while debating its boundaries. Early AI wasn’t born as a subfield of software engineering. It emerged as a cross-disciplinary attempt to formalize intelligence itself—part theory of mind, part applied mathematics, part hardware reality check.


What Dartmouth Did (and Did Not) Settle

Dartmouth is often misremembered as the moment machines “learned to think.” A better framing: Dartmouth made intelligence a legitimate research agenda and helped define the problem set that would occupy decades.

The workshop crystallized these enduring questions:

  • How do we represent knowledge so it’s usable by a machine?
  • How do we search huge spaces of possibilities without brute force?
  • Can language be treated as structure that programs can transform?
  • Can a machine improve with experience, rather than only follow fixed rules?

These questions weren’t solved in 1956. But the workshop normalized the idea that they were solvable in principle—and that principle attracted funding, students, and a research identity.


A Decade Later: What Was Actually Achievable (1956–1966)

If you want concrete, classroom-friendly evidence that Dartmouth’s wager produced results quickly (without pretending the problems were “finished”), the 1956–1966 window is rich:

Automated Reasoning (Logic Theorist, 1956)

Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist showed that a program could prove theorems in symbolic logic, making “reasoning as search” more than a slogan.
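
The Logic Theorist itself proved theorems from Principia Mathematica; the sketch below is not that program but a toy forward-chaining prover, with invented facts, showing why proof can be treated as search through the space of derivable formulas:

    # Toy forward-chaining prover; facts and rules are invented for illustration.
    facts = {"P", "P -> Q", "Q -> R"}
    goal = "R"

    def modus_ponens(facts):
        # One inference rule: from A and "A -> B", derive B.
        derived = set()
        for f in facts:
            if " -> " in f:
                a, b = f.split(" -> ")
                if a in facts:
                    derived.add(b)
        return derived

    # Search: apply the rule until the goal appears or nothing new is derived.
    while goal not in facts:
        new = modus_ponens(facts) - facts
        if not new:
            break
        facts |= new

    print("proved" if goal in facts else "not provable from these facts")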

Learning from Experience (Samuel’s Checkers, late 1950s)

Arthur Samuel built checkers programs that improved their play—an early demonstration of machine learning as performance improving with experience.

Early Neural-Style Learning (Perceptron, 1957–1958)

Frank Rosenblatt’s perceptron work formalized a learning system inspired by neurons. The perceptron became a lasting reference point for later neural network revivals.
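
A minimal perceptron sketch makes the learning rule concrete. The AND task, learning rate, and epoch count below are assumptions chosen for brevity, not Rosenblatt’s original setup (which was the Mark I hardware):

    # Perceptron learning rule on logical AND (an invented toy task).
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    def predict(x):
        # Threshold unit: fire (1) if the weighted sum exceeds zero.
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(20):                  # a few passes over the data
        for x, target in data:
            error = target - predict(x)  # 0 if correct, +/-1 if wrong
            w[0] += lr * error * x[0]    # nudge weights toward the target
            w[1] += lr * error * x[1]
            b += lr * error

    print([predict(x) for x, _ in data]) # [0, 0, 0, 1] once converged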

Natural Language Interaction (ELIZA, 1966)

Joseph Weizenbaum’s ELIZA demonstrated how shallow pattern matching could produce surprisingly compelling “conversation,” making it perfect for teaching the difference between behavior that feels intelligent and actual understanding.


The Bigger Picture

None of these achievements imply general intelligence. But together they show something more historically accurate—and more teachable: the Dartmouth conjecture was productive. It generated methods and demonstrations that were real, measurable, and reproducible.


Classroom Use

Learning Objectives

Students should be able to:

  1. Explain Dartmouth’s central conjecture about describing intelligence formally
  2. Distinguish symbolic reasoning (rules/search) from learning systems (weight adjustment from experience)
  3. Identify why early successes didn’t immediately “scale” to real-world intelligence (context, ambiguity, incomplete knowledge)

Practical Demonstrations


Option A: “Transparent Checkers AI” (High engagement, easy to explain)

Have students play a simple checkers engine that:

  • searches a few moves ahead
  • scores positions with a weighted evaluation function
  • updates weights after self-play or after a match

This mirrors the Samuel-style arc: search + evaluation + improvement.
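
A possible skeleton for such an engine, with assumed feature names (my_pieces, my_kings, and so on) and a deliberately crude weight update standing in for Samuel’s actual learning procedure:

    weights = [1.0, 0.5, 0.2]  # piece advantage, king advantage, mobility (assumed)

    def features(board):
        # Placeholder feature extractor; a real engine would compute these
        # from an actual board representation.
        return [board["my_pieces"] - board["their_pieces"],
                board["my_kings"] - board["their_kings"],
                board["my_moves"] - board["their_moves"]]

    def evaluate(board):
        # Samuel-style linear evaluation: a weighted sum of position features.
        return sum(w * f for w, f in zip(weights, features(board)))

    def choose_move(candidate_boards):
        # One-ply search: pick the successor position that evaluates best.
        return max(candidate_boards, key=evaluate)

    def update_weights(board, outcome, lr=0.01):
        # After a game, nudge weights so evaluate(board) moves toward the
        # outcome (+1 win, -1 loss) -- a crude stand-in for Samuel's learning.
        error = outcome - evaluate(board)
        for i, f in enumerate(features(board)):
            weights[i] += lr * error * f

    board = {"my_pieces": 9, "their_pieces": 8, "my_kings": 1,
             "their_kings": 0, "my_moves": 7, "their_moves": 6}
    print(evaluate(board))             # 1.0*1 + 0.5*1 + 0.2*1 = 1.7
    update_weights(board, outcome=+1)  # weights drift toward the result

A linear evaluation keeps the demo transparent: students can watch individual weights drift after wins and losses.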

Option B: “ELIZA in 50 Rules” (Fast, memorable, discussion-friendly)

Students implement a tiny ELIZA-like chatbot (keyword → template response), then discuss why it can feel intelligent despite lacking a model of meaning.
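
One possible starting point, with a handful of invented rules (Weizenbaum’s DOCTOR script was far richer):

    import random

    # A tiny ELIZA-style chatbot: keyword -> template, no model of meaning.
    rules = [
        ("mother", ["Tell me more about your family.",
                    "How do you feel about your mother?"]),
        ("i feel", ["Why do you feel that way?",
                    "Does feeling that way trouble you?"]),
        ("because", ["Is that the real reason?"]),
    ]
    fallback = ["Please go on.", "I see. Continue."]

    def respond(text):
        text = text.lower()
        for keyword, templates in rules:
            if keyword in text:
                return random.choice(templates)
        return random.choice(fallback)  # no keyword matched

    print(respond("I feel anxious about exams"))  # matches the "i feel" rule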


Discussion Questions

  • Dartmouth’s proposal claims intelligence can be “precisely described.” What parts of human intelligence feel easiest to formalize—what parts resist it?

  • Why do demos like ELIZA often convince people more than theorem provers—even if the theorem prover is “deeper”?

  • What’s the difference between competence in a toy world and robustness in the real world?


References

  • McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

  • Dartmouth College. (n.d.). Artificial Intelligence (AI) Coined at Dartmouth.

  • IEEE Spectrum. (2023). The meeting of the minds that launched AI.

  • Stanford Encyclopedia of Philosophy. (2018). Artificial Intelligence.

  • Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.

  • Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408.

  • Smithsonian (NMAH). (n.d.). Electronic Neural Network, Mark I Perceptron.

  • Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.). MIT Press. (Samuel’s checkers discussion.)