How would Socrates use ChatGPT?

In Plato’s Phaedrus, Socrates recounts an ancient Egyptian myth about the inventor god Theuth and Thamus, king of Egypt. In this story, Theuth visits Thamus to demonstrate a number of his inventions, the greatest of which is letters and writing. Upon seeing writing for the first time, Thamus responds skeptically:

… this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing …

Proponents of new technologies often lean on the story of Thamus and Theuth as a rhetorical device for arguing that people have always feared how new technologies might damage our ability to think. They’ll say that we shouldn’t worry so much about calculators, the Internet, AI, whatever comes next—just look at what Theuth thought writing would do to us, and look how wrong he was!

I’m reminded of this story because the MIT Media Lab recently published a study demonstrating exactly how using AI as a crutch in an academic context can indeed impede one’s ability to learn and think for oneself. And I’ve noticed, once again, folks attempting to use Plato’s famous dialogue as a tool for assuaging our worries about AI potentially making us dumb.

But here’s the thing: neither Plato nor Socrates (by extension) ever argued against writing! They were not saying that writing would somehow impede our ability to think. The Phaedrus isn’t an argument against technology or cognitive support.

Lane Wilkinson pointed this out in a timeless blog post nearly 15 years ago, reminding us that Plato (and again, Socrates, by extension) only argued that writing was inferior to the dialectic in the pursuit of knowledge, not that it was harmful:

Writing, Socrates explains, is a noble pastime, creating “memorials to be treasured against the forgetfulness of old age” (276d). However, the good of writing pales compared to the dialectician, who proceeds through exploratory argument, defending the truth when needed and acquiescing in the face of contrary evidence. To Socrates, the problem with writing is not that it “creates forgetfulness in the learners” but that people mistakenly hold the written word up as the only path to knowledge, when in reality, books are just information and the real knowledge comes from within the reader.

Ironically, then, instead of serving as a strawman for dismissing worries about AI, the arguments in the Phaedrus might actually point us towards healthier ways to use it.

Unlike books—which are static—large language models offer us the ability, for the first time ever, to converse with large bodies of information in order to develop our own insights, without the aid of another human. We now have a technology that can emulate the Socratic interlocutor … if only we choose to use it as such.

As the MIT paper shows, rather than lean on AI as an aid to our own deeper thinking, humans tend to offload the hard bits of thinking to it instead. “Conversations” with AI rarely resemble collaborative dialectical inquiry; they look more like a series of commands and responses between a lazy human master and an overzealous robot assistant.

The problem with this, of course, is that AI makes stuff up, and in lazily offloading our thinking to the machine, we can inadvertently and unquestioningly accept whatever it produces as fact. Socrates, however, would never operate this way. He asked questions, relentlessly pursuing the truth at all costs, thinking deeply, often refuting whatever proposition his conversation partner offered in order to dig deeper.

The design of a tool influences its patterns of use, and patterns of use matter. Writing can “create forgetfulness” if all we do is read. But it’s also an immensely powerful tool for thought and remembering if we write and author things on our own. The same can be true for AI if we are careful and constrained in how we use it.

The design of today’s chatbots does not encourage us to engage them in a dialectic by default. Ask a question or give a command, get an overconfident response, move on. But a Socratic approach to interacting with AI could change things. It might put us humans in a default posture of skepticism, offering some protection against the dangerous effects of hallucinations, and might—just might—transform AI from a crutch into an actual thinking partner.