Sometimes those who forget history are doomed to repeat it. For those of us with long memories, the current hubbub, not to say hysteria, surrounding ChatGPT (and "generative AI" in general) sounds familiar.
We’ve been here before: January 1966, to be precise. That was the moment when Joseph Weizenbaum, a computer scientist at MIT, unveiled Eliza, which would have been called the world’s first chatbot if that term had existed at the time. Weizenbaum wrote the software (in a programming language intriguingly called MAD-SLIP) to demonstrate that communication between humans and computers was inevitably superficial. He did this by providing a text box in which one could exchange typed messages with the machine. Inside the program was a script (named DOCTOR by Weizenbaum) that parodied the exchanges a patient might have with a psychotherapist practising the person-centred therapy pioneered by Carl Rogers. (The program’s name comes from Eliza Doolittle, the Cockney girl who was taught to “speak properly” in Shaw’s play Pygmalion.)
The way it works is that the program takes what you’ve typed and processes it to generate a response. Suppose you type (as I just did): “I’m a bit down about UK politics.” Eliza: “Do you think coming here will help you not to be depressed?” Me: “I hope so.” Eliza: “You say you hope so for some special reason?” You get the point, but you can try it yourself: just go to masswerk.at/elizabot/.
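The mechanism behind exchanges like that was simple keyword matching and pronoun reflection. A minimal sketch of the idea, in Python rather than Weizenbaum’s MAD-SLIP, might look like this (the rules below are illustrative inventions, not the actual DOCTOR script):

```python
import re

# Map first-person words to second-person ones, so that a fragment of
# the user's input can be echoed back ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a keyword pattern with a response template.
# These example rules are made up for illustration.
RULES = [
    (re.compile(r"i'?m (.*)", re.I), "Do you believe you are {0}?"),
    (re.compile(r"i hope (.*)", re.I),
     "You say you hope {0} for some special reason?"),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person equivalents.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."  # fallback when no keyword matches

print(respond("I'm a bit down about UK politics"))
# -> Do you believe you are a bit down about UK politics?
```

No understanding is involved anywhere: the program just finds a keyword, reflects the pronouns, and slots the user’s own words into a canned template, falling back to a neutral prompt when nothing matches.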
Weizenbaum wrote the program to show that while machines could ostensibly mimic human conversation, it was really like a magician pulling a rabbit out of a hat: an illusion. And once you know how the trick is done, Weizenbaum thought, it is no longer an illusion. There was nothing secret about Eliza: if you read the code, you could see how it did its tricks. What puzzled its creator was that even when people knew it was just a program, they seemed to take it seriously. There is a famous story about his secretary asking him to leave the room while she was having her “conversation” with Eliza. People were captivated by it. (I saw this myself when I once ran it on a PC at my university’s open day and had to prise people off the machine so that others in the queue could have a go.)
After the publication of Weizenbaum’s article on Eliza, it didn’t take long for some people (including some practising psychiatrists) to start saying that if a machine could do this sort of thing, who needed psychotherapists? Weizenbaum was as appalled by this as today’s educators and artists are by the contemporary drooling over generative AI tools. For him, as one insightful commentator put it, “there was something about the relationship between a person and their therapist that was fundamentally an encounter between two human beings”. In language sometimes reminiscent of Martin Buber’s “I and Thou” formulation, Weizenbaum remained preoccupied with the importance of interaction between human beings. In that sense, he was not only a distinguished computer scientist, but also a notable humanist.
This humanistic outrage fuelled his lifelong opposition to the technological determinism of “artificial intelligence”. And it informed his 1976 book, Computer Power and Human Reason, which confirmed his role as a thorn in the side of the AI crowd and ranks alongside Norbert Wiener’s The Human Use of Human Beings as an expression of a technological insider’s reservations about the direction of humanity’s drive to “automate everything”.
The intriguing echo of Eliza in thinking about ChatGPT is that people regard it as magical even when they know how it works: as a “stochastic parrot” (in the words of Timnit Gebru, a well-known researcher) or as a machine for “high-tech plagiarism” (Noam Chomsky). But actually we don’t know the half of it yet: not the CO2 emissions incurred in training its underlying language model, nor the carbon footprint of all those delighted interactions people have with it. Or, pace Chomsky, the fact that the technology exists only because of its unauthorised appropriation of the creative work of millions of people who happened to be on the web. What is the business model behind these tools? And so on. Answer: we don’t know.
In one of his lectures, Weizenbaum pointed out that we are incessantly making Faustian bargains with this technology. In such bargains, both parties get something: the devil gets the human soul; the humans get the services that delight us. Sometimes the bargain works out for us, but with these things, if we eventually decide that it doesn’t, it will be too late. This is the bargain that generative AI now brings to the table. Are we prepared for it?
What I’ve been reading
Self-regard
The New York Times’ Self-Obsession is an excoriating Politico column by Jack Shafer.
Visions of hell
Ken Burns on his biggest film is an interview by Bari Weiss on the Free Press website about American attitudes to the Holocaust.
Monopoly rules
Understanding the Antitrust Case Against Google is a nice explanation by Matt Stoller on Substack of a really tricky subject.