On February 14, Kevin Roose, the New York Times technology columnist, had a two-hour conversation with Bing, Microsoft’s ChatGPT-enhanced search engine. He came out of the experience an apparently changed man, because the chatbot had told him, among other things, that it would like to be human, that it harbored destructive desires, and that it was in love with him.
The transcript of the conversation, along with Roose’s report on the newspaper’s front page and his appearance on its podcast, immediately heightened the moral panic that was already brewing over the implications of large language models (LLMs) like GPT-3.5 (which ostensibly powers Bing) and the other “generative AI” tools now loose on the world. These are variously viewed as chronically unreliable artifacts, as examples of a technology that is out of control, or as precursors to so-called artificial general intelligence (AGI) – that is, human-level intelligence – and therefore as an existential threat to humanity.
Accompanying this hysteria is a new gold rush, as venture capitalists and other investors scramble to get in on the action. It seems that all that money is burning holes in some very deep pockets. Fortunately, this has its comic side. It suggests, for example, that chatbots and LLMs have replaced cryptocurrencies and web 3.0 as the next big thing, which in turn confirms that the tech industry collectively has the attention span of a newt.
Strangest of all, however, is that the pandemonium has been caused by what one prominent researcher has called “stochastic parrots” – by which she meant that LLM-powered chatbots are machines that continually predict which word is statistically most likely to follow the previous one. And this is not black magic, but a computational process that is well understood, clearly described by Professor Murray Shanahan and elegantly explained by the computer scientist Stephen Wolfram.
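For readers who want to see the bare bones of the idea, here is a deliberately crude sketch in Python – my own toy illustration, not anything from Shanahan’s or Wolfram’s accounts, and nothing like a real neural LLM. It simply counts which word follows which in a tiny corpus and then parrots the statistically likeliest successor, one word at a time:

```python
# A toy illustration of next-word prediction: count which word follows
# which in a tiny corpus, then repeatedly emit the most frequent successor.
# Real LLMs use neural networks trained on vast corpora; this bigram
# counter only demonstrates the "predict the next word" loop itself.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# successors[w] maps each word that follows w to how often it does so.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(word, length=6):
    """Greedily append the most probable next word, starting from `word`."""
    output = [word]
    for _ in range(length):
        if word not in successors:
            break  # dead end: this word never appeared mid-sentence
        word = successors[word].most_common(1)[0][0]  # likeliest successor
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the cat sat"
```

A real LLM replaces the frequency table with a neural network trained on billions of words (and samples probabilistically rather than greedily), but the loop – predict, append, repeat – is the same.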
How can we make sense of all this madness? A good place to start is to wean people off their incurable desire to interpret machines in an anthropocentric way. Ever since Joseph Weizenbaum’s Eliza, humans interacting with chatbots have seemed to want to humanize the computer. This was absurd with Eliza, which was simply running a script written by its creator, so it is perhaps understandable that humans now interacting with ChatGPT – which can apparently respond intelligently to human input – should fall into the same trap. But it’s still dumb.
Persistently describing LLMs as “AI” doesn’t help either. These machines are certainly artificial, but to regard them as “intelligent” seems to me to require a rather impoverished conception of intelligence. Some observers, however, such as the philosopher Benjamin Bratton and the computer scientist Blaise Agüera y Arcas, are less dismissive. “It is possible,” they concede, “that these kinds of AI are ‘intelligent’ – and even ‘conscious’ in some way – depending on how those terms are defined”, but “none of these terms can be very useful if they are defined in strongly anthropocentric ways”. They argue that we should distinguish sentience from intelligence and consciousness, and that “the real lesson for the philosophy of AI is that reality has outpaced the available language to parse what is already at hand. A more precise vocabulary is essential.”
It is. For the moment, though, we’re stuck with the hysteria. A year is a long time in this industry. Remember that just two years ago the next big things were going to be crypto/web 3.0 and quantum computing. The former has collapsed under the weight of its own absurdity, while the latter, like nuclear fusion, is still on the horizon.
With chatbots and LLMs, the most likely outcome is that they will eventually be seen as a significant augmentation of human capabilities (“spreadsheets on steroids”, as one cynical colleague put it). If that happens, then the main beneficiaries (as in all previous gold rushes) will be the providers of the picks and shovels – in this case, the cloud computing resources needed by LLM technology and owned by the big corporations.
Given that, isn’t it interesting that one thing nobody talks about at the moment is the environmental impact of the colossal amount of computing needed to train and run LLMs? A world dependent on them might be good for business, but it would certainly be bad for the planet. Perhaps that is what Sam Altman, the CEO of OpenAI, the outfit that created ChatGPT, had in mind when he observed that “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
What I’ve been reading
Pain profiles
“Social media is a major cause of the epidemic of mental illness in adolescents” is an impressive survey of the evidence by the psychologist Jonathan Haidt.
Delighting the public
“What the poet, the playboy and the prophet of bubbles can still teach us” is a beautiful essay by Tim Harford on, among other things, the madness of crowds.
Technological royalty
“What Mary Queen of Scots can teach today’s computer security fanatics” is an intriguing post by Rupert Goodwins on The Register.