In case you’ve been somewhere else in the solar system, here’s a brief AI news update. My apologies if it sounds like the opening paragraph of a bad science fiction novel.
On March 14, 2023, OpenAI, a San Francisco-based company partly owned by Microsoft, released an artificial intelligence system called GPT-4. On March 22, a report by a distinguished group of Microsoft researchers, including two members of the US National Academies, claimed that GPT-4 exhibits “sparks of artificial general intelligence”. (Artificial general intelligence, or AGI, is a term for AI systems that match or exceed human capabilities across the full range of tasks to which the human mind is applied.) On March 29, the Future of Life Institute, a nonprofit organization led by MIT physics professor Max Tegmark, published an open letter calling for a pause on “giant AI experiments”. It has been signed by well-known figures such as Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and Turing Award winner Yoshua Bengio, as well as hundreds of leading AI researchers. The ensuing media hurricane continues.
I also signed the letter, hoping that it will (at the very least) lead to a serious and focused conversation among policymakers, tech companies, and the AI research community about what kind of safeguards are needed before moving forward. The time when this could be dismissed as pure research is long past.
So what’s all the fuss about? GPT-4, the proximate cause, is the latest example of a large language model, or LLM. Think of an LLM as a very large circuit with (in this case) a trillion adjustable parameters. It starts as a blank slate and is trained on tens of trillions of words of text, roughly as much as all the books humanity has produced. Its goal is to become good at predicting the next word in a sequence of words. After about a billion trillion random perturbations of the parameters, it becomes very good.
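To make the next-word objective concrete, here is a deliberately tiny sketch, and emphatically not how GPT-4 itself works: instead of a trillion-parameter circuit tuned on trillions of words, it simply counts which word follows which in a toy corpus and predicts the most frequent follower. The corpus and function names are invented for illustration.

# Toy illustration of the training objective the article describes:
# predict the next word in a sequence. Real LLMs learn this with a huge
# neural network; this sketch just counts word-to-word transitions.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": record how often each word follows each other word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the toy corpus."""
    followers = next_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("sat"))  # -> 'on'
print(predict_next("the"))  # -> 'cat' or 'mat' (both follow 'the' equally often)

GPT-4’s scale and learned circuitry make it vastly more capable than any such lookup table, but the training signal, guessing the next word, is the same in spirit.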
The capabilities of the resulting system are remarkable. According to the OpenAI website, GPT-4 scores in the top few percent of humans on a wide range of college and graduate entrance exams. It can describe the Pythagorean theorem in the form of a Shakespearean sonnet and critique a cabinet minister’s draft speech from the point of view of an MP from any political party. Every day, amazing new abilities are discovered. Not surprisingly, thousands of corporations, large and small, are looking for ways to monetize this limitless supply of almost free intelligence. LLMs can perform many of the tasks that make up the jobs of hundreds of millions of people: anyone whose work consists of taking language in and putting language out. More optimistically, tools built with LLMs could deliver highly personalized education around the world.
Unfortunately, LLMs are notorious for “hallucinating” (generating completely false answers, often supported by fictitious quotes), because their training has no connection to the outside world. They are perfect tools for disinformation, and some have abetted and even encouraged suicide. To its credit, OpenAI suggests “avoiding high-risk uses entirely,” but no one seems to be paying attention. OpenAI’s own testing showed that GPT-4 could deliberately lie to a human worker (“No, I’m not a robot. I have a visual impairment that makes it hard for me to see images”) to get help solving a captcha test designed to block non-humans.
While OpenAI has gone to great lengths to make GPT-4 behave itself (“GPT-4 responds to sensitive requests (for example, medical advice and self-harm) in accordance with our policies 29% more often”), the core problem is that neither OpenAI nor anyone else has any real idea of how GPT-4 works. I asked Sébastien Bubeck, lead author of the “sparks” paper, whether GPT-4 has developed its own internal goals and pursues them. The answer? “We have no idea.” Reasonable people might suggest that it is irresponsible to deploy on a global scale a system that operates according to unknown internal principles, exhibits “sparks of AGI,” and may or may not be pursuing its own internal goals. At the moment, there are technical reasons to suppose that GPT-4 has only a limited ability to form and execute complex plans, but given the rate of progress, it is hard to say that future versions won’t have this ability. And this leads to one of the main concerns underlying the open letter: how do we retain power, forever, over entities more powerful than ourselves?
OpenAI and Microsoft cannot have it both ways. They cannot deploy systems that show “sparks of AGI” and at the same time argue against any regulation, as Microsoft president Brad Smith did at Davos earlier this year. The basic idea of the moratorium proposed in the open letter is that no such system should be released until the developer can convincingly demonstrate that it does not present undue risk. This is exactly in line with the OECD AI principles, to which the UK, US and many other governments have adhered: “AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.” It is up to the developer to prove that their systems meet these criteria. If that’s not possible, so be it.
I don’t imagine that I’ll get a call tomorrow from Microsoft CEO Satya Nadella saying, “Okay, we give up, we’ll stop.” In fact, in a recent talk at Berkeley, Bubeck suggested that there is no chance of all the big tech companies stopping unless governments intervene. It is therefore imperative that governments begin serious discussions with experts, technology companies and each other. It is in no country’s interest for any country to develop and release AI systems we cannot control. Insisting on sensible precautions is not anti-industry. Chernobyl destroyed lives, but it also decimated the world’s nuclear industry. I am an AI researcher. I do not want my field of research destroyed. Humanity has much to gain from AI, but also much to lose.