On today's episode of Decoder, let's try to find the “digital god.” We've been building toward this one for a while, so let's get into it. Can we build an artificial intelligence so powerful that it changes the world and answers all our questions? The AI industry has decided the answer is yes.
In September, OpenAI CEO Sam Altman published a blog post claiming that we will have superintelligent AI in “a few thousand days.” And earlier this month, Dario Amodei, CEO of OpenAI competitor Anthropic, published a 14,000-word post outlining exactly what he thinks such a system will be capable of when it arrives, which he says could be as early as 2026.
What's fascinating is that the visions laid out in the two posts are remarkably similar: both promise spectacularly powerful superintelligent AI that will bring massive improvements to work, to science and healthcare, and even to democracy and prosperity. Digital god, baby.
But while the visions are similar, the companies are in many ways openly opposed: Anthropic's origin story is one of defection from OpenAI. Dario and a group of fellow researchers left OpenAI in 2021 after becoming concerned about its increasingly commercial direction and its approach to safety, and founded Anthropic to be a safer, slower AI company. Until recently, the emphasis really was on safety; just last year, a prominent New York Times profile of the company called it the “white-hot center of AI doomerism.”
But the launch of ChatGPT and the generative AI boom that followed kicked off a colossal technological arms race, and now Anthropic is as much in the game as anyone. It has received billions in funding, primarily from Amazon, and created Claude, a chatbot and language model that rivals OpenAI's GPT-4. Now, Dario is writing long blog posts about spreading democracy with AI.
So what's going on here? Why is the head of Anthropic suddenly sounding so optimistic about AI, when his company was supposed to be the safer, slower alternative to an OpenAI that pushes ahead at all costs? Is this just more AI hype to woo investors? And if AGI really is around the corner, how can we even measure what it means for it to be safe?
To break it all down, I brought on Verge senior AI reporter Kylie Robison to discuss what these posts mean, what's happening in the industry, and whether we can trust these AI leaders to tell us what they really think.
If you'd like to read more about some of the news and topics we discuss in this episode, check out the links below:
- Machines of Loving Grace | Dario Amodei
- The Intelligence Age | Sam Altman
- Anthropic's CEO believes AI will lead to a utopia | The Verge
- AI manifestos flood the tech zone | Axios
- OpenAI just raised $6.6 billion to build ever-larger AI models | The Verge
- OpenAI was a research lab; now it's just another tech company | The Verge
- Anthropic's latest AI update can use a computer on its own | The Verge
- Agents are the future AI companies promise, and desperately need | The Verge
- California governor vetoes major AI safety bill | The Verge
- Inside the white-hot center of AI doomerism | The New York Times
- The close partnership between Microsoft and OpenAI shows signs of fraying | The New York Times
- The $14 billion question dividing OpenAI and Microsoft | WSJ
- Anthropic in talks to raise funding at a $40 billion valuation | The Information
Decoder with Nilay Patel /
A podcast from The Verge about big ideas and other problems.