Have you ever vomited and had diarrhea at the same time? I have, and when it happened, I was listening to a fan-made audiobook version of Harry Potter and the Methods of Rationality (HPMOR), a fan fiction written by Eliezer Yudkowsky.
No, the double-ended body horror was not incited by the fanfic, but the two experiences are inextricable in my mind. I was surprised to discover years later that the 660,000-word fanfic I marathoned while sick has some strange intersections with the ultra-rich technorati, including many of the figures involved in the current OpenAI debacle.
Case in point: in an easter egg spotted by 404 Media (one too minor for anyone else, even me, someone who actually read the thousand-odd-page fanfic, to notice), a Quidditch player mentioned once in the sprawling story is named Emmett Shear. Yes, the same Emmett Shear who co-founded Twitch and was just named interim CEO of OpenAI, arguably the most influential company of the 2020s. Shear was a fan of Yudkowsky's work and followed the serialized story as it was published online. So, as a birthday gift, Yudkowsky gave him a cameo.
Shear is a long-time fan of Yudkowsky's writings, as are many of the AI industry's key players. But this Harry Potter fanfic remains Yudkowsky's most popular work.
HPMOR is an alternate-universe rewrite of the Harry Potter series, starting from the premise that Harry's aunt Petunia married an Oxford biochemistry professor instead of the abusive dolt Vernon Dursley. So Harry grows up as a know-it-all kid obsessed with rationalist thinking, an ideology that prizes experimental, scientific thinking to solve problems, eschewing emotion, religion, and other imprecise measures. We're not three pages into the story when Harry cites the Feynman Lectures on Physics to try to settle a disagreement between his adoptive parents over whether magic is real. If you thought the canonical Harry Potter could be a little frustrating at times (why does he never ask Dumbledore the most obvious questions?), brace yourself for this Harry Potter, who could go toe-to-toe with the eponymous Young Sheldon.
It makes sense that Yudkowsky moves in the same circles as many of the most influential people in AI today; he is a long-time AI researcher himself. In a 2011 New Yorker article about Silicon Valley techno-libertarians, George Packer reports from a dinner at the home of billionaire venture capitalist Peter Thiel, who would later co-found and invest in OpenAI. While "blondes in black" serve wine to the men, Packer dines with PayPal co-founders like David Sacks and Luke Nosek. Also at the party is Patri Friedman, a former Google engineer who got funding from Thiel to start a nonprofit that aims to build floating, anarchist sea civilizations inspired by the Burning Man festival (fifteen years later, the organization doesn't seem to have made much progress). And then there's Yudkowsky.
To further connect the parties involved, here's a ten-month-old selfie of now-ousted OpenAI CEO Sam Altman, Grimes, and Yudkowsky.
Yudkowsky is not a household name like Altman or Elon Musk. But he tends to show up repeatedly in the stories behind companies like OpenAI, or even behind the great romance that brought us kids named X Æ A-Xii, Exa Dark Sideræl, and Techno Mechanicus. No, really: Musk once wanted to tweet a joke about "Roko's Basilisk," a thought experiment about artificial intelligence that originated on LessWrong, Yudkowsky's blog and community forum. But it turned out that Grimes had already made the same joke about a "rococo basilisk" in the music video for her song "Flesh Without Blood."
HPMOR is quite literally a recruiting tool for the rationalist movement, which finds its virtual home on Yudkowsky's LessWrong. Through an admittedly entertaining story, Yudkowsky uses the familiar world of Harry Potter to illustrate rationalist ideology, showing how Harry fights his cognitive biases to become a master problem-solver. In a final showdown between Harry and Professor Quirrell (his mentor in rationalism who turns out to be evil), Yudkowsky broke the fourth wall and gave his readers a "final exam." As a community, readers had to submit rationalist theories explaining how Harry could get out of a fatal predicament. Happily, for the sake of happy endings, the community passed.
But the moral of HPMOR is not just to be a better rationalist, or to be as "less wrong" as possible.
“For me, a lot of HPMOR is about how rationality can make you incredibly effective, but incredibly effective can still be incredibly evil,” my only other friend who has read HPMOR told me. “I feel like the point of HPMOR is that rationality is irrelevant at the end of the day if your alignment is with evil.”
But of course, we can't all agree on a definition of good versus evil. This brings us back to the upheaval at OpenAI, a company that is trying to build an AI that is smarter than humans. OpenAI wants to align this artificial general intelligence (AGI) with human values (such as the human value of not dying in an AI-induced apocalypse), and it so happens that this "alignment research" is Yudkowsky's specialty.
In March, thousands of leading AI figures signed an open letter calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Signatories included engineers from Meta and Google, founders of Skype, Getty Images and Pinterest, Stability AI founder Emad Mostaque, Steve Wozniak, and even Elon Musk, a co-founder of OpenAI who resigned from its board in 2018. But Yudkowsky did not sign the letter; instead, he penned an op-ed in TIME magazine arguing that a six-month pause is not radical enough.
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter," Yudkowsky wrote. "There's no proposed plan for how we could do any such thing and survive. OpenAI's openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all."
While Yudkowsky argues the doomer side of the AI debate, the OpenAI leadership upheaval has highlighted the wide range of beliefs about how to navigate a technology that is arguably an existential threat.
In his capacity as interim CEO of OpenAI, Shear (now one of the most powerful people in the world, and decidedly not a Quidditch player in a fanfic) is posting memes about the different factions in the AI debate.
There are the techno-optimists, who support the growth of technology at all costs, because they believe any problems caused by this "growth at all costs" mentality will be solved by technology itself. Then there are the effective accelerationists (e/acc), who seem to be a kind of techno-optimist, but with more language about how growth at all costs is the only way forward because the second law of thermodynamics says so. The safetyists (or "decels") support the growth of technology, but only in a regulated, safe way (meanwhile, in his "Techno-Optimist Manifesto," venture capitalist Marc Andreessen decries "trust and safety" and "tech ethics" as his enemies). And then there are the doomers, who think that when AI outsmarts us, it will kill us all.
Yudkowsky is a leader among the doomers, and he is also someone who has spent the last few decades running in the same circles as what seems like half of OpenAI's board of directors. One popular theory about Altman's ouster is that the board wanted to appoint someone who aligned more closely with its "decel" values. Enter Shear, who we know is inspired by Yudkowsky and also considers himself a doomer-slash-safetyist.
We still don't know what's going on at OpenAI, and the story seems to change about once every ten seconds. For now, tech circles on social media continue to fight over decel versus e/acc ideology, using the backdrop of the OpenAI chaos to make their arguments. And in the midst of it all, I can't help but find it fascinating that, if you squint, all of this traces back to some really tedious Harry Potter fanfic.