Economist Bryan Caplan was sure that the artificial intelligence built into ChatGPT wasn't as smart as people thought. The question: could the AI pass his undergraduate class's 2022 midterm exam?
Caplan, of George Mason University in Virginia, seemed well placed to judge. He has made a name for himself placing bets on a variety of newsworthy topics, from Donald Trump's 2016 election chances to future US college attendance rates, and he almost always wins, often by betting against predictions he considers hyperbolic.
That was the case with the wild claims about ChatGPT, the AI chatbot that has become a global phenomenon. But this time it looks as though Caplan, a libertarian professor whose arguments range from calls for open borders to critiques of feminist thought, will lose his bet.
After the original ChatGPT got a D on his test, he bet that "no AI could get A's on 5 out of 6 of my exams by January 2029." But, "to my surprise and no small dismay," he wrote on his blog, the new version of the system, GPT-4, earned an A just a few months later, with a score of 73/100, which, had it been a student, would have been the fourth-highest score in the class. Given the impressive rate of improvement, Caplan says his chances of winning the bet now look slim.
So is the hype justified this time around? The Guardian spoke to Caplan about what the future of AI could look like and how he became an avid gambler.
The conversation has been edited and condensed for clarity.
You bet that no AI could get A's on five out of six of your exams by January 2029, and now one has. How much did you bet?
It was for 500 bucks. At this point, I think it's a reasonable forecast that I'm going to lose the bet. I'm just hoping to get lucky.
So what do you think this means for the future of AI? Should we be excited or worried or both?
I would say excited, overall. All progress is bad for someone. Vaccines are bad for funeral homes. The general rule is that anything that increases human production is good for the human standard of living. Some people lose, but if you insist on only allowing progress that benefits everyone, then there could be no progress at all.
I have another bet on AI, with Eliezer Yudkowsky: he's the most prominent, and probably the most extreme, AI doomsayer, in the sense that he thinks it's going to work and then it's going to kill us all. So I have a bet with him on whether AI will have wiped us off the face of the Earth by January 1, 2030. And if you're wondering how you can possibly make a bet like that, when one side is betting that everyone will be wiped out, the answer is that I paid in advance. I just gave him the money upfront, and then if the world doesn't end, he owes me.
How could we theoretically be annihilated?
The argument [more broadly], which I consider strange, is that once the AI becomes smart enough to increase its own intelligence, it will go to infinite intelligence in an instant, and that will be the end of us. [That view is endorsed by] very intelligent, very eloquent people. They don't sound crazy, but I think they are.
They've argued themselves into a kind of corner. You start from the definition: imagine there is an infinitely intelligent AI. How could we stop it from doing whatever it wants? Well, once you put it that way, we couldn't. But why should you think that such a thing will ever exist? Nothing else has ever been infinite. Why would there ever be anything infinite?
What goes through your mind when you're deciding whether something is worth betting on?
The kind of bets that pique my interest are those where someone seems to be making exaggerated, hyperbolic claims, professing far more confidence about the future than I think they can reasonably have. So far, that has served me well. I've had 23 bets come due; I have won all 23.
I'd had many other cases of people telling me how great the AI was, and then I checked it out for myself and they were clearly exaggerating a lot. So I assumed this was more of the same hype. And sometimes you're wrong. Sometimes someone says something that sounds ridiculously exaggerated, and it turns out to be just as they say.
In other words, you tend to reject the most dramatic possible outcomes.
I almost always bet against drama. Saying exciting things appeals to the human psyche, and my view is that, in reality, the world is generally not that exciting. The world usually remains basically the way it was. "The best predictor of the future is the past" is an adage that strikes me as so wise it's undeniable. If someone doesn't take it seriously, then I have trouble taking them seriously.
So if you lose the AI bet, is that an indicator that the hyperbole is justified?
I think it shows, in this particular case, that GPT-4 progressed much faster than I expected. I think that means the economic effects will be much larger than I expected. Since I was expecting very little effect, it could be 10 times bigger than I thought it would be and still not be huge. But on this subject I have definitely rethought my point of view.
The only story I could think of that would redeem my original skepticism would be if they had simply added my blog post to the training data, and it was just spitting my own answers back at me. But here's the thing: I actually have a new post where I gave GPT-4 a totally new test that I had never discussed on the internet, and it got the highest score, so I think it's genuine.
And what happens next?
There is a general rule of thumb that even when a technology seems incredible, it usually takes much longer than expected to have big economic effects.
The first telephones came in 1870; it took about 80 years before that technology gave us reliable phone calls to Europe. Electricity took several decades to become widely adopted, and the internet also seemed to take longer than it should have.
I remember years when the backspace key didn't work in email. I don't know how old you are, but I remember when you couldn't backspace in an email. And it went on like that for years. You would think it would be fixed in three minutes. But whenever humans are involved in adopting a technology, there are all kinds of problems and drawbacks. So as to whether GPT will actually transform the economy within a few years, I'd still consider that pretty amazing. It's almost unprecedented.