This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.
Kevin, there are now more Cybertrucks on the road.
Yes.
And that means that Cybertrucks are starting to appear on social media.
Yes.
And one of the ways that I have seen these Cybertrucks presented is being, like, stuck in the mud.
Have you seen the stuck-in-the-mud posts?
Is this the Christmas tree one?
There was the Christmas tree one, where it’s trying to tow a tree, and it’s gotten itself into a terrible spot of trouble. And I referred to these things last week as a panic room that can drive. And over on Threads, my friend James messaged me. He said, Casey, there’s actually a better name now for the Cybertruck. Do you know what they’re calling it?
What?
They’re calling it a sport futility vehicle. (KEVIN LAUGHS)
It’s an SFV, Kevin. So if you —
That’s very good.
If you want to accomplish a task but don’t actually want to accomplish the task, Cybertruck could be for you.
If you want to go pick up a Christmas tree, and you’re more worried about being shot at on your way to get the Christmas tree than actually being able to haul the Christmas tree out of the Christmas tree place, get a Cybertruck.
Sport futility vehicle.
It’s really good.
So good.
It’s really good.
So good. (MUSIC PLAYING)
I’m Kevin Roose, a tech columnist for “The New York Times.”
I’m Casey Newton for “Platformer.”
And this is “Hard Fork.”
This week, it’s an epic win over Google in an important antitrust case. We’ll tell you what it means for the digital economy. Then, Kevin investigates Silicon Valley’s hottest new subculture — Effective Accelerationism. And finally, Cloudflare CEO Matthew Prince drops by to talk about how the internet changed in 2023 and what’s coming in 2024.
(MUSIC PLAYING)
So this week, one of the biggest stories in the tech world is that Epic Games, the maker of Fortnite, just won a big lawsuit against Google.
It was an epic win.
Yes, an epic win. And this lawsuit was over the Google Play Store and whether it stifled competition and maintained a monopoly. A jury this week decided that it did. This is a big deal.
It is a big deal, because it all comes down to, how do people make money on the internet? If you are an app developer, if you have a business, these days, you probably need to have an app, and there are only a couple of app stores. And so what rules can those app stores set for you? How much of your money can they take? That is what was at stake here in this trial.
Yeah, so this is sort of a fascinating story. Epic Games is the maker of Fortnite and other popular games. And they’re run by this guy, Tim Sweeney, who has sort of made it his mission. He’s sort of on this one-man quest to break up these app store monopolies.
So for years now, he has kind of been laying the groundwork for this legal assault on Apple and Google, which run the two biggest app stores. Epic also brought a case against Apple over its treatment of apps in the App Store. Epic mostly lost that case.
A judge decided that Apple did not have an illegal monopoly in the app world. But a jury found that Google, in this case, actually was operating illegally. So let’s talk about this case, because you wrote about this this week in your newsletter, and I’ve been following it a little bit, although admittedly not as much as you have. So just — can you just remind us of what the claims were here and what Google and Epic were trying to prove?
Yeah, it’s really tricky. And in order for Epic to win this, the jury had to agree to a multi-step argument for exactly why what Google did was illegal. And what it came down to was this idea that for many years, Google positioned Android as the open alternative to Apple, right? We all know Apple has very stringent rules about what you can do on their devices. But Google said, hey, Android is open. You can take it. You can fork it. You can Hard Fork it, if you want to. Live and let live. Let 1,000 Androids bloom.
But at the same time, as the years went on, they went from this position of being very laissez-faire to saying, well, actually, if you want to have Gmail on your phone, if you want to have YouTube on your phone, if you want to take advantage of Google services, then you’re going to have our App Store on your device, and you’re going to need to use our billing system. And it tied its billing system together with its App Store, and it said, you have to use these two things. And of course, people didn’t like this. Because they said, Google, we want to live and let live, like you told us that we could. We want to have our own app store. We want to run our own billing system.
And so that was the spark of this lawsuit — was Tim Sweeney saying, I’m going to put my money where my mouth is, and I’m going to introduce my own billing system into Fortnite on Android. And Google said, nope, and it pulled it out of the Google Play Store, and that is where this lawsuit began.
Right. So Tim Sweeney and Epic Games — they sort of, basically, set up this direct payment tool, which would let Fortnite users pay them instead of going through Google and having to give Google 30 percent of that revenue. Google says, no, you can’t do that. And so they sue.
And basically, my understanding is, the thing that they are asking for is to essentially be allowed to make it so that if you are in Fortnite and you want to buy, let’s say, a skin for your Fortnite character, they want you to be able to buy that through their app store, their payment processor. They want to keep the money from that, rather than doing what they currently have to do, which is to give a big chunk of that to Google.
Right, potentially as much as 30 percent. Google generally takes a 15 percent fee for app subscriptions and a 30 percent fee for purchases made within apps. It also says that the vast majority of developers qualify for a fee of 15 percent or lower and that it is only the big guys like Epic that have to pay that 30 percent. But a lot of people are still paying 30 percent, so we’re going to use 30 percent throughout this conversation when we talk about what folks are paying.
Right.
Now, this does get tricky. Because maybe you’re an Android phone user and you’re saying, but Casey, I can already put another app store on my phone, because I can sideload it. And that’s right, and that was one of the ways that Google attempted to get out of this lawsuit — was by saying, look, if you really, really want to bring in another app store, you can do it. But the jury found that that actually was not enough and that Google had put enough restrictions on these developers, that it did constitute an illegal monopoly.
Yeah, that was one of the most interesting things about this case for me — was sort of the insight that it has given us into how companies like Google operate. There was this thing that came up during the trial, called Project Hug. So Project Hug was this codename inside Google for this initiative that started in response to Epic sort of coming up with all these ways to bypass the Play Store.
Google got worried that other game developers would try the same thing, that they would say, well, why are we giving 30 percent of our revenue to you? We don’t want to do that either.
And real quick — one of my favorite details about this was that Google looked into the possibility that other developers would follow Epic’s lead on this — essentially, if they were allowed to do what Epic did, how many of them would follow. And they estimated that up to 100 percent of top developers would do this. Because why would you give Google 30 percent if you didn’t have to?
Totally. So because Google is worried about losing all this revenue from all these mobile game developers, it did what it called Project Hug, which is to basically go around to all of the top mobile gaming developers and essentially pay them off, right? Like, give them some sort of deal to launch in the Google Play Store to basically incentivize them to want to stay within Google’s garden and not go out and build their own thing.
Yeah. The basic idea was, OK, to prevent this full-scale revolt against the Play Store, we’re going to run around. We’re going to cut a bunch of sweetheart deals. It was just their way of trying to buy off everyone who is about to run screaming out the door.
Yeah.
Yeah.
So what was Google’s defense to all of this? Why did they say that this kind of thing was legal?
It really came down to this. Their argument was, we are not a monopoly. We are in a duopoly. We compete against Apple, and so we can essentially do what we want, because Apple exists.
Right.
Yeah.
And the jury did not seem to buy that.
They did not. And this is, interestingly, one of the reasons why the outcome was different here than in the Apple case. Because in the Apple case, which was decided by a judge rather than a jury, the judge found that Apple and Google were part of the same market. And if I know one thing about antitrust, and that’s about as much as I know about antitrust, it’s that it all comes down to how you define the market, right?
Because if I want to come in and I say, hey, you have too much control over this market, the first line of defense is always to say, well, the market is actually huge, right? Like, if you were to say, you know, well, Amazon clearly has a monopoly over its sellers, Amazon would come along, and they would say, but look at all of the other e-commerce companies that exist, right? And this is how they wind up getting out of antitrust issues. This is what Apple was able to do in its own antitrust trial. They were able to say, like, look, we’re in this very competitive market. We don’t have a monopoly. But in the Google case, this argument didn’t work. The jury was convinced that the market could be limited to just Android. And when you do that, it’s pretty clear who has control over Android.
Totally. So what happens now? I mean, is it — now that this case has been won by Epic Games, does this mean that suddenly, Google is not allowed to charge a percentage of revenue to app developers? Are app developers allowed to make their own app stores? What are the ramifications of this?
The answer is we don’t know yet. So in January, both sides are going to come back, and they’re going to write up these sort of post-trial briefs where they begin to talk about what remedies they think might be appropriate. Google has told me that, yes, they are planning to appeal this case, so this will drag on for some time.
But at the same time, in February or so, we might hear from the judge about what he thinks the proposed remedies are. The most extreme case is that, yes, he would come along and say something like, Google, you’re not allowed to charge a fee on third-party billing, something like that. My guess is that he will not do something that extreme. But you know, if the verdict is upheld, life will probably become at least a little bit easier for Android developers.
Right. So Epic — they’ve won this case, but we still don’t know what they’re going to get out of it. And obviously, this will probably get appealed.
So I asked Google what it made of all this. It shared with me a comment from Wilson White, its Vice President of Government Affairs and Public Policy. And Wilson said, quote, “The trial made clear that we compete fiercely with Apple and its app store, as well as app stores on Android devices and gaming consoles. We will continue to defend the Android business model and remain deeply committed to our users, partners, and the broader Android ecosystem.”
So this is obviously not the result that Google wanted. They wanted to win this one. But how bad is this for them? Like, I don’t really have a sense of how much of Google’s money is earned from the Google Play Store versus its search ads or something like that. So how much do they stand to lose from this decision?
So I mean, I think the best way to get a sense of how bad this is for Google is to look at the stock market, which — after this was announced, Google stock declined less than 1 percent, OK? The Google Play Store is big by the standards of most businesses. Epic’s expert estimated during the trial that Google earned $12 billion in operating profit from the Play Store in 2021.
And its profit margin on that, by the way, was 71 percent. So this is just sort of a pure profit machine. If that $12 billion were to go away or to be cut in half, well, now, Google has, what, a $4 or $6 billion hole it has to fill somewhere.
But the nice thing about being Google, Kevin, is that you own 4,600 different businesses. You have monopoly control over the web. And you’ll probably be able to scrounge that up in your couch cushions or just throw another ad onto mobile search. And there. There’s your $4 to $6 billion and everything’s fine.
And I just want to say, this is one of the reasons why I find Google’s behavior in this trial so exasperating. There are multiple reasons. But one of them is just, they don’t need this money.
This is a company that earned $19.7 billion in profit in the last quarter alone. And they are going to nickel and dime these developers to death. And when you ask them how they justify it, all they really say is, like, Casey, these are industry standards. 30 percent is the industry standard. Right?
Like, these are obviously just arbitrary numbers. If Google is making a 71 percent profit margin, that tells you that they’re not reinvesting most of this back into the Google Play Store. This is just a very rich company that wants to get even richer, and I’m not here for it.
Generally, when companies like Google and Apple are asked, why do you charge so much money to developers in your app stores, they’ll say, well, we invented the app store. This is our platform. We spend a lot of money trying to keep it safe and make sure that people aren’t submitting apps filled with malware or that will scam them in some way.
And I can see the rationale for that, up to a certain point. Like, it doesn’t cost zero to maintain a big app store, but it also doesn’t cost $12 billion a year either.
Yeah, that’s right. And look, I mean, Google absolutely should be able to collect something from these developers. It has invested many billions of dollars into Android. It should be able to recoup that investment in some way.
It’s just also clear that 30 percent is an arbitrary number. And given the size of some of the businesses on its platform, Epic included, I just don’t know how you justify taking hundreds of millions of dollars from these folks over time.
Yeah. Now, Casey, you’re a gamer. How is the Fortnite community specifically reacting to this news? Are they flossing in the streets?
They’re flossing in the streets. They’re doing every emote you can think of. They’re dabbing again, Kevin.
They are — they’re wearing their best skins, and, yeah, it’s a real party on that bus with a parachute on it.
(LAUGHS): I guess if we step back, do you think that we’re starting to see these kind of massive app stores crumble? Do you think we’re starting to see the beginning of the end of the big app store monopoly that just has a tollbooth on it, that takes 30 percent of whatever comes in and out?
I do. I believe that we are beginning to see the end of 30 percent. In fact, there was a story on the day that we’re recording that Europe is set to take a similar action against Apple after Spotify complained about Apple’s rules, under which Spotify has to give a good chunk of money to Apple for all of these subscriptions that flow through that app.
It’s not allowed to point people to its website where it can just get them to sign up there without having to pay Apple. And according to Bloomberg, Europe is about to crack down on that. So look, Apple and Google are going to be dragged into this new world kicking and screaming. They’re going to fight for every single cent, because they have no incentive not to.
But little by little, this world is starting to crumble. And I just hope that more of this money starts to flow back into the small and medium-sized businesses that want to build on these app stores. I think it’s an interesting question. If, instead of having to give 15 percent to 30 percent of all of your revenue to these two companies that don’t need the money, could you just keep that for yourself? Would we maybe have a more vibrant internet? I bet we would.
Well, it seems like this case, and these cases by Epic Games, have been sort of framed as, like, the underdogs taking on the evil empire, right? Like, Epic Games and the small developers of the world sort of taking on these app stores. But it’s also true that these are not plucky, small developers.
Epic Games is a huge — I mean, they make one of the most popular games in the world. And so they also have leverage. When they want to send people to their own app store or to pay through their own billing system, they can do that. So I wonder if small developers are actually going to benefit from this, or if it’s mostly going to be these medium- to large-sized developers.
I mean, there’s no doubt that Epic would benefit hugely from this. That’s the reason that they undertook this whole thing. But I think it’s worth saying that they probably could have gotten a sweetheart deal, too, if they wanted one, right?
Like, they did not have to choose this fight. Anytime you take on a big legal case like this, it is its own distraction. Epic’s business is like — it’s doing OK, but Fortnite is pretty mature. They have not really pulled another rabbit out of their hat in a long time. So I’m sure that they would love to have those extra millions or hundreds of millions of dollars to be able to use for R&D to come up with something new.
Yeah. Another thing that stuck out to me about this case is that one of the things that the judge and Epic Games took issue with was Google’s habit of having its employees hide their chat logs. Basically, in Google Chats, you can set it to auto-delete after 24 hours. And there were a number of examples cited in the trial where executives or employees at Google would be having some discussion about antitrust, it would get a little spicy, and then someone would say, like, hey, everyone, the chat history is turned on, and then the transcript would go dark after that, because presumably, they turned on the auto-disappearing mode. That did not go over well with the judge.
No, it did not. This judge, James Donato, called Google’s behavior, quote, “the most serious and disturbing evidence I have ever seen in my decade on the bench with respect to a party intentionally suppressing relevant evidence.” He also called it, quote, “a frontal assault on the fair administration of justice,” which he has promised to investigate. So one more thing to add about this case is that Epic was able to prove its case while still missing, probably, most of the relevant evidence, because Google had destroyed it.
Right.
Now, what Google would say is that a lot of the material that we’re talking about that was deleted was deleted because in Google Chat, where these conversations were taking place, by default, the conversations auto-delete after 24 hours, and I guess executives are sort of changing those defaults now so this doesn’t happen anymore. But, like, come on.
Oh, whoopsie. I accidentally auto-deleted my incriminating antitrust conversation. Hate when that happens.
Yeah. So the whole thing is — it’s giving chicanery. It’s giving antics, hijinks, and it is — I mean, look, when was the last time we had a frontal assault on the fair administration of justice on the show, Kevin?
Yeah, I’m not a lawyer, but I think when a judge says that to you, you’re having a bad day.
Yeah, a bad day. So naughty Google.
Yeah, Google is on the naughty list for the Play Store this year, and Epic Games is on Santa’s nice list.
That’s right. Instead of getting 30 percent of revenues in their stocking this year, they’re getting a lump of coal.
(KEVIN LAUGHS)
When we come back, Kevin insists that I learn something about a person known as Based Beff Jezos.
(MUSIC PLAYING)
So Kevin, this week, you wandered into another wild San Francisco subculture — some might say, a religion. And these people are called the Effective Accelerationists, or e/accs for short. And your story opens with a scene at a party where e/accs are putting up banners that say things like “Accelerate or die.” So I guess my first question is, how worried do I need to be about these people?
(LAUGHS): So I was not actually at this party, but I did hear a lot about it from people who were there.
Once again, a party you were not invited to.
(LAUGHING): I know, I know. Well, I actually was invited to this one, but it was late on a school night, and I thought —
What’s that, like 7 PM?
No, you know, parties — they start at, like, 11:00, and I’m too old for that.
That’s fair.
But so this was a party that was thrown by this subculture that is calling itself e/acc. And this is something that I’ve been tracking for a while — about a year, actually.
Do you have an e/acc tracker?
Yeah, I have an e/acc tracker. And this sort of was born on Twitter. A lot of people who are kind of in the AI world or adjacent to it in some way were sort of getting annoyed around the same time with all of this AI doomerism, or what they saw as AI doomerism, that was coming, a lot of it, out of the Effective Altruism movement, which we’ve talked about on this show. This is a group of — I would call them data-driven do-gooders who like to kind of research how to do philanthropy but in recent years have been very, very concerned about AI safety. And so a lot of the more worried folks in that world have ties to Effective Altruism. So there was this group of people on the internet who were basically like, all these Effective Altruists — they’re kind of taking over the conversation. They’re getting all this attention. They’re raising all these alarms about AI and how it could go rogue and kill us all.
And we don’t believe that, and actually, we feel like that’s a dangerous ideology. And so we’re going to start our own ideology that’s effectively the opposite of EA, and we’re going to call it Effective Accelerationism. And our platform is basically going to be that we think AI and other technologies should just be allowed to go as fast as possible, and that we are sort of heading toward this glorious utopia of AI and superhuman intelligence and that we should just kind of get out of the way and let it happen.
Got it. So at its root, then, e/acc is a reaction to Effective Altruism.
Yes. Actually, there’s a funny line this week. A writer, Zvi Mowshowitz, who covered this, said that basically, e/acc is functionally a Waluigi to Effective Altruism’s Luigi at this point. It’s basically sort of the opposite movement, with a lot of the opposite beliefs.
And when I first encountered this, it was, like, a few dozen people, most of them kind of anonymous accounts or pseudonyms, who would just gather in these kind of late-night Twitter spaces, and they would talk about politics and philosophy and AI. And it didn’t seem, at the time, influential enough or important enough for me to write about. But I would say that started to change over the past few months.
You have people like Marc Andreessen, who — we’ve talked about his sort of techno-optimist manifesto. He has declared himself an e/acc, and he has also cited some of the founders of e/acc as his sort of inspirations for some of his ideology. Garry Tan, the president of Y Combinator, the influential startup incubator here in San Francisco, has also declared that he is part of the e/acc movement.
And you’re just seeing a lot of people change their display names on X or put “e/acc” in their bio somewhere, and they’re throwing these parties. It just — it seems to be gathering momentum in a way that made me feel like, OK, maybe it’s time to write about these guys.
Well, so let’s try to take their core argument seriously for a second here. Do we think that pessimism about AI is getting in the way of progress and stopping a bunch of wonderful things from happening?
Well, there are a couple of ways of answering that. One is, is it limiting the rate of AI progress? And I would say the answer to that is pretty clearly no, right? Companies are racing ahead with this stuff. I mean, you could argue that maybe we would have gotten GPT-4 six months earlier if OpenAI hadn’t had all these Effective Altruists inside it trying to make the systems as safe as possible.
I think they’re not really sort of materially slowing down overall AI progress, because obviously, as we’ve talked about, there’s this huge race going on. I think it’s more the culture and the discussions and the discourse around AI that they object to. They see things like regulators in Washington and Europe being very worried about these catastrophic risks coming from AI.
They see these open letters going around, calling for six-month pauses on AI development. They hear people like us in the media being worried about AI, and they just think all these people are blowing this stuff out of proportion. And so that is sort of the idea that they have risen up to promote — is like, don’t slow this stuff down. Every time you slow this stuff down, you’re just delaying the inevitable.
Right. And when you say it like that, that sounds reasonable. I could see how somebody would think, OK, these doomers are a little out of control. I think we should move faster on some of this stuff.
But at the same time, when you look deeply at this subculture, there are some pretty radical ideas in there, right? I was struck, in your piece, by how many people do not seem like they would be at all bothered if some sort of artificial general intelligence did emerge and actually just overtake human beings, right? Like, talk about that religious aspect of these folks.
Yeah, so it’s complicated, right? Because all these groups have sort of bundles of ideas in them, and not everyone subscribes to everything. So I talked to a bunch of e/accs while I was writing this story, and some of them were sort of like, I just kind of like to go to the parties. I like the vibes. These are, like, fun people to hang out with. They’re more optimistic than the Effective Altruists, who are always talking about doomsday scenarios. Some of them were sort of basically just kind of libertarians who think that capitalism is good and regulation is bad, and in general, the government should stay out of, like, regulating AI. And some of them have these very, sort of, I would say, dark ideas.
The idea of accelerationism itself is actually not a new idea. It has been around for decades, and it was popularized by this philosopher, Nick Land, who basically believed that there were these forces of capitalism and of AI and technology that were going to collide and produce something called the technocapital singularity. And that would be sort of this point where technology just runs the world. It overtakes — we can’t control it anymore.
And that is an idea that some of the e/accs have run with. And so e/accs that I’ve talked to — they kind of actually agree with the Effective Altruists that we could have superhuman AGI very soon. They’re just not worried about it.
Some of them think, well, this is just sort of the natural evolution of things. Like, they have this idea of the successor species, which is that maybe our job as humans is to birth this thing, this form of intelligence that is smarter than us. And if it wipes us out or subjugates us or makes us its slave, like, maybe that’s just kind of the natural order of things, and we shouldn’t be too worried about it. I will say, that’s not something that a lot of e/accs believe, I think. But that is something that the movement’s leaders, who we should talk about, have actually said.
Well, so let’s talk about that. So one of the movement’s leaders is this pseudonymous gentleman who goes by Beff Jezos. And he is one of the people who has said that the goal of AI is to — I believe his quote was, “usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms.” So who is this guy?
Yeah, so e/acc was started last year by this group, this small group of people. They all had these pseudonyms, like BasedBeffJezos, and Bayeslord was another one. And Forbes, earlier this month, revealed the identity of BasedBeffJezos, who is a guy named Guillaume Verdon.
He’s 31. He’s French-Canadian. He used to work at Google X, which is sort of their experimental lab. He made some money on NFTs, strangely enough, and used that to bootstrap his new hardware company. He runs a company called Extropic.
And he’s just kind of like an engineer-philosopher guy who has some ideas. I was sort of going back through some of their old conversations and their old Twitter spaces. And at one point, Guillaume, this guy, starts talking about why he decided to start this movement.
And he basically says that it’s because he and his friends who work in tech are constantly just being told that they’re the bad guys, right? Like, you’re creating this stuff, this technology, this ai that’s going to hurt society in all these ways.
You’re bad. You’re irresponsible. You should slow down. And he was basically like, I wanted to create a movement where the engineers and the builders would be the heroes. And so that’s what he tried to do.
Finally, we could celebrate our engineers and builders in Silicon Valley, instead of the HR departments and vice presidents for business.
Right. And I think there’s a little bit of, like, a persecution complex going on?
A little bit?
Yeah.
But so Based Beff Jezos is this guy, Guillaume Verdon. The rest of the e/acc founders are still under pseudonyms and, I think, prefer to stay that way. I did ask Guillaume for an interview, and he declined, although he did say he’s going on the “Lex Fridman Podcast.” So we’ll be hearing more from him.
So tune into that, and let us know what he says. We’ll be interested to hear.
(LAUGHING): Yeah.
I am really struck by two things. One is, you compared this a minute earlier to libertarianism. And I have to say, listening to you describe it, reading your story, it really does feel, at least for some segment of the e/acc believers, like this just is rebranded libertarianism. I want to say that 100 percent of Marc Andreessen’s interest in e/acc is just that it gives a new coat of paint to an old idea, which is that you should not regulate capitalism, because that reduces the amount of money that you make as a venture capitalist. Does that seem like a fair assessment?
Yeah, I think a lot of it is — if you dig one level below the memes and kind of the social media of it all, a lot of it is very standard libertarian stuff. And I should say, it’s also not a new idea in Silicon Valley. There are all these groups that sort of popped up during the dot-com boom and even earlier.
Like, there were these groups like the Transhumanists and the Extropians. There were sort of all the early internet — kind of the Whole Earth Catalog-era people. And a lot of those people were kind of techno-libertarians. They believed that the internet was this liberating thing and that the governments should stay away from regulating it.
And there was sort of this strain of idealism that basically said, like, technology deserves to be free, and we should not regulate it. And so to me, e/acc is kind of fusing the sort of hardcore libertarian economics of people like Hayek or Milton Friedman, with these kinds of Silicon Valley subcultures, like the Extropians and the Transhumanists, with a healthy dose of just sort of pure rage against the Effective Altruists.
Yeah. The second thing that strikes me, you also mentioned, which is that the e/accs and the EAs really are two sides of the same coin. And what is notable about that, to me, is that you do have these growing and relatively powerful contingents that both do agree that AGI is coming and might be here soon. And that just sort of seems worth saying, right? That for all of the ways that the e/accs might want to make fun of the decels, they do share a lot of core beliefs.
Yeah, so I wasn’t able to interview anyone who defended the most extreme version of the argument, which is like, the AGIs are going to take over and kill us all, and that’s good, or that’s sort of the natural order of things if it happens. What most of them would say is some version of, AI is going to just be an incredibly positive thing for humanity, and the sooner it gets here, the sooner we can cure diseases, the sooner people can live longer, the sooner we can fix all these problems with our society.
And so the people trying to slow AI down are really just preventing all of those good things from emerging quickly. And I hear that, and I think that’s actually something that some Effective Altruists also believe, but they’re also weighing the risks. And I think the e/accs just don’t think the risks are that serious, or at least as serious as people are making them out to be within the EA community.
From the way that you’re describing this, Kevin, it sounds like these folks believe that technology is the one and only solution to all of our problems and that if we just sort of build enough tech, all of our problems will take care of themselves. Is that a fair read, or is there a political and social dimension to their thought, too?
It depends who you ask. I think — I talked to a bunch of e/accs, and I think they would answer these questions in slightly different ways. I think among the most hardcore e/accs and some of the leaders, there is this feeling that technology and capitalism are these inevitable forces, that you can stand in the way or you can get out of the way, but ultimately, they’re too powerful to be permanently resisted.
And so there’s just this idea that there are these currents that are just pulling us in the direction of the technocapital singularity, and that we can delay it, but that ultimately, it’s inevitable. And I do think you’re right that that’s limited in some ways. Because if you just think about something like climate change, that’s something that a lot of AI optimists will say that AI could help us fix. But I don’t think anyone is saying that it’s going to fix itself, right?
We need humans and politicians and governments and companies to come together to solve this. It’s a social and a political problem, not just a technological problem. And so I think that’s one place that a lot of people disagree with e/accs — is just this sort of belief.
And I would say also, this exists, to a certain extent, in the Effective Altruism community, too — is this belief that there is this kind of inexorable march of technology, that sort of our only options are to stand in the way and hold up our hands and say, stop, or get on board.
Yeah. So you also write that there are a number of splinter groups that are forming out of e/acc, including a/acc, bio/acc, and d/acc.
(KEVIN LAUGHS)
How do they differ? And do I really have to remember all of this?
Yes. As with any good religious movement, there are splinter groups. So Grimes, also known as Elon Musk’s ex and a musician who has played — she actually DJ’d the e/acc rave that I wrote about in my story. You know, she has proposed something called a/acc, which stands for Aligned Accelerationism, which is basically, what if we just accelerated but, like, a little more carefully, and made sure that the robots actually want us around?
Let’s just accelerate the good parts.
Yes, exactly. There’s also something called bio/acc, which is sort of like taking Effective Accelerationism to the world of biology, and putting chips in our bodies, and augmenting ourselves so that we can more effectively compete and live in a world with lots of superintelligence in it.
Sure.
And then, there’s d/acc, which is — Vitalik Buterin, who’s the founder of Ethereum, proposed this idea. I think it stands for Defensive Accelerationism or Decentralized Accelerationism. He didn’t really specify which of those it stands for. But basically, it was kind of like a compromise. It was kind of, what if we accelerated the good parts, but also didn’t stop worrying about the potential bad parts?
Got it. OK, so I’m just going to forget about all of those things immediately, but we congratulate everyone who’s coming up — I actually identify as an l/yacc. Have you heard of this?
No.
That’s a fan of Linda Yaccarino, the CEO of X.
(KEVIN LAUGHS) I think she is one of the most interesting CEOs in Silicon Valley.
Yeah.
Yeah.
So please attend our l/yacc rave later this month.
(LAUGHS): All right. So at the end of all of this, Kevin, where do you — where do you shake out on the e/accs? Do you think they have some good ideas that are worth paying attention to, or should we hope that they disappear?
So I’ve said before that I think we should be celebrating progress in technology and other fields more in this country. Like, I think we should have had parades for the people who invented the COVID vaccines. And I think there’s something to the kind of aesthetics of e/acc, where they are sort of taking a conversation that has been very, I would say, dominated by negativity and pessimism and kind of injecting some optimism into that. I think there’s something appealing about that for a lot of people, especially in Silicon Valley. The thing that I’m sort of worried about, and that I’ll be very interested to see how people react to, is this kind of idea that we should celebrate progress, even if that progress hurts people, right? We know that AI is already starting to harm people in vulnerable communities and that the smarter it gets, the more it potentially could cause job losses and things like that.
So I just think it’s going to be a very different conversation when people can actually see the harms from AI in their own lives and communities. And so I do think there’s some kind of natural limit to the number of people who are going to sign up for something like e/acc. I don’t think the most extreme versions of their ideas are very popular at all.
That said, I do think it’s an interesting social phenomenon. And I think we are headed into the era of the AI religion, where you will have just these factions, these sects that are kind of working, operating, functioning as kind of online tribes, people declaring their allegiance to them, you know, prophets sort of rising up within these movements to give everyone directions. I just think we’re headed into a very interesting time of people not just having sort of political identities, but also kind of identities around how they feel about progress and technology and AI.
That makes sense. My feeling about all of this is, I think it’s OK to want to accelerate certain kinds of projects. If you’re working on an AI system that is going to help identify cancer at earlier stages, by all means, go as fast as you possibly can. And maybe we do even need to tweak some regulations so you can go a little bit faster with some of that stuff, right?
But I just want to keep our accelerationism really, really specific. I think a broad-based movement that just says accelerate everything simultaneously is bound to cause really bad harms. And so to the e/accs, I unfortunately do have to say, knock it off.
(LAUGHS): All right, that’s e/acc. When we come back, we talk about how the internet changed in 2023, with Cloudflare CEO Matthew Prince.
(MUSIC PLAYING)
Well, Kevin, it’s been a big year on the internet.
It sure has.
And there have been many trends that have emerged, and there have been people observing those trends and writing about those trends.
Yeah, including us.
Including us.
So today, we’re going to have a conversation about how the internet changed in 2023 and what we can expect to change in 2024. And I wanted to bring in Matthew Prince. Matthew is the CEO and co-founder of Cloudflare. And you may be wondering, what the heck is Cloudflare?
I would say it’s one of these companies that does something that, on the surface, seems incredibly boring, but if it disappeared overnight, the entire internet as we know it would basically collapse. Cloudflare is an online security and data company. They sort of help websites operate. They help data move around the internet. And they also provide security services that help websites protect themselves from hackers and DDoS attacks, things like that.
One of the most important things Cloudflare does is serve as a free security guard for a huge chunk of the web. So a reason that more sites are not just taken down by random attacks is because Cloudflare has stood up and said, we are going to protect these sites.
Yeah, so I’ve known Matthew Prince for a while. He is an unusually thoughtful tech CEO. He’s been around for a while, and he just has — because of his position at Cloudflare, overseeing this vast chunk of the internet, he just has a very expansive view onto how the internet is changing, what is going on, and what we should be aware of and worried about. So today, we’re going to talk to Matthew about how he sees the internet changing and what he thinks the next year will hold.
Matthew Prince, welcome to “Hard Fork.”
Thank you for having me.
So I always say that you guys are like the plumbers and bouncers of the internet. Cloudflare protects, like, a vast chunk of the internet from things like hackers, but also helps to route information around the internet in ways that I only kind of understand. But I know that your company is very important and also has a very broad view of the internet and what’s going on on the internet.
And I want to dive into that, because you all just put out some really interesting research on this. But first, I just think we should define what we’re talking about when we talk about the internet in the year 2023. I think some people think the internet is just web pages, that it doesn’t include walled gardens like TikTok or Instagram. But what do you think the internet is? How would you define the internet in 2023?
I think that anything that you’re doing on your phone, anything you’re doing on your laptop, a lot of things that you’re doing with your smart refrigerator or your smart vacuum cleaner — all of that, behind the scenes, is getting connected to the internet in one way or another. And at Cloudflare, somewhere between 20 percent and 25 percent of the web, but a huge percentage of the internet as well, runs through our pipes. And that gives us the ability to see a lot of what’s going on online and just understand the trends of how 2023 was different than 2022.
So let’s talk about that. So you all just put out this year-in-review report, which is one of my favorite things to look at every year. Because it’s just — it’s stuff that I don’t really think about, like how much did internet traffic grow this year. And I was sort of shocked by this.
Internet traffic grew by 25 percent overall this year. I didn’t know we could spend any more time on the internet, but apparently, that’s true. So how do you all measure that, and what does that tell you about where we are on the internet’s life cycle?
Yeah, so I think the first thing is that while all of us who live in the US — and we’re here in San Francisco — are using the internet just almost continuously — frankly, almost pathologically — it’s amazing that still half of the world’s population isn’t connected and online. And I think the biggest driver of growth over the last year has just been that 4 billion people that weren’t online — some of them got online this year. And that turns out to be one of the biggest ways that you can drive more growth.
So we saw significant growth across India, Africa, a lot of Southeast Asia. And that was driving a lot of what that growth was. How we’re able to see that is, as you are sending a text message on your phone, or as you’re interacting with your smart vacuum cleaner, there’s a good chance that that is actually traffic which is passing through Cloudflare. And we don’t see all the details of that, but we do see enough to be able to measure general trends.
And is it the case then that you’re still seeing growth in the United States as well, places where people are already online? Is the fact that they now have the smart vacuum cleaners and the smart refrigerators — are you seeing more growth there?
Yeah, I think we’re continuing to see more people spend time online. It does not look like we’ve seen a dropoff. Actually, coming out of the pandemic, we definitely did see some services start to decline. You didn’t see as many people on streaming services, and so that’s continued to be kind of the general case.
But if anything, that’s leveled out, and we are seeing that as more things do connect to the internet, that’s just more traffic across the network. And again, even in developed countries that are highly connected, we’re still seeing growth in overall internet usage.
And sort of, when you step away from the specific statistics, we could go through a list of these things. Google is, again, the most popular internet service, with — TikTok is now in fourth place, after Facebook and Apple. Facebook is the number-one website in the social media category, followed by TikTok, Instagram, and X-slash-Twitter.
But I just want to step back from that and ask you, as you look back on 2023 and sort of what’s been happening, not just on the part of the internet that Cloudflare services, but just the entire internet and the entire online ecosystem, what were some of the biggest changes that you think we went through in 2023?
You know, 2023 almost feels like what I would have predicted 2022 was going to be like. You know, I think that the big story out of 2022 was, obviously, the Russian invasion of Ukraine. And we anticipated that in 2022, there would be a massive rise in cyberattacks originating from Russia and Russian-born hackers going after Western allies of Ukraine.
And that then didn’t largely happen. It was actually quite quiet on the sort of cyber front. And it kind of had us scratching our heads, asking, like, why has this been the case? 2023 made up for that. So we saw a dramatic increase, especially after July of this year, in the amount of attacks that were going on online. That even accelerated more with the Hamas attack on —
Attacks, meaning cyberattacks, meaning hacking into websites.
Yeah, hacking into websites, trying to disrupt websites, trying to do various things. And while 2022 was very quiet, 2023, especially the second half of 2023, was extremely busy. And I think what we’re seeing very much is that whatever is happening in the physical world gets reflected very much in the digital world.
And so, almost simultaneous with the Hamas attack on Israel, we saw a substantial increase in cyberattacks. And over time, I think the digital world is really reflecting what it is that we’re seeing. And in a very tumultuous world that we’re seeing today in 2023, I think it’s been a very tumultuous world online as well.
How resilient are the Western allies turning out to be against these sorts of attacks? Have you seen sort of anything really scary and new, or are people mostly just trying the same tactics that they’ve been using for years?
I think that the attacks range. We’ve had what I would call a series of just kind of patriotic Russian kids that are launching sort of just really kind of not very sophisticated but disruptive attacks. And they can cause damage, because they can knock things offline.
But they’re sort of the equivalent of a caveman with a club, in terms of sophistication. On the other side, I think that as there has been more distraction around what is happening in the Middle East, what is happening in Ukraine, we’re seeing that there are attackers out of China. There are attackers out of North Korea.
They’re launching much, much, much more sophisticated attacks, oftentimes out of North Korea, targeting the crypto space, China oftentimes targeting either critical infrastructure in the United States or various places where there’s a lot of intellectual property. And in those cases, those are extremely sophisticated attacks, and even some of the most fortified organizations that are out there have problems with that.
You saw with the attack against Okta that happened this year — again, a lot of sophistication going into that. And so I think that as there is this sort of general noise around what’s going on online in the cyberspace, the more sophisticated attackers are using that as almost cover to be doing much more damage.
The big story of the year, obviously, in the tech world has been AI. And all kinds of predictions out there about how AI could reshape the internet, fill everything up with spammy, AI-generated garbage, help people create new cyber weapons, change the open web in all of these different ways. What do you see AI doing to the internet this year, and what do you think we should look for next year?
Yeah. Again, I think it almost is the big story of this year, but I think it will actually have the big effects next year. I don’t think we’ve seen a huge amount of change. I think there are a bunch of headlines and things to worry about.
There are headlines of parents getting tricked into sending fraudsters money because their daughter is in a Mexican prison, where it’s not even their daughter, or fake news, and sort of the ideas of what people can create, some people manipulating Google’s algorithm on SEO and trying to inflate their own rankings. But I think these are the leading indicators for what is going to become a real challenge next year.
And I think the thing that we’re looking to the most is, regardless of what your politics are, the 2024 election is going to be really a fulcrum where a lot of these things come together. And so that’s a place that we’re spending a lot of time. I think it’s a place where, if I were working for “The New York Times,” I would be trying to say, how can I help tell what is human-generated versus what is machine-generated?
And I think we’re seeing the early indications of what those headlines are. We’re seeing the risk. But we haven’t seen a ton of what has actually been that effect. I’m actually generally pretty positive on how AI will affect cybersecurity.
At some level, Cloudflare has always been an AI company. We would never describe ourselves that way, but the whole theory of the company was, if you can get enough traffic passing through your systems, then you can look at that and analyze it and make predictions on what the next cyberattacks are going to be. And in the same way that, in the last 18 months, it felt like AI systems went from jokes to being really interesting, internally, about 18 months ago was the first time that our systems started to pick up new cyber threats and new attacks that no human had ever identified before.
And that went from something that was really novel at the time to something that’s now happening on a relatively regular basis. And I think that that’s — the good news is that I think that those systems are really good at helping us protect it. And if you look at a lot of the AI companies, they actually are Cloudflare customers, where we’re using our AI systems to protect their AI systems. And —
That’s sweet. It’s like friends looking out for friends, bots just looking out for other bots.
That’s exactly right.
Matthew, you and I have talked about AI a bunch before. And I would describe you as sort of an AI centrist. Like, you’re not one of these people who thinks we’re doomed and we should go into the bunkers and start hiding from the robot apocalypse, but you’re also not, like, a sort of wild-eyed techno-optimist who thinks everything’s going to be OK.
I’m surprised that — again, I think I’m sort of a centrist on a lot of these things, and so it was surprising to get an invitation to be on this. Because I thought the way you get ratings these days is be on one extreme or the other.
But one of the ideas that you’ve talked with me about that I found interesting is this idea that AI could actually be less global than we think, that it could sort of balkanize or splinter the internet into different countries sort of running their own AI systems. So explain what you meant by that, because I think that was a really interesting idea.
I think that if you — today, if you look at AI systems, about 95 percent of the infrastructure that is running AI — so the NVIDIA GPU chips, the systems that are actually cranking out these AI models, actually running inference on these AI models — 95 percent of that is being deployed in the United States today. And I live in the United States, and we’re sitting here in the United States right now.
And if you’re in the United States, you should be like, wow, that’s really cool. We’re leading that innovation. But I think if you look at what came out of the EU this last week in terms of —
The AI Act.
— their AI Act, and you talk to regulators around the rest of the world, what you hear time and time and time again is, we don’t want to make the same mistake with this next technological revolution that we did with cloud, that we did with the internet, that we did with mobile. Like, this time, it’s going to be different.
The mistake being, letting American companies run the whole thing?
Correct. Yeah. And so I think that there’s very much a sense that if this is another sea change in terms of technological movements, that they want to be able to make sure that they are part of that — the data is going to stay local in their own regions, and that they’re going to be able to either take advantage of it or maybe shut it down.
And so I think that this is one of those periods of time where there is a real force to say, we want to — for some really noble and good reasons, but also for some just purely protectionist reasons, we want to be able to control these new systems as they come online. And so someone joked the other day that AI is going to be the first industry that’s regulated before it becomes an industry.
And it has that feel. And I think that that’s actually kind of a somewhat dangerous path to go down. We don’t know what this is going to turn into. This is — pick your best metaphor, but if you’re a baseball fan, we’re at the top of the first inning in terms of AI and what this is going to be.
And so I think there is a rush to be able to control this, in part because there are so many extremes around it. I think that smart regulators will hang back a little bit, see what’s going on, and then let this develop before it goes forward. But I do think that more and more regulators around the world are saying, we want more control of how the internet works, and they’re using AI as a way to try and put the internet effectively back in a box.
I mean, even before AI, we had seen the internet starting to splinter into zones, right? And it seems like over the past decade or so, we’ve gone from having maybe, like, a Western and a non-Western internet, to an American internet, a European internet. India sort of has its own internet, right?
This seems like a trend that is accelerating to me. I wonder, do you just see the continued balkanization of the internet accelerating? And does AI wind up playing a role in that?
You know, if you think of the first 40 years of the internet, traditional sources of power, whether that’s media, religion, education, family, government — like, the internet was a massive disruption to those things. And I think 2016 was this turning point. And depending on where you are in the world, you see it as a turning point for different reasons. In the US, it was the Trump election. In Europe, it’s Brexit. In Asia, it’s a lot of consolidation of Xi’s power and a number of other conflicts that happen in that region. I tend to look at something that’s much, much more mundane, which is that 2016 — July of 2016 was the year that the “Associated Press” said you no longer had to capitalize the “I” in “internet” anymore.
And again, that’s not the cause, but it’s actually an effect. It’s that point in time where we were like, oh, yeah, we just take this for granted. It’s like oxygen. It’s everywhere.
And what also happened at that same time is we started to — the problems of the internet were always there, but we started to, as we took it for granted, start to say, oh, let’s look at all of the downsides. And I think the next 40 years are exactly that.
I think that that’s what we’re in the midst of. And I think that those traditional sources of power are very much trying to put the internet back in a box. And right now — historically, there have been two internets, as you said.
There was the Chinese internet, and China was smart, in a lot of ways, to say, we’re just never going to let this in, recognizing the threat to the systems and traditions and culture that they had. We’re never going to let the internet in. I think the race right now is, is Russia able to recreate the Chinese internet? Is Iran able to recreate the Chinese internet? Is Turkey, is India, is Brazil? And if the answer is yes, then I think we are balkanized.
I think the good news is that while a lot of people are talking about doing that, in Russia, if you want Western media, you can still get access to it. They have not figured out how to rebuild China. It’s very hard, once the horse is out of the barn, to get it back into the barn.
And that’s, I think, the race right now — is the rest of the world going to be able to figure out how to balkanize the internet or not? And I think that’s the struggle of at least the next 35 years.
Fascinating to note that Russia has still not banned YouTube, which, like — you just would assume that at this point in this war, they would have. But they haven’t.
Maybe Vladimir Putin just really likes mukbang videos.
He’s like, don’t take away my YouTube.
What will that Mr. Beast get up to next?
(LAUGHS): Matthew, I want to ask you about content moderation, which is a subject that we talk a lot about on this show. Cloudflare is not a social media company, but you all have had your share of run-ins with content moderation. And I think we should just briefly explain why that is.
Basically — and correct me if I’m wrong — my understanding is that the internet is just swarming with hackers looking for websites to take down all the time. And if you don’t have a service like Cloudflare protecting you, especially if you are an extremist website or something that a lot of people have strong feelings about, they’re just going to be — it’s going to be trivially easy for someone to come in and hack your website, DDoS your website, take it down.
And so basically, if Cloudflare takes away protection from some extremist site, you’re basically taking away their security, so they just get hacked and die. And for a long time, you had this kind of absolutist stance that you would protect any website, no matter what was on it. But we started talking after 2017, when this white nationalist rally in Charlottesville happened.
And in response, Cloudflare decided to ban The Daily Stormer, the neo-Nazi website. And you did that kind of thing a couple more times, once in 2019 after the El Paso mass shooting, when Cloudflare took away security protections from 8chan, and then last year, you all banned Kiwi Farms, which is a site where people were violently harassing and doxing and stalking trans people. So you have been a kind of unwilling content moderator, but you also felt really weird about it.
I think you said something like, I woke up in a bad mood and decided someone shouldn’t be allowed on the internet. And you basically didn’t think that you should have that power. And I bring this all up, because I think we’re now at a really interesting moment with content moderation where, kind of, everyone wants to be a content moderator. Governments want to moderate content. Elon Musk wanted to moderate content so much, he bought Twitter so that he could change the rules.
You couldn’t pay me $44 billion to run Twitter, just to be totally clear. I can’t explain Elon, so.
So, like, where are we with content moderation in 2023? And kind of, do you think we’re — I don’t know — the pendulum is swinging away from that being a solution to problems? Or do you think more and more people are just going to start trying to influence what can be seen online?
Yeah, you know, I — so first of all, while this is something that we end up talking about from time to time, it actually doesn’t end up being a hard issue for us all that often. And I think the reason why is, for the most part, governments are good at regulating these things. And they’re good at taking things which are harmful and making sure that they’re illegal.
Now, the US — it’s hard to overstate how radically libertarian the US view of speech is. And it is not the view of speech around the rest of the world. And we have to operate around the entire world, and so there is content which you can access in the US on our network, that you can’t access in Germany, or you can’t access in Turkey, or you can’t access in Egypt.
And we have to follow those rules. And for the most part, that’s pretty straightforward. I think the challenge is that every once in a while, there’s something which is technically legal, but clearly, extremely harmful.
And you listed three instances of this. We’ve been around for about 12 or 13 years. So sort of the mean time for us seeing these things is, about once every four years on average, something really bad kind of crosses into that zone.
And at some level, I think that that’s a challenge for policymakers. That’s a challenge for people who have some political legitimacy. We don’t — we’re just — most people have no idea who we are, so it’s, again, surprising you had me on your show.
But in those cases, every once in a while, I think we will have to take some action, but it’s pretty rare. What I think has been interesting has been to watch, for example, what Elon is doing at X, which is, when I’ve struggled with this, I’ve gone back and tried to say, OK, how should we think about what the right way to approach these questions is? And I actually pulled down a whole bunch of philosophy books from college, and went back and read my Aristotle, and then — and then read Madison. Because I think that when —
Like, while you were trying to decide whether or not to kick off 8chan?
Not — no.
You’re like, I have to read some Rawls first?
Usually afterwards. And Rawls is a little bit later. But I think the interesting thing is, when a platform gets to a certain size and scale, it starts to almost approximate the trust challenges that governments have.
If you think about Facebook, Facebook has the population of the southern hemisphere in it. And so as they think about, how do we continue to have trust with that large population, it’s the closest thing that you have to something like a government. And so going back and reading Aristotle’s “Politics,” going back and reading what Madison wrote on the Bill of Rights, I think that gives you some information on, how do you actually build that trust.
And what I found interesting is — like, Elon turning to that — should we let Alex Jones back online or not, and putting a poll out — I mean, that’s almost kind of the next — that’s sort of this almost democracy-like system, where, listen, I’m going to let the people decide. And I think that there’s some — again, there’s a lot of things that you can criticize about that, but there’s a lot of — I wouldn’t be surprised if — I’d be a little surprised, but I wouldn’t be completely surprised if Elon was out there reading his Aristotle and thinking, OK, how do I have some level of legitimacy as I’m making these decisions?
Because if it is just one person making that decision, that’s a very uncomfortable place to be, and it’s very hard to create trust. Doing something like a poll is as close as we get to how we’ve actually assembled governments to have legitimacy over time. So it’s — I think this is a fascinating time. But again, I think it’s — I would never want to run Twitter or X or whatever we call it now.
Well, I mean, there are much more sophisticated ways of doing this. Like, over the past year or so, Meta has experimented with this thing they’re calling platform democracy. And the basic idea is, we’ll take a policy issue that we haven’t decided yet. We will put together a panel of our users, selected basically at random.
We’ll pay them for their time. We’ll bring them in. We’ll educate them about the issue, and we’ll let them deliberate. And then at the end, they will present us with their recommendation for what our policies should be around, say, climate change.
That seems really smart to me. That’s good, right? Like, I would agree with you that the spirit of, let’s let users have a voice and who belongs on that platform — I think that’s a good idea. I think maybe just sort of throwing open to an X poll is not as good of an idea.
Again, I think these all map to systems of government that we would be familiar with. I mean, the X version is direct democracy, which is what we have here in California. I mean, those people who hand out petitions to you — I mean, that’s —
We famously have a lot of crazy stuff on our ballots.
That is — there might be some downsides to that. What you just described with Facebook — I mean, that’s a republic. They’ve essentially selected a group of people. They’ve created a senate. And then, they’ve used that in order to create systems.
I think that that’s — I think what everybody in these tech systems is struggling with is, basically, rule-of-law challenges. Again, they’re not governments, but they have the scale and size that they start to behave in almost that way. And I think thinking about what are forms of government that work, and how do you build trust around that, and stability — I think that’s a lot of what these organizations are thinking about, even if that’s not how they frame it.
Yeah. Here’s my content-moderation question for you. And by the way, I really do appreciate how thoughtful you guys — so I always really enjoy reading the stuff you put out on this subject, because —
It’s strange — strange how many people are like, I just hope some more neo-Nazis sign up for Cloudflare so they’ll write another blog post.
Well, like, the thing that’s really unusual about your role in this ecosystem is that you are not like a web host. You are not GoDaddy. You are not a traditional content moderator. Right?
It makes sense to us that Facebook is going to have to make calls every day on what posts stay up and come down. But for somebody that is just sort of protecting the general traffic of the web, that’s much more unusual. In some of these recent high-profile cases, my understanding is that the core service that you were providing was sort of anti-DDoS, right?
You were preventing these websites from just being hit by many, many servers simultaneously, which has the effect of bringing them down. And this is a service that you choose to provide for free to most sites on the internet — basically, to anyone who wants it, right? My question for you is, why did you make the decision, we want to be a free security guard for everyone on the internet, and we’re only going to not be your free security guard in the most extreme circumstances?
You know, I think that maybe it’s a little bit of a penance. I wrote my college thesis back in ‘95 or ‘96 on why the internet was a fad.
Wow, and we think we’ve made some bad predictions on this show.
And that was clearly wrong. And I think that this is one of the — just great inventions of human history. I think that there are clearly harms that come from it. It turns out that if you connect everyone in the world, some bad things are going to happen.
But by and large, I see, time and time and time again, when — again, we started by talking about, why did the internet grow 25 percent. Largely, that’s because some of the people who haven’t had access to all of the resources of the internet now have access to that. And I think that that has genuinely made their lives better. And so I think that if we allowed a system where anyone could basically take anyone offline unless you had the money to pay for it, then again, we’re denying what is really great about the internet.
Yeah. Last question, and then we’ll let you go. But I think one of the defining questions of 2023, at least for me, is really, who should be in charge of this? This is the question that came up in conversations Casey and I have had around content moderation, around AI, around crypto. And there are just so many different answers to this question.
And it seems like power is really shifting between various groups. Now, a lot of governments, including state governments in the US, are starting to try to be in charge of what appears on social media platforms. I would not say that’s been going well.
When you think about who should be in charge, do you think we need new kinds of governance to make sure that the internet works for us? Or do you think our existing institutions are proving capable of governing this thing?
You know, I — I think this is an interesting challenge. And I think that the fight of the future is, how far does a local institution’s regulation end up applying? So for instance, there was a German court recently that said that a certain set of providers — we were one; a service called 9.9.9.9 was another — needed to block certain websites from using our infrastructure. Which, again, if it only applied in Germany, that would be one thing.
But they said that because a user in Germany could then use a VPN to pretend like they were coming from Austria or Sweden or Mexico or wherever, that we actually had to apply that regulation globally. So this Leipzig court said, you have to follow this rule on a global basis. At the same time, we have the Montana legislature that says they’re going to ban TikTok.
And again, if that just applies to Montana and the people in Montana say that that’s what they want, maybe that’s fine. But using the exact same rationale, what could happen is, the Montana court could say, well, somebody could use a VPN to pretend they’re coming from Mexico, and so we actually have to ban TikTok globally.
I think there’s real danger if we get to the point where there are not just, at the nation-state level, but down to the individual locality level, organizations or governments or institutions that are saying that our rules have global effect. And if that happens, then we’re going to fall to what I’ve sort of been describing as the Teletubby internet, where everything falls to the absolute lowest common denominator. And it’s actually interesting to look at —
Wait, wait, what do the Teletubbies have to do with this?
Well, and then the story of that was that Jerry Falwell tried to get them banned. So —
Ah.
Tinky Winky was accused of being gay.
That’s right.
(LAUGHING): That’s true.
And I’ll say, not without evidence.
(LAUGHING): Yes. Yeah. I mean — but if you think about television, I mean, television was this new technology that came along. And the concern, if you were one of the television broadcasters in the United States, just looking at the United States, was not so much about competition. Because there were only — for reasons of physics, at the time, there were only three broadcast stations.
The real threat to your business was regulation. And so by and large, you had newscasters who came from the center of the United States. Kansas was overrepresented. They were all men. The newscasts were basically all the same.
You covered every political convention from opening speech to balloon drop, which is terrible TV, if you think about it. Like, why would everyone do exactly the same thing from exactly the same pool feed? And yet, that was the best way to avoid regulation.
What I worry about in the internet is, if we all have to play to the lowest common denominator, it’s not going to be Kansas anymore, but it’s going to be probably somewhere in Mumbai that gets to set what the global internet looks like. And that’s probably — if you’re sitting in San Francisco or you’re listening to “The New York Times”— probably not a world that you want to live in. So I think the fight is going to be, how do we make sure that local regulations stay local and that the people who have the authority are answering to that authority in a local space? And yeah, if a Leipzig court says something is illegal in Leipzig, then we should block it in Leipzig. That’s easy. But that Leipzig court shouldn’t be able to have that same rule apply to Montana. If we can do that, I actually think we have the right institutions in place. I think the problem is going to be when countries start to say, our rules apply on a global basis. And that’s going to be, I think, the real fight the next period of time.
Yeah. Any predictions for 2024 on the internet?
Unfortunately, I think it’s going to be a really difficult year. I think the election is going to catalyze a lot of the worst of things that happen online. And hopefully, it turns out to be a lot quieter, but I think it’s going to be a busy 2024 on the internet, and especially in the cybersecurity space.
Given how hard a year you think it’s going to be, would you say it’s even more important that people listen to “Hard Fork” in 2024?
Absolutely. I think this is the only way that you can save democracy and save the internet. And the only thing I would ask is that if you could push back and try and get “The New York Times” to capitalize the “I” in “internet” going forward, I think that that actually — and I know you think I’m kidding. But this is — if you believe in the internet, there should be one. If you don’t want it to be balkanized, it should be a proper noun. It’s like “Earth” or “Mars.” It should be capitalized. So capitalize the “I”—
I’ll take that up with the style editors.
Well, Matthew, thank you so much for coming on. Great to have you.
Thank you so much.
Thank you. It was great.
And happy holidays.
Happy holidays.
Happy holidays. (MUSIC PLAYING)
“Hard Fork” is produced by Davis Land and Rachel Cohn. We’re edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Marion Lozano, Rowan Niemisto, and Dan Powell.
Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. If you haven’t already, check us out on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at [email protected]. Let us know what kind of acc you are.
(KEVIN LAUGHS)
I’m a hacky sack.
Ooh, you’re a hack, all right. You’re an h/acc. That’s a hack, baby.
(KEVIN LAUGHS)