As artificial intelligence continues to grow, so do calls to regulate the technology. Despite its positive potential, many experts and legislators express concern about the possible risks. Ian Bremmer, President and Founder of Eurasia Group, joined TheStreet to talk about the inherent risks of ungoverned AI as we approach the 2024 elections.
Full video transcript below:
SARA SILVERSTEIN: What is the biggest near-term risk of ungoverned AI?
IAN BREMMER: You know, I'm a big AI enthusiast. I think the productivity gains are extraordinary, and the technology is advancing very quickly. We will see it used in all sectors, in all companies, and therefore there won't be really powerful companies and individuals trying to stop it, which is what usually happens with a technological revolution. You get, you know, post-carbon energy, and then all the coal and oil people try to stop it, try to lobby against it. That's not happening with AI. It will therefore actually advance much further than people expect, and much faster. But the technology advances much faster than the ability to govern it. And that means that the negative externalities you would expect from such a transformative technology will occur very quickly. And they are not going to be contained or restricted.
What type of negative externalities? Well, an obvious one is deepfakes and artificial intelligence used to misinform. So, you know, in an election like the US election, where the stakes are so high, where people are so angry, where so much chaos could ensue, we're moving from disinformation to AI-driven disinformation. That is a very significant disruptive risk. Then there's also the question of what bad actors can do to just blow things up. So you use AI to code. That's very impressive. Use AI to create malware, and that's very dangerous and expensive. Use AI to create vaccines. We love that. That actually got us out of COVID much faster than we would have otherwise. Use AI to create new viruses and new diseases. We don't like that so much. And as these amazing new AI tools are deployed that everyone has access to, some of which are open source, they will be used not only for productive purposes but also by bad actors. This is the first year we will start to see the negative impact of this more broadly.