Here are some other highlights from Mr. Pichai’s comments:
On the lukewarm initial reception of Google’s Bard chatbot:
We knew that when we were releasing Bard, we wanted to be careful… So I'm not surprised that's the reaction. But in a way, I feel like we took an upgraded Civic and put it in a race with more powerful cars. And what surprised me is how well it works on many, many, many kinds of queries. But we're going to be iterating fast. We clearly have more capable models. Very soon, maybe as this goes live, we'll be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities, whether in reasoning or coding; it can answer math questions better. So you'll see progress over the course of the next week.
On whether ChatGPT’s success was a surprise:
With OpenAI, we had a lot of context. There are some incredibly good people there, some of whom had been at Google before, so we knew the caliber of the team. So I think the progress of OpenAI didn't surprise us. With ChatGPT… you know, I give them credit for finding something with product-market fit. I think the reception from users was a pleasant surprise, perhaps even for them, and for many of us.
On his concerns about tech companies rushing into AI advances:
Sometimes I worry when people use the words "race" and "being first." I've thought about AI for a long time, and we're definitely working with a technology that will be incredibly beneficial but clearly has the potential to cause harm in profound ways. And I think it's very important that we all take responsibility for how we approach it.
On the return of Larry Page and Sergey Brin:
I have had a few meetings with them. Sergey has been hanging out with our engineers for a while now. He is a deep mathematician and a computer scientist. So for him, the underlying technology, if I had to use his words, is the most exciting thing he's seen in his life. So there's all that excitement. And that makes me happy. They have always said: "Call us whenever you need to." And I call them.
On the open letter, signed by nearly 2,000 AI researchers and tech luminaries, including Elon Musk, that urged companies to pause development of powerful AI systems for at least six months:
In this area, I think it's important to listen to the concerns. There are a lot of thoughtful people behind it, including people who have thought about AI for a long time. I remember talking to Elon eight years ago, and even then he was deeply concerned about AI safety. I think he has been consistently worried. And I think there is merit in worrying about it. While I may not agree with everything in the letter or the details of how it would be carried out, I think the spirit of it is worth taking seriously.
On whether he is concerned about the danger of creating artificial general intelligence, or AGI, an AI that surpasses human intelligence:
What is AGI? How do you define it? When do we get there? Those are all good questions. But to me, it almost doesn't matter, because I'm very clear that these systems are going to be very, very capable. And so it almost doesn't matter whether we've hit AGI or not; you're going to have systems that can deliver benefits on a scale we've never seen before, and that can do real harm. Can we have an AI system that spreads disinformation at scale? Yes. Is it AGI? It really doesn't matter.
On why climate change activism makes him hopeful about AI:
One of the things that gives me hope about AI, like climate change, is that it affects everyone. We live on one planet, so both are problems with similar characteristics, in the sense that you can't achieve safety in AI unilaterally. By definition, it affects everyone. That tells me that, over time, the collective will come to address all of this responsibly.