In today’s AI newsletter, the final installment of our five-part series, I look at where artificial intelligence may be headed in the next few years.
In early March, I visited the OpenAI offices in San Francisco to get an early look at GPT-4, a new version of the technology behind its ChatGPT chatbot. The most surprising moment came when Greg Brockman, president and co-founder of OpenAI, showed off a feature not yet available to the public: he gave the bot a photograph of the Hubble Space Telescope and asked it to describe the image “in painstaking detail.”
The description was completely accurate, right down to the strange white line created by a satellite streaking across the heavens. It was a glimpse of the future of chatbots and other AI technologies: a new wave of multimodal systems that will juggle images, sounds, and videos, as well as text.
Yesterday, my colleague Kevin Roose talked to you about what AI can do now. I am going to focus on the opportunities and disruptions that lie ahead as these systems gain new skills and abilities.
AI in the short term
Generative AIs can already answer questions, write poetry, generate computer code, and hold conversations. As “chatbot” suggests, they are being rolled out first in conversational formats like ChatGPT and Bing.
But that won’t last long. Microsoft and Google have already announced plans to incorporate these AI technologies into their products. You’ll be able to use them to draft an email, automatically summarize a meeting, and do many other cool tricks.
OpenAI also offers an API, or application programming interface, that other technology companies can use to plug GPT-4 into their apps and products. And it has introduced plugins from companies like Instacart, Expedia, and Wolfram Alpha that extend ChatGPT’s capabilities.
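To make that concrete, here is a minimal sketch of what plugging GPT-4 into an app looks like, using OpenAI’s official Python library (the pre-1.0 openai package that was current when GPT-4 launched). The API key and prompt below are placeholders of mine, not anything OpenAI ships.

```python
# A minimal sketch of calling GPT-4 through OpenAI's API using the
# pre-1.0 "openai" Python package. The key and prompt are placeholders;
# real use requires an OpenAI account with GPT-4 access.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user",
         "content": "Draft a polite email asking to reschedule Friday's meeting."},
    ],
)

print(response.choices[0].message.content)  # the model's reply
```

A company integrating GPT-4 would wrap a call like this behind its own interface, which is how features like one-click email drafting and meeting summaries get built.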
AI in the medium term
Many experts believe that AI will make some workers, including doctors, lawyers and computer programmers, more productive than ever. They also believe that some workers will be replaced.
“This will affect tasks that are more repetitive, more formulaic, more generic,” said Zachary Lipton, a Carnegie Mellon professor who specializes in artificial intelligence and its impact on society. “This can free up some people who aren’t good at repetitive tasks. At the same time, there is a threat to people who specialize in the repetitive part.”
In fields like audio-to-text transcription and translation, jobs done by humans could disappear. In the legal field, GPT-4 is already proficient enough to pass the bar exam, and the accounting firm PricewaterhouseCoopers plans to roll out an OpenAI-powered legal chatbot to its staff.
At the same time, companies like OpenAI, Google, and Meta are creating systems that allow you to instantly generate images and videos simply by describing what you want to see.
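For a flavor of how simple these interfaces already are, here is a hedged sketch of text-to-image generation using OpenAI’s image endpoint, which is backed by its DALL-E model rather than GPT-4 itself; the prompt is my own invention.

```python
# A sketch of text-to-image generation via OpenAI's image endpoint
# (backed by DALL-E in the pre-1.0 "openai" package), not GPT-4 itself.
# The prompt and key are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

result = openai.Image.create(
    prompt="A satellite streaking across a telescope photograph of deep space",
    n=1,                # number of images to generate
    size="1024x1024",   # supported sizes: 256x256, 512x512, 1024x1024
)

print(result["data"][0]["url"])  # temporary URL of the generated image
```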
Other companies are creating bots that can actually use websites and software applications just like a human. In the next stage of technology, artificial intelligence systems could shop for your Christmas gifts online, hire people to do odd jobs around the house, and keep track of your monthly expenses.
All of this gives a lot to think about. But the biggest problem may be this: Before we have a chance to understand how these systems will affect the world, they will become even more powerful.
AI in the long term
For companies like OpenAI and DeepMind, a lab owned by Google’s parent company, the plan is to push this technology as far as it will go. They hope to eventually build what researchers call artificial general intelligence, or AGI: a machine that can do anything the human brain can do.
As Sam Altman, CEO of OpenAI, told me three years ago: “My goal is to build broadly beneficial AGI. I also understand that this sounds ridiculous.” Today, it sounds less ridiculous. But it is still easier said than done.
For an AI to become an AGI, it will need an understanding of the physical world writ large. And it is not clear whether systems can learn to mimic the breadth and depth of human reasoning and common sense using the methods that produced technologies like GPT-4. Further advances will probably be necessary.
The question is, do we really want artificial intelligence to become that powerful? A very important related question: Is there a way to prevent it from happening?
The risks of AI
Many AI executives believe that the technologies they are creating will improve our lives. But some have been warning for decades of a darker scenario, where our creations don’t always do what we want them to, or follow our instructions in unpredictable ways, with potentially dire consequences.
AI experts talk about “alignment”, that is, making sure that AI systems are in line with human values and goals.
Before GPT-4 was released, OpenAI handed it over to an outside group to imagine and test dangerous uses of the chatbot.
The group discovered that the system could recruit a human online to pass a Captcha test. When the human asked if it was “a robot,” the system, without prompting from the testers, lied and said it was a person with a visual impairment.
The testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways to make dangerous substances from household items. After changes by OpenAI, the system no longer does these things.
But it is impossible to eliminate all potential misuses. As a system like this learns from data, it develops abilities its creators never expected, and it is hard to predict how things might go wrong once millions of people start using it.
“Every time we create a new AI system, we can’t fully characterize all of its capabilities and all of its security issues, and this problem is getting worse over time rather than better,” said Jack Clark, founder and chief policy officer of Anthropic, a San Francisco start-up that is building this same kind of technology.
And OpenAI and giants like Google aren’t the only ones exploring this technology. The basic methods used to build these systems are widely known, and other companies, countries, research labs, and bad actors may be less careful.
The remedies for AI
Ultimately, controlling dangerous AI technology will require far-reaching oversight. But the experts are not optimistic.
“We need a regulatory system that is international,” said Aviv Ovadya, a researcher at the Berkman Klein Center for Internet and Society at Harvard, who helped test GPT-4 before its release. “But I don’t see our existing government institutions being able to navigate this at the pace that is necessary.”
As we told you earlier this week, more than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that AI tools present “profound risks to society and humanity.”
AI developers are “locked in an out-of-control race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict, or control,” according to the letter.
Some experts are more concerned about the short-term dangers, including the spread of misinformation and the risk that people will rely on these systems for inaccurate or harmful emotional and medical advice.
But other critics are part of a vast and influential online community called rationalists, or effective altruists, who believe that AI could eventually destroy humanity. This mentality is reflected in the letter.
Share your thoughts and feedback on our On Tech: AI series by taking this short survey.
Your homework
We can speculate about where AI is headed in the distant future, but we can also ask the chatbots themselves. For your final assignment, treat ChatGPT, Bing, or Bard like an enthusiastic young job applicant and ask it where it sees itself in 10 years. As always, share the answers in the comments.
Quiz
What feature did OpenAI demonstrate with GPT-4 that is not yet available to the public?
Glossary
Alignment: Attempts by AI researchers and ethicists to ensure that artificial intelligences act in accordance with the values and goals of the people who create them.
Multimodal systems: AIs similar to ChatGPT that can also process images, video, audio, and other non-text input and output.
Artificial general intelligence: An artificial intelligence that equals the human intellect and can do anything the human brain can do.
Farewell
Kevin here. Thank you for spending the past five days with us. It has been great to see your feedback and creativity. (I particularly enjoyed the commenter who used ChatGPT to write a cover letter for my job.)
The topic of AI is so broad and dynamic that even five newsletters are not enough to cover everything. If you want to go deeper, you can check out my book, “Futureproof,” and Cade’s book, “Genius Makers,” both of which provide more detail on the topics we’ve covered this week.
Cade here: My favorite comment came from someone who asked ChatGPT to plan a hike in their state. The bot ended up suggesting a trail that does not exist as a way of walking between two other trails that do.
This small bug offers a window into both the power and the limitations of today’s chatbots and other AI systems. They have learned a great deal from what is posted on the internet and can put that learning to use in remarkable ways, but there is always a risk that they will slip in plausible but false information. Go ahead! Chat with these bots! But trust your own judgment, too!
Please take this short survey to share your thoughts and feedback on this limited-run newsletter.