When Meta shared the raw computer code needed to build a chatbot last year, rival companies said Meta was releasing poorly understood and perhaps even dangerous technology into the world.
Now, in a sign that critics of sharing AI technology are losing ground to their industry peers, Google is taking a similar step. On Wednesday, Google released the computer code that powers its online chatbot, after keeping this type of technology under wraps for many months.
Like Meta, Google said the benefits of freely sharing the technology, called a large language model, outweighed the potential risks.
The company said in a blog post that it was releasing two artificial intelligence language models that could help third-party companies and independent software developers build online chatbots similar to Google's own. Called Gemma 2B and Gemma 7B, they are not Google's most powerful AI technologies, but the company argued that they rival many of the industry's leading systems.
“We hope to re-engage the third-party developer community and ensure that” Google-based models become an industry standard for how modern AI is built, said Tris Warkentin, a director of product management at Google DeepMind.
Google said it had no current plans to release its flagship AI model, Gemini, for free. Because Gemini is more powerful, it could also cause more harm.
This month, Google started charging for access to the most powerful version of Gemini. By offering the model as an online service, the company can more closely control the technology.
Concerned that AI technologies will be used to spread disinformation, hate speech and other toxic content, some companies, such as OpenAI, the maker of the online chatbot ChatGPT, have become increasingly secretive about the methods and software that underpin their products.
But others, such as Meta and French startup Mistral, have argued that sharing code freely (called open source) is the safest approach because it allows outsiders to identify problems with the technology and suggest solutions.
Yann LeCun, the chief AI scientist at Meta, has argued that consumers and governments will refuse to adopt AI unless it is outside the control of companies like Google, Microsoft and Meta.
“Do you want all AI systems to be under the control of a couple of powerful American companies?” he told The New York Times last year.
In the past, Google open-sourced many of its major AI technologies, including the fundamental technology behind today's AI chatbots. But under competitive pressure from OpenAI, it became more secretive about how its systems were built.
The company decided to make its AI freely available again because of developer interest, Jeanine Banks, Google's vice president of developer relations, said in an interview.
As it prepared to release its Gemma technologies, the company said it had worked to ensure they were safe and that using them to spread disinformation or other harmful material would violate its software license.
“We're making sure we're launching completely secure approaches in both the private and open realms as much as possible,” Mr. Warkentin said. “With the launches of these 2B and 7B models, we are relatively confident that we have taken an extremely safe and responsible approach to ensuring they can land well in the industry.”
But bad actors could still use the technologies to cause problems.
Google is allowing people to download systems that have been trained with huge amounts of digital text scraped from the Internet. The researchers call this “releasing the weights,” referring to the particular mathematical values the system learns while analyzing the data.
Analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars — resources that most organizations, let alone individuals, do not have.