In a notable push toward open-source artificial intelligence, IBM and Meta on Tuesday launched a group called the AI Alliance, an international coalition of corporations, universities and organizations collectively committed to “open science” and “open technologies.”
The Alliance, according to a statement, will be “action-oriented” and will aim to better shape the equitable evolution of the technology.
Some notable members of the organization include AMD, Cornell University, Harvard University, Yale University, NASA, Hugging Face, and Intel.
Related: IBM executive explains the difference between him and his main competitors in AI
The group's objective, according to a statement, is to promote responsible innovation by ensuring trust, safety and scientific rigor. To achieve that goal, it will drive the development of benchmarks and evaluation standards, support the development of AI skills around the world and highlight members' responsible use of AI.
The Alliance, which plans to partner with government and nonprofit initiatives, said it will establish a board of directors and a technical oversight committee to help the group achieve those goals, though it has not yet done so and did not say when it would.
IBM SVP Darío Gil wrote Tuesday that, in light of the recent drama at OpenAI, it is even more important that AI not be relegated to just “a few personalities and institutions.”
“The future of AI is approaching a fork in the road. One path is dangerously close to creating consolidated control of AI, driven by a small number of companies that have a closed and proprietary view of the AI industry,” Gil said.
“The other path is a wide-open road: a road that belongs to the many, not the few, and that is protected by the guardrails we create together.”
Related: The Ethics of Artificial Intelligence: A Path to Responsible AI
The critical issue of transparency in AI
The statement provided by the companies does not detail how they will achieve and ensure the safety or accountability of these shared AI models.
Is there such a thing as “trustworthy AI”? Will the material you're getting openly be reliable? (by what metric?) https://t.co/3gpZ749VAI
— Gary Marcus (@GaryMarcus) December 5, 2023
Still, the Alliance's premise gets at a key element of the AI debate: closed technology versus open-source technology.
Closed-source models, which include OpenAI's ChatGPT and the models produced by Microsoft and Google, are proprietary, meaning that while users can interact with the technology through a web interface, only the companies themselves have access to the software (or its training data).
Meanwhile, open-source models, such as Meta's Llama and IBM's geospatial model, which was open-sourced through Hugging Face, are designed for greater accessibility.
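What that accessibility looks like in practice: anyone can download an open model's weights and run it on their own machine. Below is a minimal sketch, assuming the Hugging Face `transformers` library and an illustrative open-weights model ID (gated models such as Llama also require accepting a license and authenticating); it is an example of the general workflow, not a depiction of the Alliance's own tooling.

```python
# Minimal sketch: downloading and running an open-weights model from the
# Hugging Face Hub. Assumes `pip install transformers torch`; the model ID
# below is illustrative, and gated checkpoints require license acceptance.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # hypothetical choice for this example

# The tokenizer and weights are fetched directly from the public Hub,
# so researchers can inspect the files rather than call a closed API.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate text locally; no proprietary service sits in the loop.
inputs = tokenizer("Open-source AI lets researchers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```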
Proponents of open-source AI say the approach democratizes the technology, something the Alliance refers to in its statement, and also allows for the kind of transparency that is so vital (and often lacking) in the industry.
With closed-source models, research into individual models is nearly impossible, making it difficult for researchers, and in turn regulators, to understand the true capabilities of a given model, as well as the environmental cost of training and running it.
“With large language models, most of them are closed source, so you don't really have access to their details,” artificial intelligence expert Dr. Sasha Luccioni told TheStreet in June. “And so it's hard to do any kind of meaningful study on them because you don't know what went into them, how big they are. Not much is known about them.”
More Business of AI:
- The Ethics of Artificial Intelligence: A Path to Responsible AI
- Google targets Microsoft and ChatGPT with big new product launch
- AI is a sustainability nightmare, but it doesn't have to be
AI researcher Dr. John Licato told TheStreet in May that the key to achieving ethical and safe AI revolves around transparent research into current models.
When that research is done only by for-profit companies, he said, “that's when all the things we fear could happen with AI are much more likely to happen.”
Critics of the open-source approach, however, claim that an open model is much more likely to be misused.
AI expert Gary Marcus said in a November post that “no one has strong positive assurances that there are no potentially serious consequences of open-source AI,” in terms of the potential for generating misinformation and creating biological weapons.
“That said, we don't have any solid guarantees that there are, either,” he added.
Clément Delangue, co-founder and CEO of Hugging Face, responded to Marcus' post, saying that his points also apply to non-open-source AI, and on a potentially larger scale, as proprietary AI is deployed en masse.
“Open source is the only way to keep non-open source under control,” he said. “Without it, you will have an extreme concentration of power/knowledge/secrecy with 1,000 times the risk. Open source is more the solution than the problem for AI risks.”
Indeed, the lack of democratic decision-making around these technologies, and how that can affect regulation, has a litany of experts more concerned about AI than anything else.
Related: Think tank director warns of danger of “non-democratic tech leaders deciding the future”
The Alliance is no “panacea”
Those issues of power concentration in AI apply even to IBM and Meta's AI Alliance, AI expert and Ivanti CPO Srinivas Mukkamala told TheStreet.
The Alliance, he said, appears to be the private sector's attempt to grapple with the ways AI could change the world and the complexities around how the technology will be regulated.
The Alliance alone, while a noble step, is not enough to address all the important issues, he said.
“While the AI Alliance is trying to solve many of the foreseeable problems created by AI, we have not yet begun to fight to create truly equitable access to data,” Mukkamala said. “The AI Alliance is not the panacea that will be able to address all the risks and inequalities of AI.”
“We need to have more alliances beyond this one that address the governance and use of AI, and make sure we are not concentrating power in the hands of a lucky few,” he added.
His opinion is shared by the American public.
Surveys from the Artificial Intelligence Policy Institute have revealed that an overwhelming portion of the population does not trust technology companies to self-regulate when it comes to AI.
Mukkamala's biggest concern is a world in which uneven adoption of AI accelerates global inequality and poverty at an enormous rate.
“We must take action now to avoid a future of digital haves and have-nots, and while the AI Alliance is a start, to truly anticipate and resolve the dangers of AI we need more global oversight and cooperation,” Mukkamala said.
Regardless of the impact the Alliance ends up having, IBM executives have publicly expressed the view that everyone should be part of the regulatory conversation.
“You can't let the rules be written by a handful of companies that are the most powerful in the world right now,” Christina Montgomery, IBM's chief privacy officer, told TheStreet in an interview in September. “We are very concerned that that will influence the regulatory environment in some way that is not helpful in terms of innovation.”
Contact Ian with suggestions via email, [email protected] or Signal at 732-804-1223.
Related: Artificial intelligence is a sustainability nightmare, but it doesn't have to be