Few debates in the computer industry have been more protracted and contentious than this one: Is "open source" better than "closed" when it comes to software development?
That debate has been reignited as companies like Google, Meta, OpenAI and Microsoft have diverged over how to compete for supremacy in artificial intelligence systems. Some choose a closed model while others take an open approach.
Here is what you need to know.
What does open source software mean?
Source code makes up the basic components of the applications you use. Developers can write tens of thousands of lines of source code to create programs that will run on a computer.
Open source software is any computer code that can be freely distributed, copied or modified for a developer's own purposes. The Open Source Initiative, a nonprofit industry organization, sets additional stipulations and standards for what software counts as open source, but the idea largely comes down to the code being free and open so that anyone can use and improve it.
What are some examples of open source software?
Some of the most famous software systems are open source, such as Linux, the operating system on which Google's Android mobile system was built. Other well-known open source products include Firefox, the free-to-download web browser created by the Mozilla Foundation.
So what is the open versus closed debate and how does this relate to artificial intelligence?
Tech companies like Google, OpenAI and Anthropic have spent billions of dollars creating "closed," or proprietary, AI systems. People outside those companies cannot view or modify the underlying source code, and neither can the customers who pay to use the systems.
For a long time, this was not the norm. Most of these companies openly published their AI research so that other technologists could study and improve on the work. But once technology executives began to realize that the race toward more advanced artificial intelligence could be worth billions, they began to wall off their research.
Tech companies argue that this is for the good of humanity, because these systems are powerful enough to cause catastrophic social harm if placed in the wrong hands. Critics say the companies simply want to keep the technology out of the hands of hobbyists and competitors.
Meta has taken a different approach. Mark Zuckerberg, CEO of Meta, decided to open source his company's large language model, a program that learns skills by analyzing large amounts of digital text extracted from the Internet. Zuckerberg's decision to open-source Meta's model, LLaMA, allows any developer to download it and use it to create their own chatbots and other services.
In a recent podcast interview, Zuckerberg said no organization should have "any truly super-intelligent capabilities that are not widely shared."
Which is better, open or closed?
It depends on who you ask.
For many technologists and those who embrace hardcore hacker culture, open source is the way to go. The software tools that will change the world should be freely distributed, they say, so that anyone can use them to create interesting and exciting technology.
Others believe that AI has advanced so rapidly that it should be closely guarded by the makers of these systems to protect against misuse. Developing these systems also costs enormous amounts of time and money, they say, so closed models should be paid for.
The debate has already spread beyond Silicon Valley and computer enthusiasts. Lawmakers in the European Union and in Washington have held meetings and taken steps toward frameworks to regulate AI, including weighing the risks and rewards of open source AI models.