Artificial intelligence is undoubtedly the buzzword of our time. Its popularity, particularly with the rise of generative AI applications like ChatGPT, has brought it to the forefront of technology debates.
Everyone is talking about the impact of tools like ChatGPT and whether it is fair to take advantage of their capabilities.
However, amid this perfect storm, numerous myths and misconceptions have suddenly emerged around the term artificial intelligence, or AI.
I bet you’ve heard a lot of these already!
Let’s delve into these myths, debunk them, and understand the true nature of AI.
Contrary to popular belief, AI is not intelligent at all. Nowadays, most people think that AI-powered models are truly smart, perhaps because of the word “intelligence” in the name “artificial intelligence.”
But what does intelligence mean?
Intelligence is a trait exclusive to living organisms, defined as the ability to acquire and apply knowledge and skills. It allows living organisms to interact with their environment and thus learn to survive.
AI, on the other hand, is a machine simulation designed to imitate certain aspects of this natural intelligence. Most AI applications we interact with, especially on online and business platforms, rely on machine learning.
Image generated by DALL·E
These are specialized artificial intelligence systems trained on specific tasks using large amounts of data. They excel at the tasks assigned to them, whether playing games, translating languages, or recognizing images.
Outside those tasks, however, they are often quite useless… The concept of an AI that possesses human-like intelligence across a broad spectrum of tasks is called general AI, and we are far from reaching that milestone.
The race between tech giants often revolves around boasting about the size of their AI models.
Meta’s Llama 2, its second open-source LLM release, surprised us with a powerful 70-billion-parameter build, while Google’s PaLM has 540 billion parameters and OpenAI’s latest release, GPT-4, is reported to have around 1.8 trillion parameters.
However, billions of parameters do not necessarily translate into better performance.
Data quality and training methodology are often more critical determinants of a model’s performance and accuracy. The Stanford Alpaca experiment already demonstrated this: a simple Llama-based LLM with 7 billion parameters could rival GPT-3.5, with its roughly 175 billion parameters.
So this is a clear NO!
Bigger is not always better. Optimizing both the size of LLMs and their corresponding performance will democratize their use locally and allow us to integrate them into our everyday devices.
A common mistake is to think that AI is a mysterious black box lacking transparency. In reality, while AI systems can be complex and still quite opaque, significant efforts are being made to improve their transparency and accountability.
Regulators are pushing for AI to be used ethically and responsibly. Important movements such as the Stanford AI Transparency Report and the European AI Act aim to prompt companies to improve the transparency of their AI and to give governments a basis for formulating regulations in this emerging domain.
Transparent AI has become a central point of discussion in the AI community, covering a wide range of issues, such as processes that allow people to thoroughly test AI models and understand the logic behind their decisions.
That’s why data professionals around the world are already working on methods to make AI models more transparent.
So while this myth may be partially true, it’s not as bad as it is usually portrayed!
Many believe that artificial intelligence systems are perfect and incapable of making mistakes. This is far from the truth. Like any system, AI performance depends on the quality of its training data. And this data is often, if not always, created or curated by humans.
If this data contains biases, the AI system will inadvertently perpetuate them.
An MIT team’s analysis of widely used pre-trained language models revealed pronounced biases in associating gender with certain professions and emotions. For example, roles such as flight attendant or secretary were linked primarily to feminine qualities, while lawyer and judge were linked to masculine traits. The same behavior was observed regarding emotions.
Other detected biases relate to race. As LLMs find their way into healthcare systems, fears are emerging that they could perpetuate harmful race-based medical practices, reflecting the biases inherent in the training data.
It is essential that human intervention monitor and correct these deficiencies, ensuring the reliability of AI. The key is to use representative, unbiased data and to perform algorithmic audits to counteract these biases.
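The kind of gender–profession association the MIT analysis surfaced can be measured with a simple embedding audit: score how much closer a profession’s vector sits to “he” than to “she.” Below is a minimal, self-contained sketch using hand-made toy vectors purely for illustration; a real audit would score embeddings extracted from the model under test (e.g., via gensim or transformers) rather than these made-up numbers.

```python
import math

# Toy word vectors, hand-made for illustration only.
# A real audit would load embeddings from the model being tested.
vectors = {
    "he":        [0.9, 0.1, 0.0],
    "she":       [0.1, 0.9, 0.0],
    "engineer":  [0.8, 0.2, 0.1],
    "secretary": [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def gender_skew(profession):
    """Positive -> profession sits closer to 'he'; negative -> closer to 'she'."""
    return (cosine(vectors[profession], vectors["he"])
            - cosine(vectors[profession], vectors["she"]))

for job in ("engineer", "secretary"):
    print(f"{job}: skew = {gender_skew(job):+.3f}")
```

A skew far from zero flags an association worth investigating; aggregating such scores over many word pairs is the idea behind embedding-association audits used in practice.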
One of the most widespread fears is that AI will lead to mass unemployment.
History, however, suggests that while technology can make certain jobs obsolete, it at the same time creates new industries and opportunities.
Image from LinkedIn
For example, the World Economic Forum projected that although AI could displace 85 million jobs by 2025, it will also create 97 million new ones.
The last myth is the most dystopian. Popular culture, with films like The Matrix and Terminator, paints a grim picture of AI’s potential to enslave humanity.
While influential voices like Elon Musk and Stephen Hawking have expressed concern, the current state of AI is far from this dystopian image.
Current AI models, such as ChatGPT, are designed to help with specific tasks and do not possess the capabilities or motivations described in science fiction stories.
So for now… we’re still safe!
In conclusion, as AI continues to evolve and integrate into our daily lives, it is crucial to separate fact from fiction.
Only with a clear understanding can we harness its full potential and address its challenges responsibly.
Myths can cloud judgment and impede progress.
Armed with knowledge and a clear understanding of the true scope of AI, we can move forward and ensure that the technology serves the best interests of humanity.
Joseph Ferrer is an analytics engineer from Barcelona. He graduated in physics engineering and currently works in the field of data science applied to human mobility. He is a part-time content creator focused on data science and technology. You can reach him on LinkedIn, Twitter, or Medium.