The pace at which the advanced ai landscape is advancing is dizzying. But so are the risks that come with it.
The situation is such that it is difficult for experts to foresee the risks.
While most leaders are increasingly prioritizing ai” rel=”noopener” target=”_blank”>GenAI applications in the coming monthsThey are also skeptical about the risks involved: concerns about data security and biased results, to name a few.
ai-development-principles” rel=”noopener” target=”_blank”>Mark Suzmanexecutive director of the Bill and Melinda Gates Foundation, believes that “while this technology can lead to advances that can accelerate scientific progress and boost learning outcomes, the opportunity is not without risks.”
<img decoding="async" alt="The new ethical implications of generative artificial intelligence” width=”100%” src=”https://technicalterrence.com/wp-content/uploads/2023/11/The-new-ethical-implications-of-generative-artificial-intelligence.png”/><img decoding="async" src="https://technicalterrence.com/wp-content/uploads/2023/11/The-new-ethical-implications-of-generative-artificial-intelligence.png" alt="The new ethical implications of generative artificial intelligence” width=”100%”/>
Image by author
Let’s start with the data
Consider this: A famous creator of the generative ai model states: “It is technology/news/story/japan-warns-chatgpt-creator-openai-over-data-privacy-says-they-will-take-action-if-needed-2388592-2023-06-04″ rel=”noopener” target=”_blank”>collects personal information such as name, email address and payment information when necessary for business purposes.”
Recent times have shown multiple ways in which things can go wrong without a guiding framework.
- technology-65139406″ rel=”noopener” target=”_blank”>Italy expressed concern for illegally collecting personal data from users, citing “technology-65139406″ rel=”noopener” target=”_blank”>no legal basis to justify the massive collection and storage of personal data to ‘train’ the algorithms underlying the operation of the platform.”
- The Japan Personal Information Protection Commission also technology/news/story/japan-warns-chatgpt-creator-openai-over-data-privacy-says-they-will-take-action-if-needed-2388592-2023-06-04″ rel=”noopener” target=”_blank”>issued a warning for minimal data collection to train machine learning models.
- Industry leaders in ai” rel=”noopener” target=”_blank”>HBR echo concerns about data security and biased results
Because generative ai models are trained with data from almost the entire Internet, we are a fractional part hidden in those neural network layers. This emphasizes the need to comply with data privacy regulations and not train models with users’ data without their consent.
Recently, one of the companies was fined for creating a facial recognition tool by scraping selfies from the internet, resulting in a privacy violation and a hefty fine.
<img decoding="async" alt="The new ethical implications of generative artificial intelligence” width=”100%” src=”https://technicalterrence.com/wp-content/uploads/2023/11/1699819797_689_The-new-ethical-implications-of-generative-artificial-intelligence.png”/><img decoding="async" src="https://technicalterrence.com/wp-content/uploads/2023/11/1699819797_689_The-new-ethical-implications-of-generative-artificial-intelligence.png" alt="The new ethical implications of generative artificial intelligence” width=”100%”/>
Fountain: ai-another-cnil-gspr-fine/” rel=”noopener” target=”_blank”>TechCrunch
However, data security, privacy, and bias have existed since the days of pregenerative ai. So what has changed with the launch of Generative ai applications?
Well, some existing risks have only become riskier, given the scale at which models are trained and deployed. Let’s understand how.
Hallucinations, immediate injection and lack of transparency
Understanding the inner workings of such colossal models to trust their response has become even more important. In Microsoft’s words, these emerging risks are because LLMs are “designed to generate text that appears coherent and contextually appropriate rather than adhering to factual accuracy.”
Consequently, the models could produce misleading and incorrect responses, commonly called hallucinations. They can arise when the model lacks confidence in the predictions, leading to the generation of less accurate or irrelevant information.
Furthermore, the prompts are how we interact with the language models; Now, bad actors could generate harmful content by injecting messages.
Responsibility when ai fails?
The use of LLM raises ethical questions about accountability and responsibility for the results generated by these models and biased results, as is prevalent in all ai models.
The risks are exacerbated in high-risk applications such as healthcare – think about the impact of incorrect medical advice on a patient’s health and life.
The bottom line is that organizations must create ethical, transparent and responsible ways to develop and use generative ai.
If you’re interested in learning more about who is responsible for making generative ai work properly, consider reading this post that outlines how we can all come together as a community to make it work.
Copyright infringement
As these large models are built on material from all over the world, it is very likely that they consumed creation: music, videos or books.
If copyrighted data is used to train ai models without obtaining the necessary information permission, credit or compensation the original creators, leads to copyright infringement and can cause serious legal problems for the developers.
<img decoding="async" alt="The new ethical implications of generative artificial intelligence” width=”100%” src=”https://technicalterrence.com/wp-content/uploads/2023/11/1699819797_889_The-new-ethical-implications-of-generative-artificial-intelligence.png”/><img decoding="async" src="https://technicalterrence.com/wp-content/uploads/2023/11/1699819797_889_The-new-ethical-implications-of-generative-artificial-intelligence.png" alt="The new ethical implications of generative artificial intelligence” width=”100%”/>
Picture of Search Engine Diary
Deepfake, misinformation and manipulation
The one that has a high probability of creating uproar at scale is deepfakes; We wonder what the capacity of deepfakes can lead us to?
They are synthetic creations (text, images or videos) that can digitally manipulate facial appearance through deep generative methods.
Result? Intimidation, misinformation, hoax calls, revenge or fraud: something that does not fit the definition of a prosperous world.
The publication aims to make everyone aware that ai is a double-edged sword: not everything is magic that only works on important initiatives; Bad actors are part of it too.
That is where we must raise our guard.
Take for example the latest news of a fake video highlighting the withdrawal of one of the political figures from the upcoming elections.
What could be the reason? – you can think. Well, that misinformation spreads like fire in no time and can severely impact the direction of the electoral process.
So, what can we do to avoid being victims of such false information?
There are various lines of defense, let’s start with the most basic ones:
- Be skeptical and doubtful of everything you see around you.
- Change your default mode: “it may not be true,” instead of taking everything at face value. In short, question everything around you.
- Confirm potentially suspicious digital content from multiple sources
Prominent ai researchers and industry experts such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari have also expressed their concerns and called for a pause in the development of such ai systems.
There is great looming fear that the race to build advanced ai, to match the prowess of generative ai, could quickly spiral out of control and out of control.
ai-copyright-lawsuits/ar-AA1go5ha?ocid=msedgntp&cvid=d677c68aad4e4ed680aa1c12658ff514&ei=15″ rel=”noopener” target=”_blank”>microsoft has recently announced that it will protect buyers of its ai products from the implications of copyright infringement as long as they comply with firewalls and content filters. This is a significant relief and shows the correct intention to take responsibility for the repercussions of using their products, which is one of the basic principles of ethical frameworks.
It will ensure that authors retain control of their rights and receive fair compensation for their creation.
This is great progress in the right direction! The key is to see to what extent it resolves the authors’ concerns.
So far, we have discussed the key ethical implications related to corrective technology. However, what arises from the successful use of this technological advance is the risk of job loss.
There is a fear that ai will replace most of our work. Mckinsey recently shared a report on what the future of work will look like.
This topic requires a structural change in the way we think about work and deserves a separate post. So stay tuned for the next post, which will look at the future of work and the skills that can help you survive in the GenAI era and thrive.
Vidhi Chugh is an ai strategist and digital transformation leader working at the intersection of product, science, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, author and international speaker. Her mission is to democratize machine learning and break down the jargon so everyone can be a part of this transformation.