Sponsored content
There is no doubt that AI adoption is booming, with demand for AI and machine learning specialists expected to grow by 40%, or about 1 million jobs, by 2027 (World Economic Forum, Future of Jobs Report 2023). With that growth comes a need for awareness and responsibility. Read on to learn more about generative AI and responsible innovation.
You've seen the impact of generative AI at home, work or school. Whether it's jump-starting the creative process, sketching out a new approach to a problem, or creating sample code, if you've used generative AI tools a few times, then you know that the hype around generative AI is more than a little exaggerated. It has enormous potential for practical use, but it is important to know when it is and when it is not useful.
Generative AI, as part of a broader AI and analytics strategy, is transforming the world. Less well known is how these techniques work. Data scientists can make better use of these tools if they understand the models behind the machine and how to combine these techniques with others in the analytics and artificial intelligence toolbox. Understanding a little about the types of GenAI systems, synthetic data generation, transformers, and large language models enables smarter, more effective use of these methods and helps keep you from trying to cram generative AI into places where it is probably not useful.
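To make "transformers and large language models" a little more concrete, here is a minimal sketch of generating text with a pretrained transformer language model. It uses the open-source Hugging Face transformers library rather than the SAS tooling discussed in this article, and the model name ("gpt2"), the prompt, and the parameter choices are illustrative assumptions, not part of the courses described here.

```python
# Minimal illustration (not SAS-specific): text generation with a pretrained
# transformer language model via the Hugging Face "transformers" library.
from transformers import pipeline

# Load a small, freely available language model; "gpt2" is just an example.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Generative AI can support the analytics lifecycle by",
    max_new_tokens=40,       # limit the length of the generated continuation
    num_return_sequences=1,  # ask for a single sample
)

print(result[0]["generated_text"])
```

Even a toy example like this makes it easier to see where a generative model fits in a workflow, and where a conventional predictive model would be the better choice.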
Want to learn more?
Free SAS eLearning Courses
Generative AI Using SAS
SAS developed the free e-learning course, Generative AI Using SAS, for analytics professionals who need to know more than how to write a prompt for an LLM. If you want to learn a little about how generative AI works and how it can be integrated into the analytics lifecycle, check it out.
Knowing how to use generative AI is not enough; it is equally important to know how to develop AI systems responsibly. Any type of AI, and especially generative AI, can pose risks to businesses, humanity, the environment, and more. Sometimes the risks of AI are negligible, and sometimes they are unacceptable. There are countless real-world examples illustrating both the importance of assessing and mitigating bias and risk, and the need for trustworthy AI.
Responsible Innovation and Trustworthy AI
SAS developed another free e-learning course, Responsible Innovation and Trustworthy AI, for data scientists, business leaders, analysts, consumers, and anyone affected by AI systems. Anyone implementing AI must have a fundamental understanding of the principles of trustworthy AI, including transparency, accountability, and human centricity.
The urgency of building trustworthy AI is growing with the approval of the European Union Artificial Intelligence Act in March 2024 and the US Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence in October 2023. Just as the GDPR has ushered in industry-wide data privacy reforms since 2016, the EU AI Act affects not only EU companies but also companies that do business with EU citizens.
In other words, almost all of us. While the idea of legislation makes some business leaders uncomfortable, it's great to see governments taking the risks and opportunities of AI seriously. Such regulations are designed to keep everyone safe from unacceptable, high-risk AI systems while encouraging responsible, low-risk AI innovation that improves the world.
Expand your knowledge of AI by taking both Generative AI Using SAS and Responsible Innovation and Trustworthy AI from SAS.
To learn how generative AI works and how it can be integrated into the analytics lifecycle, we must also understand the principles of trustworthy AI.
More learning resources: