Many have described 2023 as the year of AI, and the term appeared on several “words of the year” lists. While AI has had a positive impact on productivity and efficiency in the workplace, it has also introduced a number of emerging risks for businesses.
For example, a recent Harris Poll commissioned by AuditBoard revealed that about half of employed Americans (51%) currently use AI-powered tools for work, no doubt propelled by ChatGPT and other generative AI solutions. At the same time, however, nearly half (48%) said they enter company data into AI tools not supplied by their companies to help them in their work.
This rapid integration of generative AI tools into the workplace presents ethical, legal, privacy, and practical challenges, creating a need for companies to implement new, robust policies around generative AI tools. As it stands, most have not yet done so: a recent Gartner survey revealed that more than half of organizations lack an internal policy on generative AI, and the Harris Poll found that only 37% of employed Americans say their company has a formal policy governing the use of AI-powered tools not provided by the employer.
While it may seem like a daunting task, developing a set of policies and standards now can save organizations from major headaches later.
AI use and governance: risks and challenges
The rapid adoption of generative AI has made it difficult for companies to keep pace with AI risk management and governance, and there is a clear disconnect between adoption and formal policies. The aforementioned Harris Poll found that 64% of respondents perceive the use of AI tools as safe, indicating that many workers and organizations may be overlooking the risks.
These risks and challenges can vary, but three of the most common include:
- Overconfidence. The Dunning-Kruger effect is a cognitive bias in which people overestimate their own knowledge or abilities. We have seen this play out with AI: many users overestimate the capabilities of AI without understanding its limitations. The consequences can be relatively harmless, such as incomplete or inaccurate output, but they can also be far more serious, such as output that violates legal usage restrictions or creates intellectual property risk.
- Security and privacy. AI needs access to large amounts of data to be fully effective, but this sometimes includes personal data or other sensitive information. There are inherent risks that come with using unvetted AI tools, so organizations should ensure they use only tools that meet their data security standards.