Responsible AI is a long-standing commitment at Amazon. From the beginning, we have prioritized responsible AI innovation by building safety, fairness, robustness, security, and privacy into our development processes and training our employees. We strive to improve our customers' lives while establishing and implementing the safeguards needed to help protect them. Our practical approach to turning responsible AI from theory into practice, coupled with tools and expertise, enables AWS customers to implement responsible AI practices effectively within their organizations. To date, we have developed more than 70 internal and external offerings, tools, and mechanisms that support responsible AI; published or funded over 500 research papers, studies, and scientific blogs on responsible AI; and provided tens of thousands of hours of responsible AI training to our Amazon employees. Amazon also continues to expand its portfolio of free responsible AI training courses for people of all ages, backgrounds, and experience levels.
Today we are sharing an update on the progress of our responsible AI efforts, including the introduction of new tools, partnerships, and testing that improve the safety and transparency of our AI services and models.
Delivering new tools and capabilities to safely build and scale generative AI, backed by adversarial-style testing (red teaming)
In April 2024, we announced the general availability of Guardrails for Amazon Bedrock and Model Evaluation on Amazon Bedrock to make it easier to introduce safeguards, prevent harmful content, and evaluate models against key safety and accuracy criteria. We recently added contextual grounding checks in Guardrails to detect hallucinations in model responses for applications that use retrieval-augmented generation (RAG) and summarization. Contextual grounding checks add to the industry-leading safety protections in Guardrails by verifying that the LLM response is grounded in the correct enterprise source data and by evaluating the LLM response to confirm that it is relevant to the user's query or instruction. Contextual grounding checks can detect and filter out over 75% of hallucinated responses for RAG and summarization workloads. Additionally, to support guardrails in applications that use different FMs, Guardrails now offers an ApplyGuardrail API to evaluate user inputs and model responses for custom and third-party FMs available outside of Amazon Bedrock.
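The ApplyGuardrail API can be called on its own, without routing the request through a model hosted on Amazon Bedrock. The following is a minimal sketch using boto3; the guardrail ID and version are placeholder values you would replace with those of a guardrail you have already created:

```python
import json


def build_guardrail_request(guardrail_id: str, guardrail_version: str,
                            text: str, source: str = "OUTPUT") -> dict:
    """Assemble the request payload for the ApplyGuardrail API.

    source="INPUT" evaluates a user prompt; source="OUTPUT" evaluates a
    model response (for example, one returned by a custom or third-party FM).
    """
    return {
        "guardrailIdentifier": guardrail_id,    # placeholder: your guardrail ID
        "guardrailVersion": guardrail_version,  # e.g., "1" or "DRAFT"
        "source": source,
        "content": [{"text": {"text": text}}],
    }


def is_blocked(request: dict) -> bool:
    """Call the ApplyGuardrail API and report whether the guardrail intervened.

    Requires AWS credentials and an existing guardrail; boto3 is imported
    inside the function so the payload builder above stays dependency-free.
    """
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock-runtime")
    response = client.apply_guardrail(**request)
    return response["action"] == "GUARDRAIL_INTERVENED"


# Example: screen a third-party model's response before showing it to a user.
request = build_guardrail_request("my-guardrail-id", "1",
                                  "Model output to screen...", source="OUTPUT")
print(json.dumps(request, indent=2))
```

The same pattern applies to user inputs by passing `source="INPUT"` before the prompt ever reaches the model; the response's `action` field indicates whether the guardrail intervened.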
<iframe title="NEW DEMO – Build responsible AI apps with Guardrails for Amazon Bedrock | Amazon Web Services" width="500" height="281" src="https://www.youtube-nocookie.com/embed/srQxO_o9KgM?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" sandbox="allow-scripts allow-same-origin"></iframe>
In May, we published a new AI Service Card for Amazon Titan Text Premier to further support our investments in responsible and transparent generative AI. AI Service Cards are a form of responsible AI documentation that provides customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and best practices for deployment and performance optimization of our AI services and models. We have created more than 10 AI Service Cards so far to provide transparency to our customers as part of our end-to-end development process, which addresses fairness, explainability, veracity and robustness, governance, transparency, privacy and security, safety, and controllability.
AI systems can also have performance flaws and vulnerabilities that increase risk related to security threats or harmful content. At Amazon, we test our AI systems and models, such as Amazon Titan, using a variety of techniques, including manual red teaming. Red teaming involves human testers probing an AI system for flaws in an adversarial way, and it complements our other testing techniques, which include automated benchmarking against private and publicly available datasets, human evaluation of completions against private datasets, and more. For example, we have developed proprietary evaluation datasets of challenging prompts that we use to assess development progress on Titan Text. We test with multiple use cases, prompts, and datasets, because a single evaluation dataset is unlikely to provide an absolute picture of performance. In total, Titan Text has gone through multiple rounds of red teaming on issues including safety, security, privacy, veracity, and fairness.
Providing watermarking to help users determine whether visual content is AI-generated
A common generative AI use case is the creation of digital content such as images, videos, and audio, but to help prevent misinformation, users need to be able to identify AI-generated content. Techniques like watermarking can be used to confirm whether content came from a particular AI model or vendor. To help reduce the spread of misinformation, all images generated by Amazon Titan Image Generator carry an invisible watermark by default. The watermark is designed to be tamper-resistant, helping to increase transparency around AI-generated content and combat misinformation. We are also introducing a new API (in preview) in Amazon Bedrock that checks for the existence of this watermark and helps you confirm whether an image was generated by Titan Image Generator.
Promoting collaboration between industry and governments on trust and safety risks
Collaboration among industry, governments, researchers, and the AI community is essential to foster the development of safe, responsible, and trustworthy AI. In February 2024, Amazon joined the U.S. Artificial Intelligence Safety Institute Consortium, established by the National Institute of Standards and Technology (NIST). Amazon is collaborating with NIST to establish a new measurement science that enables the identification of scalable and interoperable measurements and methodologies to advance the development of trustworthy AI. We are also contributing $5 million in AWS compute credits to the Institute for the development of tools and methodologies to evaluate the safety of foundation models. Also in February, Amazon signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections at the Munich Security Conference. This is an important part of our collective work to promote safeguards against deceptive activity and protect the integrity of elections.
We continue to look for new ways to engage and foster information sharing between industry and governments as the technology evolves. This includes our work with Thorn and All Tech Is Human to safely design our generative AI services to reduce the risk of their misuse for child exploitation. We are also members of the Frontier Model Forum, which promotes the science, standards, and best practices in the development of frontier AI models.
Harnessing AI as a force for good to address society's greatest challenges and supporting initiatives that foster education
At Amazon, we are committed to advancing the safe and responsible development of AI as a force for good. We continue to see examples across industries where generative AI is helping to address climate change and improve healthcare. BrainBox AI, a pioneer in commercial building technology, launched the world's first generative AI-powered virtual building assistant on AWS to give facility managers and building operators insights that help optimize energy use and reduce carbon emissions. Gilead, an American biopharmaceutical company, is accelerating the development of life-saving medicines with AWS generative AI by assessing the feasibility of clinical studies and optimizing site selection through AI-powered protocol analysis using internal and real-world datasets.
<iframe loading="lazy" title="Amazon Bedrock-powered AI Assistant Can Reduce a Building’s CO2 Footprint | Amazon Web Services" width="500" height="281" src="https://www.youtube-nocookie.com/embed/vsiDWjxhnPE?start=115&feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" sandbox="allow-scripts allow-same-origin"></iframe>
As we explore the transformative potential of these technologies, we believe that education is the foundation for harnessing their benefits and mitigating the risks. That's why we offer education on the potential risks associated with generative AI systems. Amazon employees have completed tens of thousands of hours of responsible AI training since July 2023, covering a range of critical topics such as risk assessments, as well as deep dives into complex considerations around fairness, privacy, and model explainability. As part of Amazon's "AI Ready" initiative to provide free AI skills training to 2 million people around the world by 2025, we've launched new free training courses on the safe and responsible use of AI in our digital learning centers. Courses include "Introduction to Responsible AI" for new cloud learners on AWS Educate, as well as "Responsible AI Practices" and "Security, Compliance, and Governance for AI Solutions" in AWS Skill Builder.
Delivering breakthrough innovation with trust as a priority
As an AI pioneer, Amazon continues to foster the safe, responsible, and trustworthy development of AI technology. We are dedicated to driving innovation on behalf of our customers while establishing and implementing necessary safeguards. We are also committed to working with businesses, governments, academic institutions, and researchers alike to deliver breakthrough innovation in generative AI with trust at the forefront.
About the Author
Vasi Philomin is Vice President of Generative AI at AWS. He leads generative AI initiatives, including Amazon Bedrock and Amazon Titan.