Generative AI has revolutionized industries by creating content, from text and images to audio and code. Although it unlocks numerous possibilities, integrating generative AI into applications demands meticulous planning. Amazon Bedrock is a fully managed service that provides access to large language models (LLMs) and other foundation models (FMs) from leading AI companies through a single API. It provides a broad set of tools and capabilities to help you build generative AI applications.
Starting today, I’ll be writing a blog series to highlight some of the key factors driving customers to choose Amazon Bedrock. One of the most important reasons is that Bedrock enables customers to build a secure, compliant, and responsible foundation for generative AI applications. In this post, I explore how Amazon Bedrock helps address security and privacy concerns, enables secure model customization, accelerates auditability and incident response, and fosters trust through transparency and responsible AI. Plus, I’ll showcase real-world examples of companies building secure generative AI applications on Amazon Bedrock, demonstrating its practical applications across different industries.
Listening to what our customers are saying
During the past year, my colleague Jeff Barr, VP & Chief Evangelist at AWS, and I have had the opportunity to speak with numerous customers about generative AI. They mention compelling reasons for choosing Amazon Bedrock to build and scale their transformative generative AI applications. Jeff’s video highlights some of the key factors driving customers to choose Amazon Bedrock today.
[Video: Top reasons to build & scale generative AI applications on Amazon Bedrock | Amazon Web Services: https://www.youtube-nocookie.com/embed/_Jjmdi__bes]
As you build and operationalize generative AI, it’s important not to lose sight of critically important elements: security, compliance, and responsible AI, particularly for use cases involving sensitive data. The OWASP Top 10 for LLMs outlines the most common vulnerabilities, but addressing them may require additional measures, including stringent access controls, data encryption, prevention of prompt injection attacks, and compliance with organizational policies. You want to make sure your AI applications work reliably as well as securely.
Making data security and privacy a priority
For many organizations starting their generative AI journey, the first concern is making sure their data remains secure and private when used for model tuning or Retrieval Augmented Generation (RAG). Amazon Bedrock takes a multi-layered approach to this concern, helping ensure that your data remains secure and private throughout the entire lifecycle of building generative AI applications:
- Data isolation and encryption. Any customer content processed by Amazon Bedrock, such as customer inputs and model outputs, is not shared with any third-party model providers and is not used to train the underlying FMs. Furthermore, data is encrypted in transit using TLS 1.2+ and at rest with AWS Key Management Service (AWS KMS).
- Secure connectivity options. Customers have flexibility in how they connect to Amazon Bedrock API endpoints. You can use public internet gateways, AWS PrivateLink (VPC endpoints) for private connectivity, and even backhaul traffic over AWS Direct Connect from your on-premises networks.
- Model access controls. Amazon Bedrock provides robust access controls at multiple levels. Model access policies allow you to explicitly allow or deny enabling specific FMs for your account. AWS Identity and Access Management (IAM) policies let you further restrict which provisioned models your applications and roles can invoke, and which APIs on those models can be called. A short sketch after this list illustrates both a PrivateLink endpoint and a scoped IAM policy.
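To make these controls concrete, here is a minimal boto3 sketch, assuming Python and placeholder resource identifiers, that creates an AWS PrivateLink interface endpoint for the Bedrock runtime API and attaches an inline IAM policy limiting an application role to a single foundation model. The VPC, subnet, security group, role name, and model ID are hypothetical; adapt them to your environment.

```python
# Minimal sketch (not production configuration): private connectivity plus
# IAM scoping for Amazon Bedrock. All resource IDs and names are placeholders.
import json
import boto3

region = "us-east-1"

# 1. Interface VPC endpoint so Bedrock runtime traffic stays on the AWS
#    network via AWS PrivateLink.
ec2 = boto3.client("ec2", region_name=region)
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                 # hypothetical VPC
    ServiceName=f"com.amazonaws.{region}.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],        # hypothetical subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],     # hypothetical security group
    PrivateDnsEnabled=True,
)

# 2. Inline IAM policy that lets an application role invoke only one model.
iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": f"arn:aws:bedrock:{region}::foundation-model/amazon.titan-text-express-v1",
        }
    ],
}
iam.put_role_policy(
    RoleName="my-bedrock-app-role",        # hypothetical application role
    PolicyName="AllowSingleBedrockModel",
    PolicyDocument=json.dumps(policy),
)
```

In practice you would likely also add a VPC endpoint policy and broader account-level controls; the point of the sketch is that the network path and the set of invokable models are both controls you configure explicitly.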
Druva provides a data security software-as-a-service (SaaS) solution to enable cyber, data, and operational resilience for all businesses. They used Amazon Bedrock to rapidly experiment with, evaluate, and implement different LLM components tailored to solve specific customer needs around data protection, without worrying about underlying infrastructure management.
“We built our new service Dru — an AI co-pilot that both IT and business teams can use to access critical information about their protection environments and perform actions in natural language — in Amazon Bedrock because it provides fully managed and secure access to an array of foundation models,”
– David Gildea, Vice President of Product, Generative AI at Druva.
Ensuring secure customization
A critical aspect of generative AI adoption for many organizations is the ability to securely customize applications to align with specific use cases and requirements, whether through RAG or by fine-tuning FMs. Amazon Bedrock offers a secure approach to model customization, so sensitive data remains protected throughout the entire process:
- Model customization data security. When fine-tuning a model, Amazon Bedrock uses the encrypted training data from an Amazon Simple Storage Service (Amazon S3) bucket through a private VPC connection. Amazon Bedrock doesn’t use model customization data for any other purpose. Your training data isn’t used to train the base Amazon Titan models or distributed to third parties. Nor is other usage data, such as usage timestamps, logged account IDs, and other information logged by the service, used to train the models. In fact, none of the training or validation data you provide for fine-tuning or continued pre-training is stored by Amazon Bedrock. When the model customization work is complete, the resulting custom model remains isolated and encrypted with your KMS keys (a brief sketch of this flow follows this list).
- Secure deployment of fine-tuned models. The pre-trained or fine-tuned models are deployed in isolated environments specifically for your account. You can further encrypt these models with your own KMS keys, preventing access without appropriate IAM permissions.
- Centralized multi-account model access. AWS Organizations provides you with the ability to centrally manage your environment across multiple accounts. You can create and organize accounts in an organization, consolidate costs, and apply policies for custom environments. For organizations with multiple AWS accounts or a distributed application architecture, Amazon Bedrock supports centralized governance and access to FMs: you can secure your environment, create and share resources, and centrally manage permissions. Using standard AWS cross-account IAM roles, administrators can grant secure access to models across different accounts, enabling controlled and auditable usage while maintaining a centralized point of control.
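As a rough illustration of the customization flow described above, the following boto3 sketch submits a fine-tuning job that reads training data from an S3 bucket through a VPC configuration and encrypts the resulting custom model with a customer managed KMS key. The bucket names, ARNs, base model, and hyperparameter values are placeholder assumptions for illustration, not a production recipe.

```python
# Minimal sketch: fine-tune a base model while keeping training data and the
# resulting custom model under your own network and encryption controls.
# All ARNs, bucket names, and IDs are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="secure-finetune-demo",
    customModelName="my-secure-custom-model",
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",
    baseModelIdentifier=(
        "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-express-v1"
    ),
    # Encrypted training data is read from S3; the output bucket receives
    # job artifacts such as metrics.
    trainingDataConfig={"s3Uri": "s3://my-training-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-output-bucket/customization/"},
    # Route data access through your own VPC for private connectivity.
    vpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
    # Encrypt the resulting custom model with a customer managed KMS key.
    customModelKmsKeyId=(
        "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    ),
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
```

Once the job completes, the custom model appears only in your account and, as noted above, cannot be accessed without the appropriate IAM permissions and KMS key policy.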
With seamless access to LLMs in Amazon Bedrock—and with data encrypted in transit and at rest—BMW Group securely delivers high-quality connected mobility solutions to motorists around the world.
“Using Amazon Bedrock, we’ve been able to scale our cloud governance, reduce costs and time to market, and provide a better service for our customers. All of this is helping us deliver the secure, first-class digital experiences that people across the world expect from BMW.”
– Dr. Jens Kohl, Head of Offboard Architecture, BMW Group.
Enabling auditability and visibility
In addition to the security controls around data isolation, encryption, and access, Amazon Bedrock provides capabilities that enable auditability and accelerate incident response when needed:
- Compliance certifications. For customers with stringent regulatory requirements, you can use Amazon Bedrock in compliance with the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and more. In addition, AWS has successfully extended the registration status of Amazon Bedrock in the Cloud Infrastructure Service Providers in Europe Data Protection Code of Conduct (CISPE CODE) Public Register. This declaration provides independent verification and an added level of assurance that Amazon Bedrock can be used in compliance with the GDPR. For federal agencies and public sector organizations, Amazon Bedrock recently achieved FedRAMP Moderate authorization, approved for use in the US East and US West AWS Regions. Amazon Bedrock is also under JAB review for FedRAMP High authorization in AWS GovCloud (US).
- Monitoring and logging. Native integrations with Amazon CloudWatch and AWS CloudTrail provide comprehensive monitoring, logging, and visibility into API activity, model usage metrics, token consumption, and other performance data. These capabilities enable continuous monitoring for improvement, optimization, and auditing as needed, something we know is critical from working with customers in the cloud for the last 18 years. Amazon Bedrock allows you to enable detailed logging of all model inputs and outputs, including the IAM invocation role and the metadata associated with all calls performed in your account (a short sketch follows this list). These logs help you monitor model responses for adherence to your organization’s AI policies and reputation guidelines. When you enable model invocation logging, you can use AWS KMS to encrypt your log data and use IAM policies to control who can access it. None of this data is stored within Amazon Bedrock; it is available only within your account.
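As a brief example of the logging capability described above, this boto3 sketch enables model invocation logging with delivery to a CloudWatch Logs log group and an S3 bucket that you own. The log group, bucket, and role names are hypothetical, and the delivery options you enable will depend on the modalities you use.

```python
# Minimal sketch: enable Amazon Bedrock model invocation logging so prompts,
# responses, and metadata are delivered to destinations in your account.
# Log group, bucket, and role names below are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/model-invocations",
            "roleArn": "arn:aws:iam::111122223333:role/BedrockLoggingRole",
        },
        "s3Config": {
            "bucketName": "my-bedrock-invocation-logs",
            "keyPrefix": "invocations/",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)

# Inspect the configuration that is currently in effect.
print(bedrock.get_model_invocation_logging_configuration()["loggingConfig"])
```

Because the log destinations live in your account, you can encrypt them with your KMS keys and restrict who can read them with IAM and bucket policies, as described above.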
Implementing responsible AI practices
AWS is committed to developing generative AI responsibly, taking a people-centric approach that prioritizes education, science, and our customers, to integrate responsible AI across the full AI lifecycle. With AWS’s comprehensive approach to responsible AI development and governance, Amazon Bedrock empowers you to build trustworthy generative AI systems in line with your responsible AI principles.
We give our customers the tools, guidance, and resources they need to get started with purpose-built services and features, including several in Amazon Bedrock:
- Safeguard generative AI applications. Guardrails for Amazon Bedrock is the only responsible AI capability offered by a major cloud provider that enables customers to customize and apply safety, privacy, and truthfulness checks for their generative AI applications. Guardrails helps customers block as much as 85% more harmful content than the protection natively provided by some FMs on Amazon Bedrock today. It works with all LLMs in Amazon Bedrock and with fine-tuned models, and it integrates with Agents and Knowledge Bases for Amazon Bedrock. Customers can define content filters with configurable thresholds to help filter harmful content across hate speech, insults, sexual language, violence, misconduct (including criminal activity), and prompt attacks (prompt injection and jailbreaks). Using a short natural language description, Guardrails for Amazon Bedrock lets you detect and block user inputs and FM responses that fall under restricted topics or contain sensitive content such as personally identifiable information (PII). You can combine multiple policy types to configure these safeguards for different scenarios and apply them across FMs on Amazon Bedrock, helping your generative AI applications adhere to your organization’s responsible AI policies and provide a consistent, safe user experience. A short sketch after this list shows a guardrail being created and applied to a model invocation.
- Model evaluation. Now available in preview, Model Evaluation on Amazon Bedrock helps customers evaluate, compare, and select the best FMs for their specific use case based on custom metrics, such as accuracy and safety, using either automatic or human evaluations. For automatic evaluations, customers pick criteria such as accuracy or toxicity and use their own data or public datasets. For evaluations requiring human judgment, they can set up workflows for human review with a few clicks. Once set up, Amazon Bedrock runs the evaluations and produces a report showing how well the model performed on important safety and accuracy measures. This report helps customers choose the best model for their needs, which is especially important when evaluating a migration from an existing model to a new model in Amazon Bedrock for an application.
- Watermark detection. All Amazon Titan FMs are built with responsible AI in mind. Amazon Titan Image Generator, a foundation model that lets users create realistic, studio-quality images in large volumes and at low cost using natural language prompts, embeds imperceptible digital watermarks in the images it creates. Watermark detection allows you to identify images generated by Amazon Titan Image Generator, increasing transparency around AI-generated content and helping to mitigate harmful content generation and reduce the spread of misinformation. It also provides a confidence score, allowing you to assess the reliability of the detection even if the original image has been modified. Simply upload an image in the Amazon Bedrock console, and the API will detect watermarks embedded in images created by Titan Image Generator, including those generated by the base model and any customized versions.
- AI Service Cards provide transparency and document the intended use cases and fairness considerations for our AWS AI services. Our latest service cards include Amazon Titan Text Premier, Amazon Titan Text Lite, and Amazon Titan Text Express, with more coming soon.
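To show how these safeguards fit together in practice, here is a hedged boto3 sketch that creates a guardrail with content filters and a denied topic, then applies its working draft to a model invocation. The filter choices, topic definition, blocked messages, and model ID are illustrative assumptions rather than recommended settings.

```python
# Minimal sketch: create a guardrail with content filters and a denied topic,
# then apply it when invoking a model. Names, messages, and thresholds are
# illustrative only.
import json
import boto3

region = "us-east-1"
bedrock = boto3.client("bedrock", region_name=region)
runtime = boto3.client("bedrock-runtime", region_name=region)

guardrail = bedrock.create_guardrail(
    name="demo-guardrail",
    description="Blocks harmful content and an off-limits topic.",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Financial advice",
                "definition": "Providing personalized investment recommendations.",
                "type": "DENY",
            }
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)

# Apply the guardrail's working draft to a model invocation.
response = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    guardrailIdentifier=guardrail["guardrailId"],
    guardrailVersion="DRAFT",
    body=json.dumps({"inputText": "Summarize our data retention policy."}),
)
print(json.loads(response["body"].read()))
```

When a request or response trips a policy, the configured blocked message is returned instead of model output, which keeps the user experience consistent across every FM the guardrail is attached to.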
Aha! is a software company that helps more than 1 million people bring their product strategy to life.
“Our customers depend on us every day to set goals, collect customer feedback, and create visual roadmaps. That is why we use Amazon Bedrock to power many of our generative AI capabilities. Amazon Bedrock provides responsible AI features, which enable us to have full control over our information through its data protection and privacy policies, and block harmful content through Guardrails for Bedrock.”
– Dr. Chris Waters, co-founder and Chief Technology Officer at Aha!
Building trust through transparency
By addressing security, compliance, and responsible AI holistically, Amazon Bedrock helps customers unlock generative AI’s transformative potential. As generative AI capabilities continue to evolve rapidly, building trust through transparency is crucial. Amazon Bedrock works continuously to support safe and secure development practices, helping you build generative AI applications responsibly.
The bottom line? Amazon Bedrock makes it easier for you to unlock sustained growth with generative AI and experience the power of LLMs. Get started today: build AI applications or customize models securely using your own data, and begin your generative AI journey with confidence.
Resources
For more information about generative AI and Amazon Bedrock, explore the following resources:
About the author
Vasi Philomin is VP of Generative AI at AWS. He leads generative AI efforts, including Amazon Bedrock and Amazon Titan.