In this post, we configure an agent using Amazon Bedrock Agents to act as a software application development assistant.
Agentic workflows are a new approach to building dynamic and complex business workflows with large language models (LLMs) as the reasoning engine, or brain. These agentic workflows decompose natural language tasks into multiple actionable steps, with iterative feedback loops and self-reflection, and use tools and APIs to produce the final result.
Amazon Bedrock Agents helps you accelerate the development of generative AI applications by orchestrating multi-step tasks. Agents use the reasoning power of foundation models (FMs) to break down user-requested tasks into multiple steps. They use developer-provided instructions to create an orchestration plan, then carry out that plan by invoking company APIs and accessing knowledge bases using Retrieval Augmented Generation (RAG) to provide a final response to the end user. This offers tremendous use case flexibility, enables dynamic workflows, and reduces development cost. Amazon Bedrock Agents are instrumental in customizing and tailoring applications to help meet specific project requirements while protecting private data and securing your applications. These agents work with the managed infrastructure capabilities of AWS and Amazon Bedrock, reducing infrastructure management overhead. Additionally, agents streamline workflows and automate repetitive tasks. With the power of AI automation, you can boost productivity and reduce cost.
Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
Solution overview
Typically, a three-tier software application has a user interface (UI) tier, a middle (backend) tier for business APIs, and a database tier. The generative AI-based application building assistant in this post helps you accomplish tasks across all three tiers. It can generate and explain code snippets for the UI and backend tiers in the language of your choice to improve developer productivity and facilitate rapid development of use cases. The agent can recommend software and architecture design best practices using the AWS Well-Architected Framework for overall system design.
For the database tier, the agent can generate SQL queries from natural language questions using a data definition language (DDL) database schema, and run them against a database instance.
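To make this concrete, the following sketch shows a hypothetical DDL schema of the kind a knowledge base might hold, along with the sort of SQL the agent could generate for one of the sample questions. The table name, columns, and data are illustrative assumptions, and sqlite3 stands in for the database instance:

```python
import sqlite3

# Hypothetical DDL schema of the kind stored in the knowledge base.
ddl = """
CREATE TABLE products (
    product_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    unit_price REAL NOT NULL
);
"""

# SQL the agent might generate for "What are the five most expensive products?"
generated_sql = """
SELECT name, unit_price
FROM products
ORDER BY unit_price DESC
LIMIT 5;
"""

# Run the generated query against an in-memory database with sample rows.
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
conn.executemany(
    "INSERT INTO products (name, unit_price) VALUES (?, ?)",
    [("Widget", 9.99), ("Gadget", 24.50), ("Gizmo", 3.25),
     ("Doohickey", 99.00), ("Thingamajig", 42.00), ("Contraption", 18.75)],
)
rows = conn.execute(generated_sql).fetchall()
for name, price in rows:
    print(name, price)
```

Executing the generated SQL against a live (or in-memory) database is also how the agent can validate that the query it produced is syntactically correct before returning it.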
We use Amazon Bedrock Agents with two knowledge bases for this assistant. Amazon Bedrock Knowledge Bases inherently use the Retrieval Augmented Generation (RAG) technique. A typical RAG implementation consists of two parts:
- A data pipeline that ingests documents, typically stored in Amazon Simple Storage Service (Amazon S3), into a knowledge base, namely a vector database such as Amazon OpenSearch Serverless, so that they are available for lookup when a question is received.
- An application that receives a question from the user, looks up relevant information (context) in the knowledge base, creates a prompt that includes the question and the context, and passes it to an LLM to generate a response.
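The two parts can be sketched in miniature as follows. This is a toy stand-in, not the actual pipeline: the real implementation uses an embedding model and a vector store such as OpenSearch Serverless, whereas here simple word overlap substitutes for vector search and the documents are made up:

```python
# Part 1 (ingestion), reduced to a plain list of documents.
documents = [
    "Use VPC security groups to restrict inbound traffic.",
    "Enable S3 server-side encryption for data at rest.",
]

def retrieve(question, docs, top_k=1):
    # Stand-in for vector search: rank documents by word overlap with the question.
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question, context):
    # Part 2: combine the retrieved context and the question into one prompt
    # that would be passed to the LLM.
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

question = "What are some S3 best practices?"
context = "\n".join(retrieve(question, documents))
prompt = build_prompt(question, context)
print(prompt)
```

The prompt printed at the end is what the application would hand to the LLM; in Amazon Bedrock, knowledge base retrieval and prompt construction are handled for you by the managed service.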
The following diagram illustrates how our application building assistant acts as a coding assistant, recommends AWS design best practices, and helps generate SQL code.
Based on the three workflows in the preceding figure, let's explore the types of tasks needed for different use cases:
- Use case 1 – To write and validate a SQL query against a database, use the existing DDL schemas configured as knowledge base 1 to generate the SQL query. The following are sample user queries:
- What are the total sales amounts per year?
- What are the five most expensive products?
- What is the total income of each employee?
- Use case 2 – For recommendations on design best practices, look up the AWS Well-Architected Framework knowledge base (knowledge base 2). The following are sample user queries:
- How can I design secure VPCs?
- What are some S3 best practices?
- Use case 3 – You might want to generate some code, such as a helper function for validating email addresses, or to work with existing code. In this case, use prompt engineering techniques with the default agent LLM to generate the email validation code. The following are sample user queries:
- Write a Python function to validate the email address syntax.
- Explain the following code to me in clear, natural language.
$code_to_explain
(This variable is populated with the contents of any code file of your choice. More details can be found in the notebook.)
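For use case 3, the agent's output would be ordinary code in the requested language. The following is an illustrative example of what it might produce for the email validation prompt; the function name and regular expression are assumptions, and the pattern is deliberately simplified (full RFC 5322 validation is considerably more involved):

```python
import re

# Simplified syntax check of the kind the agent might generate for
# "Write a Python function to validate the email address syntax."
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if the address matches a common email syntax pattern."""
    return EMAIL_PATTERN.match(address) is not None

print(is_valid_email("user@example.com"))  # True
print(is_valid_email("not-an-email"))      # False
```

Because this use case relies only on the base LLM rather than a knowledge base, the quality of the result depends largely on how clearly the prompt states the requirements.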
Prerequisites
To run this solution in your AWS account, complete the following prerequisites:
- Clone the GitHub repository and follow the steps explained in the README.
- Configure an Amazon SageMaker notebook on an ml.t3.medium Amazon Elastic Compute Cloud (Amazon EC2) instance. For this post, we provide an AWS CloudFormation template, available in the GitHub repository. The CloudFormation template also provides the required AWS Identity and Access Management (IAM) access to set up the vector database, SageMaker resources, and AWS Lambda.
- Acquire access to models hosted on Amazon Bedrock. Choose Manage model access in the navigation pane of the Amazon Bedrock console and choose from the list of available options. We use Anthropic's Claude v3 (Sonnet) and Amazon Titan Text Embeddings v2 on Amazon Bedrock for this post.
Implement the solution
In the GitHub repository notebook, we cover the following learning objectives:
- Choose the underlying FM for your agent.
- Write clear and concise agent instructions to use one of the two knowledge bases and the base agent LLM. (Examples are shown later in this post.)
- Create and associate an action group with an API schema and a Lambda function.
- Create, associate, and ingest data into the two knowledge bases.
- Create, invoke, test, and deploy the agent.
- Build UI and backend code with LLM.
- Recommend AWS best practices for system design using AWS Well-Architected Framework guidelines.
- Build, run, and validate SQL from natural language understanding using LLMs, few-shot examples, and a database schema as a knowledge base.
- Clean up agent resources and their dependencies using a script.
Agent instructions and user prompts
The instructions for the App Builder Assistant agent are as follows.
Each user question to the agent includes the following system message by default. Note that the system message remains the same for every agent invocation; only {user_question_to_agent} is replaced with the user's query.
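The substitution step can be sketched as a simple template fill. The template text below is illustrative, not the exact system message used in the post; only the {user_question_to_agent} placeholder is taken from the source:

```python
# Hypothetical system-message template; the placeholder name matches the
# one described above, but the surrounding wording is an assumption.
SYSTEM_MESSAGE_TEMPLATE = (
    "You are an application-building assistant. "
    "Answer the question using the configured knowledge bases and tools.\n"
    "Question: {user_question_to_agent}"
)

def build_agent_input(user_question: str) -> str:
    # Substitute the user's query into the otherwise fixed template.
    return SYSTEM_MESSAGE_TEMPLATE.format(user_question_to_agent=user_question)

print(build_agent_input("What are the five most expensive products?"))
```

Keeping the system message fixed and substituting only the question gives the agent a consistent framing across invocations, which makes its orchestration behavior more predictable.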