Generative AI is rapidly transforming the modern workplace, offering unprecedented capabilities that augment the way we interact with text and data. At Amazon Web Services (AWS), we recognize that many of our customers rely on the familiar Microsoft Office suite of applications, including Word, Excel, and Outlook, as the backbone of their daily workflows. In this blog post, we showcase a powerful solution that seamlessly integrates AWS generative AI capabilities in the form of large language models (LLMs) based on Amazon Bedrock into the Office experience. By harnessing the latest advancements in generative AI, we empower employees to unlock new levels of efficiency and creativity within the tools they already use every day. Whether it's drafting compelling text, analyzing complex datasets, or gaining deeper insights from information, integrating generative AI with the Office suite transforms the way teams approach their essential work. Join us as we explore how your organization can leverage this transformative technology to drive innovation and boost employee productivity.
Solution overview
Figure 1: Solution architecture overview
The solution architecture in Figure 1 shows how Office applications interact with a serverless backend hosted in the AWS Cloud through an add-in. This architecture allows users to leverage Amazon Bedrock's generative AI capabilities directly from the Office suite, enabling greater productivity and insights within their existing workflows.
Components in depth
Office Add-ins
Office Add-ins allow extending Office products with custom extensions built on standard web technologies. Using AWS, organizations can host and serve Office Add-ins for users worldwide with minimal infrastructure overhead.
An Office Add-in is composed of two elements:
- An XML manifest file that defines the add-in's settings, such as its name, permissions, and the location of its web application
- A web application that provides the add-in's user interface and logic, which can be hosted on AWS
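For illustration, a minimal manifest for a Word task pane add-in could look like the following sketch; the GUID, names, and source URL are placeholders, not values from this solution:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal illustrative manifest for a Word task pane add-in; IDs and URLs are placeholders -->
<OfficeApp xmlns="http://schemas.microsoft.com/office/appforoffice/1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:type="TaskPaneApp">
  <Id>00000000-0000-0000-0000-000000000000</Id>
  <Version>1.0.0.0</Version>
  <ProviderName>Example Provider</ProviderName>
  <DefaultLocale>en-US</DefaultLocale>
  <DisplayName DefaultValue="Generative AI Assistant"/>
  <Description DefaultValue="Word add-in backed by a generative AI backend on AWS"/>
  <Hosts>
    <Host Name="Document"/>
  </Hosts>
  <DefaultSettings>
    <!-- Location of the add-in web application hosted on AWS -->
    <SourceLocation DefaultValue="https://example.com/index.html"/>
  </DefaultSettings>
  <Permissions>ReadWriteDocument</Permissions>
</OfficeApp>
```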
The code fragment below demonstrates part of a function that could be executed whenever a user invokes the add-in, performing the following actions:
- Initiate a request to the generative AI backend, providing the user's query and available context in the request body
- Integrate the results from the backend's response into the Word document using the Microsoft Office JavaScript API. Note that these APIs use namespace-like objects, removing the need for explicit imports. Instead, we use the globally available namespaces, such as Word, to directly access the relevant APIs, as shown in the following example fragment.
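The sketch below illustrates this under a few assumptions: the backend invoke URL is a placeholder, and the query, context, and generated_text JSON fields are an illustrative contract rather than a fixed API:

```typescript
// Minimal sketch of an add-in handler (hypothetical backend URL and JSON fields).
// The global Word namespace is provided by Office.js; no explicit import is needed.
async function handleUserQuery(userQuery: string): Promise<void> {
  await Word.run(async (context) => {
    // Read the user's current selection to pass along as context for the LLM
    const selection = context.document.getSelection();
    selection.load("text");
    await context.sync();

    // Initiate a request to the generative AI backend (placeholder invoke URL)
    const response = await fetch(
      "https://{restapi-id}.execute-api.{region}.amazonaws.com/{stage}/generate",
      {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query: userQuery, context: selection.text }),
      }
    );
    const result = await response.json();

    // Integrate the LLM output into the document: replaces highlighted text,
    // or inserts at the cursor position when nothing is selected
    selection.insertText(result.generated_text, Word.InsertLocation.replace);
    await context.sync();
  });
}
```

Because insertText with Word.InsertLocation.replace operates on the current selection, the same function covers both inserting new text at the cursor and replacing a highlighted passage.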
Generative AI backend infrastructure
The AWS Cloud backend consists of three components:
- Amazon API Gateway acts as the entry point, receiving requests from the Office applications' add-in. API Gateway supports multiple mechanisms for controlling and managing access to an API.
- AWS Lambda handles the REST API integration, processing the requests and invoking the appropriate AWS services.
- Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that best suits your use case. With Bedrock's serverless experience, you can get started quickly, privately customize FMs with your own data, and quickly integrate and deploy them into your applications using AWS tools without having to manage any infrastructure.
LLM prompting
Amazon Bedrock allows you to choose from a wide selection of foundation models for prompting. Here, we use Anthropic's Claude 3.5 Sonnet on Amazon Bedrock for completions. The system prompt we use in this example is as follows:
```
You are an office assistant helping humans to write text for their documents.
(When preparing the answer, take into account the following text: {context})
Before answering the question, think through it step-by-step within the <thinking> tags.
Then, detect the user's language from their question and store it in the form of an ISO 639-1 code within the <user_language> tags.
Then, develop your answer in the user's language within the <response> tags.
```
In the prompt, we first give the LLM a persona, stating that it is an office assistant helping humans. The second, optional line contains text that has been selected by the user in the document and is provided as context to the LLM. We specifically instruct the LLM to first mimic a step-by-step thought process for arriving at the answer (chain-of-thought reasoning), an effective prompt-engineering measure to improve the quality of the output. Next, we instruct it to detect the user's language from their question so we can refer to it later. Finally, we instruct the LLM to develop its answer using the previously detected user language within the <response> tags, which serve as the final response. While we use the default configuration for inference parameters such as temperature here, these can easily be configured with every LLM prompt. The user input is then added as a user message to the prompt and sent via the Amazon Bedrock Messages API to the LLM.
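As a minimal sketch of this flow (the model ID, max_tokens value, helper name invoke_llm, and the response-tag parsing are illustrative assumptions, not a definitive implementation), the prompt could be assembled and sent as follows:

```python
# Minimal sketch of sending the prompt through the Messages API on Amazon Bedrock.
# The model ID, max_tokens, and <response>-tag parsing are assumptions for illustration.
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

SYSTEM_PROMPT = """You are an office assistant helping humans to write text for their documents.
(When preparing the answer, take into account the following text: {context})
Before answering the question, think through it step-by-step within the <thinking> tags.
Then, detect the user's language from their question and store it in the form of an ISO 639-1 code within the <user_language> tags.
Then, develop your answer in the user's language within the <response> tags."""


def invoke_llm(user_query: str, selected_text: str = "") -> str:
    """Send the system prompt and user message via the Messages API and return the answer."""
    request_body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,  # inference parameters such as temperature are left at their defaults
        "system": SYSTEM_PROMPT.format(context=selected_text),
        "messages": [{"role": "user", "content": user_query}],
    }
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        body=json.dumps(request_body),
    )
    completion = json.loads(response["body"].read())["content"][0]["text"]
    # The final answer is carried inside the <response> tags
    return completion.split("<response>")[-1].split("</response>")[0].strip()
```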
Demo setup and configuration details in an AWS account
As a prerequisite, we must make sure that we are working in an AWS Region with Amazon Bedrock support for the chosen foundation model (here, we use Anthropic's Claude 3.5 Sonnet). Also, access to the required Amazon Bedrock foundation models must be added. For this demo setup, we describe the manual steps taken in the AWS console. If required, this setup can also be defined in infrastructure as code.
To set up the integration, follow these steps:
- Create an AWS Lambda function with the Python runtime and the code below to act as the backend for the API. Make sure that Powertools for AWS Lambda (Python) is available in the runtime, for example, by attaching a Lambda layer to the function. Make sure that the Lambda function role provides access to the required FM.
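For example, a minimal identity-based policy statement along these lines could be attached to the function's execution role (the model ID matches the model used in this post; scope the Resource to your Region as needed):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
    }
  ]
}
```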
The following code block shows a sample implementation for the REST API Lambda integration based on a Powertools for AWS Lambda (Python) REST API event handler.
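Below is a minimal sketch; the /generate route and the query, context, and generated_text field names are assumptions consistent with the add-in fragment shown earlier:

```python
# Minimal sketch of the REST API Lambda integration; the /generate route and the
# query/context/generated_text field names are illustrative assumptions.
import json

import boto3
from aws_lambda_powertools import Logger
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
app = APIGatewayRestResolver()
bedrock_runtime = boto3.client("bedrock-runtime")

# Shortened here; use the full system prompt shown earlier, including the {context} line
SYSTEM_PROMPT = "You are an office assistant helping humans to write text for their documents."


@app.post("/generate")
def generate() -> dict:
    """Forward the user's query and optional document context to the LLM."""
    payload = app.current_event.json_body
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        body=json.dumps(
            {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 1024,
                "system": SYSTEM_PROMPT,
                "messages": [{"role": "user", "content": payload["query"]}],
            }
        ),
    )
    completion = json.loads(response["body"].read())["content"][0]["text"]
    return {"generated_text": completion}


@logger.inject_lambda_context
def handler(event: dict, context: LambdaContext) -> dict:
    # Resolve the API Gateway proxy event to the matching route above
    return app.resolve(event, context)
```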
- Create an API Gateway REST API with a Lambda proxy integration to expose the Lambda function via a REST API. You can follow this tutorial for creating a REST API for the Lambda function by using the API Gateway console. By creating a Lambda proxy integration with a proxy resource, we can route requests for any resource path to the Lambda function. Follow the tutorial to deploy the API and take note of the API's invoke URL. Make sure to configure adequate access control for the REST API.
Now we can invoke and test our function via the API's invoke URL. The following example uses curl to send a request (make sure to replace all placeholders in curly braces as required), followed by the response generated by the LLM.
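Assuming the /generate route and the JSON fields from the Lambda sketch above, the request could look like this:

```bash
curl -X POST "https://{restapi-id}.execute-api.{region}.amazonaws.com/{stage}/generate" \
  -H "Content-Type: application/json" \
  -d '{"query": "Write a short paragraph about the benefits of serverless architectures.", "context": ""}'
```

The API responds with a JSON body of the form {"generated_text": "..."}, where the value carries the LLM's completion.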
If needed, the created resources can be cleaned up by (1) deleting the API Gateway REST API and (2) deleting the REST API Lambda function and the associated IAM role.
Example use cases
To create an interactive experience, the Office Add-in integrates with the cloud backend that implements conversational capabilities with support for additional context retrieved from the Office JavaScript API.
Next, we demonstrate two different use cases supported by the proposed solution: text generation and text refinement.
Text generation
Figure 2: Text generation use case demo
In the demo in Figure 2, we show how the add-in prompts the LLM to produce a text from scratch. The user enters their query with some context into the add-in's text input area. Upon submission, the backend prompts the LLM to generate the respective text and returns it to the frontend. From the add-in, it is inserted into the Word document at the cursor position using the Office JavaScript API.
Text refinement
Figure 3: Text refinement use case demo
In Figure 3, the user highlighted a text segment in the work area and entered a prompt in the add-in's text input area to rephrase the text segment. Again, the user input and the highlighted text are processed by the backend and returned to the add-in, replacing the previously highlighted text.
Conclusion
This blog post showcases how the transformative power of generative AI can be incorporated into Office processes. We described an end-to-end sample of integrating Office products with an add-in for text generation and manipulation with the power of LLMs. In our example, we used managed LLMs on Amazon Bedrock for text generation. The backend is hosted as a fully serverless application in the AWS Cloud.
Text generation with LLMs in Office supports employees by streamlining their writing process and boosting productivity. Employees can harness the power of generative AI to generate and edit high-quality content quickly, freeing up time for other tasks. Furthermore, the integration with a familiar tool like Word provides a seamless user experience, minimizing disruptions to existing workflows.
To learn more about boosting productivity, building differentiated experiences, and innovating faster with AWS, visit the Generative AI on AWS page.
About the authors
Martin Maritch is a Generative AI Architect at AWS ProServe focusing on generative AI and MLOps. He helps enterprise customers achieve business outcomes by unlocking the full potential of AWS AI/ML services.
Miguel Pestana is a Cloud Application Architect on the AWS Professional Services team with over 4 years of experience in the automotive industry delivering cloud-native solutions. Outside of work, Miguel enjoys spending his days at the beach or with a padel racket in one hand and a glass of sangria in the other.
Carlos Antonio Perea Gomez is a Builder with AWS Professional Services. He enables customers to become awesome throughout their journey to the cloud. When not up in the cloud, he enjoys scuba diving in deep waters.