Generative AI is revolutionizing the way developers approach programming by providing intelligent assistance and automation throughout the coding process. With the power of advanced language models and machine learning (ML) algorithms, generative AI can understand the context and intent behind a programmer’s code, offering valuable suggestions, completing code snippets, and even generating entire functions or modules based on high-level descriptions. This technology empowers developers to focus on higher-level problem-solving and architecture, while the AI handles the tedious and repetitive aspects of coding. One of the key advantages of large language models (LLMs) in programming is their ability to learn from the vast amounts of existing code and programming patterns they were trained on. This knowledge allows them to generate context-aware code, detect potential bugs or vulnerabilities, and offer optimizations to improve code quality and performance.
In this post, we highlight how the AWS Generative AI Innovation Center collaborated with SailPoint Technologies to build a generative AI-based coding assistant that uses Anthropic’s Claude Sonnet on Amazon Bedrock to help accelerate the development of software as a service (SaaS) connectors.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
SailPoint specializes in enterprise identity security solutions. Over 3,000 enterprises worldwide use SailPoint to help defend against today’s dynamic, identity-centric cyber threats while enhancing productivity and efficiency. Their products are designed to manage and secure access to applications and data through the lens of identity, at speed and scale, for users inside an organization and for external parties such as non-employees. SailPoint’s unified, intelligent, and extensible environment provides comprehensive identity governance capabilities, including access certifications, policy management, access request and provisioning, password management, and data access governance. This helps organizations make sure the right individuals have the right access to the right resources at the right times, thereby enforcing security policies and compliance requirements. Founded in 2005, SailPoint has grown to be a key player in identity security, serving customers globally across various industries.
SailPoint connectors and SaaS connectivity
SailPoint’s identity security solutions interface with various software as a service (SaaS) applications to retrieve the necessary information, such as account and access information, from an identity security standpoint. Each SaaS application implements these functionalities in slightly different ways and might expose their implementation through REST-based web APIs that are typically supported by OpenAPI specifications. SailPoint connectors are TypeScript modules that interface with a SaaS application and map the relevant identity security information (such as accounts and entitlements) to a standardized format understood by SailPoint. Based on the APIs exposed by the application, SailPoint connectors can create, update, and delete access on those accounts. SailPoint connectors help manage user identities and their access rights across different environments within an organization, supporting the organization’s compliance and security efforts.
Although a typical connector exposes several functions, for this post, we focus on developing the list user function of a connector that connects to an API endpoint for listing users, retrieving all the users, and transforming them into the format required by SailPoint.
In the following sections, we detail how we used Anthropic’s Claude Sonnet on Amazon Bedrock to automatically create the list user connector, a critical component of SailPoint’s broader SaaS connectivity.
Understanding the list user connector
Connectors are modules that can connect to an external service and retrieve and update relevant information from a SaaS application. To better understand how connectors are built, we give an example of a connector function that connects to the getUsers endpoint of DocuSign’s REST API. The following TypeScript code defines an asynchronous function listUsers that retrieves a list of user accounts from an external API and constructs a structured output for each user.
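A minimal sketch of such a function, based on the breakdown that follows, might look like the following; the configuration access pattern and the DocuSign field names (for example, paging.next and memberships) are illustrative assumptions rather than an exact transcript:

```typescript
import {
  Context,
  Response,
  StdAccountListHandler,
  StdAccountListOutput,
} from '@sailpoint/connector-sdk';

const listUsers: StdAccountListHandler = async (context, input, res) => {
  // Configuration access is simplified for this sketch; a real connector
  // reads these values from its connector configuration.
  const { apiKey, hostUrl, organizationId } = context as any;

  // Initial endpoint that returns the users of a specific organization
  let url = `${hostUrl}/organizations/${organizationId}/users`;
  let hasMore = true;

  while (hasMore) {
    // Fetch the current page of users; the Bearer scheme is illustrative
    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const results: any = await response.json();

    // Build a standardized output object for each user
    for (const user of results.users) {
      const account: StdAccountListOutput = {
        identity: user.id,
        attributes: {
          user_name: user.user_name,
          first_name: user.first_name,
          last_name: user.last_name,
          user_status: user.user_status,
          email: user.email,
          group_ids: (user.memberships ?? []).map((m: any) => m.group_id),
        },
      };
      res.send(account);
    }

    // Follow the next-page link if one is present; otherwise stop looping
    if (results.paging?.next) {
      url = results.paging.next;
    } else {
      hasMore = false;
    }
  }
};
```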
The following is a breakdown of what each part of the code does:
- Imports – The code imports several types and interfaces from @sailpoint/connector-sdk. These include Context, Response, StdAccountListHandler, and StdAccountListOutput, which are used to handle the input and output of the function in a standardized way within a SailPoint environment.
- Function definition – listUsers is defined as an asynchronous function compatible with the StdAccountListHandler interface. It uses the Context to access configuration details like API keys and the base URL, and a Response to structure the output.
- Retrieve API key and host URL – These are extracted from the context parameter. They are used to authenticate and construct the request URL.
- URL construction – The function constructs the initial URL using the hostUrl and organizationId from the context. This URL points to an endpoint that returns users associated with a specific organization.
- Loop through pages – The while loop continues as long as there are more pages of data (hasMore is true). It serves the following functions:
  - Fetch data – Inside the while loop, a fetch request is made to the API endpoint. The request includes an Authorization header that uses the apiKey. The API’s response is converted to JSON format.
  - Process users – Inside the while loop, the function extracts user data from the API response. It loops through each user, constructing a StdAccountListOutput object for each one. This object includes the user identifier and attributes such as the user name, first and last names, status, email, and group IDs.
  - Pagination – Inside the while loop, the function checks whether there is a next page URL in the pagination information (results.paging.next). If it exists, it updates the url for the next iteration of the loop. If not, it sets hasMore to false to stop the loop.
Walking through this example clarifies the step-by-step process of building this function in a connector. We aim to reproduce this process using an LLM with a prompt chaining strategy.
Generate a TypeScript connector using an LLM prompt chain
There are several approaches to using pre-trained LLMs for code generation, with varying levels of complexity:
- Single prompt – You can use models like Anthropic’s Claude to generate code by direct prompting. These models can generate code in a variety of languages, including TypeScript, but they don’t inherently possess domain-specific knowledge relevant to the task of building a connector. All the required information, including API specifications and formatting instructions, must be provided in the prompt, similar to the instructions that would be given to a developer. However, LLMs tend to struggle when given a long list of complex instructions. It’s also difficult for the prompt engineer to understand which steps are challenging for the LLM.
- Agentic frameworks with LLMs – Agents are sophisticated frameworks that can use tools to perform a sequence of complex tasks. In this case, the agent starts by breaking down the user request into steps, searches for necessary information using tools (a knowledge base or web browser), and autonomously generates code from start to finish. Although they’re powerful, these frameworks are complex to implement, often unstable in their behavior, and less controllable than other methods. Agents also require many LLM calls to perform a task, which makes them rather slow in practice. When the logic to perform a task is a fixed sequence of steps, agents are not an efficient option.
- Prompt chain – A solution that finds a good trade-off between the two previous approaches involves using a prompt chaining technique. This method breaks the complex problem into a series of more manageable steps and integrates them to craft the final code. Each step has clear instructions that are easier for the LLM to follow, and a human in the loop can control the output of each step and correct the LLM if needed. This approach strikes a balance between flexibility and control, avoiding the extremes of the other two methods.
We initially tested the LLM’s ability to generate connector code based on a single prompt and realized that it struggles to generate code that addresses all aspects of the problem, such as pagination or nested data structures. To make sure the LLM would cover all the necessary components of the connector functions, and because creating a connector follows a fixed sequence of steps, prompt chaining was the most natural approach to improve the generated code.
The chain we used for connector generation consists of the following high-level steps:
- Parse the data model of the API response into prescribed TypeScript classes.
- Generate the function for user flattening in the format expected by the connector interface.
- Understand the pagination of the API specs and formulate a high-level solution.
- Generate the code for the ListUsers function by combining all the intermediate steps.
Step 1 is used as an input to Step 2, but Step 3 is separate. Both Step 2 and Step 3 results are fed to Step 4 for the final result. The following diagram illustrates this workflow.
In the following sections, we dive into the prompting techniques we used for each of these steps.
System prompt
The system prompt is an essential component of LLM prompting that typically provides the initial context to guide the model’s response. For all the prompts in the chain, we used the following system prompt:
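The wording below is an illustrative sketch rather than the exact prompt used; it reflects the role, goal, instructions, and boundaries described in the next paragraph:

```
You are an expert web developer. Your task is to read REST API specifications
and write TypeScript code that retrieves data from those APIs. Add comments to
the code you write to explain what it does. Use only the information provided
to you; do not make up any information.
```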
More specifically, the system prompt is used to establish the role of the LLM (expert web developer), give it a general goal (understand API specs and write TypeScript code), give high-level instructions (add comments in the code), and set boundaries (do not make up information).
Data model parsing
In this step, we prompt the LLM to understand the structure of the API response and create TypeScript classes corresponding to the objects in the response. Although this step isn’t strictly necessary for generating the response, it can help the LLM immensely in generating a correct connector. Similar to chain-of-thought reasoning for arithmetic problems, it forces the LLM to “think” before responding.
This step offers two primary benefits:
- Verbose API response simplification – API responses specified in the documentation can be quite verbose. By converting the response structure into TypeScript classes, we compress the information into fewer lines of code, making it more concise and less complicated for the LLM to comprehend. This step helps ensure that the essential information is prominently displayed at the start.
- Handling fragmented user responses – In some APIs, the user response is composed of several fragments because of the reuse of data structures. The OpenAPI specification uses the $ref tag to reference these reusable components. By converting the user response into TypeScript classes, we can consolidate all the relevant information into a single location. This consolidation simplifies the downstream steps by providing a centralized source of information.
We use the following task prompt to convert the API response into prescribed TypeScript classes:
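The sketch below illustrates what such a task prompt can look like; the tag names and exact wording are assumptions rather than the prompt used in the project:

```
Below is the API specification of a REST endpoint that returns a list of users:

<api_spec>
{api_spec}
</api_spec>

Write TypeScript classes that represent the data model of the API response.
Create one class for the top-level response object and one class for each
nested object it contains, resolving any $ref references into their own
classes. Output only the classes, inside <data_model></data_model> tags.
```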
In the preceding prompt template, the variable {api_spec} is replaced with the API specification of the endpoint. A specific example for a DocuSign ListUsers endpoint is provided in the appendix.
The following code is an example of the LLM-generated classes when applied to the DocuSign API specs, parsed out of the XML tags in the model’s response.
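The classes below are a representative sketch of that output; the DocuSign field names shown are illustrative and not exhaustive:

```typescript
// Representative data model for the DocuSign users response
class Membership {
  group_id: string;
  account_id: string;
}

class User {
  id: string;
  user_name: string;
  first_name: string;
  last_name: string;
  user_status: string;
  email: string;
  memberships: Membership[];
}

class Paging {
  result_set_size: number;
  result_set_start_position: number;
  result_set_end_position: number;
  total_set_size: number;
  next: string;
  previous: string;
}

class UsersResponse {
  users: User[];
  paging: Paging;
}
```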
User flattening function generation
The expected structure for each user is an object consisting of two properties: an identifier and a dictionary of attributes. The attributes dictionary is a map that associates string keys with either primitive attributes (number, Boolean, or string) or an array of primitive attributes. Because of the potential for arbitrarily nested JSON object structures in the response, we use the capabilities of an LLM to generate a user flattening and conversion function. Both the user ID and the attributes are extracted from the response. By employing this approach, we separate out the intricate task of converting the user structure in the REST API response into the format required by the SailPoint connector SDK (hereafter referred to as the connector SDK).
The benefits of this approach are twofold. First, it allows for a cleaner and more modular code design, because the complex conversion process is abstracted away from the main code base. Second, it enables greater flexibility and adaptability, because the conversion function can be modified or regenerated to accommodate changes in the API response structure or the connector SDK requirements, without necessitating extensive modifications to the surrounding code base.
We use the following prompt to generate the conversion function, which takes as input the data model generated in the previous step:
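The following sketch illustrates such a prompt; the tag names and exact wording are assumptions:

```
Below are TypeScript classes that describe the user object returned by a REST
API:

<data_model>
{data_model}
</data_model>

Write a TypeScript function named flattenUser that takes a single user object
as input and returns an object with two properties: an identifier and a map of
attributes. Attribute values must be primitives (string, number, or Boolean)
or arrays of primitives, so flatten any nested objects accordingly. Add
comments to the code and output it inside <code></code> tags.
```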
In the preceding prompt template, we replace the {data_model} variable with the TypeScript data model classes extracted in the previous step.
The following code is an example of the LLM-generated user flattening function when applied to the DocuSign API:
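The function below is a representative sketch rather than an exact transcript of the model output; it assumes the User and Membership classes from the data model sketch above:

```typescript
import { StdAccountListOutput } from '@sailpoint/connector-sdk';

// Converts a single DocuSign user object into the standardized connector output
function flattenUser(user: User): StdAccountListOutput {
  return {
    // The DocuSign user id serves as the account identifier
    identity: user.id,
    attributes: {
      user_name: user.user_name,
      first_name: user.first_name,
      last_name: user.last_name,
      user_status: user.user_status,
      email: user.email,
      // Nested memberships are flattened into an array of group IDs
      group_ids: (user.memberships ?? []).map((m) => m.group_id),
    },
  };
}
```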
Pagination understanding
As mentioned earlier, the REST API can implement one or more pagination schemes. Often, the pagination details aren’t explicitly mentioned. During the development of the chain, we found that when there are multiple pagination schemes, the LLM would mix up elements of different pagination schemes and output code that isn’t coherent and sometimes also contains errors. Because looping over the paged results is a crucial step, we separate out this step in the code generation to let the LLM understand the pagination scheme implemented by the API and formulate its response at a high level before outputting the code. This allows the LLM to think step by step in formulating the response. This step generates the intermediate reasoning, which is fed into the next and final step: generating the list users function code.
We use the following prompt to get the pagination logic. Because we’re using Anthropic’s Claude Sonnet on Amazon Bedrock, we ask the LLM to output the logic in XML format, which is known to be an efficient way to structure information for that model.
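An illustrative sketch of such a prompt follows; the tag names and exact wording are assumptions:

```
Below is the API specification of a REST endpoint that returns a list of
users, followed by additional documentation about the API:

<api_spec>
{api_spec}
</api_spec>

<api_info>
{api_info}
</api_info>

Identify the pagination scheme implemented by this endpoint. If more than one
scheme is supported, choose the most suitable one for retrieving all users.
Describe step by step how to loop over all pages of results. Do not write any
code yet. Output your reasoning inside <pagination_logic></pagination_logic>
tags.
```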
In the preceding prompt template, the variable {api_spec} is replaced with the API specification. An example of the DocuSign API is provided in the appendix at the end of this post. The variable {api_info} can be replaced with additional API documentation in natural language, which is left as an empty string in the DocuSign example.
The following is the LLM’s response for the pagination logic extraction in the case of the DocuSign API, parsed out of the XML tags:
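The exact wording varies between runs; this sketch captures the kind of reasoning produced for an API that exposes a paging.next link, matching the looping behavior shown earlier:

```
The endpoint returns users one page at a time. Each response contains a paging
object whose next property holds the URL of the next page of results.

To retrieve all users:
1. Call the endpoint with the initial URL for the organization's users.
2. Process the users array in the response.
3. If paging.next is present, request that URL and repeat from step 2.
4. If paging.next is absent, all users have been retrieved; stop looping.
```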
ListUsers function generation
This final step in the chain combines the pagination logic and the user flattening function generated in the previous steps to formulate the final response: the TypeScript function that retrieves a list of users from the provided API.
We use the following prompt to generate the complete TypeScript function:
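The sketch below illustrates such a prompt; the tag names, template, and exact wording are assumptions:

```
Below is a helper function that converts a single user object into the format
expected by the connector:

<flatten_user_function>
{flatten_user_function}
</flatten_user_function>

Below is a description of the pagination scheme of the API:

<pagination_logic>
{pagination_logic}
</pagination_logic>

Complete the following TypeScript function so that it retrieves every user
from the API, following the pagination scheme described above, converts each
user with the helper function, and sends the result to the response object:

const listUsers: StdAccountListHandler = async (context, input, res) => {
    // your code here
};

Add comments to the code and output the completed function inside
<code></code> tags.
```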
In this prompt, we replace {flatten_user_function} with the flattenUser function generated earlier and {pagination_logic} with the pagination logic generated earlier. We provide a template for the listUsers function to make sure the final output meets the requirements for the connector function. The resulting output is the following listUsers function, which uses the flattenUser function from earlier:
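The following is a representative sketch of that output; as before, the configuration access and DocuSign field names are illustrative:

```typescript
import { StdAccountListHandler } from '@sailpoint/connector-sdk';

// Representative generated handler; it reuses the flattenUser function above
// and the same simplified configuration access as the earlier example.
const listUsers: StdAccountListHandler = async (context, input, res) => {
  const { apiKey, hostUrl, organizationId } = context as any;

  let url = `${hostUrl}/organizations/${organizationId}/users`;
  let hasMore = true;

  while (hasMore) {
    // Fetch the current page of users
    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const results: any = await response.json();

    // Flatten each user into the connector output format and send it
    for (const user of results.users) {
      res.send(flattenUser(user));
    }

    // Continue while the API reports another page of results
    if (results.paging?.next) {
      url = results.paging.next;
    } else {
      hasMore = false;
    }
  }
};
```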
Lessons learned
In this post, we demonstrated how LLMs can address complex code generation problems by employing various core prompting principles and the prompt chaining technique. Although LLMs excel at following clearly defined instructions and generating small code snippets, this use case involved a substantial amount of contextual information in the form of API specifications and user instructions. Our findings from this exercise are the following:
- Decomposing complex problems – Breaking down a complex code generation problem into several intermediate steps of lower complexity enhances the LLM’s performance. Providing a single complex prompt can result in the LLM missing some instructions. The prompt chaining approach enhances the robustness of the generation, maintaining better adherence to instructions.
- Iterative optimization – This method allows for iterative optimization of intermediate steps. Each part of the chain can be refined independently before moving to the next step. LLMs can be sensitive to minor changes in instructions, and adjusting one aspect can unintentionally affect other objectives. Prompt chaining offers a systematic way to optimize each step independently.
- Handling complex decisions – In the section on understanding pagination, we illustrated how LLMs can reason through various options and make complex decisions before generating code. For instance, when the input API specification supports multiple pagination schemes, we prompted the LLM to decide on the pagination approach before implementing the code. With direct code generation, without using an intermediate reasoning step, the LLM tended to mix elements of different pagination schemes, resulting in inconsistent output. By forcing decision-making first, in natural language, we achieved more consistent and accurate code generation.
Through automated code generation, SailPoint was able to dramatically reduce connector development time from hours or days to mere minutes. The approach also democratizes code development, so you don’t need deep TypeScript expertise or intimate familiarity with SailPoint’s connector SDK. By accelerating connector generation, SailPoint significantly shortens the overall customer onboarding process. This streamlined workflow not only saves valuable developer time but also enables faster integration of diverse systems, ultimately allowing customers to use SailPoint’s identity security solutions more rapidly and effectively.
Conclusion
Our AI-powered solution for generating connector code opens up new possibilities for integrating with REST APIs. By automating the creation of connectors from API specifications, developers can rapidly build robust connections to any REST API, saving developer time and reducing the time to value for onboarding new customers. As demonstrated in this post, this technology can significantly streamline the process of working with diverse APIs, allowing teams to focus on using the data and functionality these APIs provide rather than getting overwhelmed by connector code details. Consider how such a solution could enhance your own API integration efforts—it could be the key to more efficient and effective use of the myriad APIs available in today’s interconnected digital landscape.
About the Authors
Erik Huckle is the product lead for AI at SailPoint, where he works to solve critical customer problems in the identity security ecosystem through generative AI and data technologies. Prior to SailPoint, Erik co-founded a startup in robotic automation and later joined AWS as the first product hire at Amazon One. Erik mentors local startups and serves as a board member and tech committee lead for an edtech nonprofit organization.
Tyler McDonnell is the engineering head of AI at SailPoint, where he leads the development of AI solutions to drive innovation and impact in the identity security world. Prior to SailPoint, Tyler led machine learning research and engineering teams at several early- to late-stage startups and published work in domains spanning software maintenance, information retrieval, and deep learning. He’s passionate about building products that use AI to bring positive impact to real people and problems.
Anveshi Charuvaka is a Senior Applied Scientist at the Generative AI Innovation Center, where he helps customers adopt generative AI by implementing solutions for their critical business challenges. With a PhD in Machine Learning and over a decade of experience, he specializes in applying innovative machine learning and generative AI techniques to address complex real-world problems.
Aude Genevay is a Senior Applied Scientist at the Generative AI Innovation Center, where she helps customers tackle critical business challenges and create value using generative AI. She holds a PhD in theoretical machine learning and enjoys turning cutting-edge research into real-world solutions.
Mofijul Islam is an Applied Scientist II at the AWS Generative AI Innovation Center, where he helps customers tackle complex, customer-centric research challenges using generative AI, large language models (LLMs), multi-agent learning, and multimodal learning. He holds a PhD in machine learning from the University of Virginia, where his work focused on multimodal machine learning, multilingual NLP, and multitask learning. His research has been published in top-tier conferences like NeurIPS, ICLR, AISTATS, and AAAI, as well as IEEE and ACM Transactions.
Yasin Khatami is a Senior Applied Scientist at the Generative AI Innovation Center. With more than a decade of experience in artificial intelligence (AI), he implements state-of-the-art AI products for AWS customers to drive efficiency and value for customer platforms. His expertise is in generative AI, large language models (LLMs), multi-agent techniques, and multimodal learning.
Karthik Ram is a Principal Solutions Architect with Amazon Web Services based in Columbus, Ohio. He works with independent software vendors (ISVs) to build secure and innovative cloud solutions, including helping with their products and solving their business problems using data-driven approaches. Karthik’s area of depth is cloud security, with a focus on infrastructure security and threat detection.
Appendix
The following API specifications were used for the experiments in this post: