In Portkey AI's Gateway framework, Guardrails are a key component designed to make interactions with large language models more reliable and secure. Specifically, Guardrails ensure that requests and responses conform to predefined checks, reducing the risks posed by variable or harmful LLM outputs.
Portkey AI offers an integrated, secure platform that enforces these checks on LLM behavior in real time. This matters because LLMs are inherently brittle and often fail in unexpected ways. Obvious failures manifest as API downtime or error codes such as 400 or 500. More insidious are failures in which a response carries a 200 status code yet still disrupts an application's workflow because the output is malformed or incorrect. The Guardrails in the Gateway framework address this by validating both inputs and outputs against predefined checks.
The Guardrail system includes predefined checks such as regular-expression matching, JSON schema validation, and code detection for languages such as SQL, Python, and TypeScript. In addition to these deterministic checks, Portkey AI also supports LLM-based Guardrails that can detect gibberish or scan for prompt injections, protecting against subtler failure modes. Over 20 types of Guardrail checks are currently supported, each configurable to your needs. The system also integrates with partner Guardrail platforms, including Aporia, SydeLabs, and Pillar Security: by adding API keys, users can apply those platforms' policies to their Portkey calls.
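To make the deterministic checks concrete, here is a minimal sketch of what a regex check and a JSON schema check conceptually do. These helper functions are illustrative assumptions for this article, not Portkey's actual implementation:

```python
import json
import re

def regex_check(text: str, pattern: str) -> bool:
    """Pass if the text matches the required pattern."""
    return re.search(pattern, text) is not None

def json_schema_check(text: str, required_keys: set) -> bool:
    """Pass if the text parses as a JSON object containing the required keys."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and required_keys.issubset(data)

response = '{"answer": "42", "confidence": 0.9}'
print(regex_check(response, r'"answer"'))                      # True
print(json_schema_check(response, {"answer", "confidence"}))   # True
print(json_schema_check("plain text, not JSON", {"answer"}))   # False
```

In Portkey, checks like these are configured in the dashboard rather than written by hand; the sketch only shows the kind of validation each check performs on a request or response body.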
Putting Guardrails into production takes four steps: create Guardrail checks, define Guardrail actions, enable Guardrails through configs, and attach those configs to requests. A user creates a Guardrail by selecting one of the available checks and then defining what actions to take based on the results, such as logging the result, denying the request, creating an evaluation dataset, falling back to another model, or retrying the request.
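As a sketch, the four steps culminate in a config object attached to requests. The field names below are illustrative assumptions about the shape of such a config, not Portkey's exact schema; consult Portkey's documentation for the real field names:

```python
# Hypothetical Portkey-style config attaching guardrails to requests.
# "input" guardrails would run before the LLM call; "output" guardrails
# would run on the model's response. The IDs are made-up placeholders.
guardrail_config = {
    "input_guardrails": ["my-regex-check"],
    "output_guardrails": ["my-json-schema-check"],
}

# The config would then be attached to a request, e.g. via an SDK client:
# client.chat.completions.create(..., config=guardrail_config)
print(guardrail_config["input_guardrails"])   # ['my-regex-check']
```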
The behavior of the Portkey Guardrail system is highly configurable based on the outcome of each check. For example, a config can specify that when a check fails, the request is denied outright or is processed but returned with a distinct status code. This flexibility is key for an organization that wants to strike a balance between security concerns and operational efficiency.
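The per-check actions described above might be expressed as settings like the following. These field names are assumptions for illustration only, not a verbatim copy of Portkey's action schema:

```python
# Hypothetical action settings for a single guardrail check.
guardrail_actions = {
    "deny": True,      # on failure, reject the request outright...
    "async": False,    # ...which requires the check to run synchronously
    "feedback_on_fail": {"value": -1},  # feedback to log for evaluation datasets
}
print(guardrail_actions["deny"])   # True
```

Setting `deny` to `False` would instead let the request through while the failure is logged, which is the kind of security-versus-efficiency trade-off the paragraph above describes.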
One of the most powerful aspects of Portkey Guardrails is their integration with the broader Gateway framework, which orchestrates request handling. That orchestration takes into account whether a Guardrail is configured to run asynchronously or synchronously. In asynchronous mode, Portkey simply records the Guardrail's verdict without affecting the request; in synchronous mode, the verdict directly determines how the request is handled. For example, a failed synchronous check can return a dedicated status code, such as 446, indicating that the request should not be processed.
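From the caller's side, a synchronous guardrail verdict surfaces as a status code to branch on. The sketch below assumes a client receiving gateway responses and uses the 446 code the article mentions; the handler function and fallback string are hypothetical:

```python
GUARDRAIL_BLOCKED = 446  # status code the article cites for a failed synchronous check

def handle_gateway_response(status: int, body: str) -> str:
    """Route a gateway response based on its status code (illustrative only)."""
    if status == 200:
        return body
    if status == GUARDRAIL_BLOCKED:
        # Synchronous guardrail verdict: the request was denied, so fall
        # back to a safe default instead of passing output downstream.
        return "[blocked by guardrail]"
    raise RuntimeError(f"gateway error: {status}")

print(handle_gateway_response(200, "hello"))    # hello
print(handle_gateway_response(446, "unsafe"))   # [blocked by guardrail]
```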
Portkey AI keeps records of Guardrail results, including how many checks passed or failed, how long each check took, and the feedback attached to each request. This logging is valuable for organizations building evaluation datasets to continuously improve the quality of their AI models and the Guardrails that protect them.
In conclusion, Guardrails on Portkey AI's Gateway framework represent a robust solution to the intrinsic risks of running LLMs in production. With comprehensive checks and actions, Portkey helps ensure that AI applications remain secure, compliant, and reliable in the face of unpredictable LLM behavior.
Take a look at the GitHub wiki (Guardrails on the Gateway Framework) and the product documentation for details. All credit for this research goes to the researchers of this project.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary engineer and entrepreneur, Asif is committed to harnessing the potential of AI for social good. His most recent initiative is the launch of an AI media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is technically sound and easily understandable to a wide audience. The platform has over 2 million monthly views, illustrating its popularity among readers.