Microsoft is launching a new feature called “Correction” that builds on the company’s efforts to combat AI inaccuracies. Customers using Microsoft Azure to power their AI systems can now automatically detect and rewrite incorrect content in AI outputs.
The correction feature is available in preview as part of Azure AI Studio, a suite of safety tools designed to detect vulnerabilities, find “hallucinations,” and block malicious prompts. Once enabled, the correction system scans AI outputs for inaccuracies by comparing them against the customer’s source material.
From there, it highlights the error, explains why it is incorrect, and rewrites the content in question, all “before the user can see” the inaccuracy. While this sounds like a useful way to address the nonsense that AI models often produce, it may not be a completely reliable solution.
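The general idea, comparing generated text against a source document and flagging unsupported claims for rewriting, can be illustrated with a toy heuristic. The sketch below is purely hypothetical and is not Microsoft’s actual method (which, per the company, uses language models rather than word overlap); the function names and the 0.5 threshold are invented for illustration.

```python
# Toy sketch of "groundedness" checking: flag generated sentences whose
# content words barely overlap with the source document. Real systems use
# language models; this crude overlap heuristic only illustrates the workflow.
import re

def content_words(text: str) -> set:
    """Lowercase alphabetic tokens longer than 3 chars, as a crude content filter."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def flag_ungrounded(answer: str, source: str, threshold: float = 0.5) -> list:
    """Return sentences from `answer` whose overlap with `source` falls below threshold."""
    source_words = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)  # candidate for correction/rewriting
    return flagged

source = "The Azure service entered preview in 2024 and supports grounding documents."
answer = "The service entered preview in 2024. It was invented by Alan Turing in 1950."
print(flag_ungrounded(answer, source))  # the second, unsupported sentence is flagged
```

A production system would then route each flagged sentence back through a model, with the source document in context, to explain and rewrite the unsupported claim before the user sees it.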
Vertex AI, Google’s cloud platform for companies building AI systems, has a feature that “grounds” AI models by comparing outputs against Google Search, the company’s own data, and (soon) third-party datasets.
In a statement to TechCrunch, a Microsoft spokesperson said the “correction” system uses “small language models and large language models to align outputs with grounding documents,” meaning it is not immune to making mistakes either. “It is important to note that groundedness detection does not solve for ‘accuracy,’ but helps to align generative AI outputs with grounding documents,” Microsoft said.