Designing computational workflows for AI applications such as chatbots and coding assistants is complex because of the need to manage numerous heterogeneous parameters, including prompts and ML hyperparameters. Post-deployment errors require manual updates, which adds to the challenge. The study explores optimization problems aimed at automating the design and updating of these workflows. Given their intricate nature, involving interdependent steps and semi-black-box operations, traditional optimization techniques such as Bayesian optimization and reinforcement learning are often too inefficient to be practical. LLM-based optimizers have been proposed to improve efficiency, but most still rely on scalar feedback and handle only single-component workflows.
Researchers at Microsoft Research and Stanford University propose a framework called Trace to automate the design and updating of AI systems such as coding assistants and robots. Trace treats the computational workflow as a graph, similar to neural networks, and optimizes heterogeneous parameters through Optimization with Trace Oracle (OPTO). Trace efficiently converts workflows into OPTO instances, allowing a general-purpose optimizer, OptoPrime, to iteratively update parameters based on execution traces and feedback. This approach improves optimization efficiency across multiple domains, outperforming specialized optimizers on tasks such as prompt optimization, hyperparameter tuning, and robot controller design.
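As a rough sketch of this iterative loop, consider the illustrative Python below. It is not the Trace library's actual API: the toy numeric workflow, the signed-error feedback, and the heuristic update rule are stand-ins for a real workflow, a Trace Oracle, and an LLM-based optimizer such as OptoPrime.

```python
# Illustrative sketch of a Trace/OPTO-style optimization loop (not the real Trace API).

def workflow(params, x):
    """A tiny two-step workflow whose behavior depends on heterogeneous parameters."""
    trace = []                                    # execution trace: (step, inputs, output)
    y = params["scale"] * x
    trace.append(("scale_step", {"x": x, "scale": params["scale"]}, y))
    z = y + params["offset"]
    trace.append(("offset_step", {"y": y, "offset": params["offset"]}, z))
    return z, trace

def feedback_fn(output, target=10.0):
    """Feedback at the output; a signed error here, but it could be a score or text."""
    return target - output

def propose_update(params, trace, feedback, lr=0.05):
    """Toy stand-in for an LLM-based optimizer: use the trace and feedback to edit parameters."""
    new = dict(params)
    new["offset"] += lr * feedback
    new["scale"] += lr * feedback
    return new

params = {"scale": 1.0, "offset": 0.0}
output = None
for _ in range(50):
    output, trace = workflow(params, x=3.0)   # run the workflow and record its execution
    fb = feedback_fn(output)                  # obtain feedback on the final output
    params = propose_update(params, trace, fb)
print(params, output)
```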
Existing frameworks such as LangChain, Semantic Kernel, AutoGen, and DSPy allow for composing and optimizing computational workflows, primarily using scalar feedback and black-box search techniques. In contrast, Trace uses execution tracing for automatic optimization, generalizing the computational graph abstraction to fit many kinds of workflows. Trace's OPTO framework supports joint optimization of instructions, hyperparameters, and code with rich feedback and dynamically adapts to changes in the workflow structure. It extends the principles of AutoDiff to non-differentiable workflows, enabling efficient self-adaptive agents and general-purpose optimization in diverse applications, outperforming specialized multi-task optimizers.
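The "AutoDiff for non-differentiable workflows" idea can be pictured with a minimal sketch: wrap ordinary Python functions so that every call is recorded as a node in a computational graph, even when the operation (string formatting, an LLM call) has no gradient. The `traced` decorator and `GRAPH` list below are hypothetical illustrations, not the Trace library's real primitives.

```python
import functools

# Hypothetical tracing decorator, analogous in spirit to how AutoDiff frameworks
# record operations; this is not the Trace library's actual API.
GRAPH = []  # recorded (op_name, inputs, output) triples

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        out = fn(*args, **kwargs)
        GRAPH.append((fn.__name__, args, out))   # record the edge: inputs -> output
        return out
    return wrapper

@traced
def render_prompt(template, question):
    # Non-differentiable string operation, still traceable.
    return template.format(question=question)

@traced
def fake_llm_call(prompt):
    # Placeholder for a semi-black-box step such as an LLM call.
    return f"answer to: {prompt}"

answer = fake_llm_call(render_prompt("Q: {question} A:", "What is OPTO?"))
for op, inputs, output in GRAPH:
    print(op, "->", repr(output))
```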
OPTO forms the foundation of Trace, defining a graph-based abstraction for iterative optimization. A computational graph is a DAG whose nodes represent objects and whose edges denote input-output relationships. In OPTO, an optimizer selects parameters, and the Trace Oracle returns trace feedback consisting of the computational graph and feedback provided at the output. This feedback can include scores, gradients, or natural-language suggestions. The optimizer uses this feedback to iteratively update the parameters. Unlike black-box settings, execution tracing provides a clear path from parameters to the output, enabling efficient parameter updates. Trace leverages OPTO to optimize various workflows by abstracting away design- and domain-specific components.
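Concretely, an OPTO instance can be pictured as trainable parameters, a Trace Oracle that maps a parameter choice to a pair (execution graph, feedback at the output), and an optimizer that consumes that pair. The sketch below renders this abstraction in plain Python with a toy oracle and a rule-based optimizer step; it is an illustration under those assumptions, not the library's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in the traced DAG: an object plus the nodes it was computed from."""
    name: str
    value: object
    parents: list = field(default_factory=list)

def trace_oracle(params):
    """Hypothetical Trace Oracle: executes a toy workflow for the chosen parameters
    and returns (output node of the computational graph, feedback at the output)."""
    p = Node("param", params["threshold"])
    x = Node("input", 0.7)
    decision = Node("decide", x.value > p.value, parents=[x, p])
    feedback = ("correct" if decision.value
                else "output should have been True; consider lowering the threshold")
    return decision, feedback

def optimizer_step(params, output_node, feedback):
    """Toy optimizer: walks back from the output and edits the parameter.
    In Trace this role is played by a general-purpose optimizer such as OptoPrime."""
    if "lowering the threshold" in feedback:
        params = {**params, "threshold": params["threshold"] - 0.1}
    return params

params = {"threshold": 0.9}
for _ in range(5):
    out, fb = trace_oracle(params)
    params = optimizer_step(params, out, fb)
print(params)
```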
The LLM-based optimization algorithm OptoPrime is designed for the OPTO problem. It leverages the coding and debugging capabilities of LLMs to handle execution-trace subgraphs. Trace feedback is presented to the LLM as a pseudo-code report, allowing it to suggest parameter updates. OptoPrime includes a memory module that keeps track of previous parameter-feedback pairs, which improves robustness. Experiments show the effectiveness of OptoPrime in numerical optimization, traffic control, prompt optimization, and long-horizon robot control tasks. OptoPrime demonstrates superior performance compared to other optimizers, particularly when leveraging execution-trace information and memory.
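The sketch below illustrates, in hypothetical code rather than OptoPrime's actual implementation, the two ingredients described above: rendering the traced subgraph and feedback as a pseudo-code-style report for the LLM, and a memory of past parameter-feedback pairs that is folded into that report. The report format and memory structure are assumptions for illustration only.

```python
from collections import deque

class Memory:
    """Keeps the most recent (parameters, feedback) pairs."""
    def __init__(self, size=5):
        self.buffer = deque(maxlen=size)
    def add(self, params, feedback):
        self.buffer.append((params, feedback))
    def render(self):
        return "\n".join(f"params={p} -> feedback={f}" for p, f in self.buffer)

def render_report(trace, feedback, params, memory):
    """Present the execution trace as pseudo-code plus feedback and past attempts,
    so an LLM can propose new parameter values."""
    code_view = "\n".join(f"{out} = {op}({', '.join(map(str, inputs))})"
                          for op, inputs, out in trace)
    return (
        "# Trainable parameters\n" + str(params) + "\n\n"
        "# Execution trace (as pseudo-code)\n" + code_view + "\n\n"
        "# Feedback on the final output\n" + feedback + "\n\n"
        "# Past attempts\n" + memory.render() + "\n\n"
        "Suggest new values for the trainable parameters."
    )

memory = Memory()
memory.add({"prompt": "v1"}, "answer was too verbose")
trace = [("llm_call", ["prompt_v2", "question"], "answer")]
print(render_report(trace, "answer missed the key fact", {"prompt": "v2"}, memory))
```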
Trace converts computational workflow optimization problems into OPTO problems, an approach demonstrated effectively by the OPTO optimizer OptoPrime. This marks an initial step toward a new optimization paradigm with several future directions. Advances in LLM reasoning, such as Chain-of-Thought, few-shot prompting, tool use, and multi-agent workflows, could improve or inspire new OPTO optimizers. A hybrid workflow combining LLMs and search algorithms with specialized tools could lead to a general-purpose OPTO optimizer. Specializing the propagator for particular computations, especially large graphs, and developing optimizers capable of counterfactual reasoning could further improve efficiency. Non-textual contexts and feedback could also broaden Trace's applicability.
Review the Details, Project, and GitHub. All credit for this research goes to the researchers of this project. Also, don't forget to follow us on Twitter and join our Telegram Channel and LinkedIn Group. If you like our work, you will love our Newsletter.
Don't forget to join our 47k+ ML SubReddit.
Find upcoming AI webinars here.
Sana Hassan, a consulting intern at Marktechpost and a dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, she brings a fresh perspective to the intersection of AI and real-life solutions.