Large Language Models (LLMs) can perform a wide range of tasks without fine-tuning when given few-shot demonstrations, i.e., a handful of example inputs and outputs for a task. Chain-of-thought prompting, which adds intermediate reasoning steps to each demonstration, can help LLMs perform even better. However, demonstration quality strongly affects the few-shot performance of LLMs, particularly on reasoning tasks that require varied and sophisticated reasoning patterns. Manually creating a large and diverse pool of examples for demonstration selection is costly and time-consuming, while relying on only a small number of demonstrations can prevent LLMs from generalizing across different test inputs.
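To make the setup concrete, here is a minimal sketch of what a chain-of-thought demonstration looks like inside a few-shot prompt. The wording of the demonstration is illustrative and not taken from the paper:

```python
# A few-shot chain-of-thought prompt pairs each question with an
# explicit reasoning chain before the final answer (illustrative example).
DEMO_TEMPLATE = """Q: A shop sells pens at $2 each. How much do 4 pens cost?
A: Each pen costs $2, and there are 4 pens, so the total is 4 * 2 = $8.
The answer is 8.

Q: {question}
A:"""

# The formatted prompt would be sent to an LLM, which is expected to
# produce a reasoning chain followed by the answer for the new question.
prompt = DEMO_TEMPLATE.format(
    question="Tom has 3 boxes with 5 apples each. How many apples does he have?"
)
```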
This study proposes a novel approach called SYNTHETIC PROMPTING, which leverages the knowledge and generative power of LLMs to augment a small set of demonstrations with self-synthesized examples, and then uses these examples to elicit better reasoning from the model. Specifically, given a few seed demonstrations, each consisting of a question and a chain of reasoning, the researchers prompt an LLM to generate additional examples by alternating between two processes: (1) a backward process, in which the LLM synthesizes a question from a self-generated reasoning chain, ensuring that the question is answerable and well defined; and (2) a forward process, in which the LLM produces a reasoning chain for the synthesized question and then refines it so that it is more accurate and consistent with the question.
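The sketch below shows one way such a backward-forward synthesis loop could be implemented. The `complete` function is a stand-in for any LLM text-completion API, and the prompt templates and the `question`/`reasoning` record format are our own assumptions, not the paper's exact prompts:

```python
import random

def complete(prompt: str) -> str:
    """Placeholder for an LLM completion call (e.g., an OpenAI-style API).
    Assumed to return the model's text continuation of the prompt."""
    raise NotImplementedError

def synthesize_examples(seed_demos: list[dict], num_needed: int) -> list[dict]:
    """Grow a demonstration pool by alternating backward and forward passes.
    Each demo is a dict with 'question' and 'reasoning' keys (our assumption)."""
    pool = list(seed_demos)
    while len(pool) < num_needed:
        # A few existing demos serve as in-context examples for synthesis.
        context = "\n\n".join(
            f"Question: {d['question']}\nReasoning: {d['reasoning']}"
            for d in random.sample(pool, k=min(4, len(pool)))
        )
        # Backward process: first self-generate a reasoning chain, then
        # synthesize a well-defined question that the chain answers.
        reasoning = complete(
            f"{context}\n\nWrite a new step-by-step reasoning chain:\n"
        )
        question = complete(
            f"{context}\n\nReasoning: {reasoning}\n"
            "Write a clear, well-defined question that this reasoning answers:\n"
        )
        # Forward process: re-derive and refine the reasoning chain for the
        # synthesized question so it stays accurate and consistent with it.
        refined = complete(
            f"{context}\n\nQuestion: {question}\n"
            "Answer step by step, making the reasoning precise and consistent:\n"
        )
        pool.append({"question": question, "reasoning": refined})
    return pool
```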
This process repeats until enough synthetic examples have been generated. The researchers then present a new in-cluster complexity-based selection scheme that aims to maximize the diversity and informativeness of the demonstrations: the examples are clustered, and the most complex one (the one with the longest reasoning chain) is chosen from each cluster. Finally, the selected demonstrations are used to prompt the LLM to generate a reasoning chain for a test question, from which the answer is derived. They evaluate their approach on a variety of reasoning tasks, including symbolic, algorithmic, and numerical reasoning.
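A possible implementation of this in-cluster complexity-based selection is sketched below, assuming questions are clustered by k-means over sentence embeddings; the embedding model (`embed`), the cluster count, and the use of newline-separated reasoning steps as the complexity measure are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder for a sentence-embedding model returning one vector per text."""
    raise NotImplementedError

def select_demonstrations(pool: list[dict], k: int = 4) -> list[dict]:
    """Cluster synthesized examples and pick the most complex demo per cluster,
    where complexity is approximated by the number of reasoning steps."""
    vectors = embed([d["question"] for d in pool])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)
    selected = []
    for cluster in range(k):
        members = [d for d, label in zip(pool, labels) if label == cluster]
        # Longest reasoning chain = most steps (newline-separated here).
        selected.append(max(members, key=lambda d: len(d["reasoning"].split("\n"))))
    return selected
```

The selected demonstrations would then be concatenated into the prompt for each test question, as described above.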
They show that, in few-shot settings, their approach can greatly improve the performance of LLMs, outperforming state-of-the-art techniques by up to 15.6% in absolute terms. Their contributions are as follows:
• They propose SYNTHETIC PROMPTING, a novel technique that augments a limited set of demonstrations with examples the LLM synthesizes itself, in order to elicit better reasoning from the model.
• They introduce an in-cluster complexity-based scheme to select diverse and informative demonstrations from the augmented set for inference.
• They demonstrate the effectiveness of their approach on three reasoning tasks, where it significantly outperforms previous methods.
There is no publicly available code implementation as of now.