LLMs have shown strong performance in knowledge graph question answering (KGQA) by leveraging planning and interactive strategies to query knowledge graphs. Many existing approaches rely on SPARQL-based tools to retrieve information, allowing models to generate precise answers. Some methods improve the reasoning abilities of LLMs by constructing tool-based reasoning paths, while others employ decision-making frameworks that use environmental feedback to interact with knowledge graphs. Although these strategies have improved KGQA accuracy, they often blur the distinction between tool use and genuine reasoning. This conflation reduces interpretability, degrades readability, and increases the risk of hallucinated tool invocations, where models generate incorrect or irrelevant responses due to over-reliance on parametric knowledge.
To address these limitations, researchers have explored memory-augmented techniques that provide external knowledge storage to support complex reasoning. Prior work has integrated memory modules for long-term context retention, enabling more reliable decision-making. Early KGQA methods used key-value memory and graph neural networks to infer answers, while recent LLM-based approaches leverage large-scale models for improved reasoning. Some strategies employ supervised fine-tuning to enhance understanding, while others use discriminative techniques to mitigate hallucinations. However, existing KGQA methods still struggle to separate reasoning from tool invocation, leading to a lack of focus on logical inference.
Researchers from the Harbin Institute of Technology propose Memory-augmented Query Reconstruction (MemQ), a framework that separates reasoning from tool invocation in LLM-based KGQA. MemQ establishes a structured query memory using LLM-generated descriptions of decomposed query statements, enabling independent reasoning. This approach improves readability by generating explicit reasoning steps and retrieving relevant memory based on semantic similarity. MemQ improves interpretability and reduces hallucinated tool use by eliminating reliance on unnecessary tools. Experimental results show that MemQ achieves state-of-the-art performance on the WebQSP and CWQ benchmarks, demonstrating its effectiveness in improving LLM-based KGQA reasoning.
MemQ is designed to separate reasoning from tool invocation in LLM-based KGQA through three key tasks: memory construction, knowledge reasoning, and query reconstruction. Memory construction involves storing query statements with corresponding natural-language descriptions for efficient retrieval. The knowledge reasoning process generates structured multi-step reasoning plans, ensuring logical progression toward the answer. Query reconstruction then retrieves relevant query statements based on semantic similarity and assembles them into a final query. MemQ enhances reasoning by fine-tuning the LLM on explanation-statement pairs and employs an adaptive memory retrieval strategy, outperforming prior methods on the WebQSP and CWQ benchmarks with state-of-the-art results.
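The retrieve-then-assemble step above can be illustrated with a minimal sketch. Note that this is an assumption-laden toy, not the authors' implementation: the memory entries, the bag-of-words "embedding", and the triple-pattern statements are all hypothetical stand-ins (a real system would use a neural encoder and the paper's actual query memory).

```python
from collections import Counter
import math

# Hypothetical query memory: each entry pairs an LLM-generated natural-language
# description with a reusable query statement (illustrative triple patterns).
MEMORY = [
    ("find the person's place of birth", "?x ns:people.person.place_of_birth ?y"),
    ("find the person's spouse", "?x ns:people.person.spouse_s ?y"),
    ("find the country's capital city", "?x ns:location.country.capital ?y"),
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a neural encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(reasoning_step: str, top_k: int = 1) -> list[str]:
    """Return the query statements whose descriptions best match a reasoning step."""
    q = embed(reasoning_step)
    scored = sorted(MEMORY, key=lambda e: cosine(q, embed(e[0])), reverse=True)
    return [stmt for _, stmt in scored[:top_k]]

# Assemble a final query by retrieving one statement per step of the reasoning plan.
plan = ["find the person's place of birth", "find the country's capital city"]
statements = [retrieve(step)[0] for step in plan]
final_query = " . ".join(statements)
```

The key design point mirrored here is the separation of concerns: the LLM only produces the natural-language plan, while query construction is handled by retrieval over the memory, so the model never invents tool calls directly.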
The experiments evaluate MemQ's performance on knowledge graph question answering using the WebQSP and CWQ datasets. Hits@1 and F1 serve as evaluation metrics, with comparisons against tool-based baselines such as RoG and ToG. MemQ, built on Llama2-7b, outperforms previous methods, demonstrating improved reasoning through a memory-augmented approach. Analytical experiments highlight superior structural and edge accuracy. Ablation studies confirm MemQ's effectiveness in tool use and reasoning stability. Additional analyses explore reasoning errors, hallucinations, data efficiency, and model universality, demonstrating its adaptability across architectures. MemQ significantly improves structured reasoning while reducing errors on multi-step queries.
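For readers unfamiliar with the two metrics, the standard definitions used in KGQA evaluation can be written in a few lines. This is a generic sketch of Hits@1 and answer-set F1, not code from the paper; the example inputs are invented.

```python
def hits_at_1(ranked_predictions: list[list[str]], gold: list[set[str]]) -> float:
    """Hits@1: fraction of questions whose top-ranked prediction is a gold answer."""
    correct = sum(
        1 for preds, answers in zip(ranked_predictions, gold)
        if preds and preds[0] in answers
    )
    return correct / len(gold)

def answer_f1(predicted: set[str], gold: set[str]) -> float:
    """F1 between a predicted answer set and the gold answer set for one question."""
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```

Hits@1 rewards getting the single best answer right, while F1 also penalizes missing or spurious answers, which matters for questions with multiple correct entities.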
In conclusion, the study introduces MemQ, a memory-augmented framework that separates LLM reasoning from tool invocation to reduce hallucinations in KGQA. MemQ refines query reconstruction and improves the clarity of reasoning by incorporating a query memory module. The approach enables natural-language reasoning while mitigating errors in tool use. Experiments on the WebQSP and CWQ benchmarks show that MemQ surpasses existing methods, achieving state-of-the-art results. By addressing the conflation of tool use and reasoning, MemQ improves the readability and accuracy of LLM-generated responses, offering a more effective approach to KGQA.
Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on <a target="_blank" href="https://x.com/intent/follow?screen_name=marktechpost" rel="noreferrer noopener">Twitter</a> and don't forget to join our 80k+ ML SubReddit.

Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.