The <a target="_blank" href="https://github.com/langchain-ai/langgraph-reflection" rel="nofollow noopener">LangGraph Reflection Framework</a> is an agentic framework that offers a powerful way to improve language model outputs through an iterative process of critique and generation. This article breaks down how to implement a reflection agent that validates Python code using Pyright and improves its quality using GPT-4o mini. AI agents play a crucial role in this framework, automating decision-making by combining reasoning, reflection, and feedback mechanisms to improve model performance.
Learning objectives
- Understand how the LangGraph Reflection Framework works.
- Learn how to implement the framework to improve the quality of Python code.
- See how well the framework works through a practical test.
This article was published as part of the Data Science Blogathon.
LangGraph Reflection Framework Architecture
The LangGraph Reflection Framework follows a simple yet effective agent architecture:
- Main agent: Generates initial code based on the user's request.
- Critique agent (judge): Validates the generated code using Pyright.
- Reflection process: If errors are detected, the main agent is called again to refine the code until no problems remain. A conceptual sketch of this loop is shown below.
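Conceptually, the loop can be sketched in plain Python (illustrative pseudocode only, not the framework's API; the actual implementation uses create_reflection_graph, shown in the steps below):

# Conceptual sketch of the reflection loop (illustrative only).
def reflection_loop(user_request, generate, critique, max_iterations=5):
    messages = [{"role": "user", "content": user_request}]
    draft = None
    for _ in range(max_iterations):
        draft = generate(messages)              # main agent drafts code
        messages.append({"role": "assistant", "content": draft})
        feedback = critique(draft)              # judge validates the draft
        if feedback is None:                    # no issues found: accept
            return draft
        messages.append({"role": "user", "content": feedback})
    return draft                                # best effort after the budget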

Also read: Agentic Frameworks for Generative AI Applications
How to Implement the LangGraph Reflection Framework
Here is a step-by-step guide to an illustrative implementation:
Step 1: Environment configuration
First, install the required packages:
pip install langgraph-reflection langchain pyright
Step 2: Code analysis with Pyright
We will use Pyright to analyze the generated code and provide error details.
Pyright analysis function
from typing import TypedDict, Annotated, Literal
import json
import os
import subprocess
import tempfile
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph_reflection import create_reflection_graph
os.environ("OPENAI_API_KEY") = "your_openai_api_key"
def analyze_with_pyright(code_string: str) -> dict:
"""Analyze Python code using Pyright for static type checking and errors.
Args:
code_string: The Python code to analyze as a string
Returns:
dict: The Pyright analysis results
"""
with tempfile.NamedTemporaryFile(suffix=".py", mode="w", delete=False) as temp:
temp.write(code_string)
temp_path = temp.name
try:
result = subprocess.run(
(
"pyright",
"--outputjson",
"--level",
"error", # Only report errors, not warnings
temp_path,
),
capture_output=True,
text=True,
)
try:
return json.loads(result.stdout)
except json.JSONDecodeError:
return {
"error": "Failed to parse Pyright output",
"raw_output": result.stdout,
}
finally:
os.unlink(temp_path)
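As a quick sanity check (an assumed example, not from the original walkthrough), you can call the helper directly on a deliberately broken snippet:

# Illustrative usage: feed Pyright a snippet with a deliberate type error.
buggy_code = "def add(a: int, b: int) -> int:\n    return a + str(b)\n"
report = analyze_with_pyright(buggy_code)
print(report["summary"]["errorCount"])  # expect at least 1 error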
Step 3: Main assistant model for code generation
GPT-4o mini model setup
def call_model(state: dict) -> dict:
    """Process the user query with the GPT-4o mini model.

    Args:
        state: The current conversation state

    Returns:
        dict: Updated state with the model response
    """
    model = init_chat_model(model="gpt-4o-mini", openai_api_key="your_openai_api_key")
    return {"messages": model.invoke(state["messages"])}
Note: Set os.environ["OPENAI_API_KEY"] = "your_api_key" securely, and never hardcode the key in your code.
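For example, a minimal sketch of reading the key from the environment rather than from source code:

import os

# Read the key from the environment instead of hardcoding it.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")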
Step 4: Code extraction and validation
Code extraction types
# Define type classes for code extraction
class ExtractPythonCode(TypedDict):
    """Type class for extracting Python code. The python_code field is the code to be extracted."""

    python_code: str


class NoCode(TypedDict):
    """Type class for indicating no code was found."""

    no_code: bool
System prompt for GPT-4o mini
# System prompt for the model
SYSTEM_PROMPT = """The below conversation is you conversing with a user to write some python code. Your final response is the last message in the list.
Sometimes you will respond with code, other times with a question.
If there is code - extract it into a single python script using ExtractPythonCode.
If there is no code to extract - call NoCode."""
Pyright code validation function
def try_running(state: dict) -> dict | None:
    """Attempt to run and analyze the extracted Python code.

    Args:
        state: The current conversation state

    Returns:
        dict | None: Updated state with analysis results if code was found
    """
    model = init_chat_model(model="gpt-4o-mini")
    extraction = model.bind_tools([ExtractPythonCode, NoCode])
    er = extraction.invoke(
        [{"role": "system", "content": SYSTEM_PROMPT}] + state["messages"]
    )
    if len(er.tool_calls) == 0:
        return None
    tc = er.tool_calls[0]
    if tc["name"] != "ExtractPythonCode":
        return None

    result = analyze_with_pyright(tc["args"]["python_code"])
    print(result)
    explanation = result["generalDiagnostics"]

    if result["summary"]["errorCount"]:
        return {
            "messages": [
                {
                    "role": "user",
                    "content": f"I ran pyright and found this: {explanation}\n\n"
                    "Try to fix it. Make sure to regenerate the entire code snippet. "
                    "If you are not sure what is wrong, or think there is a mistake, "
                    "you can ask me a question rather than generating code",
                }
            ]
        }
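For reference, the structure that try_running indexes into looks roughly like this (an illustrative, abridged sketch following Pyright's --outputjson schema; the values here are made up, not real output from this run):

# Abridged, illustrative shape of the parsed Pyright JSON result.
sample_result = {
    "generalDiagnostics": [
        {
            "file": "/tmp/tmpabc123.py",
            "severity": "error",
            "message": 'Import "faiss" could not be resolved',
        }
    ],
    "summary": {"filesAnalyzed": 1, "errorCount": 1, "warningCount": 0},
}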
Step 5: Creating the reflection graph
Building the assistant and judge graphs
def create_graphs():
    """Create and configure the assistant and judge graphs."""
    # Define the main assistant graph
    assistant_graph = (
        StateGraph(MessagesState)
        .add_node(call_model)
        .add_edge(START, "call_model")
        .add_edge("call_model", END)
        .compile()
    )

    # Define the judge graph for code analysis
    judge_graph = (
        StateGraph(MessagesState)
        .add_node(try_running)
        .add_edge(START, "try_running")
        .add_edge("try_running", END)
        .compile()
    )

    # Create the complete reflection graph
    return create_reflection_graph(assistant_graph, judge_graph).compile()


reflection_app = create_graphs()
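As an optional sanity check (not part of the original walkthrough), compiled LangGraph graphs can render their own structure:

# Optional: print an ASCII view of the compiled reflection graph.
# (draw_ascii requires the grandalf package: pip install grandalf)
print(reflection_app.get_graph().draw_ascii())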
Step 6: Running the application
Example execution
if __name__ == "__main__":
    """Run an example query through the reflection system."""
    example_query = [
        {
            "role": "user",
            "content": "Write a LangGraph RAG app",
        }
    ]

    print("Running example with reflection using GPT-4o mini...")
    result = reflection_app.invoke({"messages": example_query})
    print("Result:", result)
Output analysis


What happened in the example?
Our LangGraph Reflection system was designed to do the following:
- Take an initial code fragment.
- Run Pyright (a static type checker for Python) to detect errors.
- Use the GPT-4o mini model to analyze the errors, understand them, and generate improved code suggestions.
Iteration 1 – Identified errors
1. Import "faiss" could not be resolved.
- Explanation: This error occurs when the FAISS library is not installed or the Python environment does not recognize the import.
- Solution: The agent recommended running:
pip install faiss-cpu
2. Cannot access attribute "embed" for class "OpenAIEmbeddings".
- Explanation: The code referenced .embed, but in newer versions of LangChain the correct methods are .embed_documents() or .embed_query().
- Solution: The agent correctly replaced .embed with .embed_query.
3. Arguments missing for parameters "docstore" and "index_to_docstore_id".
- Explanation: The FAISS vector store now requires a docstore object and an index_to_docstore_id mapping.
- Solution: The agent added both parameters by creating an InMemoryDocstore and a dictionary mapping, as shown in the combined sketch below.
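Putting Iteration 1's three fixes together, a minimal sketch of the corrected FAISS setup might look like this (assumed variable names; import paths vary across LangChain versions, and this is not the agent's exact output):

# Minimal sketch of the corrected FAISS setup after Iteration 1's fixes.
import faiss
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.docstore import InMemoryDocstore

embeddings = OpenAIEmbeddings()
# Fix 2: .embed_query (not .embed) embeds a single string.
dim = len(embeddings.embed_query("hello world"))
index = faiss.IndexFlatL2(dim)

# Fixes 1 and 3: faiss is installed and imported, and the vector store
# gets an explicit docstore plus an index_to_docstore_id mapping.
vector_store = FAISS(
    embedding_function=embeddings,
    index=index,
    docstore=InMemoryDocstore({}),
    index_to_docstore_id={},
)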
Iteration 2 – Progress
In the second iteration, the system improved the code, but Pyright still identified:
1. Import "langchain.document" could not be resolved.
- Explanation: The code tried to import Document from an incorrect module.
- Solution: The agent updated the import to from langchain.docstore import Document.
2. "InMemoryDocstore" is not defined.
- Explanation: The missing import for InMemoryDocstore was identified.
- Solution: The agent correctly added:
from langchain.docstore import InMemoryDocstore
Iteration 3 – Final Solution
In the final iteration, the reflection agent successfully addressed all problems by:
- Importing FAISS correctly.
- Changing .embed to .embed_query for embedding functions.
- Adding a valid InMemoryDocstore for document management.
- Creating a proper index_to_docstore_id mapping.
- Correctly accessing document content using .page_content instead of treating documents as plain strings (see the snippet below).
The improved code was executed successfully without errors.
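To illustrate that last fix, a hypothetical retrieval call against the store sketched above would read document text through .page_content:

# Retrieved results are Document objects, not plain strings, so read
# their text via .page_content. (vector_store is the illustrative store
# built in the earlier sketch; the query is hypothetical.)
docs = vector_store.similarity_search("What does the app do?", k=2)
for doc in docs:
    print(doc.page_content)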
Why This Matters
- Automatic error detection: The LangGraph Reflection Framework simplifies the debugging process by analyzing code errors with Pyright and generating actionable insights.
- Iterative improvement: The framework continuously refines the code until errors are resolved, mimicking how a developer might debug and manually improve their code.
- Adaptive learning: The system adapts to changing code structures, such as updated library syntax or version differences.
Conclusion
The LangGraph Reflection Framework demonstrates the power of combining AI critique with robust static analysis tools. This intelligent feedback loop enables faster code correction, improved coding practices, and better overall development efficiency. Whether for beginners or experienced developers, LangGraph Reflection offers a powerful tool for improving code quality.
Key Takeaways
- Combining LangChain, Pyright, and GPT-4o mini within the LangGraph Reflection Framework provides an effective way to validate code automatically.
- The framework helps the LLM generate improved solutions iteratively and ensures higher-quality outputs through reflection and critique cycles.
- This approach improves the robustness of AI-generated code and enhances performance in real-world scenarios.
The media shown in this article is not owned by Analytics Vidhya and is used at the author's discretion.
Frequently Asked Questions
Q. What is LangGraph Reflection?
A. LangGraph Reflection is a powerful framework that combines a primary AI agent (for code generation or task execution) with a critic agent (to identify problems and suggest improvements). This iterative loop improves the final output by leveraging feedback and reflection.
Q. How does the reflection mechanism work?
A. The reflection mechanism follows this workflow:
– Main agent: Generates the initial output.
– Critic agent: Analyzes the generated output for errors or improvements.
– Improvement loop: If problems are found, the main agent is re-invoked with feedback for refinement. This loop continues until the output meets the quality standards.
Q. Which packages are required to use LangGraph Reflection?
A. You will need the following packages:
– langgraph-reflection
– langchain
– pyright (for code analysis)
– faiss (for vector search)
– openai (for GPT-based models)
To install them, run: pip install langgraph-reflection langchain pyright faiss-cpu openai
Q. What tasks does LangGraph Reflection excel at?
A. LangGraph Reflection stands out in tasks such as:
– Validating and improving Python code.
– Refining natural language responses that require fact-checking.
– Summarizing documents with clarity and completeness.
– Ensuring AI-generated content adheres to safety guidelines.
Q. Is LangGraph Reflection limited to code correction?
A. No. Although we demonstrated Pyright in the code correction examples, the framework can also help improve text summarization, data validation, and chatbot response refinement.