In our previous tutorial, we built an AI agent capable of answering queries by browsing the web, and we added persistence to maintain its state. However, in many scenarios, you may want to put a human in the loop to monitor and approve the agent's actions. This can be easily achieved with LangGraph. Let's explore how this works.
Agent configuration
We will continue from where we left off in the last lesson. First, configure the environment variables, make the necessary imports, and set up the checkpointer.
pip install langgraph==0.2.53 langgraph-checkpoint==2.0.6 langgraph-sdk==0.1.36 langchain-groq langchain-community langgraph-checkpoint-sqlite==2.0.1
import os
os.environ["TAVILY_API_KEY"] = ""
os.environ["GROQ_API_KEY"] = ""
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage, AIMessage
from langchain_groq import ChatGroq
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.checkpoint.sqlite import SqliteSaver
import sqlite3
sqlite_conn = sqlite3.connect("checkpoints.sqlite", check_same_thread=False)
memory = SqliteSaver(sqlite_conn)
# Initialize the search tool
tool = TavilySearchResults(max_results=2)
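To make the role of the checkpointer concrete, here is a minimal stdlib-only sketch of the idea behind SqliteSaver: persisting a per-thread snapshot of state in SQLite so a run can be paused and resumed later. The names save_state and load_state are illustrative only, not the real SqliteSaver API.

```python
import json
import sqlite3

# Illustrative stand-in for a checkpointer: one state snapshot per thread ID.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT PRIMARY KEY, state TEXT)")

def save_state(thread_id: str, state: dict) -> None:
    # Overwrite the snapshot for this thread with the latest state.
    conn.execute(
        "INSERT OR REPLACE INTO checkpoints VALUES (?, ?)",
        (thread_id, json.dumps(state)),
    )

def load_state(thread_id: str):
    # Return the saved snapshot for this thread, or None if there isn't one.
    row = conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchone()
    return json.loads(row[0]) if row else None

save_state("1", {"messages": ["Whats the weather in SF?"]})
print(load_state("1"))  # the snapshot survives for thread "1"
print(load_state("2"))  # no snapshot yet for thread "2" -> None
```

Because each thread ID maps to its own snapshot, two conversations can be paused and resumed independently, which is exactly what the thread configuration later in this tutorial relies on.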
Definition of the agent
class Agent:
    def __init__(self, model, tools, checkpointer, system=""):
        self.system = system
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges("llm", self.exists_action, {True: "action", False: END})
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        self.graph = graph.compile(checkpointer=checkpointer)
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)

    def call_openai(self, state: AgentState):
        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}

    def exists_action(self, state: AgentState):
        result = state['messages'][-1]
        return len(result.tool_calls) > 0

    def take_action(self, state: AgentState):
        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results}
Agent state configuration
Now we configure the agent state with a slight modification. Previously, messages were annotated with operator.add, which appends new messages to the existing list. For human-in-the-loop interactions, we sometimes also want to replace existing messages that have the same ID instead of appending them.
from uuid import uuid4
def reduce_messages(left: list[AnyMessage], right: list[AnyMessage]) -> list[AnyMessage]:
    # Assign IDs to messages that don't have them
    for message in right:
        if not message.id:
            message.id = str(uuid4())
    # Merge the new messages with the existing ones
    merged = left.copy()
    for message in right:
        for i, existing in enumerate(merged):
            if existing.id == message.id:
                # Replace any existing message with the same ID
                merged[i] = message
                break
        else:
            merged.append(message)
    return merged

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], reduce_messages]
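To check the merge semantics of this reducer, here is a self-contained sketch using a minimal stand-in message class (only the .id attribute matters to the reducer, so plain objects behave the same as real AnyMessage instances for this purpose):

```python
from dataclasses import dataclass
from uuid import uuid4

@dataclass
class Msg:
    # Minimal stand-in for AnyMessage: the reducer only inspects .id
    content: str
    id: str = None

def reduce_messages(left, right):
    # Assign IDs to messages that don't have them
    for message in right:
        if not message.id:
            message.id = str(uuid4())
    # Merge: same ID replaces in place, unseen IDs are appended
    merged = left.copy()
    for message in right:
        for i, existing in enumerate(merged):
            if existing.id == message.id:
                merged[i] = message
                break
        else:
            merged.append(message)
    return merged

existing = [Msg("hello", id="a")]
updated = reduce_messages(existing, [Msg("hello, edited", id="a"), Msg("new")])
print([m.content for m in updated])  # ['hello, edited', 'new']
```

Note that the message with ID "a" was replaced rather than appended, while the message without an ID received a fresh UUID and was appended. This replace-by-ID behavior is what lets a human edit an earlier message before resuming the graph.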
Add a human in the loop
We introduce an additional modification when compiling the graph. The interrupt_before=["action"] parameter adds an interrupt before the action node is called, ensuring manual approval before any tools are executed.
class Agent:
    def __init__(self, model, tools, checkpointer, system=""):
        # Everything else remains the same as before
        self.graph = graph.compile(checkpointer=checkpointer, interrupt_before=["action"])
        # Everything else remains unchanged
Executing the agent
Now, we will initialize the agent with the same system prompt, model, and checkpointer as before. When we call the agent, we pass a thread configuration containing a thread ID.
prompt = """You are a smart research assistant. Use the search engine to look up information. \
You are allowed to make multiple calls (either together or in sequence). \
Only look up information when you are sure of what you want. \
If you need to look up some information before asking a follow up question, you are allowed to do that!
"""
model = ChatGroq(model="llama-3.3-70b-specdec")
abot = Agent(model, [tool], system=prompt, checkpointer=memory)
messages = [HumanMessage(content="Whats the weather in SF?")]
thread = {"configurable": {"thread_id": "1"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)
The responses are streamed back, and the process stops after the AI message that indicates a tool call. The interrupt_before parameter prevents the tool from executing immediately. We can also fetch the current state of the graph for this thread to inspect what it contains, including the next node that will be called ('action' here).
abot.graph.get_state(thread)
abot.graph.get_state(thread).next
To continue, we call stream again with the same thread configuration, passing None as the input. This streams the remaining results, including the tool message and the final AI message. Since no interrupt was added between the action node and the LLM node, execution continues without pausing.
for event in abot.graph.stream(None, thread):
    for v in event.values():
        print(v)
Interactive human approval
We can implement a simple loop that asks the user for approval before continuing execution. A new thread ID is used for a new run. If the user chooses not to continue, the agent stops.
messages = [HumanMessage("What's the weather in LA?")]
thread = {"configurable": {"thread_id": "2"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

while abot.graph.get_state(thread).next:
    print("\n", abot.graph.get_state(thread), "\n")
    _input = input("Proceed? (y/n): ")
    if _input.lower() != "y":
        print("Aborting")
        break
    for event in abot.graph.stream(None, thread):
        for v in event.values():
            print(v)
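The pause-and-resume pattern underlying this approval loop can be sketched without LangGraph, using a Python generator that suspends before each "tool call" and resumes only once a decision arrives. This is an illustrative analogy only; LangGraph's interrupts are backed by the checkpointer, so they survive across processes, which a plain generator cannot do.

```python
def agent_run(actions):
    # Yield each pending action before executing it, so a human can decide.
    results = []
    for action in actions:
        approved = yield action  # suspend here: the "interrupt"
        if approved:
            results.append(f"executed {action}")
        else:
            results.append(f"skipped {action}")
    return results

run = agent_run(["search: weather in LA"])
pending = next(run)        # run stops before the tool call, like interrupt_before
print("Pending:", pending)
decision = True            # stands in for input("Proceed? (y/n): ") == "y"
try:
    run.send(decision)     # resume with the human's decision
except StopIteration as done:
    print(done.value)      # the completed run's results
```

As with the LangGraph version, the agent does no tool work until the human's decision is fed back in, and each pending action is visible for inspection before it runs.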
Excellent! Now you know how to involve a human in the loop. Try experimenting with different interrupts and see how the agent behaves.
References: DeepLearning.AI (<a target="_blank" href="https://learn.deeplearning.ai/courses/ai-agents-in-langgraph/lesson/6/human-in-the-loop">https://learn.deeplearning.ai/courses/ai-agents-in-langgraph/lesson/6/human-in-the-loop</a>)
Vineet Kumar is a consulting intern at Marktechpost. He is currently pursuing his BS at the Indian Institute of Technology (IIT), Kanpur. He is a machine learning enthusiast, passionate about research and the latest advancements in deep learning, computer vision, and related fields.