A deep dive into two libraries by the same creator — LangChain and LangGraph: their key building blocks, how they handle core pieces of their functionality, and deciding between them for your use case
Language models have unlocked new possibilities for how users interact with AI systems and how those systems communicate with each other: through natural language.
When enterprises want to build solutions using agentic AI capabilities, one of the first technical questions is often "What tools do I use?" For those eager to get started, this is the first roadblock.
In this article, we will explore two of the most popular frameworks for building agentic AI applications: LangChain and LangGraph. By the end, you should have a thorough understanding of their key building blocks, know how each framework differs in handling core functionality, and be able to form an educated point of view on which one best fits your problem.
Since widely incorporating generative AI into solutions is a relatively new practice, open-source players are actively competing to develop the "best" agent framework and orchestration tools. Although each player brings a unique approach to the table, they are all rolling out new functionality nearly constantly. When reading this piece, keep in mind that what's true today might not be true tomorrow!
Note: I originally intended to compare AutoGen, LangChain, and LangGraph. However, AutoGen has announced that it is launching AutoGen 0.4, a complete redesign of the framework from the ground up. Look out for another article when AutoGen 0.4 launches!
By understanding the base elements of each framework, you will have a richer understanding of the key differences in how they handle certain core functionality in the next section. The description below is not an exhaustive list of all the components of each framework, but it serves as a strong basis for understanding the difference in their general approaches.
LangChain
There are two methods for working with LangChain: as a sequential chain of predefined commands or using LangChain agents. Each approach differs in the way it handles tools and orchestration. A chain follows a predefined linear workflow, while an agent acts as a coordinator that can make more dynamic (non-linear) decisions.
- Chains: A sequence of steps that can include calls to an LLM, agent, tool, external data source, procedural code, and more. Chains can branch, meaning a single chain can split into multiple paths based on logical conditions.
- Agents or Language Models: A language model generates responses in natural language. An agent uses a language model plus added capabilities to reason, call tools, and retry tool calls if any failures occur.
- Tools: Code-based functions that can be called in the chain or invoked by an agent to interact with external systems.
- Prompts: These can include a system prompt that instructs the model how to complete a task and which tools are available, information injected from external data sources that provides the model more context, and the user prompt or task for the model to complete.
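The chain concept above can be sketched without the framework itself: each step is a callable whose output feeds the next. The function names below are hypothetical stand-ins for an LLM call and a tool, not LangChain APIs.

```python
# A framework-free sketch of a linear chain: each step's output feeds the next.
# fake_llm and word_count_tool are hypothetical stand-ins, not LangChain APIs.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real language-model call.
    return f"Summary of: {prompt}"

def word_count_tool(text: str) -> str:
    # A simple "tool": procedural code the chain can call.
    return f"{text} ({len(text.split())} words)"

def run_chain(user_input: str, steps) -> str:
    # Pass the output of each step as the input to the next.
    result = user_input
    for step in steps:
        result = step(result)
    return result

output = run_chain("Explain chains", [fake_llm, word_count_tool])
```

In real LangChain code, the same piping idea appears as composable runnables; the point here is only the fixed, linear order of the steps.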
LangGraph
LangGraph approaches AI workflows from a different standpoint. As the name suggests, it orchestrates workflows as a graph. Because of its flexibility in handling different flows between AI agents, procedural code, and other tools, it is better suited for use cases where a linear chain, branched chain, or simple agent system wouldn't meet the need. LangGraph was designed to handle more complex conditional logic and feedback loops than LangChain.
- Graphs: A flexible way of organizing a workflow that can include calls to an LLM, tool, external data source, procedural code, and more. LangGraph also supports cyclical graphs, which means you can create loops and feedback mechanisms so nodes can be revisited multiple times.
- Nodes: Represent steps in the workflow, such as an LLM query, an API call, or tool execution.
- Edges and Conditional Edges: Edges define the flow of information by connecting the output of one node as the input to the next. A conditional edge defines the flow of information from one node to another if a certain condition is met. Developers can custom define these conditions.
- State: State is the current status of the application as information flows through the graph. It is a developer-defined, mutable TypedDict object that contains all the relevant information for the current execution of the graph. LangGraph automatically handles updating the state at each node as information flows through the graph.
- Agents or Language Models: Language models within a graph are solely responsible for generating a text response to an input. The agent capability leverages a language model but enables the graph to have multiple nodes representing different components of the agent (such as reasoning, tool selection, and execution of a tool). The agent can make decisions about which path to take in the graph, update the state of the graph, and perform more tasks than just text generation.
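The building blocks above can be illustrated with a framework-free sketch: a TypedDict state, node functions that update it, and a conditional edge that loops back until a condition is met. The names and routing logic here are illustrative, not the LangGraph API.

```python
from typing import TypedDict

# Framework-free sketch of LangGraph's core ideas: a TypedDict state,
# nodes that update it, and a conditional edge that routes execution.
# Node and field names are illustrative, not LangGraph APIs.

class State(TypedDict):
    question: str
    attempts: int
    answer: str

def generate(state: State) -> State:
    # Node: a pretend LLM call that revises its answer on each pass.
    state["attempts"] += 1
    state["answer"] = f"draft {state['attempts']}"
    return state

def check(state: State) -> str:
    # Conditional edge: loop back until two attempts have been made.
    return "generate" if state["attempts"] < 2 else "end"

def run_graph(state: State) -> State:
    node = "generate"
    while node != "end":
        state = generate(state)  # execute the current node
        node = check(state)      # follow the conditional edge
    return state

final = run_graph({"question": "hi", "attempts": 0, "answer": ""})
```

In real LangGraph code, the same shape is expressed by registering nodes and conditional edges on a graph object; the loop above just makes the cycle explicit.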
LangGraph and LangChain overlap in some of their capabilities, but they approach the problem from different perspectives. LangChain focuses on either linear workflows through chains or different AI agent patterns, while LangGraph focuses on creating a more flexible, granular, process-based workflow that can include AI agents, tool calls, procedural code, and more.
In general, LangChain requires less of a learning curve than LangGraph. Its additional abstractions and pre-defined configurations make it easier to implement for simple use cases. LangGraph allows more custom control over the design of the workflow, which means it is less abstracted and the developer must learn more to use the framework effectively.
Tool Calling:
LangChain
In LangChain, there are two ways tools can be called, depending on whether you are using a chain to sequence a series of steps or are using its agent capabilities without an explicitly defined chain. In a chain, tools are included as a pre-defined step, meaning they aren't necessarily chosen by an agent because it was already predetermined that they would be called. However, when an agent is not defined in a chain, it has the autonomy to decide which tool to invoke and when, based on the list of tools it has access to.
Example of Flow for a Chain:
- Create the function that represents the tool and make it compatible with the chain
- Incorporate the tool into the chain
- Execute the chain
Example of Flow for an Agent:
- The tool is defined
- The tool is added to the agent
- The agent receives a query and decides whether and when to use the tool. The agent may use the tool multiple times if needed.
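The agent flow above can be sketched without the framework: the agent inspects the query and decides whether a tool is needed before answering. The keyword check below is a hypothetical stand-in for the LLM's tool-selection reasoning, not LangChain's actual mechanism.

```python
# Framework-free sketch of the agent tool-calling flow: the agent decides
# whether and which tool to invoke. The routing rule is a stand-in for
# model reasoning, not a LangChain API.

def search_tool(query: str) -> str:
    # A simple "tool" the agent can choose to call.
    return f"results for '{query}'"

TOOLS = {"search": search_tool}

def agent(query: str) -> str:
    # The agent decides whether a tool is needed for this query.
    if "latest" in query:  # stand-in for the LLM's reasoning step
        return TOOLS["search"](query)
    return f"answered from model knowledge: {query}"

with_tool = agent("latest AI news")
without_tool = agent("define recursion")
```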
LangGraph
In LangGraph, tools are usually represented as a node on the graph. If the graph contains an agent, the agent determines which tool to invoke based on its reasoning abilities. Based on the agent's tool decision, the graph navigates to the "tool node" to handle the execution of the tool. Conditional logic can be included in the edge from the agent to the tool node to add additional logic that determines whether a tool gets executed, giving the developer another layer of control if desired. If there is no agent in the graph, then much like in LangChain's chain, the tool can be included in the workflow based on conditional logic.
Example of Flow for a Graph with an Agent:
- The tool is defined
- The tool is bound to the agent
- The agent decides if a tool is needed, and if so which tool.
- The LangGraph framework detects a tool call is required and navigates to the tool node in the graph to execute the tool call.
- The tool output is captured and added to the state of the graph
- The agent is called again with the updated state to allow it to make a decision on what to do next
Example of Flow for a Graph without an Agent:
- The tool is defined
- The tool is added to the graph as a node
- Conditional edges can be used to determine when to use a certain tool node and control the flow of the graph
- The tool can be configured to update the state of the graph
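The agent-plus-tool-node loop described above can be sketched as follows. The node names, state fields, and routing strings are illustrative stand-ins, not the LangGraph API; the point is the cycle of agent → conditional edge → tool node → back to agent, with results recorded in state.

```python
from typing import TypedDict

# Sketch of the agent + tool-node loop: a conditional edge routes to the
# tool node only when a tool call is pending, then control returns to the
# agent with updated state. Names are illustrative, not LangGraph APIs.

class State(TypedDict):
    messages: list
    done: bool

def agent_node(state: State) -> State:
    # The agent requests a tool on the first pass, then finishes.
    if not state["messages"]:
        state["messages"].append("TOOL_CALL: weather")
    else:
        state["messages"].append("final answer")
        state["done"] = True
    return state

def tool_node(state: State) -> State:
    # Execute the requested tool and record its output in state.
    state["messages"].append("tool result: sunny")
    return state

def route(state: State) -> str:
    # Conditional edge: go to the tool node only if a tool call is pending.
    if state["done"]:
        return "end"
    return "tool" if state["messages"][-1].startswith("TOOL_CALL") else "agent"

state: State = {"messages": [], "done": False}
node = "agent"
while node != "end":
    state = agent_node(state) if node == "agent" else tool_node(state)
    node = route(state)
```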
If you want to learn more about tool calling, my friend Tula Masterman has an excellent article about how tool calling works in generative AI.
Note: Neither LangChain nor LangGraph support semantic functions out of the box like MSFT’s Semantic Kernel.
Conversation History and Memory
LangChain
LangChain offers built-in abstractions for handling conversation history and memory. There are options for the level of granularity (and therefore the number of tokens) you'd like to pass to the LLM, including the full session conversation history, a summarized version, or a custom-defined memory. Developers can also create custom long-term memory systems that store memories in external databases to be retrieved when relevant.
LangGraph
In LangGraph, the state handles memory by keeping track of defined variables at every point in time. State can include things like conversation history, steps of a plan, the output of a language model's previous response, and more. It can be passed from one node to the next so that each node has access to the current state of the system. However, long-term persistent memory across sessions is not available as a direct feature of the framework. To implement this, developers could include nodes responsible for storing memories and other variables in an external database to be retrieved later.
Out of the box RAG capabilities:
LangChain
LangChain can handle complex retrieval and generation workflows and has a more established set of tools to help developers integrate RAG into their applications. For instance, LangChain offers document loading, text parsing, embedding creation, vector storage, and retrieval capabilities out of the box by using langchain.document_loaders, langchain.embeddings, and langchain.vectorstores directly.
LangGraph
In LangGraph, RAG needs to be developed from scratch as part of the graph structure. For example there could be separate nodes for document parsing, embedding, and retrieval that would be connected by normal or conditional edges. The state of each node would be used to pass information between steps in the RAG pipeline.
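A toy version of the retrieval step such a node would implement is sketched below: documents are ranked by word overlap with the query. A real pipeline would use embeddings and a vector store instead of this overlap score; the documents and scoring rule are illustrative.

```python
# Toy sketch of a retrieval node: rank documents by word overlap with the
# query. A real RAG pipeline would use embeddings and a vector store.

DOCS = [
    "LangGraph organizes workflows as graphs with nodes and edges",
    "LangChain provides chains, agents, tools, and prompts",
]

def retrieve(query: str, docs: list) -> str:
    # Score each document by how many query words it shares, return the best.
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

best = retrieve("how do graphs use nodes and edges", DOCS)
```

In a LangGraph design, this function would be one node, with its output written into the graph's state for the generation node to consume.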
Parallelism:
LangChain
LangChain offers the ability to run multiple chains or agents in parallel using the RunnableParallel class. For more advanced parallel processing and asynchronous tool calling, the developer would have to implement these capabilities using Python libraries such as asyncio.
LangGraph
LangGraph supports the parallel execution of nodes as long as there aren't any dependencies (such as the output of one language model's response being the input to the next node). This means it can support multiple agents running at the same time in a graph, provided their nodes are independent. Like LangChain, LangGraph can use the RunnableParallel class to run multiple graphs in parallel, and it supports parallel tool calling using Python libraries like asyncio.
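The asyncio-based parallel tool calling mentioned for both frameworks boils down to gathering independent coroutines. The two tools below are hypothetical stand-ins with simulated latency.

```python
import asyncio

# Sketch of parallel tool calling with asyncio: two independent tools run
# concurrently and their results are gathered. The tools are hypothetical
# stand-ins with simulated I/O latency.

async def weather_tool(city: str) -> str:
    await asyncio.sleep(0.01)  # simulated network call
    return f"{city}: sunny"

async def news_tool(topic: str) -> str:
    await asyncio.sleep(0.01)
    return f"{topic}: 3 headlines"

async def run_parallel() -> list:
    # gather() runs both coroutines concurrently and preserves order.
    return await asyncio.gather(weather_tool("Oslo"), news_tool("AI"))

results = asyncio.run(run_parallel())
```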
Retry Logic and Error Handling:
LangChain
In LangChain, error handling is explicitly defined by the developer and can be done either by introducing retry logic into the chain itself or into the agent when a tool call fails.
LangGraph
In LangGraph, you can embed error handling into your workflow by making it its own node. When certain tasks fail, you can point to another node or have the same node retry. The best part is that only the particular node that fails is retried, not the entire workflow, so the graph can resume from the point of failure rather than starting from the beginning. If your use case requires many steps and tool calls, this could be important.
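Node-level retry can be sketched as follows: a transiently failing node is re-run on its own, while the state accumulated so far is preserved, so the rest of the workflow never restarts. The failure simulation and retry helper are illustrative, not LangGraph's built-in mechanism.

```python
# Sketch of node-level retry: only the failing node is re-run, keeping
# the state accumulated so far. The flaky node and helper are illustrative.

def flaky_node(state: dict) -> dict:
    # Fails on the first call, succeeds afterwards (simulated transient error).
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("transient failure")
    state["result"] = "ok"
    return state

def run_with_retry(node, state: dict, max_retries: int = 3) -> dict:
    for _ in range(max_retries):
        try:
            return node(state)
        except RuntimeError:
            continue  # retry just this node; state is preserved
    raise RuntimeError("node failed after retries")

state = run_with_retry(flaky_node, {"calls": 0})
```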
You can use LangChain without LangGraph, LangGraph without LangChain, or both together! It's also entirely possible to combine LangGraph's graph-based orchestration with other agentic AI frameworks like MSFT's AutoGen by making the AutoGen agents their own nodes in the graph. Safe to say there are a lot of options, and it can feel overwhelming.
So after all this research, when should you use each? Although there are no hard and fast rules, below is my personal opinion:
Use LangChain Only When:
You need to quickly prototype or develop AI workflows that involve sequential tasks (such as document retrieval, text generation, or summarization) following a predefined linear pattern. Or you want to leverage AI agent patterns that can dynamically make decisions, but you don't need granular control over a complex workflow.
Use LangGraph Only When:
Your use case requires non-linear workflows where multiple components interact dynamically, such as workflows that depend on conditions or need complex branching logic, error handling, or parallelism. You are willing to build custom implementations for the components that LangChain would otherwise abstract for you.
Use LangChain and LangGraph Together When:
You enjoy the pre-built abstracted components of LangChain, such as the out-of-the-box RAG capabilities and memory functionality, but you also want to manage complex task flows using LangGraph's non-linear orchestration. Using both frameworks together can be a powerful way to combine the best abilities of each.
Ultimately, whether you choose LangChain, LangGraph, or a combination of both depends on the specific needs of your project.