
Giving Your AI Agents a Memory: Persistence and State in LangGraph

6 min read · Aug 4, 2025

In our previous tutorial on Mastering LangGraph, we explored how LangGraph empowers developers to build robust AI agents using a graph-based approach. We delved into the core concepts of nodes, edges, and state, and even built a basic chatbot with tool integration and a multi-agent customer support system.

However, for an AI agent to truly be intelligent and engaging, it needs to remember. Imagine a conversation with someone who forgets everything you said a moment ago — it would be frustrating and unproductive. The same applies to AI agents. To have meaningful, multi-turn interactions, agents need memory and persistence.

This blog post will dive into how LangGraph handles memory and persistence, allowing your AI agents to remember previous conversations, learn from past interactions, and maintain context across sessions. We’ll build upon the concepts and examples from our previous tutorial to illustrate these powerful features.

Why Memory and Persistence are Crucial for AI Agents

Conversational AI, by its very nature, is sequential. Users expect agents to recall information from earlier in the conversation, refer back to previous statements, and build upon shared context. Without memory, every interaction is a fresh start, leading to:

  • Repetitive Interactions: Users constantly have to re-state information.
  • Lack of Personalization: Agents cannot adapt to user preferences or history.
  • Inefficient Workflows: Agents cannot leverage past computations or decisions.
  • Broken User Experience: The conversation feels unnatural and unintelligent.

This is where LangGraph’s approach to state management and persistence becomes invaluable. It provides the mechanisms for agents to maintain a coherent understanding of the ongoing interaction and even recall information from past, disconnected sessions.

LangGraph’s Approach to Memory: State and Persistence

LangGraph distinguishes between two primary forms of memory:

  1. Short-Term Memory (State): This refers to the immediate context of the current conversation or workflow. In LangGraph, this is inherently managed by the AgentState that flows through your graph. As we saw in the previous tutorial, the AgentState is a mutable object that nodes can read from and write to, allowing information to be accumulated and modified within a single invocation of the graph.
  2. Long-Term Memory (Persistence/Checkpointers): This allows your agent to remember information across multiple invocations or even after the application has been restarted. LangGraph achieves this through checkpointers, which save the entire state of the graph at specific points, enabling you to resume a conversation or workflow exactly where it left off.

Let’s revisit our previous examples and see how we can integrate these memory concepts.

Implementing Short-Term Memory with AgentState

In our previous tutorial, we briefly introduced the AgentState as the shared data structure that passes between nodes. This AgentState is the foundation of short-term memory in LangGraph. For conversational agents, the most common use of AgentState for memory is to store the history of messages.

Recall our basic chatbot example. The AgentState was defined as:

from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

Here, messages: Annotated[list, add_messages] is key. The Annotated type hint with add_messages is a special LangGraph construct that tells the graph how to handle updates to the messages list. Instead of overwriting the list, add_messages ensures that new messages are appended to the existing list, effectively building a conversational history. This simple yet powerful mechanism allows your LLM to always have access to the full context of the conversation.

When you pass a new user message to your graph, it gets added to this messages list. Subsequent nodes, especially your LLM calls, can then process the entire list of messages to understand the conversation's flow and respond contextually.
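Conceptually, add_messages is a reducer: a function LangGraph calls to merge a node's update into the existing state value instead of overwriting it. Here is a minimal, stdlib-only sketch of that idea; append_reducer is a hypothetical stand-in, and the real add_messages additionally deduplicates and updates messages by ID, which this sketch skips:

```python
from typing import Annotated, TypedDict

# Hypothetical stand-in for LangGraph's add_messages reducer:
# it merges a state update into the existing list rather than replacing it.
def append_reducer(existing: list, update: list) -> list:
    return existing + update

class AgentState(TypedDict):
    messages: Annotated[list, append_reducer]

# LangGraph invokes the reducer whenever a node returns {"messages": [...]}:
history = [{"role": "user", "content": "Hi!"}]
update = [{"role": "assistant", "content": "Hello! How can I help?"}]
history = append_reducer(history, update)
print(len(history))  # the update was appended, not substituted
```

Because the reducer appends rather than replaces, each node only has to return the new messages it produced; the accumulated history takes care of itself.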

Implementing Long-Term Memory with Checkpointers

While AgentState handles the current conversation, what if you want your agent to remember a user's preferences, past interactions, or learned knowledge across different sessions? This is where persistence comes in, enabled by LangGraph's checkpointers.

A checkpointer is a mechanism that saves the state of your graph to a persistent storage (like a database) after each step or at specific points. This allows you to:

  • Resume Conversations: If a user closes their chat window and comes back later, the agent can pick up exactly where they left off.
  • Handle Crashes: If your application crashes, the state can be recovered, preventing loss of progress.
  • Learn Over Time: Agents can build a long-term understanding of users or topics.
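Before reaching for LangGraph's built-in savers, it helps to see how little machinery the core idea requires. The sketch below uses only the stdlib sqlite3 and json modules to store one serialized state per thread; LangGraph's SqliteSaver is far richer (it records a checkpoint after every step, so you can rewind), but the principle of durable, keyed state is the same. All names here are illustrative, not LangGraph APIs:

```python
import json
import sqlite3

# One row per conversation thread, holding the serialized graph state.
conn = sqlite3.connect(":memory:")  # use a file path to survive restarts
conn.execute(
    "CREATE TABLE IF NOT EXISTS checkpoints (thread_id TEXT PRIMARY KEY, state TEXT)"
)

def save_state(thread_id: str, state: dict) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO checkpoints VALUES (?, ?)",
        (thread_id, json.dumps(state)),
    )
    conn.commit()

def load_state(thread_id: str) -> dict:
    row = conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchone()
    # An unknown thread_id starts from an empty state
    return json.loads(row[0]) if row else {"messages": []}

save_state("1", {"messages": ["Hello, who are you?", "I'm a helpful assistant."]})
# Later -- even after a restart, if a file path was used -- the state is recoverable:
print(load_state("1")["messages"][0])
print(load_state("brand-new-thread"))
```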

LangGraph provides several built-in checkpointer implementations, such as SqliteSaver for local development or RedisSaver for distributed systems. Let's modify our basic chatbot example to include persistence using SqliteSaver.

First, you’ll need to install LangGraph along with its SQLite checkpointer package:

pip install langgraph langgraph-checkpoint-sqlite

Now, let’s integrate the SqliteSaver into our graph compilation:

import sqlite3
from typing import TypedDict, Annotated

from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.sqlite import SqliteSaver
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI

# Define the state (same as before)
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

# Define the tool (same as before)
@tool
def search_web(query: str) -> str:
    """Searches the web for the given query."""
    return f"Search results for: {query}"

# Define the LLM, binding the tool so the model can request it
llm = ChatOpenAI(model="gpt-4o").bind_tools([search_web])

# Define the nodes (same as before)
def call_llm(state: AgentState):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def call_tool(state: AgentState):
    last_message = state["messages"][-1]
    tool_call = last_message.tool_calls[0]
    tool_output = search_web.invoke(tool_call["args"])
    # Return the result as a ToolMessage tied to the originating tool call
    return {"messages": [ToolMessage(content=tool_output, tool_call_id=tool_call["id"])]}

# Conditional edge logic: run the tool if the LLM requested one, otherwise finish
def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "call_tool"
    return "end"

# Build the graph (same as before)
workflow = StateGraph(AgentState)
workflow.add_node("llm", call_llm)
workflow.add_node("call_tool", call_tool)
workflow.add_edge("call_tool", "llm")
workflow.add_conditional_edges(
    "llm",
    should_continue,
    {
        "call_tool": "call_tool",
        "end": END,
    },
)
workflow.set_entry_point("llm")

# Initialize the checkpointer. ":memory:" is ephemeral; use a file path
# such as "my_memory.sqlite" for storage that survives restarts.
conn = sqlite3.connect(":memory:", check_same_thread=False)
memory = SqliteSaver(conn)

# Compile the graph with the checkpointer
app = workflow.compile(checkpointer=memory)

# Example usage with persistence.
# To start a new conversation or resume an existing one, you pass a 'thread_id':
# a new ID starts a fresh conversation; reusing an ID resumes it.

# First turn of a new conversation (thread_id="1")
config = {"configurable": {"thread_id": "1"}}
inputs_1 = {"messages": [HumanMessage(content="Hello, who are you?")]}
response_1 = app.invoke(inputs_1, config=config)
print(f"Response 1: {response_1['messages'][-1].content}")

# Second turn of the same conversation (thread_id="1")
inputs_2 = {"messages": [HumanMessage(content="What can you do?")]}
response_2 = app.invoke(inputs_2, config=config)
print(f"Response 2: {response_2['messages'][-1].content}")

# Now, let's simulate a new session or restart: the checkpointer retains
# the state, so re-invoking with the same thread_id resumes the conversation.
inputs_3 = {"messages": [HumanMessage(content="Tell me more about that.")]}
response_3 = app.invoke(inputs_3, config=config)
print(f"Response 3: {response_3['messages'][-1].content}")

# You can also inspect the full saved state for a thread_id:
# full_state = app.get_state(config)
# print(f"Full state for thread 1: {full_state.values}")

In this updated code:

  1. We import SqliteSaver from langgraph.checkpoint.sqlite.
  2. We initialize a SqliteSaver backed by a SQLite connection. Using ":memory:" creates an in-memory database, which is useful for testing. For actual persistence, you would connect to a file path like "my_memory.sqlite" (SqliteSaver works with a plain sqlite3 connection, not a SQLAlchemy-style URL).
  3. When compiling the workflow, we pass the checkpointer argument: app = workflow.compile(checkpointer=memory).
  4. When invoking the graph, we pass a config dictionary with a configurable key, which contains a thread_id. This thread_id is crucial for the checkpointer to identify and load the correct conversational state. If a thread_id is new, a new state is created; if it exists, the previous state is loaded.

This setup ensures that every message exchanged within a given thread_id is saved and loaded, allowing the agent to maintain a continuous conversation history, even across application restarts or long periods of inactivity.
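Under the hood, each invocation with a thread_id follows a load-run-save cycle. The sketch below is a simplified, hedged model of that flow (real LangGraph checkpoints after every step, not just at the end, and invoke_with_persistence and echo_graph are illustrative names, not LangGraph APIs):

```python
store = {}  # thread_id -> last saved state

def invoke_with_persistence(thread_id: str, new_messages: list, run_graph):
    # 1. Load the saved state for this thread, or start fresh
    state = store.get(thread_id, {"messages": []})
    # 2. Merge the new input via the append-style reducer
    state = {"messages": state["messages"] + new_messages}
    # 3. Execute the graph nodes
    state = run_graph(state)
    # 4. Checkpoint the updated state under the same thread_id
    store[thread_id] = state
    return state

def echo_graph(state: dict) -> dict:
    # Stand-in for the compiled graph: reply to the latest message
    reply = f"You said: {state['messages'][-1]}"
    return {"messages": state["messages"] + [reply]}

invoke_with_persistence("1", ["Hello"], echo_graph)
result = invoke_with_persistence("1", ["What can you do?"], echo_graph)
print(len(result["messages"]))  # both user turns plus both replies
other = invoke_with_persistence("2", ["Hi"], echo_graph)
print(other["messages"])  # a different thread_id starts from empty history
```

Notice that thread "2" never sees thread "1"'s messages: the thread_id is the isolation boundary, which is why reusing an ID resumes a conversation while a fresh ID starts one.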

Conclusion

Giving your AI agents the ability to remember is not just a feature; it’s a necessity for building truly intelligent and user-friendly conversational applications. LangGraph provides elegant and robust solutions for both short-term and long-term memory through its AgentState and checkpointers.

By leveraging Annotated[list, add_messages] for conversational history and integrating checkpointers like SqliteSaver, you can ensure your LangGraph agents maintain context, personalize interactions, and provide a seamless user experience. This capability transforms your agents from stateless responders into intelligent, context-aware conversational partners.

We encourage you to experiment with different checkpointer implementations and explore how memory can enhance your LangGraph agents. The ability to remember is a cornerstone of advanced AI, and LangGraph makes it accessible and manageable.

Written by krishankant singhal