One of the most common graph states is a list of messages. Normally you only append messages to that state, but sometimes you may need to remove one.
For this, the RemoveMessage modifier is available. To use RemoveMessage, the corresponding state key must have a reducer that supports it.
The built-in MessagesState has a messages key whose reducer accepts RemoveMessage modifiers.
Given a RemoveMessage, this reducer deletes the matching message from that key.
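Conceptually, the reducer treats a remove marker as an instruction rather than a value to append. A minimal plain-Python sketch of that idea (an illustration only, not LangGraph's actual implementation — the real reducer operates on LangChain message objects and their ids):

```python
def add_messages_sketch(existing, updates):
    """Toy reducer: append normal messages, delete when an update carries a remove marker."""
    result = list(existing)
    for upd in updates:
        if upd.get("remove"):
            # A remove marker deletes the message with the matching id.
            result = [m for m in result if m["id"] != upd["id"]]
        else:
            result.append(upd)
    return result

state = [{"id": "1", "content": "Hello"}, {"id": "2", "content": "Hi there"}]
state = add_messages_sketch(state, [{"id": "1", "remove": True}])
print(state)  # [{'id': '2', 'content': 'Hi there'}]
```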
Setup
First, let's build a simple graph that uses messages. Note that it uses MessagesState, which includes the required reducer.
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv
# Load API key information
load_dotenv()
True
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging
# Enter a project name.
logging.langsmith("CH17-LangGraph-Modules")
Build the basic LangGraph for this tutorial
Let's build the basic LangGraph needed to use the RemoveMessage modifier.
Visualize the graph.
Delete messages using the RemoveMessage modifier
First, let's look at how to delete messages manually. Let's check the state of the current thread.
When you call update_state and pass the id of the first message, that message is deleted.
Now if you check the messages, you can confirm that the first message has been deleted.
Dynamically delete more messages
You can also delete messages programmatically inside the graph.
Let's see how to modify the graph so that, when a run ends, it deletes old messages (everything except the three most recent).
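The selection rule described above is plain list slicing: every message except the most recent three is marked for removal. A standalone sketch of just that selection logic, independent of LangGraph:

```python
def ids_to_remove(message_ids, keep_last=3):
    """Return the ids of all messages except the most recent `keep_last`."""
    if len(message_ids) <= keep_last:
        return []
    return message_ids[:-keep_last]

# With five messages, the two oldest are selected for deletion.
print(ids_to_remove(["m1", "m2", "m3", "m4", "m5"]))  # ['m1', 'm2']
print(ids_to_remove(["m1", "m2"]))                    # []
```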
Visualize the graph.
Now let's try it out: call the graph twice and then check the state.
When you check the final state, you can see that only three messages remain.
This is because the older messages were just deleted.
from typing import Literal
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition
# Initialize memory object for checkpoint storage
memory = MemorySaver()
# Define a tool function that mimics the web search function.
@tool
def search(query: str):
    """Call to surf the web."""
    return "Web search results: You can find the LangGraph Korean tutorial at https://wikidocs.net/233785."
# Create a tool list and initialize the tool node
tools = [search]
tool_node = ToolNode(tools)
# Model Initialization and Tool Binding
model = ChatOpenAI(model_name="gpt-4o-mini")
bound_model = model.bind_tools(tools)
# Function to determine the next execution node based on the conversation state
def should_continue(state: MessagesState):
    last_message = state["messages"][-1]
    # End the run if the last message contains no tool calls
    if not last_message.tool_calls:
        return END
    return "tool"

# LLM call and response handling function
def call_model(state: MessagesState):
    # Use the tool-bound model so the LLM can issue tool calls
    response = bound_model.invoke(state["messages"])
    return {"messages": response}
# Initializing a state-based workflow graph
workflow = StateGraph(MessagesState)
# Adding Agents and Action Nodes
workflow.add_node("agent", call_model)
workflow.add_node("tool", tool_node)
# Set the starting point to the agent node
workflow.add_edge(START, "agent")
# Conditional Edge Setting: Defining the execution flow after the agent node
workflow.add_conditional_edges("agent", should_continue, {"tool": "tool", END: END})
# Added an edge that returns to the agent after running the tool
workflow.add_edge("tool", "agent")
# Compile the final executable workflow with checkpoints
app = workflow.compile(checkpointer=memory)
from langchain_teddynote.graphs import visualize_graph
visualize_graph(app)
from langchain_core.messages import HumanMessage
# Initialize default settings object with thread ID 1
config = {"configurable": {"thread_id": "1"}}
# Perform the first question
input_message = HumanMessage(
    content="Hello! My name is Teddy. Nice to meet you."
)
# Process messages and output responses in stream mode, display details of the last message
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================
Hello! My name is Teddy. Nice to meet you.
================================== Ai Message ==================================
Hello, Teddy! Nice to meet you. How can I help you?
# Ask a follow-up question
input_message = HumanMessage(content="What is my name?")
# Process the second message in stream mode and output the response
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================
What is my name?
================================== Ai Message ==================================
You said Teddy! Is that correct?
# Step-by-step status check
messages = app.get_state(config).values["messages"]
for message in messages:
    message.pretty_print()
================================ Human Message =================================
Hello! My name is Teddy. Nice to meet you.
================================== Ai Message ==================================
Hello, Teddy! Nice to meet you. How can I help you?
================================ Human Message =================================
What is my name?
================================== Ai Message ==================================
You said Teddy! Is that correct?
from langchain_core.messages import RemoveMessage
# Remove the first message from the message array based on ID and update the app state.
app.update_state(config, {"messages": RemoveMessage(id=messages[0].id)})
# Extract message list from app status and view saved conversation history
messages = app.get_state(config).values["messages"]
for message in messages:
    message.pretty_print()
================================== Ai Message ==================================
Hello, Teddy! Nice to meet you. How can I help you?
================================ Human Message =================================
What is my name?
================================== Ai Message ==================================
You said Teddy! Is that correct?
from langchain_core.messages import RemoveMessage
from langgraph.graph import END
# When the number of messages exceeds 3, delete old messages and keep only the latest messages.
def delete_messages(state):
    messages = state["messages"]
    if len(messages) > 3:
        return {"messages": [RemoveMessage(id=m.id) for m in messages[:-3]]}
# Logic to determine the next execution node based on message status
def should_continue(state: MessagesState) -> Literal["action", "delete_messages"]:
    """Return the next node to execute."""
    last_message = state["messages"][-1]
    # If no tool call was made, go to the message deletion node
    if not last_message.tool_calls:
        return "delete_messages"
    # If a tool call was made, run the action node
    return "action"
# Defining a message state-based workflow graph
workflow = StateGraph(MessagesState)
# Adding Agents and Action Nodes
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# Add a Delete Message node
workflow.add_node(delete_messages)
# Connecting from the start node to the agent node
workflow.add_edge(START, "agent")
# Controlling flow between nodes by adding conditional edges
workflow.add_conditional_edges(
    "agent",
    should_continue,
)
# Connecting from action node to agent node
workflow.add_edge("action", "agent")
# Connect from the message delete node to the end node
workflow.add_edge("delete_messages", END)
# Compile workflow using memory checkpointer
app = workflow.compile(checkpointer=memory)
from langchain_teddynote.graphs import visualize_graph
visualize_graph(app)
# Import HumanMessage class for LangChain message processing
from langchain_core.messages import HumanMessage
# Initialize a settings object containing a thread ID.
config = {"configurable": {"thread_id": "2"}}
# Perform the first question
input_message = HumanMessage(
    content="Hello! My name is Teddy. Nice to meet you."
)

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])
[('human', 'Hello! My name is Teddy. Nice to meet you.')]
[('human', 'Hello! My name is Teddy. Nice to meet you.'), ('ai', 'Hello, Teddy! Nice to meet you. How can I help you?')]
# Perform the second question
input_message = HumanMessage(content="What is my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])
[('human', 'Hello! My name is Teddy. Nice to meet you.'), ('ai', 'Hello, Teddy! Nice to meet you. How can I help you?'), ('human', 'What is my name?')]
[('human', 'Hello! My name is Teddy. Nice to meet you.'), ('ai', 'Hello, Teddy! Nice to meet you. How can I help you?'), ('human', 'What is my name?'), ('ai', 'You said Teddy! Is that correct?')]
[('ai', 'Hello, Teddy! Nice to meet you. How can I help you?'), ('human', 'What is my name?'), ('ai', 'You said Teddy! Is that correct?')]
# Extract and save message list from app status
messages = app.get_state(config).values["messages"]
# Return a list of messages
for message in messages:
    message.pretty_print()
================================== Ai Message ==================================
Hello, Teddy! Nice to meet you. How can I help you?
================================ Human Message =================================
What is my name?
================================== Ai Message ==================================
You said Teddy! Is that correct?