11. How to generate branches for parallel node execution

Parallel execution of nodes is essential for speeding up overall graph execution. LangGraph supports parallel node execution out of the box, which can significantly improve the performance of graph-based workflows.
This parallelization is implemented through fan-out and fan-in mechanisms, using either standard edges or conditional_edges.
Environment Setup
```python
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()  # returns True when the .env file is loaded successfully
```

```python
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH17-LangGraph-Modules")
```
Fan-out and fan-in of parallel nodes
fan-out / fan-in
In parallel processing, fan-out and fan-in are concepts that describe how work is distributed and then gathered back together.
Fan-out (distribute): Split a large task into multiple smaller tasks. For example, when making a pizza, you can prepare the dough, sauce, and cheese separately. Splitting the work into parts like this and processing them simultaneously is fan-out.
Fan-in (gather): Combine the divided small tasks back into one. Just as all the prepared ingredients are brought together to make the finished pizza, collecting the results of the individual tasks and completing the final work is fan-in.
In other words, the flow is: fan-out distributes the work, and fan-in combines the results to produce the final outcome.
This example shows node A fanning out to B and C, and then fanning in to D.
The State specifies a reducer (the add operator). Instead of simply overwriting the existing value of a specific key in the State, the reducer combines or accumulates values. For a list, this means concatenating the new list with the existing one.
To specify a reducer function for a particular key in the State, LangGraph uses the Annotated type. This keeps the original type (list) for type checking, while attaching the reducer function (add) to it without changing the type itself.
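A minimal sketch of such a fan-out/fan-in graph, assuming illustrative node names (a, b, c, d) and an aggregate key; the actual code used in this chapter may differ:

```python
import operator
from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END


# The reducer (operator.add) accumulates list values instead of overwriting them
class State(TypedDict):
    aggregate: Annotated[list, operator.add]


# Each node appends its own marker to the shared "aggregate" list
def node_a(state: State):
    print(f'Adding "A" to {state["aggregate"]}')
    return {"aggregate": ["A"]}


def node_b(state: State):
    print(f'Adding "B" to {state["aggregate"]}')
    return {"aggregate": ["B"]}


def node_c(state: State):
    print(f'Adding "C" to {state["aggregate"]}')
    return {"aggregate": ["C"]}


def node_d(state: State):
    print(f'Adding "D" to {state["aggregate"]}')
    return {"aggregate": ["D"]}


builder = StateGraph(State)
builder.add_node("a", node_a)
builder.add_node("b", node_b)
builder.add_node("c", node_c)
builder.add_node("d", node_d)

# Fan-out: a -> b and a -> c run in the same super-step
builder.add_edge(START, "a")
builder.add_edge("a", "b")
builder.add_edge("a", "c")
# Fan-in: d waits for both b and c before running
builder.add_edge("b", "d")
builder.add_edge("c", "d")
builder.add_edge("d", END)

graph = builder.compile()
```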
Visualize the graph.
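One common way to render the compiled graph in a notebook (assuming the graph object from the sketch above):

```python
from IPython.display import Image, display

# Render the compiled graph as a Mermaid-based PNG
display(Image(graph.get_graph().draw_mermaid_png()))
```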

By running the graph, you can check how the values added by each node are accumulated through the reducer.
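For example, invoking the sketch above accumulates the markers from every node (the output shown is hypothetical; the ordering of the parallel branch updates may differ):

```python
# Each node's return value is merged into "aggregate" by operator.add
result = graph.invoke({"aggregate": []})
print(result)
# Hypothetical output: {'aggregate': ['A', 'B', 'C', 'D']}
```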
Handling exceptions during parallel processing
LangGraph executes nodes within "super-steps" (a single processing step in which multiple nodes are handled), which means that even though the parallel branches run simultaneously, the entire super-step is processed as one transaction.
So if any of these branches raises an exception, none of the updates are applied to the state (the whole super-step is treated as having failed).
super-step: full process step where multiple nodes are handled
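As a small illustration of this transactional behavior, the hypothetical graph below (reusing State and node_a from the earlier sketch) fans out into one branch that succeeds and one that raises; because the super-step fails, neither branch's update is applied:

```python
def ok_branch(state: State):
    return {"aggregate": ["ok"]}


def failing_branch(state: State):
    raise ValueError("boom")


tx_builder = StateGraph(State)
tx_builder.add_node("a", node_a)
tx_builder.add_node("ok", ok_branch)
tx_builder.add_node("fail", failing_branch)
tx_builder.add_edge(START, "a")
# "ok" and "fail" run in the same super-step
tx_builder.add_edge("a", "ok")
tx_builder.add_edge("a", "fail")
tx_builder.add_edge("ok", END)
tx_builder.add_edge("fail", END)
tx_graph = tx_builder.compile()

try:
    tx_graph.invoke({"aggregate": []})
except ValueError:
    # The exception aborts the whole super-step, so the "ok" update is not applied either
    print("super-step failed; no updates from this step were applied")
```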

If you have an error-prone task (e.g., handling flaky API calls), LangGraph provides two ways to deal with it:
You can catch and handle exceptions with ordinary Python code inside the node.
You can set a retry_policy so that the graph retries nodes that raise certain types of exceptions. Only the failed branch is retried, so you don't have to worry about redoing unnecessary work (a sketch follows below).
These features give you full control over parallel execution and exception handling.
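A rough sketch of the retry_policy approach is shown below. The import path and parameter names follow recent langgraph releases and may differ across versions; the flaky node is hypothetical and reuses the State type from the earlier sketch.

```python
import random

# Import path may differ by version (older releases: from langgraph.pregel import RetryPolicy)
from langgraph.types import RetryPolicy


# Hypothetical flaky node that fails intermittently
def flaky_node(state: State):
    if random.random() < 0.5:
        raise ValueError("Transient API error")
    return {"aggregate": ["flaky result"]}


retry_builder = StateGraph(State)
# Retry this node up to 3 times when a ValueError is raised
# (newer releases may name this parameter retry_policy instead of retry)
retry_builder.add_node(
    "flaky",
    flaky_node,
    retry=RetryPolicy(max_attempts=3, retry_on=ValueError),
)
retry_builder.add_edge(START, "flaky")
retry_builder.add_edge("flaky", END)
retry_graph = retry_builder.compile()
```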
Fan-out and fan-in of parallel nodes with additional steps
The example above showed fan-out and fan-in when each path consists of a single step. But what if a path contains multiple steps?
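One possible sketch, reusing State and the node functions from the earlier example; b_1 and b_2 are illustrative names for the two steps on the longer path:

```python
# Two sequential steps on the "b" path
def node_b1(state: State):
    return {"aggregate": ["B1"]}


def node_b2(state: State):
    return {"aggregate": ["B2"]}


multi_builder = StateGraph(State)
multi_builder.add_node("a", node_a)
multi_builder.add_node("b_1", node_b1)
multi_builder.add_node("b_2", node_b2)
multi_builder.add_node("c", node_c)
multi_builder.add_node("d", node_d)

multi_builder.add_edge(START, "a")
# Fan-out: one branch has two steps (b_1 -> b_2), the other has one (c)
multi_builder.add_edge("a", "b_1")
multi_builder.add_edge("b_1", "b_2")
multi_builder.add_edge("a", "c")
# Fan-in: passing a list makes "d" wait until both branches have finished
multi_builder.add_edge(["b_2", "c"], "d")
multi_builder.add_edge("d", END)

multi_graph = multi_builder.compile()
```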
Visualize the graph.
Conditional branching
If the fan-out is not deterministic, you can use add_conditional_edges directly.
When there is a known "sink" node that the conditional branches should connect to afterwards, you can provide then="<name of the node to run>" when creating the conditional edge.
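A minimal sketch of a conditional fan-out with a common sink, assuming a which key in the state decides the branches; all node names and the routing logic are illustrative:

```python
import operator
from typing import Annotated, Sequence
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END


class ConditionalState(TypedDict):
    aggregate: Annotated[list, operator.add]
    which: str  # decides which branch(es) to fan out to


def cond_a(state: ConditionalState):
    return {"aggregate": ["A"]}


def branch_b(state: ConditionalState):
    return {"aggregate": ["B"]}


def branch_c(state: ConditionalState):
    return {"aggregate": ["C"]}


def sink_e(state: ConditionalState):
    # Sink node that runs after every conditional branch
    return {"aggregate": ["E"]}


def route(state: ConditionalState) -> Sequence[str]:
    # Returning a list of node names fans out to all of them at once
    if state["which"] == "c_only":
        return ["c"]
    return ["b", "c"]


cond_builder = StateGraph(ConditionalState)
cond_builder.add_node("a", cond_a)
cond_builder.add_node("b", branch_b)
cond_builder.add_node("c", branch_c)
cond_builder.add_node("e", sink_e)

cond_builder.add_edge(START, "a")
# then="e": every branch selected by route() is followed by the sink node "e"
cond_builder.add_conditional_edges("a", route, ["b", "c"], then="e")
cond_builder.add_edge("e", END)

cond_graph = cond_builder.compile()
# Example invocation: cond_graph.invoke({"aggregate": [], "which": "bc"})
```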
Below is reference code for comparison. Instead of using the then="e" syntax, you can omit then and add the edge connections to the sink yourself.
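An equivalent sketch without then, reusing the state, nodes, and router from the block above and wiring each branch to the sink explicitly:

```python
explicit_builder = StateGraph(ConditionalState)
explicit_builder.add_node("a", cond_a)
explicit_builder.add_node("b", branch_b)
explicit_builder.add_node("c", branch_c)
explicit_builder.add_node("e", sink_e)

explicit_builder.add_edge(START, "a")
# No then="e": route to the branches, then wire each branch to the sink manually
explicit_builder.add_conditional_edges("a", route, ["b", "c"])
explicit_builder.add_edge("b", "e")
explicit_builder.add_edge("c", "e")
explicit_builder.add_edge("e", END)

explicit_graph = explicit_builder.compile()
```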
Visualize the graph.
Sorting fan-out values by reliability
Nodes that are fanned out in parallel run within a single "super-step". The updates produced in each super-step are applied to the state sequentially once that super-step completes.
If you need a consistent, predefined ordering for the updates from a parallel super-step, write the output values to a separate state field together with an identifying key, and then combine them in a "sink" node by adding regular edges from each fanned-out node to that gathering point.
For example, consider the case where you want to sort the outputs of a parallel step by their "reliability".
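A sketch of this pattern: each parallel node writes its value and a reliability score into a separate fanout_values field, and the sink node sorts by reliability before writing the final result. The field and node names are illustrative; the reliability scores follow the reference values listed further below.

```python
import operator
from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END


class SortState(TypedDict):
    # Final, ordered result
    aggregate: Annotated[list, operator.add]
    # Scratch field: each parallel node appends {"value": ..., "reliability": ...}
    fanout_values: Annotated[list, operator.add]


def first_node(state: SortState):
    return {"aggregate": ["A"]}


def make_scored_node(value: str, reliability: float):
    # Factory for parallel nodes that record a value together with its reliability
    def scored_node(state: SortState):
        return {"fanout_values": [{"value": value, "reliability": reliability}]}

    return scored_node


def sink_node(state: SortState):
    # Fan-in: sort the collected values by reliability (highest first)
    ranked = sorted(state["fanout_values"], key=lambda x: x["reliability"], reverse=True)
    return {"aggregate": [item["value"] for item in ranked] + ["E"]}


sort_builder = StateGraph(SortState)
sort_builder.add_node("a", first_node)
sort_builder.add_node("b", make_scored_node("B", 0.1))
sort_builder.add_node("c", make_scored_node("C", 0.9))
sort_builder.add_node("d", make_scored_node("D", 0.5))
sort_builder.add_node("e", sink_node)

sort_builder.add_edge(START, "a")
# Fan-out from a to b, c, and d
sort_builder.add_edge("a", "b")
sort_builder.add_edge("a", "c")
sort_builder.add_edge("a", "d")
# Fan-in: e waits for all three branches, then sorts by reliability
sort_builder.add_edge(["b", "c", "d"], "e")
sort_builder.add_edge("e", END)

sort_graph = sort_builder.compile()
```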
Visualize the graph.

Sort results by reliability when running nodes in parallel.
Reference
b: reliability = 0.1
c: reliability = 0.9
d: reliability = 0.5
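Running the sketch above with these scores would, hypothetically, order the results as C, D, B:

```python
# With reliabilities b=0.1, c=0.9, d=0.5, the sink sorts them as C, D, B
result = sort_graph.invoke({"aggregate": [], "fanout_values": []})
print(result["aggregate"])
# Expected: ['A', 'C', 'D', 'B', 'E']
```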