RunnableParallel is useful for manipulating the output of one Runnable in a sequence so that it matches the input format of the next Runnable.
Here, the prompt expects its input as a map with the keys "context" and "question". The user's input, however, is just the question, so we use the retriever to fetch the context and pass the user input through under the "question" key.
# API keys are managed as environment variables in a .env configuration file.
from dotenv import load_dotenv
# Load API key information
load_dotenv()
True
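If you are not using a .env file, the API key can also be set directly as an environment variable. A minimal alternative sketch (the key value below is a hypothetical placeholder):

import os

# Set the OpenAI API key directly (hypothetical placeholder value).
os.environ["OPENAI_API_KEY"] = "sk-..."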
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install langchain-teddynote
from langchain_teddynote import logging
# Enter a project name.
logging.langsmith("LCEL-Advanced")
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Create a FAISS vector store from text.
vectorstore = FAISS.from_texts(
    ["Teddy is an AI engineer who loves programming!"], embedding=OpenAIEmbeddings()
)

# Use the vector store as a retriever.
retriever = vectorstore.as_retriever()

# Define a template.
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""

# Create a chat prompt from the template.
prompt = ChatPromptTemplate.from_template(template)

# Initialize the ChatOpenAI model.
model = ChatOpenAI(model="gpt-4o-mini")

# Construct the retrieval chain.
retrieval_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

# Run the retrieval chain to answer the question.
retrieval_chain.invoke("What is Teddy's occupation?")
"Teddy's occupation is an AI engineer."
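The RunnablePassthrough above simply forwards the user input unchanged into the "question" slot; on its own it acts as an identity function. A quick illustrative sketch:

from langchain_core.runnables import RunnablePassthrough

# RunnablePassthrough returns its input unchanged.
RunnablePassthrough().invoke("What is Teddy's occupation?")
# -> "What is Teddy's occupation?"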
Note that when composing a RunnableParallel with another Runnable, the type conversion is handled automatically: the dict injected as input does not need to be wrapped in the RunnableParallel class separately. The three methods below are all handled identically.
# The dict in (1) is implicitly wrapped in its own RunnableParallel.
1. {"context": retriever, "question": RunnablePassthrough()}
2. RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
3. RunnableParallel(context=retriever, question=RunnablePassthrough())
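To see the automatic coercion in isolation: a plain dict piped into a sequence is converted into a RunnableParallel. A minimal sketch using RunnableLambda, independent of the retrieval example above:

from langchain_core.runnables import RunnableLambda

# A plain dict on the right-hand side of | is coerced into a RunnableParallel.
mini_chain = RunnableLambda(lambda x: x + 1) | {
    "double": RunnableLambda(lambda x: x * 2),
    "triple": RunnableLambda(lambda x: x * 3),
}
mini_chain.invoke(1)  # -> {'double': 4, 'triple': 6}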
Using itemgetter as a shortcut

When combined with RunnableParallel, Python's itemgetter can be used as a shortcut to extract data from the map. In the example below, itemgetter extracts specific keys from the map.
from operator import itemgetter

from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Create a FAISS vector store from text.
vectorstore = FAISS.from_texts(
    ["Teddy is an AI engineer who loves programming!"], embedding=OpenAIEmbeddings()
)

# Use the vector store as a retriever.
retriever = vectorstore.as_retriever()

# Define a template.
template = """Answer the question based only on the following context:
{context}

Question: {question}

Answer in the following language: {language}
"""

# Create a chat prompt from the template.
prompt = ChatPromptTemplate.from_template(template)

# Form the chain.
chain = (
    {
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
        "language": itemgetter("language"),
    }
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

# Call the chain to answer the question.
chain.invoke({"question": "What is Teddy's occupation?", "language": "Korean"})
"Teddy's job is an AI engineer."
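itemgetter itself is just a helper from Python's standard library: it returns a function that extracts the given key from a dict, which is why it can stand in for a Runnable in the map above. A quick sketch:

from operator import itemgetter

# itemgetter("question") returns a callable that pulls the "question" key.
get_question = itemgetter("question")
get_question({"question": "What is Teddy's occupation?", "language": "Korean"})
# -> "What is Teddy's occupation?"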
Understanding parallelism step by step

RunnableParallel makes it easy to run multiple Runnables in parallel and to return their outputs as a map.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI

model = ChatOpenAI()  # Initialize the ChatOpenAI model.

# Define a chain that asks for the capital.
capital_chain = (
    ChatPromptTemplate.from_template("Where is the capital of {country}?")
    | model
    | StrOutputParser()
)

# Define a chain that asks for the area.
area_chain = (
    ChatPromptTemplate.from_template("What is the area of {country}?")
    | model
    | StrOutputParser()
)

# Create a RunnableParallel object that runs capital_chain and area_chain in parallel.
map_chain = RunnableParallel(capital=capital_chain, area=area_chain)

# Invoke map_chain to ask for the capital and area of South Korea.
map_chain.invoke({"country": "South Korea"})
{'capital': 'The capital of the Republic of Korea is Seoul.',
 'area': 'The area of the Republic of Korea is about 100,363.4 km².'}
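Because the parallel map's output is itself a dict with the keys "capital" and "area", it can be piped straight into a follow-up prompt. A hypothetical sketch reusing map_chain and model from above:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Feed the parallel results into a follow-up prompt (illustrative only).
summary_chain = (
    map_chain
    | ChatPromptTemplate.from_template(
        "Summarize in one sentence: capital={capital}, area={area}"
    )
    | model
    | StrOutputParser()
)
summary_chain.invoke({"country": "South Korea"})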
As shown below, the chains can also be run when each one's prompt template uses a different input variable; you simply pass a value for every key when invoking.
# Define a chain that asks for the capital.
capital_chain2 = (
    ChatPromptTemplate.from_template("Where is the capital of {country1}?")
    | model
    | StrOutputParser()
)

# Define a chain that asks for the area.
area_chain2 = (
    ChatPromptTemplate.from_template("What is the area of {country2}?")
    | model
    | StrOutputParser()
)

# Create a RunnableParallel object that runs capital_chain2 and area_chain2 in parallel.
map_chain2 = RunnableParallel(capital=capital_chain2, area=area_chain2)

# Invoke map_chain2, passing a value for each key.
map_chain2.invoke({"country1": "korea", "country2": "USA"})
{'capital': 'The capital of Korea is Seoul.',
 'area': 'The area of the United States is about 9.83 million square kilometers.'}
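If the original input also needs to be preserved in the result, a RunnablePassthrough can be added as one more branch of the map. A sketch under the same setup (map_chain3 is a hypothetical name):

from langchain_core.runnables import RunnableParallel, RunnablePassthrough

# Keep the raw input available next to the chain outputs (illustrative only).
map_chain3 = RunnableParallel(
    capital=capital_chain2, area=area_chain2, inputs=RunnablePassthrough()
)
map_chain3.invoke({"country1": "korea", "country2": "USA"})
# -> {'capital': '...', 'area': '...', 'inputs': {'country1': 'korea', 'country2': 'USA'}}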
Parallel processing

Because RunnableParallel runs each Runnable in the map in parallel, it is also useful for executing independent processes concurrently.

For example, the area_chain, capital_chain, and map_chain defined earlier all complete in almost the same time, even though map_chain runs both of the other chains.
%%timeit
# Call the chain asking for the area and measure the execution time.
area_chain.invoke({"country": "korea"})
907 ms ± 132 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
# Call the chain asking for the capital and measure the execution time.
capital_chain.invoke({"country": "korea"})
790 ms ± 116 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
# Call the parallel configured chain and measure the execution time.
map_chain.invoke({"country": "korea"})
853 ms ± 159 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
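The same parallelism also composes with batch execution: .batch() runs the map over several inputs, and each input fans out to the sub-chains in parallel. A sketch reusing map_chain:

# Run the parallel map over multiple inputs at once.
map_chain.batch([{"country": "korea"}, {"country": "USA"}])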