Answers 1-50
🔗 Basic Components
What is a Chain in LangChain? A Chain is a sequence of calls to components (LLMs, prompts, tools, etc.) that are executed in a defined order to perform complex tasks.
Explain the role of LLMChain in LangChain. LLMChain connects a prompt with an LLM and handles input/output. It's the most basic chain, used for text generation from a template.
What is a PromptTemplate in LangChain? A parameterized prompt format that supports variable substitution, used to create dynamic prompts at runtime.
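The PromptTemplate + LLMChain pattern above can be sketched in plain Python (this is an illustrative sketch, not the real LangChain API; fake_llm is a hypothetical stand-in for a model call):

```python
# Conceptual sketch of PromptTemplate + LLMChain: fill a template with
# variables, then hand the formatted prompt to the model.
template = "Write a one-line tagline for a company that makes {product}."

def fake_llm(prompt):
    # Hypothetical stand-in for a real LLM call such as OpenAI(...)
    return f"[LLM answer to: {prompt}]"

def run_chain(product):
    prompt = template.format(product=product)  # variable substitution
    return fake_llm(prompt)                    # LLM generates from the prompt

print(run_chain("solar panels"))
```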
How does SimpleSequentialChain work in LangChain? It executes multiple chains in a fixed sequence; the output of one chain becomes the input of the next.
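The output-to-input handoff of a sequential chain reduces to a simple loop. A minimal sketch (plain Python, not the actual SimpleSequentialChain class; both step functions are hypothetical):

```python
# Conceptual sketch of SimpleSequentialChain: each step's output becomes
# the next step's input, in a fixed order.
def step_outline(topic):
    return f"Outline for {topic}"

def step_draft(outline):
    return f"Draft based on: {outline}"

def sequential_chain(steps, initial_input):
    result = initial_input
    for step in steps:          # single input/output flows through each step
        result = step(result)
    return result

print(sequential_chain([step_outline, step_draft], "LangChain"))
# -> Draft based on: Outline for LangChain
```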
What is the use of ConversationChain? Manages dialogue by maintaining memory (chat history) and generating responses in a conversational context.
🧠 Memory, Tools, and Agents
Explain how Memory is used in LangChain chains. Memory stores conversation history or state and injects it into prompts to enable contextual continuity.
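The memory-injection idea can be sketched as follows (illustrative plain Python; the real class would be something like ConversationBufferMemory, and the model reply here is a placeholder):

```python
# Conceptual sketch of conversation memory: previous turns are stored and
# injected ahead of each new prompt so the model keeps context.
history = []

def build_prompt(user_msg):
    # Memory injection: stored turns come first, then the new message
    return "\n".join(history + [f"Human: {user_msg}", "AI:"])

def chat(user_msg):
    prompt = build_prompt(user_msg)
    reply = "[model reply]"            # stand-in for a real LLM call
    history.append(f"Human: {user_msg}")
    history.append(f"AI: {reply}")
    return prompt

first = chat("My name is Ada.")
second = chat("What is my name?")
print("My name is Ada." in second)  # the earlier turn appears in the new prompt
```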
What are Tools in LangChain and how are they used? Tools are functions (e.g., calculator, search API) that agents can call to solve tasks that go beyond LLM capabilities.
How do you create a custom Agent in LangChain? Define a custom prompt, set up tools, and use AgentExecutor or initialize_agent with custom logic.
What is a Retriever in LangChain and how does it interact with RAG? Retrievers fetch relevant documents for a query. In RAG (Retrieval-Augmented Generation), they supply grounded context to the LLM.
Explain the function of Callbacks in LangChain execution. Callbacks allow logging, streaming, or debugging events during chain/agent execution (e.g., start, end, error).
📄 Document Processing
What is a Document object in LangChain? A container with .page_content and .metadata, used to standardize text inputs for processing.
How does load_qa_chain work in LangChain? It loads a pre-built chain for question answering over a set of documents.
What is a VectorStore in LangChain? A database for storing and retrieving document embeddings, enabling semantic search.
How do you use FAISS with LangChain? Store document embeddings in FAISS and use a retriever wrapper for RAG or search use cases.
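The embed-then-retrieve idea behind a FAISS-backed retriever can be sketched with toy embeddings (illustrative pure Python only; toy_embed is a hypothetical letter-frequency "embedding", not the real FAISS or OpenAIEmbeddings APIs):

```python
import math

# Conceptual sketch of a vector store: documents are embedded once, and a
# query is matched against them by cosine similarity.
def toy_embed(text):
    # Hypothetical embedding: letter-frequency vector (illustration only)
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = ["cats purr", "dogs bark", "parrots talk"]
index = [(d, toy_embed(d)) for d in docs]         # the "vector store"

def retrieve(query, k=1):
    qv = toy_embed(query)
    ranked = sorted(index, key=lambda p: cosine(qv, p[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(retrieve("a purring cat"))  # -> ['cats purr']
```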
What is the purpose of RetrievalQA? Combines a retriever and a QA chain to answer questions based on external document context.
How does load_summarize_chain function? It loads a chain for document summarization, often using map-reduce or refine strategies.
💬 Prompting and Templates
What is a ChatPromptTemplate in LangChain? A specialized template for chat models that supports structured message construction (HumanMessage, SystemMessage, etc.).
How do you add multiple input variables to a prompt in LangChain? Use a PromptTemplate with multiple {variables} and pass a dict to .format().
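Multi-variable substitution works the same way as Python's own str.format, which PromptTemplate builds on. A minimal sketch (plain Python, not the PromptTemplate class itself):

```python
# Conceptual sketch of a multi-variable prompt: two placeholders, filled
# from keyword arguments at runtime.
template = "Translate the following {language} text to English: {text}"

prompt = template.format(language="French", text="Bonjour le monde")
print(prompt)
# -> Translate the following French text to English: Bonjour le monde
```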
⚙️ Advanced Execution
What is the RunnableSequence and how does it differ from a regular chain? RunnableSequence, from LCEL (LangChain Expression Language), allows functional, declarative composition of components.
How do you stream LLM outputs in LangChain? Enable streaming via streaming=True on supported LLMs and use a StreamingStdOutCallbackHandler.
What is LLMMathChain and when should you use it? A chain that solves math problems by combining an LLM with a Python evaluator for accuracy.
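Token streaming, mentioned above, amounts to consuming the model's output incrementally and firing a callback per token. A conceptual sketch (plain Python; fake_streaming_llm and on_new_token are hypothetical stand-ins for a streaming LLM and a callback handler):

```python
# Conceptual sketch of streaming: the model yields tokens as they are
# produced, and a per-token callback prints each one immediately.
def fake_streaming_llm(prompt):
    for token in ["Lang", "Chain ", "streams ", "tokens."]:
        yield token  # with a real LLM these arrive incrementally

def on_new_token(token):
    print(token, end="", flush=True)   # callback fired for every token

chunks = []
for tok in fake_streaming_llm("hi"):
    on_new_token(tok)
    chunks.append(tok)
```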
What is the role of ToolExecutor in LangChain? Manages execution and resolution of tool calls in multi-tool environments for agents.
How does MultiPromptChain select between different prompts? It uses an LLM-based router or selector to choose the appropriate prompt based on input.
How do you add custom tools to an agent in LangChain? Define a Tool with name, func, and description, and add it to the agent's tool list.
What's the difference between ReActAgent and ConversationalAgent?
ReActAgent: Combines reasoning and acting via intermediate steps.
ConversationalAgent: Optimized for multi-turn dialogue, using memory and conversational context.
🧩 Persistence & Execution
How do you save and load LangChain components using JSON or pickle? Use the .save() and .load() methods, with LangChainHub or pickle for local serialization.
What is AgentExecutor in LangChain? It's the runtime that manages tool calls, agent decision-making, and step-by-step execution.
How does LangChain handle error propagation between nodes? Errors are raised as exceptions; custom handlers or retry logic can be added using callbacks or LCEL logic.
What logging or tracing tools are integrated with LangChain? LangSmith (official tool), OpenTelemetry, and custom callback handlers.
How do you set temperature and max tokens in an LLMChain? Configure them when initializing the LLM, e.g. OpenAI(temperature=0.5, max_tokens=150).
📦 LangChainHub, Embeddings, and Runnables
How do you use LangChainHub to load shared chains or prompts? Use load_chain, load_prompt, or load_agent with a public hub path.
What is a RunnableLambda and when should you use it? A lightweight wrapper for arbitrary Python functions inside an LCEL pipeline.
How do you use OpenAIEmbeddings in LangChain? OpenAIEmbeddings().embed_query(text) returns a vector; plug it into a VectorStore for semantic search.
📚 Loaders and Splitters
What are DocumentLoaders in LangChain and how do they work? They parse various input sources (PDF, TXT, web) and return a list of Document objects.
How does TextSplitter work in LangChain? It splits large texts into smaller chunks based on characters, tokens, or structure for embedding/processing.
What’s the difference between RecursiveCharacterTextSplitter and CharacterTextSplitter?
Recursive: Splits hierarchically using multiple separators.
Character: Uses one separator, may break mid-sentence.
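The hierarchical-splitting idea can be sketched as follows (a simplified illustration only; the real RecursiveCharacterTextSplitter also handles chunk overlap and separator retention):

```python
# Simplified sketch of recursive splitting: try coarse separators first
# (paragraphs), fall back to finer ones (sentences, then spaces), and only
# hard-cut when nothing else fits.
def recursive_split(text, chunk_size, seps=("\n\n", ". ", " ")):
    if len(text) <= chunk_size:
        return [text]
    for sep in seps:
        parts = text.split(sep)
        if len(parts) > 1:
            chunks = []
            for part in parts:
                chunks.extend(recursive_split(part, chunk_size, seps))
            return chunks
    # no separator worked: hard cut at chunk_size
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

text = "First paragraph here.\n\nSecond paragraph. It has two sentences."
print(recursive_split(text, 25))
```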
🧠 RAG and Evaluation
How do you use LangChain with Chroma vector store? Use Chroma.from_documents() or Chroma.add_documents() and connect it to a retriever.
What is a RetrievalQAWithSourcesChain? Similar to RetrievalQA, but it also returns the source documents alongside the answer for transparency.
How can you evaluate LangChain output using QA Eval Chain? Use chains like QAEvalChain to auto-score outputs against references.
How do you cache LLM responses in LangChain for efficiency? Use langchain.cache with InMemoryCache, SQLiteCache, or Redis.
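The caching idea (what InMemoryCache does under the hood) is a prompt-keyed lookup. A minimal sketch in plain Python, with a hypothetical cached_llm standing in for a cached model:

```python
# Conceptual sketch of LLM response caching: identical prompts hit a dict
# instead of triggering another model call.
cache = {}
calls = 0

def cached_llm(prompt):
    global calls
    if prompt in cache:
        return cache[prompt]          # cache hit: no model call
    calls += 1                        # cache miss: "call the model" once
    cache[prompt] = f"[answer to: {prompt}]"
    return cache[prompt]

cached_llm("What is LangChain?")
cached_llm("What is LangChain?")      # second call is served from cache
print(calls)  # -> 1
```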
🔧 LCEL & Runtime
How does LangChain Expression Language (LCEL) simplify workflows? It offers a functional way to compose chains (e.g., with the | pipe operator), with built-in support for debugging and tracing.
What is a RunnableParallel and when would you use it? It executes multiple chains or components in parallel and combines their results.
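Pipe composition and parallel fan-out can be sketched together (a toy illustration of the LCEL idea, not the real Runnable classes; Step and parallel are hypothetical names):

```python
# Minimal sketch of LCEL-style composition: | chains two components in
# sequence, and a "parallel" wrapper fans one input out to several branches.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):                 # a | b -> run a, then b
        return Step(lambda x: other.invoke(self.invoke(x)))

def parallel(**branches):                    # RunnableParallel-style fan-out
    return Step(lambda x: {k: s.invoke(x) for k, s in branches.items()})

upper = Step(str.upper)
exclaim = Step(lambda s: s + "!")
chain = upper | exclaim                      # sequence via the pipe operator
print(chain.invoke("hello"))                 # -> HELLO!

both = parallel(shout=chain, length=Step(len))
print(both.invoke("hello"))                  # -> {'shout': 'HELLO!', 'length': 5}
```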
How do you use AzureOpenAI with LangChain? Use the AzureOpenAI class with deployment_name, api_key, and api_base parameters.
What's the purpose of MapReduceDocumentsChain? It summarizes or processes documents in parallel (map step) and combines the results (reduce step).
How can you monitor token usage in a LangChain pipeline? Use LangSmith or a TokenCallbackHandler to track token counts per component.
How do you pass session or user data through LangChain components? Pass it via the inputs dictionary and use RunnableWithConfig for structured session context.
What's the difference between Tool and ToolSet? Tool is a single callable unit; ToolSet is a grouped collection of tools, often tied to a domain.
How can you integrate LangChain with external APIs using RequestsTool? Use RequestsTool to make HTTP calls directly from agents: define the URL and headers, then parse the response.
How do you build a multi-step reasoning agent using LangChain components? Combine memory, tools, a custom prompt, and a ReAct agent with AgentExecutor.
What's the best way to unit test LangChain components? Use unittest or pytest with mocked LLM responses and assert checks on chain outputs.
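The mocked-LLM testing approach can be sketched like this (pytest-style bare asserts; make_chain and fake_llm are hypothetical names for illustration):

```python
# Sketch of unit-testing a chain by mocking the LLM: the chain's prompt
# formatting and wiring are exercised against a deterministic fake model,
# so tests run fast and need no API key.
def make_chain(llm):
    def run(topic):
        prompt = f"Summarize: {topic}"
        return llm(prompt)
    return run

def fake_llm(prompt):
    return "FIXED ANSWER"            # deterministic stand-in for a real LLM

def test_chain_returns_llm_output():
    chain = make_chain(fake_llm)
    assert chain("LangChain") == "FIXED ANSWER"

test_chain_returns_llm_output()
```

In a real suite you would swap fake_llm for a mock object and also assert on the prompt it received.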