Answers 1–50


🔗 Basic Components

  1. What is a Chain in LangChain? A Chain is a sequence of calls to components (LLMs, prompts, tools, etc.) that are executed in a defined order to perform complex tasks.

  2. Explain the role of LLMChain in LangChain. LLMChain connects a prompt with an LLM and handles input/output. It’s the most basic chain used for text generation based on a template.

  3. What is a PromptTemplate in LangChain? It’s a parameterized prompt format that supports variable substitution, used to create dynamic prompts at runtime.
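
The substitution behaviour can be sketched in plain Python. This is a toy stand-in (`MiniPromptTemplate` is hypothetical, not the real LangChain class), but it shows the same idea: declared variables filled in at runtime.

```python
import string

# Toy sketch of PromptTemplate-style substitution (illustrative,
# not the real LangChain class): named {variables} filled at runtime.
class MiniPromptTemplate:
    def __init__(self, template: str):
        self.template = template
        # Discover the input variables declared in the template.
        self.input_variables = [
            name for _, name, _, _ in string.Formatter().parse(template) if name
        ]

    def format(self, **kwargs: str) -> str:
        return self.template.format(**kwargs)

prompt = MiniPromptTemplate("Translate '{text}' into {language}.")
print(prompt.input_variables)  # ['text', 'language']
print(prompt.format(text="hello", language="French"))
# Translate 'hello' into French.
```

The real `PromptTemplate` works the same way conceptually: it knows its `input_variables` and renders the final prompt string when you call `.format()`.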

  4. How does SimpleSequentialChain work in LangChain? Executes multiple chains in a fixed sequence. The output of one chain becomes the input of the next.
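
The single-input/single-output handoff can be sketched with plain callables (a toy illustration of the behaviour, not the real class):

```python
# Illustrative sketch of SimpleSequentialChain behaviour: each step is a
# callable, and each step's output becomes the next step's input.
def run_sequential(steps, initial_input: str) -> str:
    value = initial_input
    for step in steps:
        value = step(value)  # single input -> single output, in order
    return value

# Two toy "chains": generate a title, then uppercase it.
make_title = lambda topic: f"A Beginner's Guide to {topic}"
shout = lambda text: text.upper()

print(run_sequential([make_title, shout], "LangChain"))
# A BEGINNER'S GUIDE TO LANGCHAIN
```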

  5. What is the use of ConversationChain? Manages dialogue by maintaining memory (chat history) and generating responses in a conversational context.


🧠 Memory, Tools, and Agents

  1. Explain how Memory is used in LangChain chains. Memory stores conversation history or state and injects it into prompts to enable contextual continuity.
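
The inject-history-into-the-prompt pattern can be sketched with a toy buffer (`BufferMemory` here is hypothetical; it mimics the idea behind LangChain's buffer-style memory):

```python
# Toy buffer memory: store turns and inject the formatted history
# into the next prompt for contextual continuity.
class BufferMemory:
    def __init__(self):
        self.turns = []

    def save(self, user: str, ai: str) -> None:
        self.turns.append((user, ai))

    def as_history(self) -> str:
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)

memory = BufferMemory()
memory.save("Hi!", "Hello, how can I help?")
prompt = f"{memory.as_history()}\nHuman: What did I just say?\nAI:"
print(prompt)
```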

  2. What are Tools in LangChain and how are they used? Tools are functions (e.g., calculator, search API) that agents can call to solve tasks that go beyond LLM capabilities.

  3. How do you create a custom Agent in LangChain? Define a custom prompt, set up tools, and use AgentExecutor or initialize_agent with custom logic.

  4. What is a Retriever in LangChain and how does it interact with RAG? Retrievers fetch relevant documents based on queries. Used in RAG (Retrieval-Augmented Generation) to provide grounded context to LLMs.
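
The retriever's role can be sketched with a toy keyword scorer (real retrievers use embeddings; this just illustrates "query in, ranked documents out"):

```python
# Toy keyword retriever: score documents by query-term overlap and
# return the top matches as grounding context for an LLM (RAG).
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(terms & set(d.lower().rstrip(".").split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "LangChain chains compose LLM calls.",
    "FAISS stores dense vectors.",
    "Retrievers fetch relevant documents for RAG.",
]
context = retrieve("how do retrievers work in RAG", docs)
print(context[0])  # Retrievers fetch relevant documents for RAG.
```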

  5. Explain the function of Callbacks in LangChain execution. Callbacks allow logging, streaming, or debugging events during chain/agent execution (e.g., start, end, error).


📄 Document Processing

  1. What is a Document object in LangChain? A container with .page_content and .metadata used to standardize text inputs for processing.
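
A minimal sketch of the container (the real class lives in `langchain_core.documents`, but the shape is this simple):

```python
from dataclasses import dataclass, field

# Minimal sketch of LangChain's Document container: text plus metadata.
@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

doc = Document(
    page_content="LangChain standardizes text inputs.",
    metadata={"source": "notes.txt", "page": 1},
)
print(doc.page_content)
print(doc.metadata["source"])  # notes.txt
```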

  2. How does load_qa_chain work in LangChain? It loads a pre-built chain for question-answering over a set of documents.

  3. What is a VectorStore in LangChain? A database for storing and retrieving embeddings of documents, enabling semantic search.
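
Semantic search over embeddings can be sketched in a few lines (a toy in-memory store; real stores like FAISS or Chroma do this at scale with approximate indexes):

```python
import math

# Toy in-memory vector store: rank stored texts by cosine similarity
# of their embeddings against a query vector.
class MiniVectorStore:
    def __init__(self):
        self.entries = []  # (vector, text) pairs

    def add(self, vector, text):
        self.entries.append((vector, text))

    def search(self, query_vec, k=1):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.hypot(*a) * math.hypot(*b))
        ranked = sorted(self.entries, key=lambda e: cosine(e[0], query_vec),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = MiniVectorStore()
store.add([1.0, 0.0], "dogs")
store.add([0.0, 1.0], "finance")
print(store.search([0.9, 0.1]))  # ['dogs']
```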

  4. How do you use FAISS with LangChain? Store document embeddings in FAISS and use a retriever wrapper for RAG or search use cases.

  5. What is the purpose of RetrievalQA? Combines a retriever and a QA chain to answer questions based on external document context.

  6. How does load_summarize_chain function? Loads a chain for document summarization, often using map-reduce or refinement strategies.


💬 Prompting and Templates

  1. What is a ChatPromptTemplate in LangChain? Specialized template for chat models that supports structured message construction (HumanMessage, SystemMessage, etc.).

  2. How do you add multiple input variables to a prompt in LangChain? Use a PromptTemplate with multiple {variables} and pass a dict to .format().


⚙️ Advanced Execution

  1. What is the RunnableSequence and how does it differ from a regular chain? RunnableSequence from LCEL (LangChain Expression Language) allows functional, declarative composition of components.
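
The pipe-composition idea can be sketched with a toy `Runnable` class (hypothetical, not the `langchain_core` implementation, but the `|` semantics are the same):

```python
# Sketch of LCEL-style composition: the | operator chains runnables so
# that pipeline.invoke(x) feeds each step's output into the next.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other: "Runnable") -> "Runnable":
        return Runnable(lambda x: other.invoke(self.invoke(x)))

pipeline = Runnable(lambda t: f"Q: {t}") | Runnable(str.upper)
print(pipeline.invoke("what is lcel?"))  # Q: WHAT IS LCEL?
```

Unlike a classic `LLMChain`, nothing here is special-cased: prompts, models, and parsers all share one interface, which is what makes the declarative composition possible.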

  2. How do you stream LLM outputs in LangChain? Enable streaming via streaming=True on supported LLMs and use a StreamingStdOutCallbackHandler.

  3. What is LLMMathChain and when should you use it? A chain where the LLM translates a word problem into a math expression, which is then evaluated programmatically for numerical accuracy. Use it when exact arithmetic matters more than fluent prose.

  4. What is the role of ToolExecutor in LangChain? Manages execution and resolution of tool calls in multi-tool environments for agents.

  5. How does MultiPromptChain select between different prompts? It uses an LLM-based router or selector to choose the appropriate prompt based on input.

  6. How do you add custom tools to an agent in LangChain? Define a Tool with name, func, and description, and add it to the agent's tool list.
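
The name/func/description triple can be sketched as a dataclass (a toy stand-in for LangChain's `Tool`; an agent uses the description to decide when to call `func`):

```python
from dataclasses import dataclass
from typing import Callable

# Sketch of a Tool definition: the same three fields LangChain's Tool
# expects when you register it in an agent's tool list.
@dataclass
class Tool:
    name: str
    func: Callable[[str], str]
    description: str

def word_count(text: str) -> str:
    return str(len(text.split()))

tools = [Tool(name="word_count", func=word_count,
              description="Counts the words in the input text.")]

# An agent would select a tool by name/description; here we call it directly.
print(tools[0].func("count these four words"))  # 4
```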

  7. What’s the difference between ReActAgent and ConversationalAgent?

  • ReActAgent: Combines reasoning and acting via intermediate steps.

  • ConversationalAgent: Optimized for multi-turn dialogue, using memory and conversational context.


🧩 Persistence & Execution

  1. How do you save and load LangChain components using JSON or pickle? Serializable components expose .save(path) to write JSON/YAML, restored with the matching loader (e.g., load_chain, load_prompt); pickle can serialize objects locally that lack native serialization.

  2. What is AgentExecutor in LangChain? It’s the runtime that manages tool calls, agent decision-making, and step-by-step execution.

  3. How does LangChain handle error propagation between nodes? Errors are raised as exceptions; custom handlers or retry logic can be added using callbacks or LCEL logic.

  4. What logging or tracing tools are integrated with LangChain? LangSmith (official tool), OpenTelemetry, and custom callback handlers.

  5. How do you set temperature and max tokens in an LLMChain? Configure these in the LLM initialization: OpenAI(temperature=0.5, max_tokens=150).


📦 LangChainHub, Embeddings, and Runnables

  1. How do you use LangChainHub to load shared chains or prompts? Use load_chain, load_prompt, or load_agent with a public hub path.

  2. What is a RunnableLambda and when should you use it? A lightweight wrapper for arbitrary Python functions inside an LCEL pipeline.

  3. How do you use OpenAIEmbeddings in LangChain? OpenAIEmbeddings().embed_query(text) returns a vector. Plug into a VectorStore for semantic search.


📚 Loaders and Splitters

  1. What are DocumentLoaders in LangChain and how do they work? They parse various input sources (PDF, TXT, Web) and return a list of Document objects.

  2. How does TextSplitter work in LangChain? Splits large texts into smaller chunks based on characters, tokens, or structure for embedding/processing.

  3. What’s the difference between RecursiveCharacterTextSplitter and CharacterTextSplitter?

  • Recursive: Splits hierarchically using multiple separators.

  • Character: Uses one separator, may break mid-sentence.
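
The hierarchical strategy can be sketched as follows (a simplified toy, assuming character-length chunks; the real `RecursiveCharacterTextSplitter` adds overlap and length functions):

```python
# Sketch of recursive splitting: try separators in order ("\n\n", "\n", " ")
# and only fall back to a finer separator when a chunk is still too large,
# then greedily re-merge small neighbours up to chunk_size.
def recursive_split(text, chunk_size, separators=("\n\n", "\n", " ")):
    if len(text) <= chunk_size or not separators:
        return [text]
    sep, rest = separators[0], separators[1:]
    pieces = []
    for piece in text.split(sep):
        if len(piece) <= chunk_size:
            pieces.append(piece)
        else:
            pieces.extend(recursive_split(piece, chunk_size, rest))
    # Re-merge adjacent pieces so chunks approach (but never exceed) chunk_size.
    merged, buf = [], ""
    for p in pieces:
        cand = p if not buf else buf + sep + p
        if len(cand) <= chunk_size:
            buf = cand
        else:
            if buf:
                merged.append(buf)
            buf = p
    if buf:
        merged.append(buf)
    return merged

text = "First paragraph.\n\nSecond paragraph that is a bit longer."
print(recursive_split(text, chunk_size=30))
```

Because paragraph boundaries are tried first, chunks tend to respect semantic structure; a plain character splitter with one separator has no such fallback.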


🧠 RAG and Evaluation

  1. How do you use LangChain with Chroma vector store? Use Chroma.from_documents() or Chroma.add_documents() and connect it to a retriever.

  2. What is a RetrievalQAWithSourcesChain? Similar to RetrievalQA, but returns source documents in addition to the answer for transparency.

  3. How can you evaluate LangChain output using QA Eval Chain? Use chains like QAEvalChain to auto-score outputs against references.

  4. How do you cache LLM responses in LangChain for efficiency? Use langchain.cache with InMemoryCache, SQLiteCache, or Redis.
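
The payoff is plain memoization: identical prompts skip the expensive call. A toy sketch of the idea behind InMemoryCache (the `calls` counter stands in for a paid API call):

```python
import functools

# Toy in-memory LLM cache: repeated prompts are served from the cache,
# the same idea behind LangChain's InMemoryCache / SQLiteCache backends.
calls = {"count": 0}

@functools.lru_cache(maxsize=None)
def cached_llm(prompt: str) -> str:
    calls["count"] += 1            # stands in for an expensive API call
    return f"response to: {prompt}"

cached_llm("summarize X")
cached_llm("summarize X")          # served from cache, no second "call"
print(calls["count"])  # 1
```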


🔧 LCEL & Runtime

  1. How does LangChain Expression Language (LCEL) simplify workflows? Offers a functional way to compose chains (e.g., using | pipe operator) with support for debugging and tracing.

  2. What is a RunnableParallel and when would you use it? Used to execute multiple chains or components in parallel and combine their results.
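
The fan-out/fan-in behaviour can be sketched with a thread pool (a toy illustration; real `RunnableParallel` takes a dict of runnables and returns a dict of their results):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of RunnableParallel behaviour: run several components on the
# same input and combine their results into one dict keyed by branch name.
def run_parallel(branches: dict, value):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, value) for name, fn in branches.items()}
        return {name: f.result() for name, f in futures.items()}

result = run_parallel({"upper": str.upper, "length": len}, "langchain")
print(result)  # {'upper': 'LANGCHAIN', 'length': 9}
```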

  3. How do you use AzureOpenAI with LangChain? Use AzureOpenAI class with deployment_name, api_key, and api_base parameters.

  4. What’s the purpose of MapReduceDocumentsChain? Summarizes or processes documents in parallel (map step) and combines the results (reduce step).
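
The pattern itself is easy to sketch; here the "summaries" are toy string operations standing in for LLM calls:

```python
# Sketch of the map-reduce pattern behind MapReduceDocumentsChain:
# process each document independently (map), then combine (reduce).
def map_reduce_summarize(docs, map_fn, reduce_fn):
    partials = [map_fn(d) for d in docs]   # map step (parallelizable)
    return reduce_fn(partials)             # reduce step

docs = ["LangChain composes LLM calls.", "FAISS indexes vectors."]
first_sentence = lambda d: d.split(".")[0]   # toy per-document "summary"
combine = lambda parts: " | ".join(parts)

print(map_reduce_summarize(docs, first_sentence, combine))
# LangChain composes LLM calls | FAISS indexes vectors
```

Because the map step touches each document independently, it scales to corpora far larger than a single context window.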

  5. How can you monitor token usage in a LangChain pipeline? Use LangSmith for full tracing, or a token-counting callback (e.g., the get_openai_callback context manager) to track tokens and cost per run.

  6. How do you pass session or user data through LangChain components? Pass it via the inputs dictionary, or attach it as structured run configuration (RunnableConfig, e.g. via .with_config(configurable={...})).

  7. What’s the difference between Tool and ToolSet? Tool is a single callable unit; ToolSet is a grouped collection of tools often tied to a domain.

  8. How can you integrate LangChain with external APIs using RequestsTool? Use a requests tool to make HTTP calls directly from an agent: define the URL and headers, then parse the response.

  9. How do you build a multi-step reasoning agent using LangChain components? Combine memory, tools, a custom prompt, and a ReAct agent with AgentExecutor.

  10. What’s the best way to unit test LangChain components? Use unittest or pytest with mocked LLM responses and assert checks for chain outputs.
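
The mocked-LLM approach can be sketched like this (`FakeLLM` and `qa_chain` are hypothetical names; with real chains you would use `unittest.mock` the same way):

```python
# Unit-testing sketch: inject a fake LLM so chain logic is tested
# deterministically, without network calls or API keys.
class FakeLLM:
    def __init__(self, canned: str):
        self.canned = canned
        self.prompts = []

    def invoke(self, prompt: str) -> str:
        self.prompts.append(prompt)   # record prompts for later assertions
        return self.canned

def qa_chain(llm, question: str) -> str:
    return llm.invoke(f"Answer briefly: {question}")

llm = FakeLLM("Paris")
assert qa_chain(llm, "Capital of France?") == "Paris"
assert llm.prompts == ["Answer briefly: Capital of France?"]
print("ok")
```

Asserting on the recorded prompts (not just the output) catches template regressions, which is where chain bugs usually hide.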

