Interview Questions 51–100

  1. How do you chain multiple retrievers together in LangChain?

  2. What is RouterChain and how is it used to route inputs dynamically?

  3. How can LangChain be used to build a chatbot with memory?

  4. What are the key types of memory available in LangChain?

  5. How does ConversationBufferMemory differ from ConversationSummaryMemory?

  6. How do you limit memory window size in a long-running agent session?

  7. What is BufferWindowMemory and when is it useful?
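The windowed-memory questions above boil down to one idea: keep only the last k turns so the prompt stays bounded. A framework-free sketch of that sliding window (the class and method names mirror LangChain's memory interface, but this is illustrative code, not the library's implementation):

```python
from collections import deque

class BufferWindowMemory:
    """Keeps only the last k conversation turns, the idea behind
    LangChain's ConversationBufferWindowMemory (k = window size)."""

    def __init__(self, k: int = 3):
        self.turns = deque(maxlen=k)  # old turns fall off automatically

    def save_context(self, user_input: str, ai_output: str) -> None:
        self.turns.append((user_input, ai_output))

    def load_memory_variables(self) -> str:
        # render the window as a transcript to splice into the prompt
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)

memory = BufferWindowMemory(k=2)
for i in range(4):
    memory.save_context(f"question {i}", f"answer {i}")

print(memory.load_memory_variables())  # only the last two turns survive
```

A `deque(maxlen=k)` gives the eviction behavior for free, which is why window memory is cheap compared with summary memory, at the cost of forgetting everything outside the window.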

  8. How do you inject historical chat data into a prompt with LangChain?

  9. How can you visualize a LangChain workflow or DAG?

  10. How do you implement tool usage tracking in LangChain agents?

  11. What is StructuredTool and how does it improve agent accuracy?
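The point of a structured tool is that the model passes named, typed arguments instead of one free-form string, so malformed calls fail early. A framework-free sketch of the idea behind StructuredTool.from_function, deriving an argument schema from the function signature (the decorator here is illustrative, not LangChain's API):

```python
import inspect

def structured_tool(fn):
    """Illustrative sketch: expose a typed argument schema so an
    agent can call the tool with validated, named arguments."""
    sig = inspect.signature(fn)
    schema = {name: p.annotation.__name__ for name, p in sig.parameters.items()}

    def call(**kwargs):
        bound = sig.bind(**kwargs)  # raises TypeError on bad/missing args
        return fn(*bound.args, **bound.kwargs)

    call.args_schema = schema
    return call

@structured_tool
def get_weather(city: str, unit: str) -> str:
    # hypothetical tool body; a real one would hit a weather API
    return f"22 degrees {unit} in {city}"

print(get_weather.args_schema)
print(get_weather(city="Paris", unit="C"))
```

Because the schema is machine-readable, it can be handed to the model as part of the tool description, which is what makes argument-level validation possible.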

  12. How does LangChain support multi-modal inputs (e.g., image + text)?

  13. Can you use LangChain with Whisper for audio transcription?

  14. How do you build an agent with conditional logic in LangChain?

  15. What’s the role of OutputParser in a custom chain?
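An output parser's job is to turn raw model text into a typed structure and raise a clear error the chain can react to (retry, re-prompt, fall back). A minimal sketch of a JSON output parser, assuming the common case where the model wraps JSON in a markdown fence (illustrative, not LangChain's class):

```python
import json

class JsonOutputParser:
    """Illustrative sketch: convert raw LLM text into a dict,
    surfacing a clean error when the output is not valid JSON."""

    def parse(self, text: str) -> dict:
        # models often wrap JSON in markdown fences; strip them first
        cleaned = text.strip().removeprefix("```json").removesuffix("```").strip()
        try:
            return json.loads(cleaned)
        except json.JSONDecodeError as e:
            raise ValueError(f"LLM output was not valid JSON: {e}")

parser = JsonOutputParser()
print(parser.parse('```json\n{"intent": "refund", "urgent": true}\n```'))
```

Raising `ValueError` rather than returning a sentinel is deliberate: upstream retry logic can catch it and re-ask the model with the error message appended.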

  16. How do you handle rate limits with retry strategies in LangChain?
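Rate-limit handling usually means exponential backoff with jitter, which is also the strategy behind the framework's built-in retry wrappers. A self-contained sketch (the wrapper below is illustrative, not LangChain's implementation):

```python
import random
import time

def with_retry(fn, max_attempts=3, base_delay=0.01):
    """Exponential backoff with jitter -- the usual response to
    HTTP 429 rate-limit errors from an LLM provider."""
    def wrapped(*args, **kwargs):
        for attempt in range(max_attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # out of attempts: surface the error
                # wait 2^attempt * base_delay plus jitter before retrying
                time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
    return wrapped

calls = {"n": 0}

def flaky_llm_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429: rate limited")  # simulated provider error
    return "ok"

result = with_retry(flaky_llm_call)()
print(result, "after", calls["n"], "attempts")
```

The jitter term matters in production: without it, many workers that were rate-limited at the same moment all retry at the same moment too.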

  17. What is RunnableBranch and how is it used for branching logic?
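RunnableBranch is essentially an ordered list of (condition, runnable) pairs plus a default, evaluated top to bottom. A framework-free sketch of that contract (the predicates and handlers are hypothetical):

```python
class RunnableBranch:
    """Illustrative sketch: try each (condition, runnable) pair in
    order; fall through to the default when nothing matches."""

    def __init__(self, *branches, default):
        self.branches = branches
        self.default = default

    def invoke(self, x):
        for condition, runnable in self.branches:
            if condition(x):
                return runnable(x)
        return self.default(x)

router = RunnableBranch(
    (lambda q: "refund" in q.lower(), lambda q: f"billing team handles: {q}"),
    (lambda q: "error" in q.lower(), lambda q: f"support team handles: {q}"),
    default=lambda q: f"general chain handles: {q}",
)

print(router.invoke("I hit an error code 500"))
```

Order matters: the first matching branch wins, so put the most specific conditions first.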

  18. How do you inject system-level prompts into LangChain chat agents?

  19. How do you enable function calling in OpenAI models via LangChain?

  20. What is RunnablePassthrough and when is it used?

  21. How does LangChain integrate with Langfuse for trace logging and debugging?

  22. What is the purpose of RunnableRetry and how do you configure it?

  23. How do you log intermediate outputs between steps in a LangChain pipeline?

  24. How can LangChain be used to implement Retrieval-Augmented Generation (RAG)?

  25. What components are essential to building a RAG system with LangChain?
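The RAG questions above all revolve around the same pipeline: embed, store, retrieve by similarity, then put the hits into the prompt. A toy end-to-end sketch, with a bag-of-words "embedding" and an in-memory list standing in for a real embedding model and a vector store like Pinecone or Weaviate (everything here is illustrative):

```python
import math
from collections import Counter

def embed(text):
    # toy bag-of-words "embedding"; a real system would call an
    # embedding model and persist vectors in a vector database
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "LangChain chains LLM calls together",
    "Pinecone stores dense vectors for similarity search",
    "Celery runs background jobs",
]

def retrieve(query, k=1):
    # the retriever step: rank documents by similarity to the query
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

question = "where are dense vectors stored"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The essential components are all visible here: an embedding function, a document store, a similarity-ranked retriever, and prompt assembly that grounds the model in the retrieved context.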

  26. How does ParentDocumentRetriever enhance retrieval accuracy?

  27. How do you connect LangChain with Pinecone or Weaviate for vector storage?

  28. What’s the difference between stuff, map_reduce, and refine chains?
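The three document-combination strategies differ only in how chunks reach the model. A sketch with a stand-in for the model call (`fake_llm` is a placeholder, not a real API):

```python
def fake_llm(prompt: str) -> str:
    return f"<summary of {len(prompt)} chars>"  # stand-in for a model call

def stuff(docs):
    # stuff: put every chunk into one prompt; simplest, but it
    # overflows the context window on large inputs
    return fake_llm("\n".join(docs))

def map_reduce(docs):
    # map_reduce: summarize each chunk independently (parallelizable),
    # then summarize the summaries
    partials = [fake_llm(d) for d in docs]
    return fake_llm("\n".join(partials))

def refine(docs):
    # refine: fold over the chunks sequentially, improving a running
    # answer with each new piece of context
    answer = fake_llm(docs[0])
    for d in docs[1:]:
        answer = fake_llm(f"Existing answer: {answer}\nNew context: {d}")
    return answer

docs = ["chunk one", "chunk two", "chunk three"]
print(stuff(docs), map_reduce(docs), refine(docs))
```

The trade-offs follow directly from the structure: stuff is one call but size-limited, map_reduce parallelizes at the cost of extra calls, and refine preserves cross-chunk context but is strictly sequential.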

  29. How can you score or rank retrieved documents in LangChain?

  30. How does LangChain support multi-agent workflows?

  31. What is RunnableMap and when should you use it in a chain?
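RunnableMap (RunnableParallel in current LangChain) fans one input out to several runnables and collects the results in a dict, and RunnablePassthrough forwards the input unchanged; together they are how RAG chains keep the original question next to the retrieved context. A framework-free sketch of both (names mirror LangChain's but the code is illustrative):

```python
class RunnablePassthrough:
    # returns its input unchanged, so the original value survives
    # alongside values derived from it
    def invoke(self, x):
        return x

def runnable_map(steps, x):
    # sketch of RunnableMap/RunnableParallel: run every step on the
    # same input and collect the results under their names
    return {
        name: step.invoke(x) if hasattr(step, "invoke") else step(x)
        for name, step in steps.items()
    }

def fake_retriever(q):
    return [f"doc about {q}"]  # stand-in for a real retriever

out = runnable_map(
    {"context": fake_retriever, "question": RunnablePassthrough()},
    "vector stores",
)
print(out)
```

This is exactly the shape a RAG prompt template expects: `context` filled by the retriever, `question` carried through untouched.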

  32. How do you build a LangChain application with real-time user feedback capture?

  33. What is RunnableConfig and how do you use it for fine-grained control?


  34. How do you wrap legacy APIs as tools for use with LangChain agents?

  35. How can you implement tool-calling with structured arguments?

  36. How do you track and analyze prompt drift across chain updates?

  37. How does MultiRetrievalQAChain compare to RetrievalQAChain?

  38. What’s the benefit of using ToolExecutor over calling tools directly?

  39. How do you build and visualize conditional tool usage in LangChain agents?

  40. What deployment options does LangChain support (e.g., FastAPI, Streamlit, Docker)?

  41. How do you implement A/B testing between different chains in LangChain?

  42. What is the best way to log and trace prompt versions in LangChain pipelines?

  43. How do you handle streaming responses with memory-enabled agents?

  44. Can LangChain support multi-turn QA with source attribution?

  45. How do you implement custom output validation in a LangChain agent?

  46. How can LangChain agents be designed to self-reflect or critique their own outputs?

  47. What are the pros and cons of using ConversationalRetrievalChain?

  48. How do you use LangChain in serverless environments (e.g., AWS Lambda)?

  49. How do you scale LangChain applications with background workers (e.g., Celery, Prefect)?

  50. How do you implement a feedback loop where user ratings improve prompt accuracy over time?
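One standard way to answer both the A/B-testing and feedback-loop questions is an epsilon-greedy bandit over prompt variants: user ratings raise a variant's average score, and higher-scoring prompts get served more often. A self-contained sketch (the class and the rating scale are illustrative assumptions, not a LangChain feature):

```python
import random

class PromptABTest:
    """Epsilon-greedy selection over prompt variants: mostly serve
    the best-rated variant, occasionally explore the others."""

    def __init__(self, variants, epsilon=0.1):
        self.stats = {v: [0.0, 0] for v in variants}  # [rating sum, count]
        self.epsilon = epsilon

    def pick(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))  # explore
        return max(  # exploit: best average rating so far
            self.stats,
            key=lambda v: self.stats[v][0] / (self.stats[v][1] or 1),
        )

    def rate(self, variant, rating):
        self.stats[variant][0] += rating
        self.stats[variant][1] += 1

ab = PromptABTest(["terse prompt", "verbose prompt"], epsilon=0.0)
ab.rate("terse prompt", 2)
ab.rate("verbose prompt", 5)
print(ab.pick())  # the better-rated variant wins
```

In production you would persist the ratings and log which variant produced each response (e.g. via your tracing tool), so the loop keeps learning across restarts.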
