02. Caching


LangChain provides an optional caching layer for LLM calls.

This is useful for two reasons:

  • If you often request the same completion multiple times, caching can reduce the number of API calls made to the LLM provider and thereby cut costs.

  • Caching can also speed up your application, since repeated requests are answered without another round trip to the LLM provider.


# Configuration file for managing environment variables such as the API KEY
from dotenv import load_dotenv

# Load the API KEY information
load_dotenv()


True


# Set up LangSmith tracing. https://smith.langchain.com
# !pip install langchain-teddynote
from langchain_teddynote import logging

# Enter the project name.
logging.langsmith("CH04-Models")


Create the model and prompt


InMemoryCache

With `InMemoryCache`, the answer to a question is stored in memory the first time it is generated; when the same question is asked again, the cached answer is returned instead of calling the LLM.


SQLite Cache

