04. Long context rearrangement (LongContextReorder)

Regardless of the model's architecture, performance drops considerably once you include ten or more retrieved documents in the context.

Simply put, when a model needs access to relevant information in the middle of a long context, it tends to ignore the documents provided.

For more information, see the paper Lost in the Middle: How Language Models Use Long Contexts (https://arxiv.org/abs/2307.03172).

To avoid this problem, you can rearrange the order of the documents after retrieval: the LongContextReorder document transformer places the most relevant documents at the beginning and end of the context and the least relevant ones in the middle, which prevents this performance degradation.

  • Create a retriever that stores and searches text data using the Chroma vector store (a sketch of this step appears after the environment setup below).

  • Use the retriever's invoke method to search for documents relevant to a given query.


# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()


True


# Set up LangSmith tracking. https://smith.langchain.com
# !pip install langchain-teddynote
from langchain_teddynote import logging

# Enter the project name.
logging.langsmith("CH11-Retriever")

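With the environment set up, the next step is the retriever described in the first bullet above. Below is a minimal sketch of one way to build it; the sample texts, the text-embedding-3-small embedding model, and k=10 are illustrative assumptions, not the original values.

# A minimal sketch: index a few sample texts in a Chroma vector store
# and expose it as a retriever. Texts, embedding model, and k are assumptions.
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# Embedding model (assumed)
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Sample texts to index (assumed); replace with your own documents
texts = [
    "ChatGPT is a conversational AI developed by OpenAI.",
    "I like to eat apples.",
    "ChatGPT can answer questions and write code.",
    "Basketball is a popular team sport.",
    "ChatGPT was trained with reinforcement learning from human feedback.",
    "The weather is nice today.",
    "ChatGPT is built on top of a large language model.",
    "Bicycles are an eco-friendly way to get around.",
    "ChatGPT can be used to build question-answering applications.",
    "My favorite color is blue.",
]

# Build the vector store and turn it into a retriever that returns up to 10 documents
retriever = Chroma.from_texts(texts, embedding=embeddings).as_retriever(
    search_kwargs={"k": 10}
)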

Perform a search by passing a query to the retriever.
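A sketch of this step, assuming the retriever built above; the query text is an assumption.

# Search for documents relevant to the query (assumes `retriever` from the sketch above)
query = "What can you tell me about ChatGPT?"

# Documents come back ordered by relevance to the query
docs = retriever.invoke(query)
docs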


Create a question-answering chain that uses context reordering.
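A sketch of one way to build such a chain with LCEL, assuming the retriever from above; the helper names (format_docs, reorder_documents), the prompt wording, and the gpt-4o-mini model are assumptions used for illustration. LongContextReorder moves the most relevant documents to the beginning and end of the list before they are stuffed into the prompt.

from operator import itemgetter

from langchain_community.document_transformers import LongContextReorder
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI


def format_docs(docs):
    # Join the page contents of the documents into a single context string
    return "\n".join(doc.page_content for doc in docs)


def reorder_documents(docs):
    # Reorder so the most relevant documents sit at the beginning and end,
    # and the least relevant ones end up in the middle
    reordering = LongContextReorder()
    reordered_docs = reordering.transform_documents(docs)
    return format_docs(reordered_docs)


# Prompt that receives the reordered context, the question, and the answer language
template = """Given these text excerpts:
{context}

Answer the following question:
{question}

Answer in the following language: {language}
"""
prompt = ChatPromptTemplate.from_template(template)

# Chain: retrieve -> reorder -> prompt -> LLM -> string output
# (assumes `retriever` from the earlier sketch)
chain = (
    {
        "context": itemgetter("question") | retriever | RunnableLambda(reorder_documents),
        "question": itemgetter("question"),
        "language": itemgetter("language"),
    }
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)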


Output the reordered documents.
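A sketch of this step, reusing the `docs` variable from the search above; the variable names are assumptions.

from langchain_community.document_transformers import LongContextReorder

# Reorder the retrieved documents (assumes `docs` from the search sketch above)
reordering = LongContextReorder()
reordered_docs = reordering.transform_documents(docs)

# The most relevant documents now appear at the beginning and end of the list
for doc in reordered_docs:
    print(doc.page_content)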


Invoke the chain, passing the query as question and the answer language as language.

  • Also check the search results for the reordered documents.
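A sketch of invoking the chain defined above; the question text and language value are assumptions.

# Run the chain with a question and the desired answer language
# (assumes `chain` from the sketch above)
answer = chain.invoke(
    {"question": "What can you tell me about ChatGPT?", "language": "English"}
)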


Output the answer.
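For example, assuming the `answer` variable from the previous step:

# Print the final answer produced by the chain
print(answer)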

