07. Vector Storage Search Memory (VectorStoreRetrieverMemory)

VectorStoreRetrieverMemory

`VectorStoreRetrieverMemory` stores memories in a vector store and queries the top-K most "salient" documents each time it is called.

Unlike most other memory classes, it does not explicitly track the order of the conversation.
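The core idea can be sketched in plain Python: store each conversation snippet together with a vector, and on every lookup return the k most similar snippets regardless of when they were saved. Everything below (the toy `embed` function, the `TopKMemory` class, and the sample sentences) is a hypothetical illustration of the retrieval idea, not LangChain's actual implementation:

```python
import math

# Toy "embedding": a bag-of-words vector over a tiny fixed vocabulary.
# Illustrative only; a real vector store uses a learned embedding model.
VOCAB = ["major", "computer", "science", "project", "backend", "role"]

def embed(text):
    words = text.lower().replace("?", "").replace(".", "").split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class TopKMemory:
    """Stores texts with their vectors; retrieval ranks by similarity,
    not by insertion order."""

    def __init__(self, k=1):
        self.k = k
        self.entries = []  # list of (text, vector) pairs

    def save(self, text):
        self.entries.append((text, embed(text)))

    def load(self, query):
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[1]), reverse=True)
        return [text for text, _ in ranked[: self.k]]

memory = TopKMemory(k=1)
memory.save("My major is computer science.")
memory.save("I was a backend developer in the project.")
print(memory.load("What was your major?"))  # -> ['My major is computer science.']
```

Note that the second saved sentence is never returned for the "major" question even though it was saved more recently: similarity, not recency or order, decides what comes back.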


# Configuration file for managing the API KEY as an environment variable
from dotenv import load_dotenv

# Load the API KEY information
load_dotenv()


True


First, initialize the vector store.


import faiss
from langchain_openai import OpenAIEmbeddings
from langchain.docstore import InMemoryDocstore
from langchain.vectorstores import FAISS


# Define an embedding model.
embeddings_model = OpenAIEmbeddings()

# Initialize the vector store.
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model, index, InMemoryDocstore({}), {})

In real use, `k` would be set to a higher value; here we use `k=1` so that the single retrieved result is easy to display:


When asked the following question, the memory returns the single most relevant conversation from the vector store (because `k=1`):

  • Question: "What is the interviewer's major?"
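The query code is also missing here. Assuming the populated `VectorStoreRetrieverMemory` instance is bound to a variable named `memory` (a hypothetical name), the lookup would be along these lines:

```python
# With k=1, only the single most similar stored conversation is returned.
print(memory.load_memory_variables({"human": "What is the interviewer's major?"})["history"])
```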



Next, we retrieve the single most relevant conversation using a different question.

  • Question: "What role did the interviewer play in the project?"
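As above, the code for this step is missing; assuming the same hypothetical `memory` variable, it could look like:

```python
# Retrieve the most relevant stored conversation for the new question.
print(memory.load_memory_variables({"human": "What role did the interviewer play in the project?"})["history"])
```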


