05. OllamaEmbeddings
Ollama is an open source project that makes it easy to run large language models (LLMs) in a local environment. It lets you download and run various LLMs with simple commands, so developers can experiment with and use AI models directly on their own machines. With its user-friendly interface and fast performance, Ollama makes AI development and experimentation more accessible and efficient.
texts = [
    "Hello, nice to meet you.",
    "LangChain simplifies the process of building applications with large language models",
    "LangChain Korean Tutorial is the official documentation of LangChain, It is structured to help users utilize LangChain more easily and effectively, based on a cookbook and various practical examples.",
    "LangChain simplifies the process of building applications with large language models.",
    "Retrieval-Augmented Generation (RAG) is an effective technique for improving AI responses.",
]

Check supported embedding models
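The embedding models Ollama can serve are listed in the Ollama model library (https://ollama.com/library); nomic-embed-text is one of them. Pull the model you want to use before running the code below, for example with the command ollama pull nomic-embed-text.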
from langchain_community.embeddings import OllamaEmbeddings

ollama_embeddings = OllamaEmbeddings(
    model="nomic-embed-text",
    # model="chatfire/bge-m3:q8_0"  # BGE-M3
)

Embed the query.
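A single query string can be embedded with the embed_query method. The sketch below assumes the nomic-embed-text model has already been pulled and the local Ollama server is running; the query text is just an example.

# Embed a single query string into a vector.
embedded_query = ollama_embeddings.embed_query("What is LangChain Korean Tutorial?")

# The result is a plain Python list of floats; print its dimensionality.
print(len(embedded_query))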
Embed documents.
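The same embedding model can encode the whole texts list at once with the embed_documents method, which returns one vector per input string. A minimal sketch:

# Embed every document in texts; one vector is returned per input string.
embedded_documents = ollama_embeddings.embed_documents(texts)

# Each document vector has the same dimensionality as the query vector.
print(len(embedded_documents), len(embedded_documents[0]))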
Calculate the similarity between the query and document embeddings and output the result.
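One common way to compare the query with each document is cosine similarity. The sketch below uses scikit-learn's cosine_similarity as an assumption (the original page may rely on a different utility) and prints the documents ranked from most to least similar.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Cosine similarity between the query vector and every document vector.
similarity = cosine_similarity([embedded_query], embedded_documents)[0]

# Rank document indices from most to least similar and print the result.
sorted_idx = np.argsort(similarity)[::-1]

print("[Query] What is LangChain Korean Tutorial?")
print("=" * 40)
for rank, idx in enumerate(sorted_idx):
    print(f"[{rank}] similarity: {similarity[idx]:.3f} | {texts[idx]}")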