14. Automating evaluation with online evaluators

Online Evaluators

Sometimes you want to evaluate the outputs recorded in your project as they come in. Online evaluators let LangSmith score new runs automatically, without running a separate offline evaluation job.


# installation
# !pip install -qU langsmith langchain-teddynote


# Manage API keys as environment variables via a .env configuration file
from dotenv import load_dotenv

# Load the API key information
load_dotenv()


 True 


# Set up LangSmith tracing. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH16-Auto-Evaluation-Test")


Setting up a chain for online evaluation

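The chain code itself is omitted on this page; the sketch below shows one plausible setup, assuming a simple context-grounded QA chain built with LCEL. The model name and prompt text are illustrative assumptions. What matters is that the chain's output is a dict exposing answer and context keys, since the online evaluator will later read output.answer and output.context.

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

# Illustrative prompt; replace with your own.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the given context.\n\n"
    "Context:\n{context}\n\nQuestion:\n{question}"
)

# Illustrative model choice.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# RunnablePassthrough.assign keeps the input keys (question, context)
# and adds the generated "answer", so the traced run's output contains
# both output.answer and output.context.
chain = RunnablePassthrough.assign(answer=prompt | llm | StrOutputParser())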

Run the test chain and check that the results are reflected under Runs in your LangSmith project.

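The invocation code is likewise omitted; under the same assumptions as the sketch above, a single test call could look like this (the question and context are made-up examples):

# Invoke the chain once so a run is logged to the project.
result = chain.invoke(
    {
        "question": "What is LangSmith used for?",
        "context": "LangSmith is a platform for tracing, monitoring, "
        "and evaluating LLM applications.",
    }
)
print(result["answer"])

After the call, the run should appear in the CH16-Auto-Evaluation-Test project with both context and answer visible in its output.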

Creating an online LLM-as-judge

1. Set the Secrets & API Keys (your OpenAI API key).

2. Configure the Provider, Model, and Prompt (a sample judge prompt is sketched after this list).

3. Map facts to output.context (adjust to match your chain's output).

4. Map answer to output.answer (adjust to match your chain's output).

5. Use Preview to confirm that each value is mapped to the correct field.
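The judge prompt itself is configured in the LangSmith UI; a minimal sketch of such a prompt is shown below. The variable names facts and answer are the ones mapped in steps 3 and 4, and the exact templating syntax depends on the prompt format you select, so treat this purely as an illustration.

You are a grader. Judge whether the answer is supported by the given facts.

[Facts]
{facts}

[Answer]
{answer}

Respond "CORRECT" if the answer is fully supported by the facts, otherwise respond "INCORRECT".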
