
Evaluate and improve your application

This guide shows you how to use an evaluation dataset to evaluate quality, identify issues, and iteratively improve your app.

In this guide, we will use traces from a deployed app to create the evaluation dataset, but the same workflow applies no matter how you create your evaluation dataset. Refer to the create evaluation datasets guide to learn about other approaches for creating the dataset.

What you'll learn:

  • Create evaluation datasets from real usage
  • Evaluate quality with MLflow's predefined scorers using the evaluation harness
  • Interpret results to identify quality issues
  • Improve your app based on evaluation results
  • Compare versions to verify improvements worked and didn't cause regressions

Prerequisites

  1. Install MLflow and required packages

    Bash
    pip install --upgrade "mlflow[databricks]>=3.1.0" openai
  2. Create an MLflow experiment by following the set up your environment quickstart.

  3. Access to a Unity Catalog schema with CREATE TABLE permissions in order to create an evaluation dataset.

    note

    If you are using a Databricks trial account, you have CREATE TABLE permissions on the Unity Catalog schema workspace.default.
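If you are working outside a Databricks notebook, point MLflow at your workspace and an experiment before running the code in the following steps. This is a minimal sketch, assuming your DATABRICKS_HOST and DATABRICKS_TOKEN environment variables are already configured; the experiment path is a hypothetical placeholder. Inside a Databricks notebook, the notebook's own experiment is used automatically.

    Python
    import mlflow

    # Send traces and evaluation runs to your Databricks workspace
    mlflow.set_tracking_uri("databricks")

    # The experiment path below is a placeholder - use the experiment
    # you created in step 2 of the prerequisites
    mlflow.set_experiment("/Shared/email-generation-eval")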

Step 1: Create your application

In this guide, we will evaluate an email generation app that:

  1. Retrieves customer information from a CRM database
  2. Generates personalized follow-up emails based on the retrieved information

Let's build our email generation app. The retrieval component is marked with span_type="RETRIEVER" to enable MLflow's retrieval-specific scorers.

Python
import mlflow
from openai import OpenAI
from mlflow.entities import Document
from typing import List, Dict

# Enable automatic tracing for OpenAI calls
mlflow.openai.autolog()

# Connect to a Databricks LLM via OpenAI using the same credentials as MLflow
# Alternatively, you can use your own OpenAI credentials here
mlflow_creds = mlflow.utils.databricks_utils.get_databricks_host_creds()
client = OpenAI(
    api_key=mlflow_creds.token,
    base_url=f"{mlflow_creds.host}/serving-endpoints"
)

# Simulated CRM database
CRM_DATA = {
    "Acme Corp": {
        "contact_name": "Alice Chen",
        "recent_meeting": "Product demo on Monday, very interested in enterprise features. They asked about: advanced analytics, real-time dashboards, API integrations, custom reporting, multi-user support, SSO authentication, data export capabilities, and pricing for 500+ users",
        "support_tickets": ["Ticket #123: API latency issue (resolved last week)", "Ticket #124: Feature request for bulk import", "Ticket #125: Question about GDPR compliance"],
        "account_manager": "Sarah Johnson"
    },
    "TechStart": {
        "contact_name": "Bob Martinez",
        "recent_meeting": "Initial sales call last Thursday, requested pricing",
        "support_tickets": ["Ticket #456: Login issues (open - critical)", "Ticket #457: Performance degradation reported", "Ticket #458: Integration failing with their CRM"],
        "account_manager": "Mike Thompson"
    },
    "Global Retail": {
        "contact_name": "Carol Wang",
        "recent_meeting": "Quarterly review yesterday, happy with platform performance",
        "support_tickets": [],
        "account_manager": "Sarah Johnson"
    }
}

# Use a retriever span to enable MLflow's predefined RetrievalGroundedness scorer to work
@mlflow.trace(span_type="RETRIEVER")
def retrieve_customer_info(customer_name: str) -> List[Document]:
    """Retrieve customer information from CRM database"""
    if customer_name in CRM_DATA:
        data = CRM_DATA[customer_name]
        return [
            Document(
                id=f"{customer_name}_meeting",
                page_content=f"Recent meeting: {data['recent_meeting']}",
                metadata={"type": "meeting_notes"}
            ),
            Document(
                id=f"{customer_name}_tickets",
                page_content=f"Support tickets: {', '.join(data['support_tickets']) if data['support_tickets'] else 'No open tickets'}",
                metadata={"type": "support_status"}
            ),
            Document(
                id=f"{customer_name}_contact",
                page_content=f"Contact: {data['contact_name']}, Account Manager: {data['account_manager']}",
                metadata={"type": "contact_info"}
            )
        ]
    return []

@mlflow.trace
def generate_sales_email(customer_name: str, user_instructions: str) -> Dict[str, str]:
    """Generate a personalized sales email based on customer data & a sales rep's instructions."""
    # Retrieve customer information
    customer_docs = retrieve_customer_info(customer_name)

    # Combine retrieved context
    context = "\n".join([doc.page_content for doc in customer_docs])

    # Generate email using retrieved context
    prompt = f"""You are a sales representative. Based on the customer information below,
write a brief follow-up email that addresses their request.

Customer Information:
{context}

User instructions: {user_instructions}

Keep the email concise and personalized."""

    response = client.chat.completions.create(
        model="databricks-claude-3-7-sonnet",  # This example uses a Databricks-hosted LLM - you can replace this with any AI Gateway or Model Serving endpoint. If you provide your own OpenAI credentials, replace with a valid OpenAI model, e.g., gpt-4o.
        messages=[
            {"role": "system", "content": "You are a helpful sales assistant."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=2000
    )

    return {"email": response.choices[0].message.content}

# Test the application
result = generate_sales_email("Acme Corp", "Follow up after product demo")
print(result["email"])

(Screenshot: the resulting trace in the MLflow UI)

Step 2: Simulate production traffic

This step simulates traffic for demonstration purposes. In practice, you would use traces from actual usage to create your evaluation dataset.

Python
# Simulate beta testing traffic with scenarios designed to fail guidelines
test_requests = [
    {"customer_name": "Acme Corp", "user_instructions": "Follow up after product demo"},
    {"customer_name": "TechStart", "user_instructions": "Check on support ticket status"},
    {"customer_name": "Global Retail", "user_instructions": "Send quarterly review summary"},
    {"customer_name": "Acme Corp", "user_instructions": "Write a very detailed email explaining all our product features, pricing tiers, implementation timeline, and support options"},
    {"customer_name": "TechStart", "user_instructions": "Send an enthusiastic thank you for their business!"},
    {"customer_name": "Global Retail", "user_instructions": "Send a follow-up email"},
    {"customer_name": "Acme Corp", "user_instructions": "Just check in to see how things are going"},
]

# Run requests and capture traces
print("Simulating production traffic...")
for req in test_requests:
    try:
        result = generate_sales_email(**req)
        print(f"✓ Generated email for {req['customer_name']}")
    except Exception as e:
        print(f"✗ Error for {req['customer_name']}: {e}")

Step 3: Create evaluation dataset

Now, let's convert the traces into an evaluation dataset. Storing the traces in an evaluation dataset links evaluation results to the dataset, so you can track changes to the dataset over time and see every evaluation result generated from it.

Follow the recording below to use the UI to:

  1. Create an evaluation dataset
  2. Add the simulated production traces from step 2 to the dataset

(Recording: creating an evaluation dataset and adding the simulated traces in the MLflow UI)
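If you prefer to create the dataset in code rather than through the UI, the same two steps can be done with the MLflow SDK. The snippet below is a minimal sketch using mlflow.search_traces() and mlflow.genai.datasets; the Unity Catalog table name workspace.default.email_eval_dataset is a hypothetical placeholder, and it assumes the simulated traces from step 2 are in the current experiment.

Python
import mlflow
import mlflow.genai.datasets

# Fetch the traces captured during the simulated traffic in step 2
traces = mlflow.search_traces()

# Create the evaluation dataset as a Unity Catalog table and add the traces to it.
# The table name is an assumption - replace it with your own catalog.schema.table.
eval_dataset = mlflow.genai.datasets.create_dataset(
    uc_table_name="workspace.default.email_eval_dataset"
)
eval_dataset.merge_records(traces)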

Step 4: Run evaluation with predefined scorers

Now, let's use MLflow's predefined scorers to automatically evaluate different aspects of your GenAI application's quality. To learn more, refer to the LLM-based scorers and code-based scorers reference pages.

note

Optionally, you can track application and prompt versions with MLflow. To learn more, view the track app and prompt versions guide.
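The evaluation call below expects the dataset from step 3 in a variable named eval_dataset. If you created the dataset through the UI, one way to load it is by its Unity Catalog table name; this is a minimal sketch using mlflow.genai.datasets.get_dataset(), with the hypothetical table name from step 3.

Python
import mlflow.genai.datasets

# Load the evaluation dataset created in step 3 so it can be passed to mlflow.genai.evaluate().
# The table name is an assumption - use the name you chose when creating the dataset.
eval_dataset = mlflow.genai.datasets.get_dataset(
    uc_table_name="workspace.default.email_eval_dataset"
)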

Python
from mlflow.genai.scorers import (
    RetrievalGroundedness,
    RelevanceToQuery,
    Safety,
    Guidelines,
)

# Save the scorers as a variable so we can re-use them in step 7
email_scorers = [
    RetrievalGroundedness(),  # Checks if email content is grounded in retrieved data
    Guidelines(
        name="follows_instructions",
        guidelines="The generated email must follow the user_instructions in the request.",
    ),
    Guidelines(
        name="concise_communication",
        guidelines="The email MUST be concise and to the point. The email should communicate the key message efficiently without being overly brief or losing important context.",
    ),
    Guidelines(
        name="mentions_contact_name",
        guidelines="The email MUST explicitly mention the customer contact's first name (e.g., Alice, Bob, Carol) in the greeting. Generic greetings like 'Hello' or 'Dear Customer' are not acceptable.",
    ),
    Guidelines(
        name="professional_tone",
        guidelines="The email must be in a professional tone.",
    ),
    Guidelines(
        name="includes_next_steps",
        guidelines="The email MUST end with a specific, actionable next step that includes a concrete timeline.",
    ),
    RelevanceToQuery(),  # Checks if email addresses the user's request
    Safety(),  # Checks for harmful or inappropriate content
]

# Run evaluation with predefined scorers
eval_results = mlflow.genai.evaluate(
    data=eval_dataset,
    predict_fn=generate_sales_email,
    scorers=email_scorers,
)

Step 5: View and interpret results

Running mlflow.genai.evaluate() creates an Evaluation Run that contains a trace for every row in your evaluation dataset, annotated with feedback from each scorer.

Use the Evaluation Run to:

  • View aggregate metrics: Average performance across all test cases for each scorer
  • Debug individual failures: Understand why specific failures occurred so you can identify improvements for future versions
  • Analyze failure patterns: Review specific examples where scorers identified issues

In this evaluation, we see several issues:

  1. Poor instruction following - The agent frequently provides responses that don't match user requests, such as sending detailed product information when asked for simple check-ins, or providing support ticket updates when asked for enthusiastic thank-you messages
  2. Lack of conciseness - Most emails are unnecessarily long and include excessive details that dilute the key message, failing to communicate efficiently despite instructions to keep emails "concise and personalized"
  3. Missing concrete next steps - The majority of emails fail to end with specific, actionable next steps that include concrete timelines, which was identified as a required element

Access the evaluation results through the Evaluations tab in the MLflow UI to understand your application's performance:

(Screenshot: evaluation results in the Evaluations tab of the MLflow UI)
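The aggregate numbers are also available programmatically on the object returned by mlflow.genai.evaluate(), which is handy for quick checks or CI scripts. This is a minimal sketch, assuming eval_results from step 4 exposes a metrics dictionary and a run_id attribute.

Python
# Print the aggregate score for each scorer in the evaluation run
for metric_name, value in eval_results.metrics.items():
    print(f"{metric_name}: {value}")

# The run ID can be used to look the run up again later in the MLflow UI or API
print(f"Evaluation run ID: {eval_results.run_id}")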

Step 6: Create an improved version

Based on the evaluation results, let's create an improved version that addresses the identified issues.

note

The new version, generate_sales_email_v2(), reuses the retrieval function retrieve_customer_info() from step 1.

Python
@mlflow.trace
def generate_sales_email_v2(customer_name: str, user_instructions: str) -> Dict[str, str]:
    """Generate a personalized sales email based on customer data & a sales rep's instructions."""
    # Retrieve customer information
    customer_docs = retrieve_customer_info(customer_name)

    if not customer_docs:
        return {"error": f"No customer data found for {customer_name}"}

    # Combine retrieved context
    context = "\n".join([doc.page_content for doc in customer_docs])

    # Generate email using retrieved context with better instruction following
    prompt = f"""You are a sales representative writing an email.

MOST IMPORTANT: Follow these specific user instructions exactly:
{user_instructions}

Customer context (only use what's relevant to the instructions):
{context}

Guidelines:
1. PRIORITIZE the user instructions above all else
2. Keep the email CONCISE - only include information directly relevant to the user's request
3. End with a specific, actionable next step that includes a concrete timeline (e.g., "I'll follow up with pricing by Friday" or "Let's schedule a 15-minute call this week")
4. Only reference customer information if it's directly relevant to the user's instructions

Write a brief, focused email that satisfies the user's exact request."""

    response = client.chat.completions.create(
        model="databricks-claude-3-7-sonnet",
        messages=[
            {"role": "system", "content": "You are a helpful sales assistant who writes concise, instruction-focused emails."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=2000
    )

    return {"email": response.choices[0].message.content}

# Test the improved version
result = generate_sales_email_v2("Acme Corp", "Follow up after product demo")
print(result["email"])

Step 7: Evaluate the new version and compare

Let's run the evaluation on our improved version using the same scorers and dataset to see if we've addressed the issues:

Python
import mlflow

# Run evaluation of the new version with the same scorers as before
# We use start_run to name the evaluation run in the UI
with mlflow.start_run(run_name="v2"):
    eval_results_v2 = mlflow.genai.evaluate(
        data=eval_dataset,  # same eval dataset
        predict_fn=generate_sales_email_v2,  # new app version
        scorers=email_scorers,  # same scorers as step 4
    )

Step 8: Compare results

Now, we will compare the results to see whether our changes improved quality.

Navigate to the MLflow UI to compare the evaluation results:

(Screenshot: comparing the two evaluation runs in the MLflow UI)
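For a quick textual comparison alongside the UI, the aggregate metrics of the two runs can be printed side by side. This is a minimal sketch, assuming both result objects from steps 4 and 7 are still in memory and expose metrics dictionaries.

Python
# Compare aggregate scorer metrics between v1 and v2
metric_names = sorted(set(eval_results.metrics) | set(eval_results_v2.metrics))
for name in metric_names:
    v1 = eval_results.metrics.get(name)
    v2 = eval_results_v2.metrics.get(name)
    print(f"{name}: v1={v1} -> v2={v2}")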

Step 9: Continued iteration

Based on the evaluation results, we can continue iterating to improve the application's quality and test each new fix we implement.

Next steps

Continue your journey with these recommended actions and tutorials.

Reference guides

Explore detailed documentation for concepts and features mentioned in this guide.