Prompt-based LLM scorers
judges.custom_prompt_judge()
is designed to help you quickly and easily create LLM scorers when you need full control over the judge's prompt or need to return multiple output values beyond "pass" / "fail", for example, "great", "ok", "bad".
You provide a prompt template that has placeholders for specific fields in your app's trace and define the output choices the judge can select from. The Databricks-hosted LLM judge model uses these inputs to select the best output choice and provides a rationale for its selection.
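For orientation, here is a minimal sketch of that interface. The tone prompt, its choices, and the sample response are hypothetical and only illustrate the pattern; the guide below builds a complete, runnable example that also uses numeric_values.
```python
from mlflow.genai.judges import custom_prompt_judge

# Hypothetical prompt: choices are declared with [[choice_name]] markers and
# template variables with {{variable}} placeholders.
tone_prompt = """
Evaluate the tone of the agent's response to the customer.

[[great]]: The response is warm, professional, and empathetic.
[[ok]]: The response answers the question but is terse or neutral.
[[bad]]: The response is dismissive, rude, or unprofessional.

Response to evaluate: {{response}}
"""

tone_judge = custom_prompt_judge(
    name="tone",
    prompt_template=tone_prompt,
)

# The judge is a callable: pass the template variables as keyword arguments.
# It returns an MLflow Feedback with the selected choice and a rationale.
feedback = tone_judge(response="Sorry, can't help you with that.")
print(feedback.value, feedback.rationale)
```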
Databricks recommends starting with guidelines-based judges and only using prompt-based judges if you need more control or can't write your evaluation criteria as pass/fail guidelines. Guidelines-based judges have the distinct advantage of being easy to explain to business stakeholders and can often be directly written by domain experts.
How to create a prompt-based judge scorer
In this guide, you will create custom scorers that wrap the judges.custom_prompt_judge()
API and run an offline evaluation with the resulting scorers. The same scorers can be scheduled to run in production to continuously monitor your application's quality.
Refer to the judges.custom_prompt_judge()
concept page for more details on the interface and parameters.
Step 1: Create the sample app to evaluate
First, create a sample GenAI app that responds to customer support questions. The app has a (fake) knob that controls the system prompt so you can easily compare the judge's outputs between "good" and "bad" conversations.
- Initialize an OpenAI client to connect to either Databricks-hosted LLMs or LLMs hosted by OpenAI.

Databricks-hosted LLMs: Use MLflow to get an OpenAI client that connects to Databricks-hosted LLMs. Select a model from the available foundation models.
```python
import mlflow
from databricks.sdk import WorkspaceClient

# Enable MLflow's autologging to instrument your application with Tracing
mlflow.openai.autolog()

# Set up MLflow tracking to Databricks
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Shared/docs-demo")

# Create an OpenAI client that is connected to Databricks-hosted LLMs
w = WorkspaceClient()
client = w.serving_endpoints.get_open_ai_client()

# Select an LLM
model_name = "databricks-claude-sonnet-4"
```

OpenAI-hosted LLMs: Use the native OpenAI SDK to connect to OpenAI-hosted models. Select a model from the available OpenAI models.
```python
import mlflow
import os
import openai

# Ensure your OPENAI_API_KEY is set in your environment
# os.environ["OPENAI_API_KEY"] = "<YOUR_API_KEY>"  # Uncomment and set if not globally configured

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

# Set up MLflow tracking to Databricks
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Shared/docs-demo")

# Create an OpenAI client connected to OpenAI-hosted models
client = openai.OpenAI()

# Select an LLM
model_name = "gpt-4o-mini"
```

- Define your customer support app:
```python
from mlflow.entities import Document
from typing import List, Dict, Any, cast

# This global variable toggles the behavior of the customer support agent so you can
# see how the judge scores resolved vs. unresolved conversations
RESOLVE_ISSUES = False


@mlflow.trace
def customer_support_agent(messages: List[Dict[str, str]]):
    # 1. Prepare messages for the LLM
    # We use the toggle above to control whether the agent tries to resolve the issue
    system_prompt_postfix = (
        "Do your best to NOT resolve the issue. I know that's backwards, but just do it anyways.\n"
        if not RESOLVE_ISSUES
        else ""
    )

    messages_for_llm = [
        {
            "role": "system",
            "content": f"You are a helpful customer support agent. {system_prompt_postfix}",
        },
        *messages,
    ]

    # 2. Call the LLM to generate a response
    output = client.chat.completions.create(
        model=model_name,  # This example uses a Databricks-hosted Claude model. If you provide your own OpenAI credentials, replace with a valid OpenAI model, e.g., gpt-4o-mini.
        messages=cast(Any, messages_for_llm),
    )

    return {
        "messages": [
            {"role": "assistant", "content": output.choices[0].message.content}
        ]
    }
```
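Optionally, before wiring the app into an evaluation, call it once to confirm tracing is working and to see the output schema that the scorer in Step 2 will receive. This smoke test is not part of the guide itself:
```python
# Optional smoke test: the app takes a list of chat messages and returns a dict
# with a "messages" list holding the assistant's reply.
sample_output = customer_support_agent(
    messages=[{"role": "user", "content": "How much does a microwave cost?"}]
)
print(sample_output["messages"][0]["content"])
```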
Step 2: Define your evaluation criteria and wrap as custom scorers
Here, we define a sample judge prompt and use custom scorers to wire it up to our app's input / output schema.
```python
import json

from mlflow.entities import Feedback
from mlflow.genai.judges import custom_prompt_judge
from mlflow.genai.scorers import scorer

# Prompt for a 3-category issue resolution status
issue_resolution_prompt = """
Evaluate the entire conversation between a customer and an LLM-based agent. Determine if the issue was resolved in the conversation.

You must choose one of the following categories.

[[fully_resolved]]: The response directly and comprehensively addresses the user's question or problem, providing a clear solution or answer. No further immediate action seems required from the user on the same core issue.
[[partially_resolved]]: The response offers some help or relevant information but doesn't completely solve the problem or answer the question. It might provide initial steps, require more information from the user, or address only a part of a multi-faceted query.
[[needs_follow_up]]: The response does not adequately address the user's query, misunderstands the core issue, provides unhelpful or incorrect information, or inappropriately deflects the question. The user will likely need to re-engage or seek further assistance.

Conversation to evaluate: {{conversation}}
"""


# Define a custom scorer that wraps the prompt-based LLM judge to grade how well the issue was resolved
@scorer
def is_issue_resolved(inputs: Dict[Any, Any], outputs: Dict[Any, Any]):
    # We directly return the Feedback object from the prompt-based judge, but we could post-process it before returning it.
    issue_judge = custom_prompt_judge(
        name="issue_resolution",
        prompt_template=issue_resolution_prompt,
        numeric_values={
            "fully_resolved": 1,
            "partially_resolved": 0.5,
            "needs_follow_up": 0,
        },
    )
    # Combine the input and output messages to form the conversation
    conversation = json.dumps(inputs["messages"] + outputs["messages"])
    return issue_judge(conversation=conversation)
```
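To preview what the judge produces before running a full evaluation, you can build the same judge outside the scorer and call it on a hand-written conversation. This optional check assumes the prompt defined above; with numeric_values set, the feedback's value is the number mapped from the selected category, and the rationale explains the choice.
```python
# Optional preview: call the judge directly on a hand-written conversation.
sample_conversation = json.dumps(
    [
        {"role": "user", "content": "My order never arrived."},
        {
            "role": "assistant",
            "content": "I've reshipped your order with express delivery and refunded the shipping fee.",
        },
    ]
)

preview_judge = custom_prompt_judge(
    name="issue_resolution",
    prompt_template=issue_resolution_prompt,
    numeric_values={"fully_resolved": 1, "partially_resolved": 0.5, "needs_follow_up": 0},
)

preview_feedback = preview_judge(conversation=sample_conversation)
print(preview_feedback.value)      # numeric score mapped from the selected category
print(preview_feedback.rationale)  # the judge's explanation for its choice
```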
Step 3: Create a sample evaluation dataset
Each inputs dictionary is passed to the app by mlflow.genai.evaluate() as keyword arguments to predict_fn (see the sketch after the dataset).
```python
eval_dataset = [
    {
        "inputs": {
            "messages": [
                {"role": "user", "content": "How much does a microwave cost?"},
            ],
        },
    },
    {
        "inputs": {
            "messages": [
                {
                    "role": "user",
                    "content": "Can I return the microwave I bought 2 months ago?",
                },
            ],
        },
    },
    {
        "inputs": {
            "messages": [
                {
                    "role": "user",
                    "content": "Can I return the microwave I bought 2 months ago?",
                },
            ],
        },
    },
    {
        "inputs": {
            "messages": [
                {
                    "role": "user",
                    "content": "I'm having trouble with my account. I can't log in.",
                },
                {
                    "role": "assistant",
                    "content": "I'm sorry to hear that you're having trouble with your account. Are you using our website or mobile app?",
                },
                {"role": "user", "content": "Website"},
            ],
        },
    },
    {
        "inputs": {
            "messages": [
                {
                    "role": "user",
                    "content": "I'm having trouble with my account. I can't log in.",
                },
                {
                    "role": "assistant",
                    "content": "I'm sorry to hear that you're having trouble with your account. Are you using our website or mobile app?",
                },
                {"role": "user", "content": "JUST FIX IT FOR ME"},
            ],
        },
    },
]
```
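As a quick shape check, you can unpack a row's inputs into the app the same way mlflow.genai.evaluate() does, passing them as keyword arguments to predict_fn. This is an optional sketch using the dataset and app defined above:
```python
# Each row's "inputs" dict is unpacked into predict_fn as keyword arguments,
# so this call mirrors what mlflow.genai.evaluate() does for a single row.
row = eval_dataset[0]
preview = customer_support_agent(**row["inputs"])
print(preview["messages"][0]["content"])
```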
Step 4: Evaluate your app using the custom scorer
Finally, we run evaluation twice so you can compare the judgements between conversations where the agent attempts to resolve issues and where it does not.
```python
import mlflow

# First, evaluate the app's responses against the judge when it does NOT resolve the issues
RESOLVE_ISSUES = False

mlflow.genai.evaluate(
    data=eval_dataset,
    predict_fn=customer_support_agent,
    scorers=[is_issue_resolved],
)

# Now, evaluate the app's responses against the judge when it DOES resolve the issues
RESOLVE_ISSUES = True

mlflow.genai.evaluate(
    data=eval_dataset,
    predict_fn=customer_support_agent,
    scorers=[is_issue_resolved],
)
```
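If you rerun this comparison frequently, a small helper can keep the global toggle and the evaluation call together so the flag is always set before predictions are generated. This is a convenience sketch, not part of the guide; compare the two runs' issue_resolution scores in the MLflow UI.
```python
def evaluate_with_resolution(resolve: bool):
    """Convenience helper (not part of the guide): toggle the agent's behavior, then evaluate."""
    global RESOLVE_ISSUES
    RESOLVE_ISSUES = resolve
    return mlflow.genai.evaluate(
        data=eval_dataset,
        predict_fn=customer_support_agent,
        scorers=[is_issue_resolved],
    )


results_unresolved = evaluate_with_resolution(False)
results_resolved = evaluate_with_resolution(True)
```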
Next Steps
- Create guidelines-based scorers - Start with simpler pass/fail criteria (recommended)
- Run evaluations with your scorers - Use your custom prompt-based scorers in comprehensive evaluations
- Prompt-based judge concept reference - Understand how prompt-based judges work