Human feedback quickstart
This quickstart shows you how to collect end-user feedback, add developer annotations, create expert review sessions, and use that feedback to evaluate your GenAI app's quality.
It covers the following steps of the human feedback lifecycle:
- Instrument a GenAI app with MLflow tracing.
- Collect end-user feedback (in this example, end-user feedback is simulated using the SDK).
- Add developer feedback interactively through the UI.
- View feedback alongside your traces.
- Create a labeling session for structured expert review.
- Use expert feedback to evaluate app quality.
All of the code on this page is included in the example notebook.
Prerequisites
- Install MLflow and the required packages:
pip install --upgrade "mlflow[databricks]>=3.1.0" openai "databricks-connect>=16.1"
- Create an MLflow experiment by following the set up your environment quickstart.
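If you are running this code outside a Databricks notebook, point the MLflow SDK at your workspace and experiment before continuing. A minimal sketch, assuming you already have a Databricks configuration profile; the experiment path is a placeholder:
import mlflow

# Use the Databricks-hosted MLflow tracking server (credentials come from your Databricks config)
mlflow.set_tracking_uri("databricks")

# Set the experiment you created in the previous step; the path below is a placeholder
mlflow.set_experiment("/Shared/human-feedback-quickstart")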
Step 1: Create and trace a simple app
First, create a simple GenAI app using an LLM with MLflow tracing.
import mlflow
from openai import OpenAI

# Enable automatic tracing for all OpenAI API calls
mlflow.openai.autolog()

# Connect to a Databricks LLM via OpenAI using the same credentials as MLflow
# Alternatively, you can use your own OpenAI credentials here
mlflow_creds = mlflow.utils.databricks_utils.get_databricks_host_creds()
client = OpenAI(
    api_key=mlflow_creds.token,
    base_url=f"{mlflow_creds.host}/serving-endpoints"
)

# Create a RAG app with tracing
@mlflow.trace
def my_chatbot(user_question: str) -> str:
    # Retrieve relevant context
    context = retrieve_context(user_question)

    # Generate response using LLM with retrieved context
    response = client.chat.completions.create(
        model="databricks-claude-3-7-sonnet",  # If using OpenAI directly, use "gpt-4o" or "gpt-3.5-turbo"
        messages=[
            {"role": "system", "content": "You are a helpful assistant. Use the provided context to answer questions."},
            {"role": "user", "content": f"Context: {context}\n\nQuestion: {user_question}"}
        ],
        temperature=0.7,
        max_tokens=150
    )

    return response.choices[0].message.content

@mlflow.trace(span_type="RETRIEVER")
def retrieve_context(query: str) -> str:
    # Simulated retrieval - in production, this would search a vector database
    if "mlflow" in query.lower():
        return "MLflow is an open-source platform for managing the end-to-end machine learning lifecycle. It provides tools for experiment tracking, model packaging, and deployment."
    return "General information about machine learning and data science."

# Run the app to generate a trace
response = my_chatbot("What is MLflow?")
print(f"Response: {response}")

# Get the trace ID for the next step
trace_id = mlflow.get_last_active_trace_id()
print(f"Trace ID: {trace_id}")
Step 2: Collect end-user feedback
When users interact with your app, they can provide feedback through UI elements like thumbs up/down buttons. This quickstart simulates an end user giving negative feedback by using the SDK directly.
import mlflow
from mlflow.entities.assessment import AssessmentSource, AssessmentSourceType

# Simulate end-user feedback from your app
# In production, this would be triggered when a user clicks thumbs down in your UI
mlflow.log_feedback(
    trace_id=trace_id,
    name="user_feedback",
    value=False,  # False for thumbs down - user is unsatisfied
    rationale="Missing details about MLflow's key features like Projects and Model Registry",
    source=AssessmentSource(
        source_type=AssessmentSourceType.HUMAN,
        source_id="enduser_123",  # Would be actual user ID in production
    ),
)

print("End-user feedback recorded!")

# In a real app, you would:
# 1. Return the trace_id with your response to the frontend
# 2. When user clicks thumbs up/down, call your backend API
# 3. Your backend would then call mlflow.log_feedback() with the trace_id
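To make the comment block above concrete, here is a minimal sketch of a backend endpoint that records thumbs up/down ratings against a trace. It assumes a FastAPI service; the route, payload model, and field names are illustrative and not part of MLflow:
import mlflow
from typing import Optional
from fastapi import FastAPI
from pydantic import BaseModel
from mlflow.entities.assessment import AssessmentSource, AssessmentSourceType

app = FastAPI()

class FeedbackPayload(BaseModel):
    trace_id: str              # returned to the frontend together with the chatbot response
    thumbs_up: bool            # True for thumbs up, False for thumbs down
    user_id: str
    comment: Optional[str] = None

@app.post("/feedback")
def record_feedback(payload: FeedbackPayload):
    # Attach the user's rating to the trace that produced the response
    mlflow.log_feedback(
        trace_id=payload.trace_id,
        name="user_feedback",
        value=payload.thumbs_up,
        rationale=payload.comment,
        source=AssessmentSource(
            source_type=AssessmentSourceType.HUMAN,
            source_id=payload.user_id,
        ),
    )
    return {"status": "recorded"}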
Step 3: View feedback in the UI
Launch the MLflow UI to see your traces with feedback:
- Navigate to your MLflow Experiment.
- Navigate to the Traces tab.
- Click on your trace.
- The trace details dialog appears. Under Assessments on the right side of the dialog, the user_feedback assessment shows false, indicating that the user marked the response thumbs-down.
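You can also read the same feedback from the SDK instead of the UI. A small sketch, assuming your MLflow version exposes assessments on trace.info.assessments (field names can differ slightly between versions):
import mlflow

# Re-fetch the trace and print any assessments attached to it
trace = mlflow.get_trace(trace_id)
for assessment in trace.info.assessments:
    print(assessment.name, assessment.value)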
Step 4: Add developer annotations via the UI
As a developer, you can also add your own feedback and notes directly in the UI:
- In the Traces tab, click on a trace to open it.
- Click on any span (choose the root span for trace-level feedback).
- In the Assessments tab on the right, click Add new assessment and fill in the following:
  - Type: Feedback or Expectation.
  - Name: For example, "accuracy_score".
  - Value: Your assessment.
  - Rationale: Optional explanation.
- Click Create.
After you refresh the page, columns for the new assessments appear in the Traces table.
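You can log the same kind of developer annotation programmatically, which is handy when triaging many traces from a notebook. A minimal sketch that reuses the mlflow.log_feedback API from Step 2; the "accuracy_score" name, value, and developer ID are examples only:
import mlflow
from mlflow.entities.assessment import AssessmentSource, AssessmentSourceType

# Log a developer assessment against the same trace
mlflow.log_feedback(
    trace_id=trace_id,
    name="accuracy_score",
    value=0.5,  # example value; use whatever scale your team agrees on
    rationale="Response is correct but omits Projects and the Model Registry.",
    source=AssessmentSource(
        source_type=AssessmentSourceType.HUMAN,
        source_id="developer_alice",  # placeholder developer identity
    ),
)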
Step 5: Send trace for expert review
The negative end-user feedback from Step 2 signals a potential quality issue, but only domain experts can confirm if there's truly a problem and provide the correct answer. Create a labeling session to get authoritative expert feedback:
import mlflow
from mlflow.genai.label_schemas import create_label_schema, InputCategorical, InputText
from mlflow.genai.labeling import create_labeling_session

# Define what feedback to collect
accuracy_schema = create_label_schema(
    name="response_accuracy",
    type="feedback",
    title="Is the response factually accurate?",
    input=InputCategorical(options=["Accurate", "Partially Accurate", "Inaccurate"]),
    overwrite=True
)

ideal_response_schema = create_label_schema(
    name="expected_response",
    type="expectation",
    title="What would be the ideal response?",
    input=InputText(),
    overwrite=True
)

# Create a labeling session
labeling_session = create_labeling_session(
    name="quickstart_review",
    label_schemas=[accuracy_schema.name, ideal_response_schema.name],
)

# Add your trace to the session
# Get the most recent trace from the current experiment
traces = mlflow.search_traces(
    max_results=1  # Gets the most recent trace
)
labeling_session.add_traces(traces)

# Share with reviewers
print("Trace sent for review!")
print(f"Share this link with reviewers: {labeling_session.url}")
Expert reviewers can now do the following:
- Open the Review App URL.
- See your trace with the question and response (including any end-user feedback).
- Assess whether the response is actually accurate.
- Provide the correct answer in expected_response if needed.
- Submit their expert assessments as ground truth.
You can also use the MLflow 3 UI to create a labeling session, as follows:
- On the Experiment page, click the Labeling tab.
- At the left, use the Sessions and Schemas tabs to add a new label schema and create a new session.
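If an expert has already given you the correct answer outside the Review App, you can record it as ground truth directly from the SDK. A sketch, assuming your MLflow version exposes mlflow.log_expectation (see the Labeling Sessions reference below if it does not); the value and reviewer ID are placeholders:
import mlflow
from mlflow.entities.assessment import AssessmentSource, AssessmentSourceType

# Record the expert-provided ideal answer as an expectation on the trace
mlflow.log_expectation(
    trace_id=trace_id,
    name="expected_response",
    value=(
        "MLflow is an open-source platform for the ML lifecycle, including "
        "experiment tracking, model packaging, the Model Registry, and deployment."
    ),
    source=AssessmentSource(
        source_type=AssessmentSourceType.HUMAN,
        source_id="expert_reviewer_1",  # placeholder expert identity
    ),
)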
Step 6: Use feedback to evaluate your app
After experts provide feedback, use their expected_response labels to evaluate your app with MLflow's Correctness scorer.
This example uses the labeled traces directly for evaluation. For your own application, Databricks recommends adding labeled traces to an MLflow Evaluation Dataset, which provides version tracking and lineage. See the create evaluation set guide.
import mlflow
from mlflow.genai.scorers import Correctness

# Get traces from the labeling session
labeled_traces = mlflow.search_traces(
    run_id=labeling_session.mlflow_run_id,  # Labeling sessions are MLflow runs
)

# Evaluate your app against expert expectations
eval_results = mlflow.genai.evaluate(
    data=labeled_traces,
    predict_fn=my_chatbot,  # The app we created in Step 1
    scorers=[Correctness()]  # Compares outputs to expected_response
)
The Correctness scorer compares your app's outputs against the expert-provided expected_response, giving you quantitative feedback on alignment with expert expectations.
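Following the recommendation above, you can also move the labeled traces into an MLflow Evaluation Dataset so future evaluation runs use versioned, shareable ground truth. A sketch, assuming the mlflow.genai.datasets helpers (create_dataset with a Unity Catalog table name and merge_records) are available in your environment; the table name is a placeholder, and the create evaluation set guide has the authoritative API:
from mlflow.genai import datasets

# Create (or open) an evaluation dataset backed by a Unity Catalog table (placeholder name)
eval_dataset = datasets.create_dataset(uc_table_name="main.default.chatbot_eval")

# Add the expert-labeled traces so they become versioned evaluation records
eval_dataset.merge_records(labeled_traces)

# Later runs can pass the dataset to mlflow.genai.evaluate as `data` instead of raw traces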
Example notebook
The following notebook includes all of the code on this page.
Human feedback quickstart notebook
Next steps
Continue your journey with these recommended actions and tutorials.
- Build evaluation datasets - Create comprehensive test datasets from production feedback
- Label during development - Learn advanced annotation techniques for development
- Collect domain expert feedback - Set up systematic expert review processes
Reference guides
For more details on the concepts and features mentioned in this quickstart, see the following:
- Review App - Understand MLflow's human feedback interface
- Labeling Sessions - Learn how expert review sessions work
- Labeling Schemas - Explore feedback structure and types