# Add metadata and user feedback to traces
MLflow Tracing allows production apps to augment traces with additional metadata and context, such as client request IDs, session and user IDs, and custom tags. Apps can also log user feedback alongside traces. This metadata can then be used for organizing, analyzing, and debugging traces.
## Add metadata to traces
After basic tracing works, add metadata or context to traces for better debugging and insights. Production applications need to track multiple pieces of context simultaneously: client request IDs for debugging, session IDs for multi-turn conversations, user IDs for personalization and analytics, and environment metadata for operational insights. MLflow has the following standardized tags and attributes to capture important contextual information:
| Metadata | Use cases | MLflow attribute or tag |
|---|---|---|
| Client request ID | Link traces to specific client requests or API calls for end-to-end debugging | `client_request_id` parameter of `mlflow.update_current_trace()` |
| User session ID | Group traces from multi-turn conversations, allowing you to analyze the full conversational flow | `mlflow.trace.session` metadata key |
| User ID | Associate traces with specific users for personalization, cohort analysis, and user-specific debugging | `mlflow.trace.user` metadata key |
| Environment data | Track deployment context (environment, version, region) for operational insights and debugging across different deployments | Custom metadata keys (for example, `environment`, `app_version`, `region`) |
| Custom tags | Add custom metadata, especially for organizing, searching, and filtering traces | Your own tag keys |
Below is a comprehensive example showing how to track all of these in a FastAPI application.
```python
import mlflow
import os
from fastapi import FastAPI, Request
from pydantic import BaseModel

# Initialize FastAPI app
app = FastAPI()


class ChatRequest(BaseModel):
    message: str


@app.post("/chat")  # FastAPI registers the function it decorates, so this must be outermost
@mlflow.trace       # Applied first, so the registered route handler is the traced function
def handle_chat(request: Request, chat_request: ChatRequest):
    # Retrieve all context from request headers
    client_request_id = request.headers.get("X-Request-ID")
    session_id = request.headers.get("X-Session-ID")
    user_id = request.headers.get("X-User-ID")

    # Update the current trace with all context and environment metadata.
    # The @mlflow.trace decorator ensures an active trace is available here.
    mlflow.update_current_trace(
        client_request_id=client_request_id,
        metadata={
            # Session context - groups traces from multi-turn conversations
            "mlflow.trace.session": session_id,
            # User context - associates traces with specific users
            "mlflow.trace.user": user_id,
            # Override automatically populated environment metadata
            "mlflow.source.type": os.getenv("APP_ENVIRONMENT", "development"),  # Override default LOCAL/NOTEBOOK
            # Add custom environment metadata
            "environment": "production",
            "app_version": os.getenv("APP_VERSION", "1.0.0"),
            "deployment_id": os.getenv("DEPLOYMENT_ID", "unknown"),
            "region": os.getenv("REGION", "us-east-1"),
            # Add custom tags
            "my_custom_tag": "custom tag value",
        },
    )

    # --- Your application logic for processing the chat message ---
    # For example, calling a language model with context:
    # response_text = my_llm_call(
    #     message=chat_request.message,
    #     session_id=session_id,
    #     user_id=user_id,
    # )
    response_text = f"Processed message: '{chat_request.message}'"
    # --- End of application logic ---

    return {"response": response_text}


# To run this example (requires uvicorn and fastapi):
#   uvicorn your_file_name:app --reload
#
# Example curl request with context headers:
# curl -X POST "http://127.0.0.1:8000/chat" \
#   -H "Content-Type: application/json" \
#   -H "X-Request-ID: req-abc-123-xyz-789" \
#   -H "X-Session-ID: session-def-456-uvw-012" \
#   -H "X-User-ID: user-jane-doe-12345" \
#   -d '{"message": "What is my account balance?"}'
```
For more information on adding context to traces, see:
- Track users & sessions
- Track versions & environments
- Attach custom tags and metadata
- `mlflow.update_current_trace()` API for adding metadata
- MLflow Tracing documentation for a list of automatically populated tags and reserved standard tags
## Collect user feedback
Capturing user feedback on specific interactions is essential for understanding quality and improving your GenAI application. Building on the client request ID tracking shown in Add metadata to traces, the FastAPI example below demonstrates how to:
- Link feedback to specific interactions by using the client request ID to find the exact trace and attach feedback.
- Store structured feedback using the `log_feedback` and `log_expectation` APIs to create structured feedback objects that are visible in the MLflow UI.
- Analyze quality patterns by querying traces with their associated feedback to identify what types of interactions receive positive or negative ratings.
```python
import mlflow
from mlflow.client import MlflowClient
from mlflow.entities import AssessmentSource
from fastapi import FastAPI, HTTPException, Query, Request
from pydantic import BaseModel
from typing import Optional

# Initialize FastAPI app
app = FastAPI()


class FeedbackRequest(BaseModel):
    is_correct: bool  # True for correct, False for incorrect
    comment: Optional[str] = None


@app.post("/chat_feedback")
def handle_chat_feedback(
    request: Request,
    client_request_id: str = Query(..., description="The client request ID from the original chat request"),
    feedback: FeedbackRequest = ...,
):
    """
    Collect user feedback for a specific chat interaction identified by client_request_id.
    """
    # Search for the trace with the matching client_request_id
    client = MlflowClient()

    # Get the experiment by name (using a Databricks workspace path)
    experiment = client.get_experiment_by_name("/Shared/production-app")
    traces = client.search_traces(
        experiment_ids=[experiment.experiment_id],
        filter_string=f"attributes.client_request_id = '{client_request_id}'",
        max_results=1,
    )

    if not traces:
        raise HTTPException(
            status_code=404,
            detail=f"Unable to find data for client request ID: {client_request_id}",
        )

    # Log feedback using MLflow's log_feedback API
    assessment = mlflow.log_feedback(
        trace_id=traces[0].info.trace_id,
        name="response_is_correct",
        value=feedback.is_correct,
        source=AssessmentSource(
            source_type="HUMAN",
            source_id=request.headers.get("X-User-ID"),
        ),
        rationale=feedback.comment,
    )

    return {"status": "success", "assessment_name": assessment.name}


# Example usage:
# After a chat interaction returns a response, the client can submit feedback:
#
# curl -X POST "http://127.0.0.1:8000/chat_feedback?client_request_id=req-abc-123-xyz-789" \
#   -H "Content-Type: application/json" \
#   -H "X-User-ID: user-jane-doe-12345" \
#   -d '{
#     "is_correct": true,
#     "comment": "The response was accurate and helpful"
#   }'
```
For more information on logging user feedback, see:
- Collect user feedback
- `mlflow.log_feedback()` API and `log_expectation` API for storing structured feedback
## Query traces with metadata
After adding metadata to traces, you can use that contextual information to analyze production behavior. Specifically, the `MlflowClient.search_traces()` method allows filtering by tags and metadata. The example below finds traces for a specific user and for a specific user session.
```python
import mlflow
from mlflow.client import MlflowClient

client = MlflowClient()
experiment = client.get_experiment_by_name("/Shared/production-app")

# Query traces by user
user_traces = client.search_traces(
    experiment_ids=[experiment.experiment_id],
    filter_string="tags.`mlflow.trace.user` = 'user-jane-doe-12345'",
    max_results=100,
)

# Query traces by session
session_traces = client.search_traces(
    experiment_ids=[experiment.experiment_id],
    filter_string="tags.`mlflow.trace.session` = 'session-123'",
    max_results=100,
)
```
For many example use cases of `mlflow.search_traces()`, see Search and analyze traces.
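Programmatic analysis often continues in pandas. In recent MLflow versions, the fluent `mlflow.search_traces()` API returns results as a DataFrame; since the exact column layout depends on your MLflow version, the sketch below aggregates feedback per user on an illustrative stand-in frame with hypothetical columns:

```python
import pandas as pd

# In a real app this frame would come from, e.g.:
#   traces_df = mlflow.search_traces(experiment_ids=[...], filter_string=...)
# Here we build an illustrative stand-in with a hypothetical layout.
traces_df = pd.DataFrame(
    {
        "trace_id": ["t1", "t2", "t3", "t4"],
        "user": ["user-jane-doe-12345", "user-jane-doe-12345", "user-bob-678", "user-bob-678"],
        "response_is_correct": [True, False, True, True],
    }
)

# Fraction of interactions rated correct, per user
quality_by_user = traces_df.groupby("user")["response_is_correct"].mean().sort_values()
print(quality_by_user)
```

Grouping on the user tag like this is one way to spot cohorts whose interactions receive disproportionately negative feedback.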
## Next steps
- Track users & sessions
- Track versions & environments
- Attach custom tags and metadata
- Collect user feedback
- Evaluate and monitor your agent using scorers, evaluation datasets, and production monitoring
## Feature references
For details on concepts and features in this guide, see:
- Tracing data model - Learn about traces, spans, and attributes
- Logging assessments - Understand how feedback is stored and used
- Search and analyze traces - See example queries for many common use cases of `mlflow.search_traces()`