Tracing a GenAI App (Notebook)
This quickstart helps you integrate your GenAI app with MLflow Tracing if you use a Databricks Notebook as your development environment. If you use a local IDE, please use the IDE quickstart instead.
What you'll achieve
By the end of this tutorial, you will have:
- A Databricks Notebook with a linked MLflow Experiment for your GenAI app
- A simple GenAI application instrumented with MLflow Tracing
- A trace from that app in your MLflow Experiment
Prerequisites
- Databricks Workspace: Access to a Databricks workspace.
Step 1: Create a Databricks Notebook
Creating a Databricks Notebook will create an MLflow Experiment that is the container for your GenAI application. Learn more about the Experiment and what it contains in the data model section.
- Open your Databricks workspace
- Go to New at the top of the left sidebar
- Click Notebook
Step 2: Install the latest version of MLflow (Recommended)
Databricks runtimes include MLflow. However, for the best experience with GenAI capabilities, including the most comprehensive tracing features and robust support, it is highly recommended to use the latest version of MLflow.
Update MLflow in your notebook by running the following:
%pip install --upgrade "mlflow[databricks]>=3.1" openai
dbutils.library.restartPython()
- mlflow[databricks]>=3.1: Ensures you have MLflow 3.1 or a more recent version, along with the databricks extra for seamless connectivity and functionality within Databricks.
- dbutils.library.restartPython(): Restarts the Python kernel so the newly installed version is used.
While tracing features are available in MLflow 2.15.0+, it is strongly recommended to install MLflow 3 (specifically 3.1 or newer if using mlflow[databricks]) for the latest GenAI capabilities, including expanded tracing features and robust support.
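If you want to confirm at runtime that the kernel actually picked up a new enough MLflow after the restart, a small stdlib-only sanity check can help. This is just a sketch; the helper names (`version_tuple`, `mlflow_is_recent`) are illustrative and not part of MLflow:

```python
from importlib import metadata


def version_tuple(version: str) -> tuple:
    """Parse the leading numeric components of a version string, e.g. "3.1.0" -> (3, 1, 0)."""
    parts = []
    for piece in version.split("."):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts)


def mlflow_is_recent(minimum: str = "3.1") -> bool:
    """Return True if the installed mlflow package is at least `minimum`."""
    try:
        installed = metadata.version("mlflow")
    except metadata.PackageNotFoundError:
        return False
    return version_tuple(installed) >= version_tuple(minimum)
```

If `mlflow_is_recent()` returns False, rerun the `%pip install` cell and restart the Python kernel again.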
Step 3: Instrument your application
Databricks provides out-of-the-box access to popular frontier and open source foundation LLMs. To run this quickstart, you can choose from the following model hosting options:
- Access Databricks-hosted LLMs.
- Directly use your own API key from an LLM provider such as OpenAI or the 20+ other LLM SDKs supported by MLflow.
- Create an external model to enable governed access to your LLM provider's API keys.
Run the following code in a notebook cell. It uses the @mlflow.trace decorator combined with OpenAI automatic instrumentation to capture the details of the LLM request.
- Databricks-hosted LLMs
- OpenAI SDK
Use MLflow to get an OpenAI client that connects to Databricks-hosted LLMs. The code snippet below uses Anthropic's Claude Sonnet LLM, but you can choose from the available foundation models.
import mlflow
from databricks.sdk import WorkspaceClient

# Enable MLflow's autologging to instrument your application with Tracing
mlflow.openai.autolog()

# Create an OpenAI client that is connected to Databricks-hosted LLMs
w = WorkspaceClient()
client = w.serving_endpoints.get_open_ai_client()

# Use the trace decorator to capture the application's entry point
@mlflow.trace
def my_app(input: str):
    # This call is automatically instrumented by `mlflow.openai.autolog()`
    response = client.chat.completions.create(
        # Replace this model name with any Databricks-hosted LLM, AI Gateway, or Model Serving endpoint name.
        model="databricks-claude-sonnet-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": input},
        ],
    )
    return response.choices[0].message.content

result = my_app(input="What is MLflow?")
print(result)
Use the native OpenAI SDK to connect to OpenAI-hosted models. The code snippet below uses gpt-4o-mini, but you can select from available OpenAI models.
import mlflow
import os
import openai

# Ensure your OPENAI_API_KEY is set in your environment
# os.environ["OPENAI_API_KEY"] = "<YOUR_API_KEY>"  # Uncomment and set if not globally configured

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

# Set up MLflow tracking to Databricks
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Shared/openai-tracing-demo")

openai_client = openai.OpenAI()

# Use the trace decorator to capture the application's entry point
@mlflow.trace
def my_app(input: str):
    # This call is automatically instrumented by `mlflow.openai.autolog()`
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.1,
        max_tokens=100,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": input},
        ],
    )
    return response.choices[0].message.content

result = my_app(input="What is MLflow?")
print(result)
Step 4: View the Trace in MLflow
The Trace will appear below the Notebook cell.
Optionally, you can go to the MLflow Experiment UI to see the Trace:
- Click the Experiment icon on the right side of your screen.
- Click the open icon next to Experiment Runs.
- The generated trace appears in the Traces tab.
- Click the trace to view its details.
Understanding the Trace
The trace you've just created shows:
- Root Span: Represents the inputs to the my_app(...) function
- Child Span: Represents the OpenAI completion request
- Attributes: Contains metadata like model name, token counts, and timing information
- Inputs: The messages sent to the model
- Outputs: The response received from the model
This simple trace already provides valuable insights into your application's behavior, such as:
- What was asked
- What response was generated
- How long the request took
- How many tokens were used (affecting cost)
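To make the cost point concrete, here is a small illustrative helper that turns the token counts recorded on a trace into an estimated dollar figure. The per-1k-token prices below are placeholders, not real rates for any model; check your provider's pricing:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_per_1k_prompt: float, price_per_1k_completion: float) -> float:
    """Estimate request cost in dollars from token counts and per-1k-token prices."""
    return (prompt_tokens / 1000) * price_per_1k_prompt \
        + (completion_tokens / 1000) * price_per_1k_completion


# Hypothetical prices: $0.002 per 1k prompt tokens, $0.006 per 1k completion tokens
cost = estimate_cost(prompt_tokens=25, completion_tokens=100,
                     price_per_1k_prompt=0.002, price_per_1k_completion=0.006)
print(f"${cost:.6f}")
```

Summing estimates like this across the traces in an Experiment gives a rough picture of what a workload costs over time.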
For more complex applications like RAG systems or multi-step agents, MLflow Tracing provides even more value by revealing the inner workings of each component and step.
Next steps
Continue your journey with these recommended actions and tutorials.
- Evaluate your app's quality - Measure and improve your GenAI app's quality with MLflow's evaluation capabilities
- Collect human feedback - Learn how to collect feedback from users and domain experts
- Track users & sessions - Add user and conversation context to your traces
Reference guides
Explore detailed documentation for concepts and features mentioned in this guide.
- Tracing concepts - Understand the fundamentals of MLflow Tracing
- Tracing data model - Learn about traces, spans, and how MLflow structures observability data
- Manual tracing APIs - Explore advanced tracing techniques for custom instrumentation