Automatic Tracing

MLflow Tracing integrates with a wide range of GenAI libraries and provides a one-line automatic tracing experience for each of them (and for combinations of them). This page shows detailed examples of integrating MLflow with popular GenAI libraries.

Prerequisites

This guide requires the following packages:

  • mlflow[databricks]>=3.1: Core MLflow functionality with GenAI features and Databricks connectivity.
  • openai>=1.0.0: Required only for the Basic Automatic Tracing Example on this page (if you use other LLM providers, install their respective SDKs instead).
  • Additional libraries: Install the specific packages for the integrations you want to use.

Install the basic requirements:

Bash
pip install --upgrade "mlflow[databricks]>=3.1" "openai>=1.0.0"
MLflow Version Recommendation

While automatic tracing features are available in MLflow 2.15.0+, it is strongly recommended to install MLflow 3 (specifically 3.1 or newer if using mlflow[databricks]) for the latest GenAI capabilities, including expanded tracing features and robust support.

tip

Running in a Databricks notebook? MLflow is pre-installed in the Databricks runtime. You only need to install additional packages for the specific libraries you want to trace.

Running locally? You'll need to install all packages listed above plus any additional integration libraries.
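
For example, a local setup that traces both OpenAI and LangChain might be installed like this (a sketch; adjust the package list to the integrations you actually use):

Bash
pip install --upgrade "mlflow[databricks]>=3.1" "openai>=1.0.0" langchain langchain-openai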

Prerequisites for Databricks Setup

Before running any of the examples below, make sure you have MLflow tracking configured for Databricks:

For users outside Databricks notebooks

If you're running outside of Databricks notebooks, set your environment variables:

Bash
export DATABRICKS_HOST="https://your-workspace.cloud.databricks.com"
export DATABRICKS_TOKEN="your-personal-access-token"

For users inside Databricks notebooks

If you're running inside a Databricks notebook, these credentials are automatically set for you. You only need to configure your LLM provider API keys.
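
In a notebook, that typically means setting the provider key as an environment variable before enabling tracing. Here is a minimal sketch; the secret scope and key names are placeholders for illustration:

Python
import os

# Databricks credentials are already configured inside the notebook;
# only the LLM provider key needs to be set, e.g. from a secret scope.
os.environ["OPENAI_API_KEY"] = dbutils.secrets.get(
    scope="my-secret-scope", key="openai-api-key"
)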

LLM Provider API Keys

Set your API keys for the LLM providers you plan to use:

Bash
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export MISTRAL_API_KEY="your-mistral-api-key"
# Add other provider keys as needed

Basic Automatic Tracing Example

Here's how to enable automatic tracing for OpenAI in just one line:

Python
import mlflow
from openai import OpenAI
import os

# Set up MLflow tracking
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Shared/automatic-tracing-demo")

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# Enable automatic tracing with one line
mlflow.openai.autolog()

# Your existing OpenAI code works unchanged
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain MLflow Tracing in one sentence."},
    ],
    max_tokens=100,
    temperature=0.7,
)

print(response.choices[0].message.content)
# All OpenAI calls are now automatically traced!

Integrations

Each integration automatically captures your application's logic and intermediate steps based on how you use the authoring framework or SDK. For a comprehensive list of all supported libraries and detailed documentation for each integration, see the MLflow Tracing Integrations page.

Below are quick-start examples for some of the most popular integrations. Remember to install the necessary packages for each library you intend to use (e.g., pip install openai langchain langgraph anthropic dspy boto3 databricks-sdk ag2).
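
If your application uses several of these libraries together, you can enable their autologgers side by side and MLflow will combine their spans into a single trace. A minimal sketch, assuming the corresponding SDKs are installed:

Python
import mlflow

# Enable auto-tracing for each framework the application touches
mlflow.openai.autolog()
mlflow.langchain.autolog()
mlflow.anthropic.autolog()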

Top Integrations

MLflow provides automatic tracing for many popular GenAI frameworks and libraries. Here are the most commonly used integrations:

Python
import mlflow
import openai

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

# Set up MLflow tracking on Databricks
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Shared/openai-tracing-demo")

openai_client = openai.OpenAI()

messages = [
    {
        "role": "user",
        "content": "What is the capital of France?",
    }
]

response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    temperature=0.1,
    max_tokens=100,
)

Full OpenAI Integration Guide

Combining Manual and Automatic Tracing

The @mlflow.trace decorator can be used in conjunction with auto tracing to create powerful, integrated traces. This is particularly useful for:

  1. Complex workflows that involve multiple LLM calls
  2. Multi-agent systems where different agents use different LLM providers
  3. Chaining multiple LLM calls together with custom logic in between

Basic Example

Here's a simple example that combines OpenAI auto-tracing with manually defined spans:

Python
import mlflow
import openai
from mlflow.entities import SpanType

mlflow.openai.autolog()


@mlflow.trace(span_type=SpanType.CHAIN)
def run(question):
    messages = build_messages(question)
    # MLflow automatically generates a span for the OpenAI invocation
    response = openai.OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        max_tokens=100,
        messages=messages,
    )
    return parse_response(response)


@mlflow.trace
def build_messages(question):
    return [
        {"role": "system", "content": "You are a helpful chatbot."},
        {"role": "user", "content": question},
    ]


@mlflow.trace
def parse_response(response):
    return response.choices[0].message.content


run("What is MLflow?")

Running this code generates a single trace that combines the manual spans with the automatic OpenAI tracing:

Mix of auto and manual tracing

Advanced Example: Multiple LLM Calls

For more complex workflows, you can combine multiple LLM calls into a single trace. Here's an example that demonstrates this pattern:

Python
import mlflow
import openai
from mlflow.entities import SpanType

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

@mlflow.trace(span_type=SpanType.CHAIN)
def process_user_query(query: str):
    # First LLM call: analyze the query
    analysis = openai.OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Analyze the user's query and determine if it requires factual information or creative writing.",
            },
            {"role": "user", "content": query},
        ],
    )
    analysis_result = analysis.choices[0].message.content

    # Second LLM call: generate a response based on the analysis
    if "factual" in analysis_result.lower():
        # Use a factual system prompt for factual queries
        response = openai.OpenAI().chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Provide a factual, well-researched response."},
                {"role": "user", "content": query},
            ],
        )
    else:
        # Use a creative system prompt for creative queries
        response = openai.OpenAI().chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Provide a creative, engaging response."},
                {"role": "user", "content": query},
            ],
        )

    return response.choices[0].message.content


# Run the function
result = process_user_query("Tell me about the history of artificial intelligence")

This example creates a single trace that includes:

  1. A parent span for the entire process_user_query function
  2. Two child spans automatically created by the OpenAI autologging:
    • One for the analysis LLM call
    • One for the response LLM call
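
If you want to verify this programmatically, you can fetch the trace that was just generated and inspect its spans. The sketch below assumes MLflow 3's trace retrieval APIs (older versions expose mlflow.get_last_active_trace() instead):

Python
import mlflow

# Retrieve the most recently generated trace and print its span hierarchy
trace_id = mlflow.get_last_active_trace_id()
trace = mlflow.get_trace(trace_id)

for span in trace.data.spans:
    print(span.name, span.span_type)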

Multi-Framework Example

You can also combine different LLM providers in a single trace. For example:

note

This example requires installing LangChain in addition to the base requirements:

Bash
pip install --upgrade langchain langchain-openai
Python
import mlflow
import openai
from mlflow.entities import SpanType
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Enable auto-tracing for both OpenAI and LangChain
mlflow.openai.autolog()
mlflow.langchain.autolog()

@mlflow.trace(span_type=SpanType.CHAIN)
def multi_provider_workflow(query: str):
    # First, use OpenAI directly for initial processing
    analysis = openai.OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Analyze the query and extract key topics."},
            {"role": "user", "content": query},
        ],
    )
    topics = analysis.choices[0].message.content

    # Then use LangChain for structured processing
    llm = ChatOpenAI(model="gpt-4o-mini")
    prompt = ChatPromptTemplate.from_template(
        "Based on these topics: {topics}\nGenerate a detailed response to: {query}"
    )
    chain = prompt | llm
    response = chain.invoke({"topics": topics, "query": query})

    return response


# Run the function
result = multi_provider_workflow("Explain quantum computing")

This example shows how to combine:

  1. Direct OpenAI API calls
  2. LangChain chains
  3. Custom logic between the calls

All of this is captured in a single trace, making it easy to:

  • Debug issues
  • Monitor performance
  • Understand the flow of the request
  • Track which parts of the system are being used

The trace visualization will show the complete hierarchy of spans, making it clear how the different components interact and how long each step takes.

Next steps

Continue your journey with these recommended actions and tutorials.

Reference guides

Explore detailed documentation for concepts and features mentioned in this guide.