
Span tracing with context managers

The mlflow.start_span context manager lets you create spans for arbitrary code blocks. It is useful for capturing complex interactions within your code in finer detail than is possible by tracing the boundaries of a single function.

Span tracing with context managers gives you fine-grained control over what code gets traced:

  • Arbitrary code blocks: Trace any code block, not just entire functions
  • Flexible boundaries: Define exact start and end points for spans
  • Automatic context management: MLflow handles parent-child relationships and cleanup
  • Works with function decorators: Mix and match with @mlflow.trace for hybrid approaches
  • Exception handling: Automatic error capture like decorators

Prerequisites

This page requires the following packages:

  • mlflow[databricks] 3.1 and above: Core MLflow functionality with GenAI features and Databricks connectivity.
  • openai 1.0.0 and above: (Optional) Only if your custom code interacts with OpenAI; replace with other SDKs if needed.

Install the basic requirements:

Python
%pip install --upgrade "mlflow[databricks]>=3.1"
# %pip install --upgrade openai>=1.0.0 # Install if needed

Context Manager API

Similar to the decorator, the context manager automatically captures the parent-child relationship, exceptions, and execution time, and it works with auto-tracing. However, the name, inputs, and outputs of the span must be provided manually. You can set them using the mlflow.entities.Span object that is returned from the context manager.

Python
import mlflow

with mlflow.start_span(name="my_span") as span:
    x, y = 1, 2
    span.set_inputs({"x": x, "y": y})
    z = x + y
    span.set_outputs(z)

Below is a slightly more complex example that uses the mlflow.start_span context manager in conjunction with both the decorator and auto-tracing for OpenAI.

Python
import mlflow
import openai
from mlflow.entities import SpanType

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

# Create OpenAI client
client = openai.OpenAI()

@mlflow.trace(span_type=SpanType.CHAIN)
def start_session():
    messages = [{"role": "system", "content": "You are a friendly chat bot"}]
    while True:
        with mlflow.start_span(name="User") as span:
            span.set_inputs(messages)
            user_input = input(">> ")
            span.set_outputs(user_input)

        if user_input == "BYE":
            break

        messages.append({"role": "user", "content": user_input})

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            max_tokens=100,
            messages=messages,
        )
        answer = response.choices[0].message.content
        print(f"Assistant: {answer}")

        messages.append({"role": "assistant", "content": answer})


start_session()

Next steps