Instrument your app: Tracing approaches

Learn about the different approaches you can take to add traces to your generative AI application. MLflow Tracing provides end-to-end instrumentation to give you a complete picture of how your app behaves.

MLflow offers three approaches to tracing: automatic tracing, manual tracing with high-level APIs, and manual tracing with low-level APIs.

Which approach should I use?

Start with automatic tracing. It's the fastest way to get traces working. Add manual tracing later if you need more control.

For example, you could use automatic tracing for the OpenAI SDK and manual tracing to combine multiple LLM calls into a single trace that represents your application's end-to-end logic.
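
For instance, here is a minimal sketch of that combination, assuming the OpenAI Python SDK with an OPENAI_API_KEY set in the environment (the function name, prompts, and model name below are illustrative, not part of any MLflow API):

```python
import mlflow
from openai import OpenAI

# Automatic tracing: every OpenAI call below is captured as a span.
mlflow.openai.autolog()

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Manual tracing: the decorator groups both LLM calls under one parent trace
# that represents the application's end-to-end logic.
@mlflow.trace
def summarize_and_translate(text: str) -> str:
    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": f"Summarize this text: {text}"}],
    ).choices[0].message.content

    translation = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Translate to French: {summary}"}],
    ).choices[0].message.content

    return translation
```

Calling summarize_and_translate() produces one trace with both OpenAI calls nested inside it, rather than two separate traces.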

Determine the best tracing approach for your use case based on how you are writing your application's code:

Using one GenAI library (LangGraph, CrewAI, OpenAI Agents, Bedrock Agents, and others)

  • Use automatic tracing for that library

Using LLM SDKs directly (e.g., OpenAI SDK, Anthropic SDK, Bedrock SDK)

  • Use automatic tracing for the SDK
  • Add manual tracing decorators to combine multiple LLM calls into a single trace (as in the sketch above)

Using multiple GenAI libraries or SDKs (e.g., LangGraph and the OpenAI SDK)

  • Enable automatic tracing for each framework / SDK
  • Add manual tracing decorators to combine calls to multiple frameworks or SDKs into a single trace (see the sketch below)
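
A sketch of this combined setup, assuming the app mixes LangChain/LangGraph components with direct OpenAI SDK calls; retrieve_context is a hypothetical stand-in for your existing pipeline:

```python
import mlflow
from openai import OpenAI

# Enable automatic tracing for each framework / SDK the app uses.
mlflow.langchain.autolog()  # captures LangChain / LangGraph spans
mlflow.openai.autolog()     # captures direct OpenAI SDK spans

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def retrieve_context(question: str) -> str:
    # Hypothetical placeholder for an existing LangGraph / LangChain pipeline;
    # whatever chain or graph runs here is traced by mlflow.langchain.autolog().
    ...


# The decorator ties spans from both libraries into one end-to-end trace.
@mlflow.trace
def answer_question(question: str) -> str:
    context = retrieve_context(question)
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": f"{context}\n\n{question}"}],
    )
    return response.choices[0].message.content
```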

All other approaches, or when you need more control

  • Use manual tracing
    • Start with the high-level APIs (the @mlflow.trace decorator and fluent context managers), which provide a balance of control and ease of use (see the sketch below)
    • Use the low-level APIs only if the high-level APIs don't give you enough control
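
As a rough illustration of the high-level APIs, where search_documents and generate_answer are hypothetical stand-ins for your own retrieval and generation code:

```python
import mlflow

# @mlflow.trace creates a parent span for the function; mlflow.start_span()
# opens nested child spans around the steps you want to measure.
@mlflow.trace
def run_pipeline(question: str) -> str:
    with mlflow.start_span(name="retrieve") as span:
        span.set_inputs({"question": question})
        documents = search_documents(question)  # hypothetical retrieval step
        span.set_outputs(documents)

    with mlflow.start_span(name="generate") as span:
        span.set_inputs({"question": question, "documents": documents})
        answer = generate_answer(question, documents)  # hypothetical LLM call
        span.set_outputs(answer)

    return answer
```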

Next steps

Reference guides

Explore detailed documentation for concepts and features mentioned in this guide.