
Tracing Mistral

Mistral tracing via autolog

MLflow Tracing provides observability for your interactions with Mistral AI models. When auto-tracing is enabled by calling the mlflow.mistral.autolog function, MLflow automatically records a trace for each call made through the Mistral SDK during interactive development.

Note that only synchronous calls to the Text Generation API are traced; asynchronous calls and streaming methods are not supported.

Prerequisites

Before running the examples below, make sure you have:

  1. Databricks credentials configured: If running outside of Databricks, set your environment variables:

    Bash
    export DATABRICKS_HOST="https://your-workspace.cloud.databricks.com"
    export DATABRICKS_TOKEN="your-personal-access-token"
    Tip: If you're running inside a Databricks notebook, these are automatically set for you.

  2. Mistral API key: Set your API key as an environment variable:

    Bash
    export MISTRAL_API_KEY="your-mistral-api-key"

Example Usage

Python
import os

from mistralai import Mistral

import mlflow

# Turn on auto tracing for Mistral AI by calling mlflow.mistral.autolog()
mlflow.mistral.autolog()

# Set up MLflow tracking on Databricks
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Shared/mistral-demo")

# Configure your API key.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Use the chat.complete method to send a chat completion request.
chat_response = client.chat.complete(
    model="mistral-small-latest",
    messages=[
        {
            "role": "user",
            "content": "Who is the best French painter? Answer in one short sentence.",
        },
    ],
)
print(chat_response.choices[0].message)

Disable auto-tracing

Auto tracing for Mistral can be disabled globally by calling mlflow.mistral.autolog(disable=True) or mlflow.autolog(disable=True).

Next Steps