Label during development

As a developer building GenAI applications, you need a way to track your observations about the quality of your application's outputs. MLflow Tracing allows you to add feedback or expectations directly to traces during development, giving you a quick way to record quality issues, mark successful examples, or add notes for future reference.

Prerequisites

  • Your application is instrumented with MLflow Tracing
  • You have generated traces by running your application (see the sketch below)
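
If your application is not yet instrumented, the sketch below shows one minimal way to satisfy both prerequisites. It assumes MLflow 2.14 or later (where the @mlflow.trace decorator is available); the experiment name and answer_question function are illustrative stand-ins for your own application.

```python
import mlflow

# Traces are logged to the active experiment (name is illustrative).
mlflow.set_experiment("my-genai-app")

# Decorating a function with @mlflow.trace captures each call as a trace.
@mlflow.trace
def answer_question(question: str) -> str:
    # Stand-in for your real LLM call or retrieval chain.
    return f"Echo: {question}"

# Invoking the instrumented function generates a trace you can label in the UI.
answer_question("What does MLflow Tracing capture?")
```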

Add assessment labels

Assessments attach structured feedback, scores, or ground truth to traces and spans for quality evaluation and improvement in MLflow.

MLflow makes it easy to add annotations (labels) directly to traces through the MLflow UI.

note

If you are using a Databricks notebook, you can also perform these steps from the Trace UI that renders inline in the notebook.

  1. Navigate to the Traces tab in the MLflow Experiment UI
  2. Open an individual trace
  3. Within the trace UI, click on the specific span you want to label
    • Selecting the root span attaches feedback to the entire trace
  4. Expand the Assessments tab on the far right
  5. Fill in the form to add your feedback
    • Assessment Type
      • Feedback: Subjective assessment of quality (ratings, comments)
      • Expectation: The expected output or value (what should have been produced)
    • Assessment Name
      • A unique name identifying what the assessment measures
    • Data Type
      • Number
      • Boolean
      • String
    • Value
      • Your assessment
    • Rationale
      • Optional notes about the value
  6. Click Create to save your label
  7. When you return to the Traces tab, your label will appear as a new column
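
The UI is the quickest way to label a handful of traces by hand. The same feedback and expectations can also be attached programmatically; the sketch below is an assumption-laden example for MLflow 3.x, using the mlflow.log_feedback and mlflow.log_expectation APIs with a human assessment source, and a placeholder trace ID copied from the Trace UI.

```python
import mlflow
from mlflow.entities import AssessmentSource, AssessmentSourceType

# Trace ID copied from the Trace UI (placeholder value).
trace_id = "tr-1234567890abcdef"

# Identify the human reviewer attaching these assessments (example identity).
reviewer = AssessmentSource(
    source_type=AssessmentSourceType.HUMAN,
    source_id="reviewer@example.com",
)

# Feedback: a subjective quality judgment with an optional rationale.
mlflow.log_feedback(
    trace_id=trace_id,
    name="helpfulness",
    value=True,
    rationale="The answer addressed the user's question directly.",
    source=reviewer,
)

# Expectation: the ground-truth output the trace should have produced.
mlflow.log_expectation(
    trace_id=trace_id,
    name="expected_answer",
    value="MLflow Tracing captures the execution of GenAI applications.",
    source=reviewer,
)
```

Assessments logged this way appear in the same Assessments pane and Traces tab columns as labels created through the UI.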
