Feedback model (deprecated)

important

Deprecation notice: The feedback model, an experimental API for collecting agent feedback, has been deprecated and will be removed in a future release.

Action required: Make sure to log your model with MLflow 3 and use the log_feedback API instead.

Timeline:

  • December 4, 2025:
    • The legacy experimental API for logging feedback will no longer be supported for agents deployed with the latest version of databricks-agents. Use the MLflow 3 Assessments API instead.
    • Legacy request_logs and assessment_logs tables are no longer populated by Mosaic AI. You can create your own replacement table using materialized views. See alternative solutions for MLflow 2.

The feedback model allows you to programmatically collect feedback on agent responses. When you deploy an agent using agents.deploy(), Databricks automatically creates a feedback model endpoint alongside your agent.

This endpoint accepts structured feedback (ratings, comments, assessments) and logs it to inference tables. However, this approach has been replaced by MLflow 3's more robust feedback capabilities.

How the feedback API works

The feedback model exposes a REST endpoint that accepts structured feedback about agent responses. You send feedback via a POST request to the feedback endpoint after your agent processes a request.

Example feedback request:

Bash
curl \
  -u token:$DATABRICKS_TOKEN \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "dataframe_records": [
      {
        "source": {
          "id": "user@company.com",
          "type": "human"
        },
        "request_id": "573d4a61-4adb-41bd-96db-0ec8cebc3744",
        "text_assessments": [
          {
            "ratings": {
              "answer_correct": {
                "value": "positive"
              },
              "accurate": {
                "value": "positive"
              }
            },
            "free_text_comment": "The answer used the provided context to talk about Lakeflow Declarative Pipelines"
          }
        ],
        "retrieval_assessments": [
          {
            "ratings": {
              "groundedness": {
                "value": "positive"
              }
            }
          }
        ]
      }
    ]
  }' \
  https://<workspace-host>.databricks.com/serving-endpoints/<your-agent-endpoint-name>/served-models/feedback/invocations

You can pass additional or different key-value pairs in the text_assessments.ratings and retrieval_assessments.ratings fields to provide different types of feedback. In the example, the feedback payload indicates that the agent's response to the request with ID 573d4a61-4adb-41bd-96db-0ec8cebc3744 is correct, accurate, and grounded in context fetched by a retriever tool.
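As a sketch of this flexibility, the snippet below builds a feedback payload with custom rating keys. The rating names (`relevant`, `cites_sources`) are illustrative, not a fixed schema; any key-value pairs can appear under `ratings`.

```python
import json

# Illustrative payload: rating keys under "ratings" are free-form.
payload = {
    "dataframe_records": [
        {
            "source": {"id": "user@company.com", "type": "human"},
            "request_id": "573d4a61-4adb-41bd-96db-0ec8cebc3744",
            "text_assessments": [
                {
                    "ratings": {
                        "relevant": {"value": "positive"},
                        "cites_sources": {"value": "negative"},
                    },
                    "free_text_comment": "Relevant answer, but no citations.",
                }
            ],
        }
    ]
}
print(json.dumps(payload, indent=2))
```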

Feedback API limitations

The experimental feedback API has several limitations:

  • No input validation: The API always responds successfully, even to invalid input
  • Required Databricks request ID: You need to pass the databricks_request_id from the original agent request
  • Inference table dependency: Feedback is collected using inference tables with their inherent limitations
  • Limited error handling: No meaningful error messages for troubleshooting

To get the required databricks_request_id, you must include {"databricks_options": {"return_trace": True}} in your original request to the agent serving endpoint.
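A minimal sketch of what the original agent request body would look like with that flag set (the question text and helper name are illustrative, not part of the API):

```python
import json

# Hypothetical helper: build an agent request that asks the serving
# endpoint to return trace data, including the databricks_request_id
# that the feedback endpoint later requires.
def build_agent_request(question: str) -> dict:
    return {
        "messages": [{"role": "user", "content": question}],
        # Without this option, the response omits databricks_request_id.
        "databricks_options": {"return_trace": True},
    }

request_body = build_agent_request("What are Lakeflow Declarative Pipelines?")
print(json.dumps(request_body, indent=2))
```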

Migrate to MLflow 3

Instead of using the deprecated feedback model, migrate to MLflow 3 for comprehensive feedback and assessment capabilities:

  • First-class assessment logging with robust validation and error handling
  • Real-time tracing integration for immediate feedback visibility
  • Review App integration with enhanced stakeholder collaboration features
  • Production monitoring support with automated quality assessment

To migrate existing workloads to MLflow 3:

  1. Upgrade to MLflow 3.1.3 or above in your development environment:

    Python
    %pip install "mlflow>=3.1.3"
    dbutils.library.restartPython()
  2. Replace feedback API calls with MLflow 3 assessment logging (the log_feedback API).

  3. Deploy your agent with MLflow 3:

    • Real-time tracing automatically captures all interactions
    • Assessments attach directly to traces for unified visibility
  4. Set up production monitoring (optional) for automated quality assessment of production traffic.
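
The assessment-logging step above (step 2) can be sketched as follows. This is a minimal example, assuming MLflow 3.1.3 or above and a `trace_id` captured from the trace of the original agent request; the assessment name and user ID are illustrative.

```python
def record_feedback(trace_id: str, correct: bool, comment: str) -> None:
    """Attach human feedback to an MLflow 3 trace.

    Assumes mlflow>=3.1.3 is installed and trace_id identifies the
    trace captured when the agent served the original request.
    """
    import mlflow
    from mlflow.entities import AssessmentSource, AssessmentSourceType

    mlflow.log_feedback(
        trace_id=trace_id,
        name="answer_correct",  # replaces a text_assessments.ratings key
        value=correct,          # e.g. True instead of "positive"
        source=AssessmentSource(
            source_type=AssessmentSourceType.HUMAN,
            source_id="user@company.com",
        ),
        rationale=comment,      # replaces free_text_comment
    )
```

Unlike the legacy endpoint, the assessment goes to the trace itself rather than to an inference table, so it is immediately visible alongside the request it describes.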

Next steps