Monitor served models with Lakehouse Monitoring

Preview

This feature is in Public Preview.

With Databricks Lakehouse Monitoring, you can monitor the quality of served models on Databricks Model Serving endpoints using inference tables.

Requirements

To monitor served models in your serving endpoints using inference tables, you must be enrolled in, and meet the requirements of, both the inference tables Public Preview and the Lakehouse Monitoring Public Preview.

Set up model monitoring

You can set up model monitoring with the following steps:

  1. Enable inference tables on your endpoint, either during endpoint creation or by updating the endpoint afterwards (see the first sketch after this list).

  2. Schedule a workflow to process the JSON payloads in the inference table, unpacking them according to the schema of the endpoint (second sketch below).

  3. (Optional) Join the unpacked requests and responses with ground-truth labels so that model quality metrics can be calculated (third sketch below).

  4. Create a monitor over the resulting Delta table and refresh its metrics (fourth sketch below).
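
For step 1, you can enable inference tables at endpoint creation time. The following is a minimal sketch using the Databricks SDK for Python; the endpoint, model, catalog, and schema names are placeholders, and the captured payloads land in a Delta table whose name is derived from the `table_name_prefix` you supply.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import (
    AutoCaptureConfigInput,
    EndpointCoreConfigInput,
    ServedModelInput,
)

w = WorkspaceClient()

# Create the endpoint with payload capture (inference tables) enabled.
# All names below are placeholders.
w.serving_endpoints.create(
    name="my-endpoint",
    config=EndpointCoreConfigInput(
        served_models=[
            ServedModelInput(
                model_name="main.default.my_model",
                model_version="1",
                workload_size="Small",
                scale_to_zero_enabled=True,
            )
        ],
        # Requests and responses are captured to a Delta table in this
        # catalog and schema, named from the table_name_prefix.
        auto_capture_config=AutoCaptureConfigInput(
            catalog_name="main",
            schema_name="default",
            table_name_prefix="my_endpoint",
            enabled=True,
        ),
    ),
)
```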
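For step 2, a scheduled job can unpack the `request` and `response` JSON strings into typed columns. This PySpark sketch assumes the preview inference table layout (`request`, `response`, `timestamp_ms`, `status_code`, `databricks_request_id`, `request_metadata`) and a hypothetical model signature with two numeric features; replace the schemas with ones that match your endpoint. `spark` is the SparkSession provided in Databricks notebooks.

```python
import pyspark.sql.functions as F
from pyspark.sql.types import ArrayType, DoubleType, StructField, StructType

# Schemas for a hypothetical model that takes two numeric features via
# `dataframe_records` and returns one prediction per record.
request_schema = StructType([
    StructField("dataframe_records", ArrayType(StructType([
        StructField("feature_a", DoubleType()),
        StructField("feature_b", DoubleType()),
    ]))),
])
response_schema = StructType([
    StructField("predictions", ArrayType(DoubleType())),
])

raw = spark.table("main.default.my_endpoint_payload")

unpacked = (
    raw.filter(F.col("status_code") == 200)  # keep only successful requests
    .withColumn("request_struct", F.from_json("request", request_schema))
    .withColumn("response_struct", F.from_json("response", response_schema))
    .withColumn("features", F.col("request_struct.dataframe_records"))
    .withColumn("predictions", F.col("response_struct.predictions"))
    # Pair each input record with its prediction, one output row per record.
    .withColumn("record", F.explode(F.arrays_zip("features", "predictions")))
    .select(
        F.timestamp_seconds(F.col("timestamp_ms") / 1000).alias("timestamp"),
        "databricks_request_id",
        # Model version captured by the endpoint; key name assumed, check
        # your table's request_metadata map.
        F.col("request_metadata").getItem("model_version").alias("model_version"),
        F.col("record.features.*"),
        F.col("record.predictions").alias("prediction"),
    )
)

unpacked.write.mode("overwrite").saveAsTable("main.default.my_endpoint_unpacked")
```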
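For the optional step 3, a left join attaches ground-truth labels as they arrive. The `ground_truth_labels` table and its `label` column here are hypothetical; join on whatever identifier your labeling process records alongside each request.

```python
# Hypothetical labels table with columns: databricks_request_id, label.
labels = spark.table("main.default.ground_truth_labels")

labeled = spark.table("main.default.my_endpoint_unpacked").join(
    labels, on="databricks_request_id", how="left"  # left join keeps unlabeled rows
)
labeled.write.mode("overwrite").saveAsTable("main.default.my_endpoint_labeled")
```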
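For step 4, create the monitor and refresh its metrics. This sketch assumes the preview `databricks.lakehouse_monitoring` Python client and carries over the placeholder table and column names from the sketches above; set `problem_type`, the granularities, and the column names to match your model.

```python
import databricks.lakehouse_monitoring as lm

# Create an inference-profile monitor over the labeled table.
monitor = lm.create_monitor(
    table_name="main.default.my_endpoint_labeled",
    profile_type=lm.InferenceLog(
        timestamp_col="timestamp",
        model_id_col="model_version",
        prediction_col="prediction",
        label_col="label",              # omit if you skipped the label join
        problem_type="classification",  # or "regression"
        granularities=["1 day"],        # aggregation windows for the metrics
    ),
    output_schema_name="main.default",  # where the metric tables are written
)

# Compute (or recompute) the profile and drift metric tables.
lm.run_refresh(table_name="main.default.my_endpoint_labeled")
```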

The starter notebook below implements this workflow.

Starter notebook for monitoring an inference table

The following notebook implements the steps outlined above to unpack requests from an inference table and enable model monitoring. The notebook can be run on demand or on a recurring schedule using Databricks Workflows.

Inference table Lakehouse Monitoring starter notebook
