Create a monitor using the API

This page describes how to create a monitor in Databricks using the Databricks SDK, and explains the parameters used in the API calls. You can also create and manage monitors using the REST API. For reference information, see the Lakehouse monitoring SDK reference and the REST API reference.

You can create a monitor on any managed or external Delta table registered in Unity Catalog. Only one monitor can be created per table in a Unity Catalog metastore.

Requirements

The Lakehouse Monitoring API is built into databricks-sdk 0.28.0 and above. To use the most recent version of the API, use the following command at the beginning of your notebook to install the Python client:

%pip install "databricks-sdk>=0.28.0"
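
After the install, restart the Python process so the notebook picks up the new package version:

dbutils.library.restartPython()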

To authenticate to use the Databricks SDK in your environment, see Authentication.
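
By default, WorkspaceClient resolves credentials from the environment, such as the notebook context, environment variables, or a Databricks configuration profile. As a minimal sketch, assuming you authenticate with a personal access token, you can also pass the workspace URL and token explicitly (both values below are placeholders):

from databricks.sdk import WorkspaceClient

# Default: credentials are resolved from the environment (notebook context,
# environment variables, or a configuration profile).
w = WorkspaceClient()

# Explicit: pass a workspace URL and a personal access token directly.
# Both values are placeholders.
w = WorkspaceClient(
    host="https://<your-workspace>.cloud.databricks.com",
    token="<your-personal-access-token>",
)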

Profile types

When you create a monitor, you select one of the following profile types: TimeSeries, InferenceLog, or Snapshot. This section briefly describes each option. For details, see the API reference or the REST API reference.

Note

  • When you first create a time series or inference profile, the monitor analyzes only data from the 30 days prior to its creation. After the monitor is created, all new data is processed.

  • Monitors defined on materialized views and streaming tables do not support incremental processing.

Tip

For TimeSeries and Inference profiles, it’s a best practice to enable change data feed (CDF) on your table. When CDF is enabled, only newly appended data is processed, rather than re-processing the entire table every refresh. This makes execution more efficient and reduces costs as you scale monitoring across many tables.
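
For example, you can enable CDF on an existing Delta table by setting a table property before creating the monitor. The table name below is a placeholder:

# Enable change data feed on an existing Delta table (placeholder table name).
spark.sql(
    "ALTER TABLE main.default.my_table "
    "SET TBLPROPERTIES (delta.enableChangeDataFeed = true)"
)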

TimeSeries profile

A TimeSeries profile compares data distributions across time windows. For a TimeSeries profile, you must provide the following:

  • A timestamp column (timestamp_col). The timestamp column data type must be either TIMESTAMP or a type that can be converted to timestamps using the to_timestamp PySpark function.

  • The set of granularities over which to calculate metrics. Available granularities are “5 minutes”, “30 minutes”, “1 hour”, “1 day”, “1 week”, “2 weeks”, “3 weeks”, “4 weeks”, “1 month”, “1 year”.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import MonitorTimeSeries

# catalog, schema, table_name, and user_email are assumed to be defined earlier.
w = WorkspaceClient()
w.quality_monitors.create(
  table_name=f"{catalog}.{schema}.{table_name}",
  assets_dir=f"/Workspace/Users/{user_email}/databricks_lakehouse_monitoring/{catalog}.{schema}.{table_name}",
  output_schema_name=f"{catalog}.{schema}",
  time_series=MonitorTimeSeries(timestamp_col="ts", granularities=["30 minutes"])
)

InferenceLog profile

An InferenceLog profile is similar to a TimeSeries profile but also includes model quality metrics. For an InferenceLog profile, the following parameters are required:

  • problem_type: MonitorInferenceLogProblemType.PROBLEM_TYPE_CLASSIFICATION or MonitorInferenceLogProblemType.PROBLEM_TYPE_REGRESSION.

  • prediction_col: Column containing the model’s predicted values.

  • timestamp_col: Column containing the timestamp of the inference request.

  • model_id_col: Column containing the ID of the model used for prediction.

  • granularities: Determines how to partition the data in windows across time. Possible values: “5 minutes”, “30 minutes”, “1 hour”, “1 day”, “1 week”, “2 weeks”, “3 weeks”, “4 weeks”, “1 month”, “1 year”.

There is also an optional parameter:

  • label_col: Column containing the ground truth for model predictions.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import MonitorInferenceLog, MonitorInferenceLogProblemType

w = WorkspaceClient()
w.quality_monitors.create(
  table_name=f"{catalog}.{schema}.{table_name}",
  assets_dir=f"/Workspace/Users/{user_email}/databricks_lakehouse_monitoring/{catalog}.{schema}.{table_name}",
  output_schema_name=f"{catalog}.{schema}",
  inference_log=MonitorInferenceLog(
        problem_type=MonitorInferenceLogProblemType.PROBLEM_TYPE_CLASSIFICATION,
        prediction_col="preds",
        timestamp_col="ts",
        granularities=["30 minutes", "1 day"],
        model_id_col="model_ver",
        label_col="label", # optional
  )
)

For InferenceLog profiles, slices are automatically created based on the distinct values of model_id_col.

Snapshot profile

In contrast to TimeSeries, a Snapshot profile monitors how the full contents of the table change over time. Metrics are calculated over all of the data in the table and reflect the state of the table at the time of each refresh.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import MonitorSnapshot

w = WorkspaceClient()
w.quality_monitors.create(
  table_name=f"{catalog}.{schema}.{table_name}",
  assets_dir=f"/Workspace/Users/{user_email}/databricks_lakehouse_monitoring/{catalog}.{schema}.{table_name}",
  output_schema_name=f"{catalog}.{schema}",
  snapshot=MonitorSnapshot()
)

Refresh and view monitor results

To refresh metrics tables, use run_refresh. For example:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
w.quality_monitors.run_refresh(
    table_name=f"{catalog}.{schema}.{table_name}"
)

When you call run_refresh from a notebook, the monitor metric tables are created or updated. This calculation runs on serverless compute, not on the cluster that the notebook is attached to. You can continue to run commands in the notebook while the monitor statistics are updated.
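
If you need to block until a refresh completes, you can poll its state with get_refresh. The following is a minimal sketch; it assumes the refresh state is exposed through the MonitorRefreshInfoState enum (with values such as PENDING, RUNNING, and SUCCESS), as in recent SDK versions:

import time

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import MonitorRefreshInfoState

w = WorkspaceClient()
run_info = w.quality_monitors.run_refresh(table_name=f"{catalog}.{schema}.{table_name}")

# Poll until the refresh leaves the PENDING/RUNNING states.
while True:
    refresh = w.quality_monitors.get_refresh(
        table_name=f"{catalog}.{schema}.{table_name}",
        refresh_id=run_info.refresh_id,
    )
    if refresh.state not in (MonitorRefreshInfoState.PENDING, MonitorRefreshInfoState.RUNNING):
        break
    time.sleep(30)  # wait 30 seconds between polls

print(f"Refresh finished with state: {refresh.state}")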

For information about the statistics that are stored in metric tables, see Monitor metric tables. Metric tables are Unity Catalog tables. You can query them in notebooks or in the SQL query explorer, and view them in Catalog Explorer.
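
For example, assuming the default naming convention in which the monitor writes a profile metrics table with the _profile_metrics suffix to the output schema, you could query it from a notebook:

# Query the monitor's profile metrics table.
# The _profile_metrics suffix is the default naming convention; adjust if yours differs.
profile_df = spark.sql(
    f"SELECT * FROM {catalog}.{schema}.{table_name}_profile_metrics"
)
display(profile_df)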

To display the history of all refreshes associated with a monitor, use list_refreshes.

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
w.quality_monitors.list_refreshes(
    table_name=f"{catalog}.{schema}.{table_name}"
)

To get the status of a specific refresh that is queued, running, or finished, use get_refresh.

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
run_info = w.quality_monitors.run_refresh(table_name=f"{catalog}.{schema}.{table_name}")

w.quality_monitors.get_refresh(
    table_name=f"{catalog}.{schema}.{table_name}",
    refresh_id=run_info.refresh_id
)

To cancel a refresh that is queued or running, use cancel_refresh.

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
run_info = w.quality_monitors.run_refresh(table_name=f"{catalog}.{schema}.{table_name}")

w.quality_monitors.cancel_refresh(
    table_name=f"{catalog}.{schema}.{table_name}",
    refresh_id=run_info.refresh_id
)

View monitor settings

You can review a monitor’s settings using the get API:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
w.quality_monitors.get(f"{catalog}.{schema}.{table_name}")

Schedule

To set up a monitor to run on a schedule, use the schedule parameter of create:

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import MonitorTimeSeries, MonitorCronSchedule

w = WorkspaceClient()
w.quality_monitors.create(
  table_name=f"{catalog}.{schema}.{table_name}",
  assets_dir=f"/Workspace/Users/{user_email}/databricks_lakehouse_monitoring/{catalog}.{schema}.{table_name}",
  output_schema_name=f"{catalog}.{schema}",
  time_series=MonitorTimeSeries(timestamp_col="ts", granularities=["30 minutes"]),
  schedule=MonitorCronSchedule(
        quartz_cron_expression="0 0 12 * * ?", # schedules a refresh every day at 12 noon
        timezone_id="PST",
    )
)

See cron expressions for more information.

Notifications

To set up notifications for a monitor, use the notifications parameter of create:

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import MonitorTimeSeries, MonitorNotifications, MonitorDestination

w = WorkspaceClient()
w.quality_monitors.create(
  table_name=f"{catalog}.{schema}.{table_name}",
  assets_dir=f"/Workspace/Users/{user_email}/databricks_lakehouse_monitoring/{catalog}.{schema}.{table_name}",
  output_schema_name=f"{catalog}.{schema}",
  time_series=MonitorTimeSeries(timestamp_col="ts", granularities=["30 minutes"]),
  notifications=MonitorNotifications(
        # Notify the given email when a monitoring refresh fails or times out.
        on_failure=MonitorDestination(
            email_addresses=["your_email@domain.com"]
        )
    )
)

A maximum of 5 email addresses is supported per event type (for example, “on_failure”).

Control access to metric tables

The metric tables and dashboard created by a monitor are owned by the user who created the monitor. You can use Unity Catalog privileges to control access to metric tables. To share dashboards within a workspace, use the Share button at the upper-right of the dashboard.
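
For example, assuming the default _profile_metrics table name suffix, you could grant read access on a metric table with standard Unity Catalog SQL (the group name is a placeholder):

# Grant read access on a metric table to a group (placeholder group name).
spark.sql(
    f"GRANT SELECT ON TABLE {catalog}.{schema}.{table_name}_profile_metrics "
    "TO `data-consumers`"
)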

Delete a monitor

To delete a monitor:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
w.quality_monitors.delete(table_name=f"{catalog}.{schema}.{table_name}")

This command does not delete the profile tables or the dashboard created by the monitor. You must delete those assets in a separate step, or you can save them in a different location.
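
If you no longer need the metric tables, you can drop them explicitly. This sketch assumes the default _profile_metrics and _drift_metrics table name suffixes:

# Drop the metric tables that remain after the monitor is deleted.
spark.sql(f"DROP TABLE IF EXISTS {catalog}.{schema}.{table_name}_profile_metrics")
spark.sql(f"DROP TABLE IF EXISTS {catalog}.{schema}.{table_name}_drift_metrics")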

Example notebooks

The following example notebooks illustrate how to create a monitor, refresh the monitor, and examine the metric tables it creates.

Notebook example: Time series profile

This notebook illustrates how to create a TimeSeries type monitor.

TimeSeries Lakehouse Monitor example notebook

Notebook example: Inference profile (regression)

This notebook illustrates how to create an InferenceLog type monitor for a regression problem.

Inference Lakehouse Monitor regression example notebook

Notebook example: Inference profile (classification)

This notebook illustrates how to create an InferenceLog type monitor for a classification problem.

Inference Lakehouse Monitor classification example notebook

Notebook example: Snapshot profile

This notebook illustrates how to create a Snapshot type monitor.

Snapshot Lakehouse Monitor example notebook