Create custom model serving endpoints

This article describes how to create model serving endpoints that serve custom models using Databricks Model Serving.

Model Serving provides the following options for serving endpoint creation:

  • The Serving UI
  • REST API
  • MLflow Deployments SDK

For creating endpoints that serve generative AI models, see Create foundation model serving endpoints.

Requirements

  • Your workspace must be in a supported region.
  • If you use custom libraries or libraries from a private mirror server with your model, see Use custom Python libraries with Model Serving before you create the model endpoint.
  • To create endpoints using the MLflow Deployments SDK, you must install MLflow (for example, pip install mlflow), which includes the MLflow Deployments client. After installing, create a client that targets your workspace:
Python
import mlflow.deployments

# Create a Deployments client that targets the current Databricks workspace
client = mlflow.deployments.get_deploy_client("databricks")

Access control

To understand access control options for managing model serving endpoints, see Manage permissions on a model serving endpoint.

The identity under which a model serving endpoint runs is tied to the original creator of the endpoint. After endpoint creation, the associated identity cannot be changed or updated on the endpoint. This identity and its associated permissions are used to access Unity Catalog resources for deployments. If the identity does not have the appropriate permissions to access the needed Unity Catalog resources, you must delete the endpoint and recreate it under a user or service principal that can access those Unity Catalog resources.

You can also add environment variables to store credentials for model serving. See Configure access to resources from model serving endpoints.
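
As a hedged sketch, and assuming illustrative endpoint, model, and secret names, environment variables are set per served entity in the endpoint configuration and can reference Databricks secrets with the {{secrets/scope/key}} syntax:

Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

# Sketch only: the endpoint name, model path, and secret scope/key are placeholders.
client.create_endpoint(
    name="my-custom-endpoint",
    config={
        "served_entities": [
            {
                "entity_name": "main.default.my_model",  # Unity Catalog model path
                "entity_version": "1",
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
                # Credentials are injected as environment variables at serving time
                "environment_vars": {
                    "MY_API_KEY": "{{secrets/my_scope/my_key}}"
                },
            }
        ]
    },
)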

Create an endpoint

You can create an endpoint for model serving with the Serving UI.

  1. Click Serving in the sidebar to display the Serving UI.

  2. Click Create serving endpoint.

    Model serving pane in Databricks UI

For models registered in Unity Catalog:

  1. In the Name field provide a name for your endpoint.

    • Endpoint names cannot use the databricks- prefix. This prefix is reserved for Databricks preconfigured endpoints.
  2. In the Served entities section, click into the Entity field to open the Select served entity form.

    1. Select My models - Unity Catalog.
      • Not all models are custom models; models can also be foundation models. The form updates dynamically based on your selection.
    2. Select which model and model version you want to serve.
    3. Select the percentage of traffic to route to your served model.
    4. Select the CPU or GPU compute size to use. Only GPU_MEDIUM is supported for GPU workloads.
    5. Under Compute Scale-out, select the size of the compute scale-out that corresponds to the number of requests this served model can process at the same time. This number should be roughly equal to QPS × model execution time. For example, a model that takes 0.1 seconds per request and must sustain 80 QPS needs a concurrency of about 80 × 0.1 = 8 requests, which fits Medium. For customer-defined compute settings, see model serving limits. Available sizes are Small for 0-4 requests, Medium for 8-16 requests, and Large for 16-64 requests.
    6. Specify if the endpoint should scale to zero when not in use. Scale to zero is not recommended for production endpoints, as capacity is not guaranteed when scaled to zero. When an endpoint scales to zero, there is additional latency, also referred to as a cold start, when the endpoint scales back up to serve requests.
    7. (Optional) To serve more entities from your endpoint, click Add served entity and repeat the configuration steps above. You can serve multiple models or model versions from a single endpoint and control the traffic split between them. See Serve multiple models for more information.
  3. In the Route optimization section, you can enable route optimization for your endpoint. Route optimization is recommended for endpoints with high QPS and throughput requirements. See Route optimization on serving endpoints.

  4. In the AI Gateway section, you can select which governance features to enable on your endpoint. See Mosaic AI Gateway introduction.

  5. Click Create. The Serving endpoints page appears with Serving endpoint state shown as Not Ready.

    Create a model serving endpoint
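
The same endpoint configuration can be expressed through the MLflow Deployments SDK. The following is a minimal sketch, assuming an illustrative endpoint name and a Unity Catalog model at main.default.my_model; the served model name in the traffic route follows the {model-name}-{version} convention:

Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

client.create_endpoint(
    name="my-custom-endpoint",  # illustrative name; cannot use the databricks- prefix
    config={
        "served_entities": [
            {
                "entity_name": "main.default.my_model",  # catalog.schema.model
                "entity_version": "1",
                "workload_size": "Small",        # 0-4 concurrent requests
                "scale_to_zero_enabled": True,   # not recommended for production
            }
        ],
        "traffic_config": {
            "routes": [
                {"served_model_name": "my_model-1", "traffic_percentage": 100}
            ]
        },
    },
)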

GPU workload types

GPU deployment is compatible with the following package versions:

  • PyTorch 1.13.0 - 2.0.1
  • TensorFlow 2.5.0 - 2.13.0
  • MLflow 2.4.0 and above

The following examples show how to create GPU endpoints using different methods.

To configure your endpoint for GPU workloads with the Serving UI, select the desired GPU type from the Compute Type dropdown when creating your endpoint. Follow the same steps in Create an endpoint, but select a GPU workload type instead of CPU.

The following table summarizes the supported GPU workload types.

GPU workload type    GPU instance    GPU memory
GPU_MEDIUM           L4              24GB
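
If you create the endpoint through the MLflow Deployments SDK instead of the UI, the GPU workload type is set with the workload_type field of the served entity. A minimal sketch, assuming illustrative names:

Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

client.create_endpoint(
    name="my-gpu-endpoint",  # illustrative name
    config={
        "served_entities": [
            {
                "entity_name": "main.default.my_model",
                "entity_version": "1",
                "workload_type": "GPU_MEDIUM",  # GPU workload type from the table above
                "workload_size": "Small",
                "scale_to_zero_enabled": False,
            }
        ]
    },
)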

Modify a custom model endpoint

After you enable a custom model endpoint, you can update the compute configuration as needed. This is particularly helpful if you need additional resources for your model. Workload size and compute configuration play a key role in what resources are allocated for serving your model.

note

Updates to the endpoint configuration can fail. If an update fails, the existing active configuration remains in effect, as if the update never happened.

Verify that the update was applied successfully by reviewing the status of your endpoint.

Until the new configuration is ready, the old configuration keeps serving prediction traffic. While an update is in progress, another update cannot be made. However, you can cancel an in-progress update from the Serving UI.

After you enable a model endpoint, select Edit endpoint to modify the compute configuration of your endpoint.

Edit endpoint button

You can change most aspects of the endpoint configuration, except for the endpoint name and certain immutable properties.

You can cancel an in-progress configuration update by selecting Cancel update on the endpoint's details page.
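
Configuration updates can also be made with the MLflow Deployments SDK. A minimal sketch, assuming the illustrative endpoint from earlier and a newer model version:

Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

# Move the endpoint to a larger workload size and a newer model version.
client.update_endpoint(
    endpoint="my-custom-endpoint",
    config={
        "served_entities": [
            {
                "entity_name": "main.default.my_model",
                "entity_version": "2",
                "workload_size": "Medium",  # 8-16 concurrent requests
                "scale_to_zero_enabled": False,
            }
        ]
    },
)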

Score a model endpoint

To score your model, send requests to the model serving endpoint.
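
For example, with the MLflow Deployments SDK, a minimal sketch assuming the illustrative endpoint name from earlier and a model that accepts tabular input (the input schema depends on your model's signature):

Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

# "dataframe_records" is one of the accepted input formats for tabular models;
# the feature names here are placeholders for your model's actual inputs.
response = client.predict(
    endpoint="my-custom-endpoint",
    inputs={"dataframe_records": [{"feature_a": 1.0, "feature_b": 2.0}]},
)
print(response)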

Additional resources

Notebook examples

The following notebooks include different Databricks registered models that you can use to get up and running with model serving endpoints. For additional examples, see Tutorial: Deploy and query a custom model.

The model examples can be imported into the workspace by following the directions in Import a notebook. After you choose and create a model from one of the examples, register it in Unity Catalog, and then follow the UI workflow steps for model serving.

Train and register a scikit-learn model for model serving notebook

Train and register a HuggingFace model for model serving notebook