Deploy custom models

This article describes support for deploying a custom model using Mosaic AI Model Serving. It also provides details about supported model logging options and compute types, how to package model dependencies for serving, and endpoint creation and scaling.

What are custom models?

Model Serving can deploy any Python model as a production-grade API. Databricks refers to such models as custom models. These ML models can be trained using standard ML libraries like scikit-learn, XGBoost, PyTorch, and HuggingFace transformers and can include any Python code.

To deploy a custom model,

  1. Log the model or code in the MLflow format, using either native MLflow built-in flavors or pyfunc.

  2. After the model is logged, register it in Unity Catalog (recommended) or the workspace model registry. A sketch of these first two steps follows this list.

  3. From here, you can create a model serving endpoint to deploy and query your model.

    1. See Create custom model serving endpoints.

    2. See Query serving endpoints for custom models.
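
The following is a minimal sketch of steps 1 and 2, assuming a scikit-learn model and a placeholder Unity Catalog location (the three-level model name is illustrative, not prescribed):

import mlflow
from mlflow.models.signature import infer_signature
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

# Step 1: train the model and log it with a built-in MLflow flavor.
iris = load_iris()
model = RandomForestClassifier().fit(iris.data, iris.target)
signature = infer_signature(iris.data, model.predict(iris.data))

# Step 2: register the logged model in Unity Catalog.
# The three-level name below is a placeholder; use your own catalog and schema.
mlflow.set_registry_uri("databricks-uc")
with mlflow.start_run():
    mlflow.sklearn.log_model(
        model,
        "model",
        signature=signature,
        registered_model_name="main.default.my_custom_model",
    )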

For a complete tutorial on how to serve custom models on Databricks, see Model serving tutorial.

Databricks also supports serving generative AI models for generative AI applications. See Foundation Model APIs and External models for supported models and compute offerings.

Important

If you rely on Anaconda, review the terms of service notice for additional information.

Log ML models

There are different methods to log your ML model for model serving. The following list summarizes the supported methods and examples.

  • Autologging. This method is automatically enabled when you use Databricks Runtime for ML.

    import mlflow
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.datasets import load_iris
    
    iris = load_iris()
    model = RandomForestRegressor()
    # With autologging enabled, this fit call records the parameters, metrics, and model automatically.
    model.fit(iris.data, iris.target)
    
  • Log using MLflow’s built-in flavors. You can use this method if you want to manually log the model for more detailed control.

    import mlflow
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.datasets import load_iris
    
    iris = load_iris()
    model = RandomForestClassifier()
    model.fit(iris.data, iris.target)
    
    with mlflow.start_run():
        mlflow.sklearn.log_model(model, "random_forest_classifier")
    
  • Custom logging with pyfunc. You can use this method to deploy arbitrary Python code as a model or to deploy additional code alongside your model.

      import mlflow
      import mlflow.pyfunc
    
      class Model(mlflow.pyfunc.PythonModel):
          def predict(self, context, model_input):
              return model_input * 2
    
      with mlflow.start_run():
          mlflow.pyfunc.log_model("custom_model", python_model=Model())
    
  • Download from HuggingFace. You can download a model directly from Hugging Face and log that model for serving. For examples, see Notebook examples.
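
    A minimal sketch, using MLflow's transformers flavor with an illustrative model and task:

      import mlflow
      from transformers import pipeline

      # Illustrative pipeline downloaded from Hugging Face; swap in the model you need.
      text_classifier = pipeline(
          task="text-classification",
          model="distilbert-base-uncased-finetuned-sst-2-english",
      )

      with mlflow.start_run():
          mlflow.transformers.log_model(
              transformers_model=text_classifier,
              artifact_path="text_classifier",
              input_example="Model Serving is great!",
          )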

Signature and input examples

Adding a signature and input example when you log a model is recommended. Signatures are required for logging models to Unity Catalog.

The following is a signature example:

from mlflow.models.signature import infer_signature

signature = infer_signature(training_data, model.predict(training_data))
mlflow.sklearn.log_model(model, "model", signature=signature)

The following is an input example:


input_example = {"feature1": 0.5, "feature2": 3}
mlflow.sklearn.log_model(model, "model", input_example=input_example)
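
You can pass both in a single log_model call. The following is a minimal sketch, assuming training_data is the data used to train the model, as in the signature example above:

import mlflow
from mlflow.models.signature import infer_signature

# Infer the signature from the training data and attach a small, representative input example.
signature = infer_signature(training_data, model.predict(training_data))
input_example = {"feature1": 0.5, "feature2": 3}

mlflow.sklearn.log_model(
    model,
    "model",
    signature=signature,
    input_example=input_example,
)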

Compute type

Mosaic AI Model Serving provides a variety of CPU and GPU options for deploying your model. When deploying with a GPU, it is essential to make sure that your code is set up so that predictions are run on the GPU, using the methods provided by your framework. MLflow does this automatically for models logged with the PyTorch or Transformers flavors.

| Workload type | GPU instance | Memory |
| --- | --- | --- |
| CPU | | 4GB per concurrency |
| GPU_SMALL | 1xT4 | 16GB |
| GPU_MEDIUM | 1xA10G | 24GB |
| MULTIGPU_MEDIUM | 4xA10G | 96GB |
| GPU_MEDIUM_8 | 8xA10G | 192GB |
| GPU_LARGE_8 | 8xA100-80GB | 320GB |
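
When serving on one of the GPU workload types above with a flavor that MLflow does not handle automatically, a minimal pyfunc sketch along the following lines keeps inference on the GPU when one is available. The TorchWrapper class, the "model" artifact key, and the tabular input handling are assumptions for illustration:

import mlflow
import torch

class TorchWrapper(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        # Use the GPU when the serving container provides one, otherwise fall back to CPU.
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        # Assumes the full model object was saved with torch.save and logged as the "model" artifact.
        self.model = torch.load(context.artifacts["model"], map_location=self.device)
        self.model.eval()

    def predict(self, context, model_input):
        # model_input arrives as a pandas DataFrame for tabular signatures;
        # move it to the same device as the model before running inference.
        batch = torch.tensor(model_input.values, dtype=torch.float32).to(self.device)
        with torch.no_grad():
            return self.model(batch).cpu().numpy()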

Deployment container and dependencies

During deployment, a production-grade container is built and deployed as the endpoint. This container includes libraries automatically captured or specified in the MLflow model.

The model serving container doesn’t contain pre-installed dependencies, which might lead to dependency errors if not all required dependencies are included in the model. If you run into model deployment issues, Databricks recommends that you test the model locally.
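
A minimal local smoke test might look like the following sketch, where the model URI and sample input are placeholders:

import mlflow
import pandas as pd

# Placeholder URI; substitute the run or registered model you logged.
model_uri = "runs:/<run_id>/model"

# Load the model through the generic pyfunc flavor, the same interface the serving
# container uses, and run a prediction on a small sample of input data.
loaded_model = mlflow.pyfunc.load_model(model_uri)
sample_input = pd.DataFrame({"feature1": [0.5], "feature2": [3]})
print(loaded_model.predict(sample_input))

For a fuller simulation of the serving environment, see Dependency validation later in this article.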

Package and code dependencies

Custom or private libraries can be added to your deployment. See Use custom Python libraries with Model Serving.

For MLflow native flavor models, the necessary package dependencies are automatically captured.

For custom pyfunc models, dependencies can be explicitly added.

You can add package dependencies using:

  • The pip_requirements parameter:

    mlflow.sklearn.log_model(model, "sklearn-model", pip_requirements = ["scikit-learn", "numpy"])
    
  • The conda_env parameter:

    
    conda_env = {
        'channels': ['defaults'],
        'dependencies': [
            'python=3.7.0',
            'scikit-learn=0.21.3'
        ],
        'name': 'mlflow-env'
    }
    
    mlflow.sklearn.log_model(model, "sklearn-model", conda_env = conda_env)
    
  • The extra_pip_requirements parameter, to include additional requirements beyond what is automatically captured:

    mlflow.sklearn.log_model(model, "sklearn-model", extra_pip_requirements = ["sklearn_req"])
    

If you have code dependencies, these can be specified using code_path.

  mlflow.sklearn.log_model(model, "sklearn-model", code_path=["path/to/helper_functions.py"])
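
For a custom pyfunc model, the same parameters apply. The following sketch bundles a hypothetical helper module alongside the model and pins its package dependencies explicitly:

  import mlflow
  import mlflow.pyfunc

  class ModelWithHelpers(mlflow.pyfunc.PythonModel):
      def predict(self, context, model_input):
          # helper_functions is the hypothetical module shipped through code_path below;
          # it is placed on the Python path when the model is loaded for serving.
          import helper_functions
          return helper_functions.preprocess(model_input)

  with mlflow.start_run():
      mlflow.pyfunc.log_model(
          "custom_model",
          python_model=ModelWithHelpers(),
          code_path=["path/to/helper_functions.py"],
          pip_requirements=["pandas", "numpy"],
      )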

Dependency validation

Before deploying a custom MLflow model, it is a good idea to verify that the model can be served. MLflow provides an API that validates the model artifact by simulating the deployment environment, and it also lets you test modified dependencies.

There are two pre-deployment validation APIs: the MLflow Python API and the MLflow CLI.

You can specify the following using either of these APIs.

  • The model_uri of the model that is deployed to model serving.

  • One of the following:

    • The input_data in the expected format for the mlflow.pyfunc.PyFuncModel.predict() call of the model.

    • The input_path that defines a file containing input data that will be loaded and used for the call to predict.

  • The content_type in csv or json format.

  • An optional output_path to write the predictions to a file. If you omit this parameter, the predictions are printed to stdout.

  • An environment manager, env_manager, that is used to build the environment for serving:

    • The default is virtualenv. Recommended for serving validation.

    • local is available, but potentially error prone for serving validation. Generally used only for rapid debugging.

  • Whether to install the version of MLflow that is in your current environment into the virtual environment, using install_mlflow. This setting defaults to False.

  • Whether to update and test different versions of package dependencies for troubleshooting or debugging. You can specify this as a list of string dependency overrides or additions using the override argument, pip_requirements_override.

For example:

import mlflow

run_id = "..."
model_uri = f"runs:/{run_id}/model"

mlflow.models.predict(
  model_uri=model_uri,
  input_data={"col1": 34.2, "col2": 11.2, "col3": "green"},
  content_type="json",
  env_manager="virtualenv",
  install_mlflow=False,
  pip_requirements_override=["pillow==10.3.0", "scipy==1.13.0"],
)

Dependency updates

If there are any issues with the dependencies specified with a logged model, you can update the requirements by using the MLflow CLI or mlflow.models.model.update_model_requirements() in the MLflow Python API without having to log another model.

The following example shows how to update the pip requirements.txt of a logged model in place.

You can update existing requirements to specific package versions or add missing requirements to the pip requirements.txt file. This file is part of the MLflow model artifact at the specified model_uri location.

from mlflow.models.model import update_model_requirements

# model_uri points at the logged model, for example "runs:/<run_id>/model"
update_model_requirements(
  model_uri=model_uri,
  operation="add",
  requirement_list=["pillow==10.2.0", "scipy==1.12.0"],
)

Expectations and limitations

The following sections describe known expectations and limitations for serving custom models using Model Serving.

Endpoint creation and update expectations

Note

The information in this section does not apply to endpoints that serve foundation models or external models.

Deploying a newly registered model version involves packaging the model and its model environment and provisioning the model endpoint itself. This process can take approximately 10 minutes.

Databricks performs a zero-downtime update of endpoints by keeping the existing endpoint configuration up until the new one becomes ready. Doing so reduces the risk of interruption for endpoints that are in use.

If model computation takes longer than 120 seconds, requests will time out. If you believe your model computation will take longer than 120 seconds, reach out to your Databricks account team.

Databricks performs occasional zero-downtime system updates and maintenance on existing Model Serving endpoints. During maintenance, Databricks reloads models and marks an endpoint as Failed if a model fails to reload. Make sure your customized models are robust and are able to reload at any time.

Endpoint scaling expectations

Note

The information in this section does not apply to endpoints that serve foundation models or external models.

Serving endpoints automatically scale based on traffic and the capacity of provisioned concurrency units.

  • Provisioned concurrency: The maximum number of parallel requests the system can handle. Estimate the required concurrency using the formula: provisioned concurrency = queries per second (QPS) * model execution time (s). For example, an endpoint that receives 20 QPS with a model execution time of 0.4 seconds needs a provisioned concurrency of 20 * 0.4 = 8.

  • Scaling behavior: Endpoints scale up almost immediately with increased traffic and scale down every five minutes to match reduced traffic.

  • Scale to zero: Endpoints can scale down to zero after 30 minutes of inactivity. The first request after scaling to zero experiences a “cold start,” leading to higher latency. For latency-sensitive applications, consider strategies to manage this feature effectively.

GPU workload limitations

The following are limitations for serving endpoints with GPU workloads:

  • Container image creation for GPU serving takes longer than image creation for CPU serving due to model size and increased installation requirements for models served on GPU.

  • When deploying very large models, the deployment process might time out if the container build and model deployment take longer than 60 minutes. If this occurs, retrying the process should deploy the model successfully.

  • Autoscaling for GPU serving takes longer than for CPU serving.

  • GPU capacity is not guaranteed when scaling to zero. GPU endpoints might experience extra-high latency for the first request after scaling to zero.

  • This functionality is not available in ap-southeast-1.

Anaconda licensing update

The following notice is for customers relying on Anaconda.

Important

Anaconda Inc. updated their terms of service for anaconda.org channels. Based on the new terms of service you may require a commercial license if you rely on Anaconda’s packaging and distribution. See Anaconda Commercial Edition FAQ for more information. Your use of any Anaconda channels is governed by their terms of service.

MLflow models logged before v1.18 (Databricks Runtime 8.3 ML or earlier) were by default logged with the conda defaults channel (https://repo.anaconda.com/pkgs/) as a dependency. Because of this license change, Databricks has stopped the use of the defaults channel for models logged using MLflow v1.18 and above. The default channel logged is now conda-forge, which points at the community managed https://conda-forge.org/.

If you logged a model before MLflow v1.18 without excluding the defaults channel from the conda environment for the model, that model may have a dependency on the defaults channel that you may not have intended. To manually confirm whether a model has this dependency, you can examine the channel value in the conda.yaml file that is packaged with the logged model. For example, a model’s conda.yaml with a defaults channel dependency may look like this:

channels:
- defaults
dependencies:
- python=3.8.8
- pip
- pip:
    - mlflow
    - scikit-learn==0.23.2
    - cloudpickle==1.6.0
name: mlflow-env

Because Databricks cannot determine whether your use of the Anaconda repository to interact with your models is permitted under your relationship with Anaconda, Databricks is not forcing its customers to make any changes. If your use of the Anaconda.com repo through the use of Databricks is permitted under Anaconda’s terms, you do not need to take any action.

If you would like to change the channel used in a model’s environment, you can re-register the model to the model registry with a new conda.yaml. You can do this by specifying the channel in the conda_env parameter of log_model().
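
For example, a sketch that re-logs a model with a conda environment pointing at conda-forge instead of defaults. The package pins mirror the conda.yaml example above and the registered model name is a placeholder:

import mlflow

conda_env = {
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.8.8",
        "pip",
        {"pip": ["mlflow", "scikit-learn==0.23.2", "cloudpickle==1.6.0"]},
    ],
    "name": "mlflow-env",
}

with mlflow.start_run():
    mlflow.sklearn.log_model(
        model,
        "model",
        conda_env=conda_env,
        # Placeholder name; use your own catalog, schema, and model name.
        registered_model_name="main.default.my_custom_model",
    )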

For more information on the log_model() API, see the MLflow documentation for the model flavor you are working with, for example, log_model for scikit-learn.

For more information on conda.yaml files, see the MLflow documentation.