Manage network policies for serverless egress control

Preview

This feature is in Public Preview.

warning

To avoid breaking the connection between BDC and SAP Databricks, refer to the SAP documentation before configuring serverless egress control.

This document explains how to configure and manage network policies to control outbound network connections from your serverless workloads in SAP Databricks.

Requirements

  • Permissions for managing network policies are restricted to account admins.

Accessing network policies

To create, view, and update network policies in your account:

  1. From the account console, click Cloud resources.
  2. Click the Network tab.

Create a new network policy

  1. Click Create new network policy.

  2. Choose a network access mode:

    • Full access: Unrestricted outbound internet access.
    • Restricted access: Outbound access is limited to specified destinations. For more information, see Network policy overview.

    Network policy details.

Configure network policies

The following steps outline optional settings for restricted access mode.

Set egress rules

  1. To grant your serverless compute access to additional domains, click Add destination above the Allowed domains list.

    Add internet destination.

    The FQDN filter allows access to all domains that share the same IP address. Model serving provisioned throughput endpoints prevent internet access when network access is set to restricted; however, granular control with FQDN filtering is not supported for them.

  2. To allow your workspace to access additional cloud storage locations, click Add destination above the Allowed storage accounts list.

When setting egress rules, note:

  • When your metastore and the cloud storage bucket of your UC external location are located in different regions, you must explicitly add the bucket to your egress allowlist for access to succeed.
  • The maximum number of supported destinations is 2500.
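A restricted-access policy can be thought of as an allowlist of domains and storage destinations, capped at 2500 entries in total. The sketch below is illustrative only: policies are configured in the account console for this preview, so the dictionary shape and field names here are hypothetical, not an actual Databricks schema.

```python
# Hypothetical representation of a restricted-access network policy.
# Field names are illustrative; configuration happens in the account console.
MAX_DESTINATIONS = 2500  # documented limit on allowed destinations

policy = {
    "network_access_mode": "RESTRICTED",
    "allowed_domains": ["pypi.org", "files.pythonhosted.org"],
    "allowed_storage_accounts": ["my-external-location-bucket"],  # example name
}

def validate_policy(p):
    """Check the combined destination count against the 2500-destination limit."""
    total = len(p["allowed_domains"]) + len(p["allowed_storage_accounts"])
    if total > MAX_DESTINATIONS:
        raise ValueError(f"{total} destinations exceeds the limit of {MAX_DESTINATIONS}")
    return True
```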

Policy enforcement

Dry-run mode allows you to test your policy configuration and monitor outbound connections without disrupting access to resources. When dry-run mode is enabled, requests that violate the policy are logged but not blocked. You can select from the following options:

  1. Databricks SQL: Databricks SQL warehouses operate in dry-run mode.

  2. AI model serving: Model serving endpoints operate in dry-run mode.

  3. All products: All SAP Databricks services operate in dry-run mode, overriding all other selections.

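The practical difference between dry-run and enforced behavior can be summarized in a few lines. This is a sketch under assumptions: the log field names below are hypothetical and are used only to illustrate that dry-run violations are logged without being blocked.

```python
# Hypothetical denial-log entries -- the field names here are illustrative,
# not a documented log schema.
def is_blocking(entry):
    """In dry-run mode a violation is logged but the request still succeeds;
    in enforced mode the request is blocked."""
    return entry.get("mode") == "ENFORCED"

logs = [
    {"destination": "example.com", "mode": "DRY_RUN"},   # logged, not blocked
    {"destination": "example.org", "mode": "ENFORCED"},  # logged and blocked
]
blocked = [e["destination"] for e in logs if is_blocking(e)]
```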

Update the default policy

Each SAP Databricks account includes a default policy. The default policy is associated with all workspaces that have no explicit network policy assignment, including newly created workspaces. You can modify this policy, but it cannot be deleted. Default policies are applied only to workspaces with at least eligible-tier.

Associate a network policy to workspaces

If you have updated your default policy with additional configurations, they are automatically applied to workspaces that do not have an existing network policy. Your workspace must be in eligible-tier.

To associate your workspace with a different policy, do the following:

  1. Select a workspace.
  2. In Network Policy, click Update network policy.
  3. Select the desired network policy from the list.

Update network policy.

Apply network policy changes

Most network configuration updates automatically propagate to your serverless compute within ten minutes. This includes:

  • Adding a new Unity Catalog external location or connection.
  • Attaching your workspace to a different metastore.
  • Changing the allowed storage or internet destinations.
note

You must restart your compute if you modify the internet access or dry-run mode setting.

Restart or redeploy serverless workloads

You only need to restart or redeploy serverless workloads when switching the internet access mode or updating the dry-run mode setting.

To determine the appropriate restart procedure, refer to the following list by product:

  • Databricks ML Serving: Redeploy your ML serving endpoint.
  • DLT: Stop and then restart your running DLT pipeline.
  • Serverless SQL warehouse: Stop and restart the SQL warehouse.
  • Workflows: Network policy changes are automatically applied when a new job run is triggered or an existing job run is restarted.
  • Notebooks:
    • If your notebook does not interact with Spark, you can terminate the serverless cluster and attach a new one to refresh the network configuration applied to your notebook.
    • If your notebook interacts with Spark, your serverless resource refreshes and automatically detects the change. Most changes will be refreshed in ten minutes, but switching internet access modes, updating dry-run mode, or changing between attached policies that have different enforcement types can take up to 24 hours. To expedite a refresh on these specific types of changes, turn off all associated notebooks and jobs.

Verify network policy enforcement

You can validate that your network policy is correctly enforced by attempting to access restricted resources from different serverless workloads.

  1. Run a test query in the SQL editor or a notebook that attempts to access a resource controlled by your network policy.
  2. Verify the results:
    • Trusted destination: The query should succeed.
    • Untrusted destination: The query should fail with a network access error.
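The verification steps above can be sketched as a small connectivity probe to run from a notebook or job. This is a minimal example using the standard library; the target URLs and the exact error text you see for a blocked destination will vary.

```python
# Minimal egress probe for use in a notebook or job.
import urllib.request
import urllib.error

def check_egress(url, timeout=5):
    """Return 'allowed (...)' if the request succeeds, or 'blocked: <reason>'
    when the destination is unreachable (for example, denied by policy)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"allowed (HTTP {resp.status})"
    except (urllib.error.URLError, OSError) as e:
        return f"blocked: {e}"

# Trusted destination:   check_egress("https://...") -> "allowed (HTTP 200)"
# Untrusted destination: check_egress("https://...") -> "blocked: ..."
```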

Validate with model serving

To validate your network policy using model serving:

Before you begin

When a model serving endpoint is created, a container image is built to serve your model. Network policies are enforced during this build stage. When using model serving with network policies, consider the following:

  • Dependency access: Any external build dependencies required by your model's environment or Docker context, such as Python packages from PyPI or conda-forge, base container images, or files from external URLs, must be permitted by your network policy.
    • For example, if your model requires a specific version of scikit-learn that needs to be downloaded during the build, the network policy must allow access to the repository hosting the package.
  • Build failures: If your network policy blocks access to necessary dependencies, the model serving container build fails, which prevents the serving endpoint from deploying successfully.
  • Troubleshooting denials: Network access denials during the build phase are logged. These logs will feature a network_source_type field with the value ML Build. This information is crucial for identifying the specific blocked resources that must be added to your network policy to allow the build to complete successfully.
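As a triage aid, build-time denials can be filtered out of the logs by the network_source_type field described above. The entry shapes below are hypothetical examples; only the network_source_type field and its ML Build value come from this page.

```python
# Sketch for triaging build-time denials. Only the network_source_type field
# with value "ML Build" is documented; the other fields are illustrative.
def blocked_build_destinations(log_entries):
    """Collect destinations denied during the model serving image build."""
    return sorted({
        e["destination"]
        for e in log_entries
        if e.get("network_source_type") == "ML Build"
    })

logs = [
    {"network_source_type": "ML Build", "destination": "pypi.org"},
    {"network_source_type": "ML Build", "destination": "files.pythonhosted.org"},
    {"network_source_type": "SQL", "destination": "example.com"},
]
# blocked_build_destinations(logs) lists candidates to add to the allowlist.
```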

Validate runtime network access

The following steps demonstrate how to validate network policy for a deployed model at runtime, specifically for attempts to access external resources during inference. This assumes the model serving container has been built successfully, meaning any build-time dependencies were allowed in the network policy.

  1. Create a test model

    1. In a Python notebook, create a model that attempts to access a public internet resource at inference time, like downloading a file or making an API request.

    2. Run this notebook to generate a model in the test workspace. For example:

      Python
      import mlflow
      import mlflow.pyfunc
      import mlflow.sklearn
      import requests

      class DummyModel(mlflow.pyfunc.PythonModel):
          def load_context(self, context):
              # This method is called when the model is loaded by the serving environment.
              # No network access here in this example, but could be a place for it.
              pass

          def predict(self, _, model_input):
              # This method is called at inference time.
              first_row = model_input.iloc[0]
              try:
                  # Attempting network access during prediction
                  response = requests.get(first_row['host'])
              except requests.exceptions.RequestException as e:
                  # Return the error details as text
                  return f"Error: An error occurred - {e}"
              return [response.status_code]

      with mlflow.start_run(run_name='internet-access-model'):
          wrappedModel = DummyModel()

          # When this model is deployed to a serving endpoint,
          # the environment will be built. If this environment
          # itself (e.g., specified conda_env or python_env)
          # requires packages from the internet, the build-time SEG policy applies.
          mlflow.pyfunc.log_model(
              artifact_path="internet_access_ml_model",
              python_model=wrappedModel,
              registered_model_name="internet-http-access",
          )
  2. Create a serving endpoint

    1. In the workspace navigation, select Machine Learning.
    2. Click the Serving tab.
    3. Click Create Serving Endpoint.
    4. Configure the endpoint with the following settings:
      • Serving Endpoint Name: Provide a descriptive name.
      • Entity Details: Select Model registry model.
      • Model: Choose the model you created in the previous step (internet-http-access).
    5. Click Confirm. At this stage, the model serving container build process begins. Network policies for ML Build will be enforced. If the build fails due to blocked network access for dependencies, the endpoint will not become ready.
    6. Wait for the serving endpoint to reach the Ready state. If it fails to become ready, check the denial logs for entries with network_source_type: ML Build.
  3. Query the endpoint.

    1. Use the Query Endpoint option in the serving endpoint page to send a test request.

      JSON
      { "dataframe_records": [{ "host": "https://www.google.com" }] }
  4. Verify the result for run-time access:

    • Internet access enabled at runtime: The query should succeed and return a status code like 200.
    • Internet access restricted at runtime: The query should fail with a network access error, such as the error message from the try-except block in the model code, indicating a connection timeout or host resolution failure.
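The same test request can also be sent over the serving endpoint's REST API (POST /serving-endpoints/&lt;name&gt;/invocations with a bearer token). The host and token below are placeholders; this sketch only builds the request so you can inspect it before sending.

```python
# Build an invocation request for a serving endpoint. Host, token, and
# endpoint name are placeholders to fill in for your workspace.
import json
import urllib.request

def build_invocation_request(host, token, endpoint_name, payload):
    """Construct the HTTP request for a serving endpoint invocation."""
    url = f"https://{host}/serving-endpoints/{endpoint_name}/invocations"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

payload = {"dataframe_records": [{"host": "https://www.google.com"}]}
req = build_invocation_request("<workspace-host>", "<token>", "internet-http-access", payload)
# urllib.request.urlopen(req) would return the model's response; with internet
# access restricted at runtime, the model returns the error text instead.
```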

Update a network policy

You can update a network policy any time after it is created. To update a network policy:

  1. On the details page of the network policy in your accounts console, modify the policy:
    • Change the network access mode.
    • Enable or disable dry-run mode for specific services.
    • Add or remove FQDN or storage destinations.
  2. Click Update.
  3. Refer to Apply network policy changes to verify that the updates are applied to existing workloads.

Limitations

  • Configuration: This feature is only configurable through the account console. API support is not yet available.
  • Artifact upload size: When using MLflow's internal Databricks Filesystem with the dbfs:/databricks/mlflow-tracking/<experiment_id>/<run_id>/artifacts/<artifactPath> format, artifact uploads are limited to 5GB for log_artifact, log_artifacts, and log_model APIs.
  • Model serving: Egress control does not apply when building images for model serving.
  • Denial log delivery for short-lived garbage collection (GC) workloads: Denial logs from short-lived GC workloads lasting less than 120 seconds might not be delivered before the node terminates due to logging delays. Although access is still enforced, the corresponding log entry might be missing.
  • Network connectivity for Databricks SQL user-defined functions (UDFs): To enable network access in Databricks SQL, contact your Databricks account team.
  • DLT event hook logging: DLT event hooks that target another workspace are not logged. This applies to event hooks configured for both cross-region workspaces and workspaces in the same region.