Mosaic AI Gateway

Preview

This feature is in Public Preview.

This article describes Mosaic AI Gateway, the Databricks solution for governing and monitoring access to supported generative AI models and their associated model serving endpoints.

What is Mosaic AI Gateway?

Mosaic AI Gateway is designed to streamline the usage and management of generative AI models within an organization. It is a centralized service that brings governance, monitoring, and production readiness to model serving endpoints. It also allows you to run, secure, and govern AI traffic to democratize and accelerate AI adoption for your organization.

All data is logged into Delta tables in Unity Catalog.
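
These tables can be queried like any other Delta table in Unity Catalog. The sketch below, intended to run in a Databricks notebook, uses a hypothetical three-level table name; the actual name depends on the catalog and schema you configure for the endpoint.

```python
# Minimal sketch: inspect an AI Gateway logging table in Unity Catalog from a
# Databricks notebook. The table name below is a hypothetical placeholder; use the
# catalog, schema, and table name configured for your endpoint.
payload_table = "main.ai_gateway_logs.my_endpoint_payload"

df = spark.table(payload_table)  # logging tables are regular Delta tables
display(df.limit(10))            # inspect a sample of the logged rows
```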

AI Gateway supports the following features:

  • Permission and rate limiting to control who has access and how much access.

  • Payload logging to monitor and audit data being sent to model APIs using inference tables.

  • Usage tracking to monitor operational usage on endpoints and associated costs using system tables.

  • AI Guardrails to prevent unwanted data and unsafe data in requests and responses.

  • Traffic routing to minimize production outages during and after deployment.

Mosaic AI Gateway incurs charges on a per-enabled-feature basis. During the preview, these paid features include AI Guardrails, payload logging, and usage tracking. Features such as query permissions, rate limiting, and traffic routing are free of charge. Any new features are subject to charges.
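
You can turn these features on from the Serving UI (see Use AI Gateway below), or programmatically through the serving endpoint's AI Gateway REST API. The following is a minimal sketch that assumes the `PUT /api/2.0/serving-endpoints/{name}/ai-gateway` path and the field names shown; confirm both against the Databricks REST API reference before relying on them.

```python
# Minimal sketch: enable usage tracking, payload logging, and a per-user rate limit
# on an existing model serving endpoint through the AI Gateway REST API.
# The endpoint path and field names are assumptions based on the features described
# above; verify them against the Databricks REST API reference before use.
import os
import requests

host = os.environ["DATABRICKS_HOST"]          # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]        # a personal access token with manage permissions
endpoint_name = "my-external-model-endpoint"  # hypothetical endpoint name

gateway_config = {
    # Usage tracking: operational usage and cost data land in system tables.
    "usage_tracking_config": {"enabled": True},
    # Payload logging: requests and responses land in an inference table in Unity Catalog.
    "inference_table_config": {
        "enabled": True,
        "catalog_name": "main",            # hypothetical catalog
        "schema_name": "ai_gateway_logs",  # hypothetical schema
    },
    # Rate limiting: cap each user at 100 calls per minute.
    "rate_limits": [
        {"calls": 100, "key": "user", "renewal_period": "minute"},
    ],
}

resp = requests.put(
    f"{host}/api/2.0/serving-endpoints/{endpoint_name}/ai-gateway",
    headers={"Authorization": f"Bearer {token}"},
    json=gateway_config,
)
resp.raise_for_status()
print(resp.json())
```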

AI Guardrails

AI Guardrails allow users to configure and enforce data compliance at the model serving endpoint level and to reduce harmful content in any requests sent to the underlying model. Noncompliant requests and responses are blocked, and a default message is returned to the user. See how to configure guardrails on a model serving endpoint.

Important

AI Guardrails are only available in regions that support Foundation Model APIs pay-per-token.

The following describes the configurable guardrails.

Safety filtering

Safety filtering prevents your model from interacting with unsafe and harmful content, like violent crime, self-harm, and hate speech.

The AI Gateway safety filter is built with Meta Llama 3; Databricks uses Llama Guard 2-8B as the safety filter. To learn more about the Llama Guard safety filter and the topics it applies to, see the Meta Llama Guard 2 8B model card.

Meta Llama 3 is licensed under the LLAMA 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring compliance with applicable model licenses.

Personally identifiable information (PII) detection

Customers can detect sensitive information, such as names, addresses, and credit card numbers, in requests and responses.

For this feature, AI Gateway uses Presidio. The PII classifier can help identify sensitive information or PII in structured and unstructured data. However, because it relies on automated detection mechanisms, there is no guarantee that the service will find all sensitive information. Consequently, additional systems and protections should be employed.

These classification methods are primarily scoped to U.S. categories of PII, such as U.S. phone numbers and Social Security numbers.
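
PII detection is managed for you at the endpoint level, but the example below illustrates the kind of analysis the underlying open source Presidio analyzer performs. It is only an illustration of Presidio itself, not the gateway's internal pipeline, and assumes the presidio-analyzer package and a spaCy English model are installed.

```python
# Illustration only: run Presidio's analyzer locally to see the kind of PII it flags.
# This is not how AI Gateway is invoked; the gateway applies detection automatically
# when the PII guardrail is enabled on an endpoint.
# Requires: pip install presidio-analyzer  (plus a spaCy English model, e.g. en_core_web_lg)
from presidio_analyzer import AnalyzerEngine

analyzer = AnalyzerEngine()
results = analyzer.analyze(
    text="My name is Jane Doe and my phone number is 212-555-0123.",
    language="en",
)

for finding in results:
    # Each result reports the detected entity type, its character span, and a confidence score.
    print(finding.entity_type, finding.start, finding.end, round(finding.score, 2))
```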

Topic moderation

Capability to specify a set of allowed topics. Given a chat request, this guardrail flags the request if its topic is not among the allowed topics.

Keyword filtering

Customers can specify different sets of invalid keywords for both the input and the output. One potential use case for keyword filtering is to prevent the model from discussing competitors.

This guardrail uses keyword or string matching to determine whether a keyword appears in the request or response content.
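
As a rough sketch, the guardrails described above map to a guardrails block in the endpoint's AI Gateway configuration. The field names below (safety, pii behavior, valid_topics, invalid_keywords) are assumptions drawn from the descriptions in this article; see Configure AI Gateway on model serving endpoints for the authoritative schema.

```python
# Minimal sketch of an AI Guardrails configuration combining the guardrails above.
# Field names are assumptions based on this article's descriptions; confirm them
# against the AI Gateway configuration documentation before use.
guardrails_config = {
    "guardrails": {
        "input": {                                    # applied to requests
            "safety": True,                           # Llama Guard based safety filtering
            "pii": {"behavior": "BLOCK"},             # block requests containing detected PII
            "valid_topics": ["billing", "shipping"],  # topic moderation: allowed topics only
            "invalid_keywords": ["AcmeRival"],        # keyword filtering, e.g. competitor names
        },
        "output": {                                   # applied to model responses
            "safety": True,
            "pii": {"behavior": "BLOCK"},
        },
    }
}
```

In practice you would apply such a block through the Serving UI or as part of the same AI Gateway REST request shown earlier, rather than building it by hand.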

Use AI Gateway

You can configure AI Gateway features on your model serving endpoints using the Serving UI. See Configure AI Gateway on model serving endpoints.

Limitations

The following are limitations during the preview:

  • AI Gateway is only supported for model serving endpoints that serve external models.

  • AI Gateway is not supported on HIPAA workspaces.

  • When guardrails are used, the request batch size (that is, the embeddings batch size, the completions batch size, or the n parameter of chat requests) cannot exceed 16.
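
For example, a chat request to a guardrail-enabled endpoint should keep n at or below 16. The sketch below assumes the standard serving endpoint invocations path and a hypothetical endpoint name.

```python
# Minimal sketch: a chat request that stays within the batch-size limit (n <= 16)
# for an endpoint with guardrails enabled. The endpoint name is a hypothetical placeholder.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.post(
    f"{host}/serving-endpoints/my-external-model-endpoint/invocations",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "messages": [{"role": "user", "content": "Summarize our return policy."}],
        "n": 4,  # must not exceed 16 when guardrails are enabled
    },
)
resp.raise_for_status()
print(resp.json())
```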