Log and register AI agents
Log AI agents using Mosaic AI Agent Framework. Logging an agent is the basis of the development process: it captures a "point in time" snapshot of the agent's code and configuration so you can evaluate the quality of that configuration.
Requirements
Create an AI agent before logging it.
Databricks recommends installing the latest version of the databricks-sdk.

%pip install databricks-sdk
Code-based logging
Databricks recommends using MLflow's Models from Code functionality when logging agents.
In this approach, the agent's code is captured as a Python file, and the Python environment is captured as a list of packages. When the agent is deployed, the Python environment is restored, and the agent's code is run to load the agent into memory so it can be invoked when the endpoint is called.
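For illustration, a Models from Code agent file might look like the following minimal sketch. The class name is hypothetical and a real agent would wrap an LLM, tools, or retrieval logic; the key point is that the file calls mlflow.models.set_model() so MLflow knows which object to load at serving time.

import mlflow

class SimpleAgent(mlflow.pyfunc.PythonModel):
    """Hypothetical placeholder agent; replace predict() with real agent logic."""

    def predict(self, context, model_input):
        # model_input carries the request payload, for example a dict with "messages".
        return {"response": "This is a placeholder answer."}

# Tell MLflow which object in this file is the model to load at serving time.
mlflow.models.set_model(SimpleAgent())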
You can couple this approach with the use of pre-deployment validation APIs like mlflow.models.predict() to ensure that the agent runs reliably when deployed for serving.
To see an example of code-based logging, see ResponsesAgent authoring example notebooks.
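For example, the following minimal sketch (assuming logged_agent_info and input_example are defined as in the logging examples later on this page) runs the logged agent in a freshly restored Python environment before deployment:

import mlflow

# Validate that the logged agent loads and runs in a restored environment.
mlflow.models.predict(
    model_uri=logged_agent_info.model_uri,
    input_data=input_example,
    env_manager="virtualenv",  # recreate the agent's environment in an isolated virtualenv
)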
Infer Model Signature during logging
Databricks recommends authoring an agent using the ResponsesAgent interface. If using ResponsesAgent, you can skip this section; MLflow automatically infers a valid signature for your agent.
If you are not using the ResponsesAgent interface, you must use one of the following methods to specify your agent's MLflow Model Signature at logging time:
- Manually define the signature
- Use MLflow's Model Signature inferencing capabilities to automatically generate the agent's signature based on an input example you provide. This approach is more convenient than manually defining the signature.
The MLflow model signature validates inputs and outputs to ensure the agent interacts correctly with downstream tools like AI Playground and the review app. It also guides other applications on how to use the agent effectively.
The LangChain and PyFunc examples below use Model Signature inferencing.
If you would rather explicitly define a Model Signature yourself at logging time, see MLflow docs - How to log models with signatures.
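For reference, a manually defined signature might look like the following minimal sketch. The schema shown is a simplified assumption for a chat-style agent; adapt the column types and names to your agent's actual inputs and outputs.

from mlflow.models import ModelSignature
from mlflow.types.schema import ColSpec, Schema

# Simplified sketch: a single string input named "messages" and a string output.
input_schema = Schema([ColSpec("string", "messages")])
output_schema = Schema([ColSpec("string")])
signature = ModelSignature(inputs=input_schema, outputs=output_schema)

# Pass signature=signature to log_model() instead of (or in addition to) an input example.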
Code-based logging with LangChain
The following instructions and code sample show you how to log an agent with LangChain.
1. Create a notebook or Python file with your code. For this example, the notebook or file is named agent.py. The notebook or file must contain a LangChain agent, referred to here as lc_agent.
2. Include mlflow.models.set_model(lc_agent) in the notebook or file.
3. Create a new notebook to serve as the driver notebook (called driver.py in this example).
4. In the driver notebook, use the following code to run agent.py and log the results to an MLflow model:

   mlflow.langchain.log_model(lc_model="/path/to/agent.py", resources=list_of_databricks_resources)

   The resources parameter declares Databricks-managed resources needed to serve the agent, such as a vector search index or a serving endpoint that serves a foundation model. For more information, see Authentication for Databricks resources.
5. Deploy the model. See Deploy an agent for generative AI applications.
6. When the serving environment is loaded, agent.py is run.
7. When a serving request comes in, lc_agent.invoke(...) is called.
import mlflow
code_path = "/Workspace/Users/first.last/agent.py"
config_path = "/Workspace/Users/first.last/config.yml"
# Input example used by MLflow to infer Model Signature
input_example = {
    "messages": [
        {
            "role": "user",
            "content": "What is Retrieval-augmented Generation?",
        }
    ]
}
# example using langchain
with mlflow.start_run():
    logged_agent_info = mlflow.langchain.log_model(
        lc_model=code_path,
        model_config=config_path,  # If specified, this configuration is used by the agent code and overrides the development_config.
        artifact_path="agent",  # This string is used as the path inside the MLflow model where artifacts are stored
        input_example=input_example,  # Must be a valid input to the agent
        example_no_conversion=True,  # Required
    )
print(f"MLflow Run: {logged_agent_info.run_id}")
print(f"Model URI: {logged_agent_info.model_uri}")
# To verify that the model has been logged correctly, load the agent and call `invoke`:
model = mlflow.langchain.load_model(logged_agent_info.model_uri)
model.invoke(input_example)
Code-based logging with PyFunc
The following instructions and code sample show you how to log an agent with PyFunc.
1. Create a notebook or Python file with your code. For this example, the notebook or file is named agent.py. The notebook or file must contain a PyFunc class, named PyFuncClass.
2. Include mlflow.models.set_model(PyFuncClass) in the notebook or file.
3. Create a new notebook to serve as the driver notebook (called driver.py in this example).
4. In the driver notebook, use the following code to run agent.py and use log_model() to log the results to an MLflow model:

   mlflow.pyfunc.log_model(python_model="/path/to/agent.py", resources=list_of_databricks_resources)

   The resources parameter declares Databricks-managed resources needed to serve the agent, such as a vector search index or a serving endpoint that serves a foundation model. For more information, see Authentication for Databricks resources.
5. Deploy the model. See Deploy an agent for generative AI applications.
6. When the serving environment is loaded, agent.py is run.
7. When a serving request comes in, PyFuncClass.predict(...) is called.
import mlflow
from mlflow.models.resources import (
DatabricksServingEndpoint,
DatabricksVectorSearchIndex,
)
code_path = "/Workspace/Users/first.last/agent.py"
config_path = "/Workspace/Users/first.last/config.yml"
# Input example used by MLflow to infer Model Signature
input_example = {
    "messages": [
        {
            "role": "user",
            "content": "What is Retrieval-augmented Generation?",
        }
    ]
}
with mlflow.start_run():
    logged_agent_info = mlflow.pyfunc.log_model(
        python_model=code_path,  # Path to agent.py defined above
        artifact_path="agent",
        input_example=input_example,  # Must be a valid input to the agent
        example_no_conversion=True,  # Required
        resources=[
            DatabricksServingEndpoint(endpoint_name="databricks-meta-llama-3-3-70b-instruct"),
            DatabricksVectorSearchIndex(index_name="prod.agents.databricks_docs_index"),
        ],
    )
print(f"MLflow Run: {logged_agent_info.run_id}")
print(f"Model URI: {logged_agent_info.model_uri}")
# To verify that the model has been logged correctly, load the agent and call `predict`:
model = mlflow.pyfunc.load_model(logged_agent_info.model_uri)
model.predict(input_example)
Authentication for Databricks resources
AI agents often must authenticate to other resources to complete tasks. For example, an agent may need to access a Vector Search index to query unstructured data.
As described in Authentication for dependent resources, Model Serving supports authenticating to both Databricks-managed and external resources when you deploy the agent.
Model Serving supports two methods for authenticating to Databricks-managed resources:
- System authentication: The agent uses a Databricks service principal to access the dependent resources you declare at logging time. Use this method for shared, org-level resources where all agent users should have the same access, such as a shared Vector Search index of organization-approved documentation, a shared model serving endpoint, or a read-only SQL warehouse used by the application.
- [Public Preview] On-behalf-of-user authorization: The agent uses the caller's user identity to access Databricks resources. Use this method when you must enforce per-user governance or access user-scoped data, such as Unity Catalog tables with row- or column-level policies, personal Genie spaces, or user-owned connections, or when you need user-attributed auditing.
For auditing purposes, system authentication attributes access to the service principal, while on-behalf-of-user authorization attributes it to the individual user.
Specify resources for automatic authentication passthrough (system authentication)
For the most common Databricks resource types, Databricks supports and recommends declaring resource dependencies for the agent upfront during logging. This enables automatic authentication passthrough when you deploy the agent: Databricks automatically provisions, rotates, and manages short-lived credentials to securely access these resource dependencies from within the agent endpoint.
To enable automatic authentication passthrough, specify dependent resources using the resources parameter of the log_model() API, as shown in the following code.
import mlflow
from mlflow.models.resources import (
DatabricksVectorSearchIndex,
DatabricksServingEndpoint,
DatabricksSQLWarehouse,
DatabricksFunction,
DatabricksGenieSpace,
DatabricksTable,
DatabricksUCConnection,
DatabricksApp
)
with mlflow.start_run():
    logged_agent_info = mlflow.pyfunc.log_model(
        python_model=agent_notebook_path,
        artifact_path="agent",
        input_example=input_example,
        example_no_conversion=True,
        # Specify resources for automatic authentication passthrough
        resources=[
            DatabricksVectorSearchIndex(index_name="prod.agents.databricks_docs_index"),
            DatabricksServingEndpoint(endpoint_name="databricks-meta-llama-3-3-70b-instruct"),
            DatabricksServingEndpoint(endpoint_name="databricks-bge-large-en"),
            DatabricksSQLWarehouse(warehouse_id="your_warehouse_id"),
            DatabricksFunction(function_name="ml.tools.python_exec"),
            DatabricksGenieSpace(genie_space_id="your_genie_space_id"),
            DatabricksTable(table_name="your_table_name"),
            DatabricksUCConnection(connection_name="your_connection_name"),
            DatabricksApp(app_name="app_name"),
        ],
    )
Databricks recommends that you manually specify resources for all agent flavors.

If you do not specify resources when logging LangChain agents using mlflow.langchain.log_model(...), MLflow performs best-effort automatic inference of resources. However, this might not capture all dependencies, resulting in authorization errors when serving or querying the agent.

The following table lists the Databricks resources that support automatic authentication passthrough and the minimum mlflow version required to log the resource.
Resource type | Minimum mlflow version |
---|---|
Vector Search index | Requires |
Model serving endpoint | Requires |
SQL warehouse | Requires |
Unity Catalog function | Requires |
Genie space | Requires |
Unity Catalog table | Requires |
Unity Catalog connection | Requires |
On-behalf-of-user authorization
This feature is in Public Preview.
When logging an agent that uses on-behalf-of-user authorization, you must declare the minimum set of Databricks REST API scopes the agent needs to act on behalf of the end user.
This ensures the agent follows the principle of least privilege: tokens are restricted to just the APIs your agent requires, reducing the chance of unauthorized actions or token misuse.
The following table lists the API scopes required to access several common types of Databricks resources.
Resource type | Required API scope |
---|---|
Model Serving endpoints | |
Vector Search endpoints | |
Vector Search indexes | |
SQL warehouses | |
UC connections | |
MCP Genie spaces | |
MCP UC functions | |
MCP Vector Search | |
MCP external functions | |
See Deploy an agent using on-behalf-of-user authorization.
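As a sketch of how this can look at logging time (assuming an MLflow version that supports the auth_policy argument and the mlflow.models.auth_policy module), you can declare system-authenticated resources and user-authorized API scopes together. The scope string below is illustrative; use the scopes from the table above that match your agent's dependencies.

import mlflow
from mlflow.models.auth_policy import AuthPolicy, SystemAuthPolicy, UserAuthPolicy
from mlflow.models.resources import DatabricksServingEndpoint

with mlflow.start_run():
    logged_agent_info = mlflow.pyfunc.log_model(
        python_model="agent.py",
        artifact_path="agent",
        input_example=input_example,
        auth_policy=AuthPolicy(
            # Resources accessed with system authentication (service principal).
            system_auth_policy=SystemAuthPolicy(
                resources=[DatabricksServingEndpoint(endpoint_name="databricks-meta-llama-3-3-70b-instruct")]
            ),
            # Minimal REST API scopes the agent needs on behalf of the end user (example scope).
            user_auth_policy=UserAuthPolicy(api_scopes=["serving.serving-endpoints"]),
        ),
    )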
Automatic authentication for OpenAI clients
If your agent uses the OpenAI client, use the Databricks SDK to authenticate automatically during deployment. The Databricks SDK provides get_open_ai_client(), a wrapper that constructs the OpenAI client with authorization automatically configured. Run the following in your notebook:

%pip install databricks-sdk[openai]
from databricks.sdk import WorkspaceClient
# Define this as a method on your agent class; the SDK resolves Databricks credentials at serving time.
def openai_client(self):
    w = WorkspaceClient()
    return w.serving_endpoints.get_open_ai_client()
Then, specify the Model Serving endpoint as part of resources to authenticate automatically at deployment time.
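For example, a minimal sketch of the corresponding resources entry (the endpoint name is a placeholder used elsewhere on this page) is:

from mlflow.models.resources import DatabricksServingEndpoint

# Declare the endpoint the OpenAI client calls so credentials are provisioned at deployment time.
resources = [DatabricksServingEndpoint(endpoint_name="databricks-meta-llama-3-3-70b-instruct")]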
Register the agent to Unity Catalog
Before you deploy the agent, you must register it to Unity Catalog. Registering packages the agent as a model in Unity Catalog, so you can use Unity Catalog permissions for authorization of resources in the agent.
import mlflow

mlflow.set_registry_uri("databricks-uc")

catalog_name = "test_catalog"
schema_name = "schema"
model_name = "agent_name"

# Fully qualified Unity Catalog model name: <catalog>.<schema>.<model>
uc_model_name = f"{catalog_name}.{schema_name}.{model_name}"

uc_model_info = mlflow.register_model(model_uri=logged_agent_info.model_uri, name=uc_model_name)
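Once registered, the model version can be deployed. The following is a minimal sketch using the databricks-agents package (assuming it is installed); see Deploy an agent for generative AI applications for the full workflow.

from databricks import agents

# Deploy the registered Unity Catalog model version to a Model Serving endpoint.
deployment = agents.deploy(uc_model_name, uc_model_info.version)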