Build a multi-agent system on Databricks Apps

Instead of building one agent that does everything, a multi-agent orchestrator routes requests to specialized subagents from a single entry point.

For example, you can combine a RAG agent that queries unstructured documents with a Genie agent that queries structured data, so users get answers from multiple sources.

The orchestrator treats each subagent as a tool and uses its instructions to route requests to the right one. The orchestrator supports the following subagent types:

  • Databricks Apps agents: Other agents deployed as Databricks Apps, called through the Responses API.
  • Genie spaces: Natural language data querying through the built-in Databricks MCP server.
  • Serving endpoints: Knowledge assistants, agents, or models on Model Serving that support the Responses API.

Requirements

Try Agent Supervisor first

Before building a custom orchestrator, consider using Agent Bricks: Supervisor Agent to create a coordinated multi-agent system. It builds and manages the multi-agent system for you through a UI. You can connect Genie spaces, agent endpoints, Unity Catalog functions, and MCP servers, then improve coordination quality over time using natural language feedback from subject matter experts.

Build a multi-agent system on Databricks Apps if you need custom routing logic or orchestration behavior that Agent Supervisor doesn't support.

Clone the multi-agent orchestrator template

The multi-agent orchestrator template provides the scaffolding for project structure and orchestration logic using the OpenAI Agents SDK. It also includes skill files that teach AI coding assistants how to develop the orchestrator.

Clone the template and go to the folder:

Bash
git clone https://github.com/databricks/app-templates.git
cd app-templates/agent-openai-agents-sdk-multiagent

Configure subagents

Each backend the orchestrator can call is defined as a subagent in the SUBAGENTS list in agent_server/agent.py.

Uncomment and configure the entries you need, and write a detailed description for each subagent. The quality of these descriptions directly determines how accurately the orchestrator routes requests to the correct subagent:

Python
SUBAGENTS = [
    {
        "name": "genie",
        "type": "genie",
        "space_id": "<YOUR-GENIE-SPACE-ID>",
        "description": (
            "Query a Genie space for structured data analysis. "
            "Use this for questions about data, metrics, and tables."
        ),
    },
    {
        "name": "app_agent",
        "type": "app",
        "endpoint": "<YOUR-APP-AGENT-NAME>",
        "description": (
            "Query a specialist agent deployed as a Databricks App. "
            "Use this for questions the specialist app agent handles."
        ),
    },
    {
        "name": "knowledge_assistant",
        "type": "serving_endpoint",
        "endpoint": "<YOUR-ENDPOINT>",
        "description": (
            "Query the knowledge-assistant endpoint on Model Serving. "
            "Use this for knowledge-base and documentation lookups. "
            "The endpoint must have task type agent/v1/responses."
        ),
    },
]

Each entry automatically becomes a tool that the orchestrator can call. You must enable at least one subagent.
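To make the registry-to-tool mapping concrete, here is a minimal sketch of how entries like those in `SUBAGENTS` could be turned into tool specs, with entries still holding `<...>` placeholders skipped. This is an illustration under assumed conventions; the template's actual wiring lives in `agent_server/agent.py` and may differ:

```python
# Illustrative sketch: derive tool specs from a SUBAGENTS-style registry.
# Hypothetical helper, not part of the template.

def build_tool_specs(subagents: list[dict]) -> list[dict]:
    """Turn each configured subagent entry into a tool spec.
    Entries that still contain <...> placeholders are treated as
    unconfigured and skipped."""
    specs = []
    for entry in subagents:
        target = entry.get("space_id") or entry.get("endpoint") or ""
        if target.startswith("<"):  # placeholder left in place
            continue
        specs.append({
            "name": f"query_{entry['name']}",
            "description": entry["description"],
            "type": entry["type"],
        })
    return specs

SUBAGENTS = [
    {"name": "genie", "type": "genie", "space_id": "01ef0123",
     "description": "Query a Genie space for structured data."},
    {"name": "app_agent", "type": "app", "endpoint": "<YOUR-APP-AGENT-NAME>",
     "description": "Specialist app agent."},
]

print([s["name"] for s in build_tool_specs(SUBAGENTS)])  # → ['query_genie']
```

The tool names (`query_genie`, `query_app_agent`, and so on) echo the naming used in the orchestrator instructions later in this guide.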

The following table describes each subagent type:

| Type | How it connects | Requirements |
| --- | --- | --- |
| app | Responses API via apps/<name> | OAuth authentication, CAN_USE permission on the target app |
| genie | Built-in Databricks MCP server | Genie space ID, CAN_RUN permission |
| serving_endpoint | Responses API via endpoint name | Endpoint must have task type Agent (Responses) in the Serving UI. Includes knowledge assistants, agents, and models. |
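The routing column of the table can be sketched as a small lookup. The route strings below (especially the MCP one) are illustrative labels, not real API paths, and this helper is not part of the template:

```python
# Illustrative helper mapping each subagent type to how it is reached,
# per the table above. Route strings are for illustration only.

def connection_route(entry: dict) -> str:
    kind = entry["type"]
    if kind == "app":
        # Other Databricks Apps agents: Responses API at apps/<name>
        return f"apps/{entry['endpoint']}"
    if kind == "genie":
        # Genie spaces: built-in Databricks MCP server, keyed by space ID
        return f"genie-mcp/{entry['space_id']}"
    if kind == "serving_endpoint":
        # Model Serving endpoints with the agent/v1/responses task type
        return f"serving-endpoints/{entry['endpoint']}"
    raise ValueError(f"unknown subagent type: {kind}")

print(connection_route({"type": "app", "endpoint": "specialist"}))  # → apps/specialist
```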

Customize the orchestrator

The orchestrator agent is created in the create_orchestrator_agent() function. Update the instructions to describe your specific tools and when to use each one:

Python
Agent(
    name="Orchestrator",
    instructions=(
        "You are an orchestrator agent. Route the user's request to the "
        "most appropriate tool or data source:\n"
        "- Use the Genie MCP tools for questions about structured data in <dataset_name> that contains information about <topic>\n"
        "- Use query_app_agent for questions or tasks that the specialist app agent handles for ...\n"
        "- Use query_knowledge_assistant for knowledge-base lookups about <topic>.\n"
        "If unsure, ask the user for clarification."
    ),
    model="databricks-claude-sonnet-4-5",
    mcp_servers=[mcp_server] if mcp_server else [],
    tools=subagent_tools,
)
tip

The more specific the orchestrator instructions, the more accurately it routes requests. Describe each tool's purpose and the types of questions it handles.
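One way to keep the routing guidance in sync with the subagent registry is to compose the instructions string from the same descriptions. This is a sketch of that pattern, not how the template builds its instructions:

```python
# Sketch: compose orchestrator instructions from subagent descriptions
# so routing guidance stays in sync with a SUBAGENTS-style registry.
# Hypothetical helper; the template writes its instructions by hand.

def build_instructions(subagents: list[dict]) -> str:
    lines = [
        "You are an orchestrator agent. Route the user's request to the "
        "most appropriate tool or data source:"
    ]
    for entry in subagents:
        lines.append(f"- Use query_{entry['name']} {entry['description']}")
    lines.append("If unsure, ask the user for clarification.")
    return "\n".join(lines)

SUBAGENTS = [
    {"name": "genie", "description": "for questions about structured data."},
]
print(build_instructions(SUBAGENTS))
```

Keeping one source of truth for descriptions means improving a description for routing also improves it in the instructions.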

Configure resources and permissions

Declare the resources your orchestrator needs in databricks.yml. Each subagent type requires its own resource entry:

YAML
resources:
  - name: 'genie_space'
    genie_space:
      name: 'Genie Space'
      space_id: '<YOUR-GENIE-SPACE-ID>'
      permission: 'CAN_RUN'

  - name: 'serving_endpoint'
    serving_endpoint:
      name: '<YOUR-ENDPOINT>'
      permission: 'CAN_QUERY'

Update the placeholder values in databricks.yml to match the subagents you configured in agent_server/agent.py.

Grant the orchestrator access to a target Databricks app

If your orchestrator calls a subagent Databricks app, you must manually grant the orchestrator app's service principal CAN_USE permission on the target app. This permission cannot be declared as a bundle resource and must be applied after deployment.

note

The service_principal_name field in the permissions request must be the service principal's client ID (UUID), not the display name. Using the display name silently succeeds but doesn't grant the permission. The databricks apps get command returns this value as service_principal_client_id.

  1. Find the orchestrator app's service principal client ID:

    Bash
    databricks apps get <YOUR-ORCHESTRATOR-APP-NAME> --output json | jq -r '.service_principal_client_id'
  2. Grant the orchestrator app's service principal CAN_USE permission on the target app:

    Bash
    databricks apps update-permissions <TARGET-APP-NAME> \
    --json '{"access_control_list": [{"service_principal_name": "<SP-CLIENT-ID>", "permission_level": "CAN_USE"}]}'
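Because passing a display name silently fails, it can help to validate the value is a UUID before sending the request. A minimal sketch using only the standard library (the helper name is hypothetical):

```python
# Sketch: validate that the service principal value is a client ID (UUID),
# not a display name, before building the permissions payload. As noted
# above, a display name silently fails to grant the permission.
import json
import uuid

def acl_payload(sp_client_id: str) -> str:
    uuid.UUID(sp_client_id)  # raises ValueError if not a UUID
    return json.dumps({
        "access_control_list": [
            {"service_principal_name": sp_client_id,
             "permission_level": "CAN_USE"}
        ]
    })

print(acl_payload("123e4567-e89b-12d3-a456-426614174000"))
```

The returned string can be passed directly as the `--json` argument shown above.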

Test locally

Set up your local environment and start the agent:

Bash
uv run quickstart
uv run start-app

The quickstart script configures Databricks authentication and creates an MLflow experiment for tracing. After setup, start-app launches the agent server and a chat UI at http://localhost:8000.

Deploy to Databricks Apps

Deploy the orchestrator using Databricks Asset Bundles:

  1. Validate the bundle configuration:

    Bash
    databricks bundle validate
  2. Deploy the bundle to your workspace:

    Bash
    databricks bundle deploy
  3. Start the app:

    Bash
    databricks bundle run agent_openai_agents_sdk_multiagent
important

bundle deploy uploads files but doesn't start the app. Run bundle run to start the app.

Next steps

After deploying your orchestrator, explore the following resources: