Meta parameters for Databricks managed MCP servers
This feature is in Public Preview.
When you build AI agents that use Databricks managed MCP servers, use the _meta parameter to control tool behavior such as result limits, search filters, or SQL warehouse selection. This allows you to preset configuration while keeping queries flexible for your agent to generate dynamically.
The _meta parameter is part of the official MCP specification.
Tool call arguments vs. _meta parameters
Databricks managed MCP servers handle parameters in two ways:
- Tool call arguments: Parameters that an LLM typically generates dynamically based on user input
- _meta parameters: Configuration parameters that you preset in your agent code to control behavior deterministically (see the sketch after this list)
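For example, a single tools/call request can carry both kinds of parameters. The following sketch shows the shape of the request params for the DBSQL execute_sql tool: the arguments are what your agent generates at runtime, while _meta is preset in your code (the warehouse ID shown is a placeholder).

```python
# Shape of a tools/call request that mixes dynamic arguments with preset _meta
# configuration. The query and warehouse ID below are illustrative placeholders.
params = {
    "name": "execute_sql",  # Tool exposed by the DBSQL managed MCP server
    "arguments": {
        # Generated dynamically by your agent based on user input
        "query": "SELECT count(*) FROM my_catalog.my_schema.my_table"
    },
    "_meta": {
        # Preset deterministically in your agent code
        "warehouse_id": "a1b2c3d4e5f67890"
    },
}
```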
DBSQL MCP server _meta parameters
The following _meta parameters are supported for the DBSQL MCP server:
| Parameter name | Type | Description |
|---|---|---|
| warehouse_id | string | The ID of the SQL warehouse to use for executing queries. If not specified, the system automatically selects a warehouse based on resources and permissions. |
Example: Specify a SQL warehouse for DBSQL queries
This example uses the official Python MCP SDK to show how the warehouse_id _meta parameter specifies which SQL warehouse executes queries from the DBSQL MCP server.
In this scenario, you want to:
- Use a specific SQL warehouse for query execution instead of letting the system select one automatically
- Ensure consistent performance by routing queries to a dedicated warehouse
To run this example, first set up your Python environment for managed MCP development.
To find your SQL warehouse ID, see Connect to a SQL warehouse.
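Alternatively, you can look up warehouse IDs programmatically. The following sketch uses the Databricks SDK for Python to list the SQL warehouses you can access (it assumes the databricks-sdk package is installed and default authentication is configured):

```python
from databricks.sdk import WorkspaceClient

# List the SQL warehouses in the workspace so you can pick an ID to use as
# the warehouse_id _meta parameter.
w = WorkspaceClient()
for warehouse in w.warehouses.list():
    print(f"{warehouse.name}: {warehouse.id}")
```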
# Import required libraries for MCP client and Databricks authentication
import asyncio
from databricks.sdk import WorkspaceClient
from databricks_mcp.oauth_provider import DatabricksOAuthClientProvider
from mcp.client.streamable_http import streamablehttp_client
from mcp.client.session import ClientSession
from mcp.types import CallToolRequest, CallToolResult


async def run_dbsql_tool_call_with_meta():
    # Initialize Databricks workspace client for authentication
    workspace_client = WorkspaceClient()

    # Construct the MCP server URL for DBSQL
    # Replace <workspace-hostname> with your workspace hostname
    mcp_server_url = "https://<workspace-hostname>/api/2.0/mcp/sql"

    # Establish connection to the MCP server with OAuth authentication
    async with streamablehttp_client(
        url=mcp_server_url,
        auth=DatabricksOAuthClientProvider(workspace_client),
    ) as (read_stream, write_stream, _):
        # Create an MCP session for making tool calls
        async with ClientSession(read_stream, write_stream) as session:
            # Initialize the session before making requests
            await session.initialize()

            # Create the tool call request with warehouse_id in _meta
            request = CallToolRequest(
                method="tools/call",
                params={
                    # Tool name for executing SQL queries
                    "name": "execute_sql",
                    # Dynamic arguments - typically provided by your AI agent
                    "arguments": {
                        "query": "SELECT * FROM my_catalog.my_schema.my_table LIMIT 10"
                    },
                    # Meta parameters - specify which warehouse to use
                    "_meta": {
                        "warehouse_id": "a1b2c3d4e5f67890"  # Your SQL warehouse ID
                    },
                },
            )

            # Send the request and get the response
            response = await session.send_request(request, CallToolResult)
            return response


# Execute the async function and get results
response = asyncio.run(run_dbsql_tool_call_with_meta())
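The returned CallToolResult contains the query output as a list of content blocks. As a rough sketch of how you might read the result from the example above, using the types from the MCP Python SDK:

```python
from mcp.types import TextContent

# Inspect the result of the tool call from the example above. CallToolResult
# exposes isError plus a list of content blocks; text blocks carry the output.
if response.isError:
    print("Tool call failed")
else:
    for block in response.content:
        if isinstance(block, TextContent):
            print(block.text)
```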
Vector Search MCP server _meta parameters
The following _meta parameters are supported for Vector Search:
| Parameter name | Type | Description |
|---|---|---|
| columns | string | Comma-separated list of column names to return in the search results. If not specified, all columns (except internal columns starting with "__") are returned. |
| columns_to_rerank | string | Comma-separated list of column names whose content the reranking model uses for re-scoring. The reranker uses this content to re-score all search results to improve relevance. If not specified, reranking is not performed. |
| filters | string | JSON string containing filters to apply to the search. Must be valid JSON. If not specified, no filters are applied. See the sketch after this table for one way to build this value. |
| include_score | string | Whether to include the similarity score in the returned results. Supported values: true, false. |
| num_results | string | Number of results to return. |
| query_type | string | Search algorithm to use for retrieving results. Supported values: ANN, HYBRID. |
| score_threshold | string | Minimum similarity score threshold for filtering results. Results with scores below this threshold are excluded. If not specified, no score filtering is applied. |
For detailed information about these parameters, see the Vector Search Python SDK documentation.
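Because the filters value must be a valid JSON string, it can be less error-prone to build it with json.dumps instead of writing the string by hand. A minimal sketch (the "updated_after" key is illustrative and depends on the columns in your index):

```python
import json

# Build the filters _meta value with json.dumps so it is always valid JSON.
filters_json = json.dumps({"updated_after": "2024-01-01"})

meta = {
    "num_results": "3",
    "query_type": "HYBRID",
    "filters": filters_json,
}
```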
Example: Configure maximum results and filters for vector search retrieval
This example uses the official Python MCP SDK to show how _meta parameters control Vector Search behavior while your AI agent supplies the query dynamically.
In this scenario, you want to:
- Always limit search results to exactly 3 items for consistent response times
- Only search recent documentation (updated after 2024-01-01) to ensure relevance
- Use hybrid search for better accuracy than pure vector search
- Return only specific columns (id, text, and metadata)
- Include similarity scores in the results
- Exclude results with similarity scores below 0.5
- Use reranking on text and title columns to improve relevance
To run this example, first set up your Python environment for managed MCP development.
# Import required libraries for MCP client and Databricks authentication
import asyncio
from databricks.sdk import WorkspaceClient
from databricks_mcp.oauth_provider import DatabricksOAuthClientProvider
from mcp.client.streamable_http import streamablehttp_client
from mcp.client.session import ClientSession
from mcp.types import CallToolRequest, CallToolResult


async def run_vector_search_tool_call_with_meta():
    # Initialize Databricks workspace client for authentication
    workspace_client = WorkspaceClient()

    # Construct the MCP server URL for your specific catalog and schema
    # Replace <workspace-hostname>, YOUR_CATALOG, and YOUR_SCHEMA with your values
    mcp_server_url = "https://<workspace-hostname>/api/2.0/mcp/vector-search/YOUR_CATALOG/YOUR_SCHEMA"

    # Establish connection to the MCP server with OAuth authentication
    async with streamablehttp_client(
        url=mcp_server_url,
        auth=DatabricksOAuthClientProvider(workspace_client),
    ) as (read_stream, write_stream, _):
        # Create an MCP session for making tool calls
        async with ClientSession(read_stream, write_stream) as session:
            # Initialize the session before making requests
            await session.initialize()

            # Create the tool call request with both dynamic and preset parameters
            request = CallToolRequest(
                method="tools/call",
                params={
                    # Tool name follows the pattern: CATALOG__SCHEMA__INDEX_NAME
                    "name": "YOUR_CATALOG__YOUR_SCHEMA__YOUR_INDEX_NAME",
                    # Dynamic arguments - typically provided by your AI agent or user input
                    "arguments": {
                        "query": "How do I reset my password?"  # This comes from your agent
                    },
                    # Meta parameters - preset configuration to control search behavior
                    "_meta": {
                        "num_results": "3",  # Limit to 3 results for consistent performance
                        "filters": '{"updated_after": "2024-01-01"}',  # JSON string for date filtering
                        "query_type": "HYBRID",  # Use hybrid search for better relevance
                        "columns": "id,text,metadata",  # Return only specific columns
                        "score_threshold": "0.5",  # Filter out results with similarity score < 0.5
                        "include_score": "true",  # Include similarity scores in results
                        "columns_to_rerank": "text,title",  # Use reranker on these columns for better quality
                    },
                },
            )

            # Send the request and get the response
            response = await session.send_request(request, CallToolResult)
            return response


# Execute the async function and get results
response = asyncio.run(run_vector_search_tool_call_with_meta())
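To keep the preset _meta configuration in one place while your agent supplies only the query text, you can wrap the tool call in a small helper. The following sketch reuses the imports and an initialized session from the example above; the function name and preset values are illustrative:

```python
# Hypothetical helper: the _meta configuration is fixed once, and the agent
# passes in only the query text for each call. Assumes CallToolRequest,
# CallToolResult, and an initialized ClientSession from the example above.
async def search_docs(session, query: str):
    request = CallToolRequest(
        method="tools/call",
        params={
            "name": "YOUR_CATALOG__YOUR_SCHEMA__YOUR_INDEX_NAME",
            # Dynamic argument supplied by the agent at call time
            "arguments": {"query": query},
            # Preset configuration, identical for every call
            "_meta": {
                "num_results": "3",
                "query_type": "HYBRID",
                "include_score": "true",
            },
        },
    )
    return await session.send_request(request, CallToolResult)
```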