Use prompts in deployed applications
This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.
This guide shows you how to use prompts from the MLflow Prompt Registry in your production GenAI applications.
When deploying GenAI applications, configure them to load prompts from the MLflow Prompt Registry using aliases rather than hard-coded versions. This approach enables dynamic updates without redeployment.
Prerequisites
- Install MLflow and required packages:

  pip install --upgrade "mlflow[databricks]>=3.1.0"

- Create an MLflow experiment by following the set up your environment quickstart.
- Verify you have access to a Unity Catalog schema with the CREATE FUNCTION, EXECUTE, and MANAGE permissions to use the prompt registry.
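If you run the examples outside a Databricks notebook, first point MLflow at your workspace and the Unity Catalog registry (inside a Databricks notebook this is configured for you). The experiment path below is an example; use the experiment you created in the prerequisites:

import mlflow

# Send tracking data to your Databricks workspace and use the Unity Catalog registry
mlflow.set_tracking_uri("databricks")
mlflow.set_registry_uri("databricks-uc")

# Example experiment path; replace with the experiment you created
mlflow.set_experiment("/Shared/prompt-registry-quickstart")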
Step 1. Create a new prompt
Create prompts programmatically with mlflow.genai.register_prompt() in the Python SDK. Prompts use double-brace syntax ({{variable}}) for template variables.
import mlflow

# Replace with a Unity Catalog schema where you have CREATE FUNCTION permission
uc_schema = "workspace.default"
# This prompt will be registered in the above UC schema
prompt_name = "summarization_prompt"

# Define the prompt template with variables
initial_template = """\
Summarize content you are provided with in {{num_sentences}} sentences.
Content: {{content}}
"""

# Register a new prompt
prompt = mlflow.genai.register_prompt(
    name=f"{uc_schema}.{prompt_name}",
    template=initial_template,
    # all parameters below are optional
    commit_message="Initial version of summarization prompt",
    tags={
        "author": "data-science-team@company.com",
        "use_case": "document_summarization",
        "task": "summarization",
        "language": "en",
        "model_compatibility": "gpt-4",
    },
)

print(f"Created prompt '{prompt.name}' (version {prompt.version})")
Step 2. Add an alias to the prompt version
Aliases let you assign a named reference to a specific prompt version, making it easier to reference prompts in production applications. Instead of hard-coding version numbers, you can use meaningful aliases like production, staging, or development. When you need to update your production prompt, reassign the production alias to point to the newer version without changing or redeploying your application code.
import mlflow

mlflow.genai.set_prompt_alias(
    name=f"{uc_schema}.{prompt_name}",
    alias="production",
    version=1,
)
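To confirm the alias assignment, you can load the prompt back by alias; the application code in Step 3 uses the same URI pattern:

# Optional check: resolve the alias and confirm which version it points to
loaded = mlflow.genai.load_prompt(f"prompts:/{uc_schema}.{prompt_name}@production")
print(f"The 'production' alias resolves to version {loaded.version}")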
Step 3. Reference the prompt in your app
After you register your prompt and assign an alias, you can reference it in your deployed applications using the prompt URI format. The recommended approach is to use environment variables to make your application flexible and avoid hardcoding prompt references.
The prompt URI format is: prompts:/{catalog}.{schema}.{prompt_name}@{alias}
Using the prompt registered in Step 1, the URI is:
prompts:/workspace.default.summarization_prompt@production
Here's how to reference the prompt in your application:
import os

import mlflow

mlflow.set_tracking_uri("databricks")
mlflow.set_registry_uri("databricks-uc")


class ProductionApp:
    def __init__(self):
        # Use environment variables for flexibility
        self.prompt_alias = os.getenv("PROMPT_ALIAS", "production")
        self.prompt_name = os.getenv("PROMPT_NAME", "workspace.default.summarization_prompt")

    def get_prompt(self):
        """Load the prompt from the registry using its alias."""
        uri = f"prompts:/{self.prompt_name}@{self.prompt_alias}"
        return mlflow.genai.load_prompt(uri)

    # Rest of your application's code


# Example usage
app = ProductionApp()
prompt = app.get_prompt()
print(f"Loaded prompt: {prompt}")
Version management workflows
Aliases enable you to iterate on prompts during development and promote them through environments without changing application code.
Development workflow
Use a development alias to test prompt changes before promoting to production:
from datetime import datetime

import mlflow


def develop_prompt(base_name: str, changes: str):
    """Iterate on prompts during development."""
    # Register new version
    new_version = mlflow.genai.register_prompt(
        name=base_name,
        template=changes,
        commit_message=f"Dev iteration: {datetime.now()}",
    )

    # Update dev alias
    mlflow.genai.set_prompt_alias(
        name=base_name,
        alias="dev",
        version=new_version.version,
    )

    return new_version
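For example, to register a revised version of the summarization prompt from Step 1 and move the dev alias to it (the revised template below is illustrative):

new_version = develop_prompt(
    base_name=f"{uc_schema}.{prompt_name}",
    changes="""\
Summarize the content below in {{num_sentences}} bullet points.
Content: {{content}}
""",
)
print(f"The 'dev' alias now points to version {new_version.version}")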
Promotion workflow
Promote prompts between environments by reassigning aliases:
import mlflow


def promote_prompt(name: str, from_env: str, to_env: str):
    """Promote prompt from one environment to another."""
    # Get current version in source environment
    source = mlflow.genai.load_prompt(f"prompts:/{name}@{from_env}")

    # Point target environment to same version
    mlflow.genai.set_prompt_alias(
        name=name,
        alias=to_env,
        version=source.version,
    )

    print(f"Promoted {name} v{source.version} from {from_env} to {to_env}")
Alias strategies
Design your alias strategy based on your team's deployment patterns. The following examples demonstrate common approaches:
import mlflow

# Standard environment aliases
ENVIRONMENT_ALIASES = ["dev", "staging", "production"]


# Feature branch aliases
def create_feature_alias(prompt_name: str, feature: str, version: int):
    """Create alias for feature development."""
    mlflow.genai.set_prompt_alias(
        name=prompt_name,
        alias=f"feature-{feature}",
        version=version,
    )


# Regional aliases
REGIONAL_ALIASES = {
    "us": "production-us",
    "eu": "production-eu",
    "asia": "production-asia",
}


# Rollback-ready aliases
def safe_production_update(name: str, new_version: int):
    """Update production with rollback capability."""
    try:
        # Save current production so it can be restored if needed
        current = mlflow.genai.load_prompt(f"prompts:/{name}@production")
        mlflow.genai.set_prompt_alias(name, "production-previous", current.version)
    except Exception:
        pass  # No current production alias yet

    # Update production
    mlflow.genai.set_prompt_alias(name, "production", new_version)
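A matching rollback helper, assuming the production-previous alias set by safe_production_update above, might look like this:

def rollback_production(name: str):
    """Roll production back to the version saved under 'production-previous'."""
    previous = mlflow.genai.load_prompt(f"prompts:/{name}@production-previous")
    mlflow.genai.set_prompt_alias(name, "production", previous.version)
    print(f"Rolled {name} production back to v{previous.version}")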
Use the prompt registry with an agent deployed using Mosaic AI Agent Framework
To access the prompt registry from an agent deployed using the Agent Framework, you must use manual authentication and override security environment variables to configure the Databricks client to connect to the registry.
Overriding these security environment variables disables automatic passthrough for other resources your agent depends on.
For more information, see Manual authentication for AI agents.
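For illustration only, the sketch below supplies workspace credentials as environment variables at deployment time. The environment_vars parameter, the variable names, and the secret reference syntax are assumptions here; follow Manual authentication for AI agents for the exact configuration, and always reference a secret scope rather than embedding a token:

from databricks import agents

# Illustrative only: the parameter and variable names below are assumptions.
agents.deploy(
    model_name="workspace.default.my_agent",  # hypothetical UC model name
    model_version=1,
    environment_vars={
        "DATABRICKS_HOST": "https://<your-workspace-url>",
        "DATABRICKS_TOKEN": "{{secrets/<scope>/<key>}}",  # resolve from a secret scope
    },
)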
Next steps
- Link production traces to app versions - Track the prompt versions that are used in production
- Run scorers in production - Monitor the quality of your deployed prompts
- Evaluate prompts - Test new prompt versions before promoting to production