Prompt Registry reference
This feature is in Beta.
Overview
The MLflow Prompt Registry is a centralized repository for managing prompt templates across their lifecycle. It enables teams to:
- Version and track prompts with Git-like versioning, commit messages, and rollback capabilities
- Deploy safely with aliases using mutable references (e.g., "production", "staging") for A/B testing and gradual rollouts
- Collaborate without code changes by allowing non-engineers to modify prompts through the UI
- Integrate with any framework including LangChain, LlamaIndex, and other GenAI frameworks
- Maintain governance through Unity Catalog integration for access control and audit trails
- Track lineage by linking prompts to experiments and evaluation results
Prerequisites
- Install MLflow with Unity Catalog support:
pip install --upgrade "mlflow[databricks]>=3.1.0"
- Create an MLflow experiment by following the setup your environment quickstart.
- Access to a Unity Catalog schema with CREATE FUNCTION permissions. Why? Prompts are stored in Unity Catalog as functions, so CREATE FUNCTION permission on a schema is required to use the Prompt Registry. If you are using a Databricks trial account, you have CREATE FUNCTION permissions on the Unity Catalog schema workspace.default.
Quick start
Here's the essential workflow for using the Prompt Registry:
import mlflow

# Register a prompt template
prompt = mlflow.genai.register_prompt(
    name="mycatalog.myschema.customer_support",
    template="You are a helpful assistant. Answer this question: {{question}}",
    commit_message="Initial customer support prompt",
)
print(f"Created version {prompt.version}")  # "Created version 1"

# Set a production alias
mlflow.genai.set_prompt_alias(
    name="mycatalog.myschema.customer_support",
    alias="production",
    version=1,
)

# Load and use the prompt in your application
prompt = mlflow.genai.load_prompt(name_or_uri="prompts:/mycatalog.myschema.customer_support@production")
response = llm.invoke(prompt.render(question="How do I reset my password?"))
Core concepts
SDK overview
The Prompt Registry provides six main operations:
| Function | Purpose |
| --- | --- |
| register_prompt() | Create new prompts or add new versions |
| load_prompt() | Retrieve specific prompt versions or aliases |
| search_prompts() | Find prompts by name, tags, or metadata |
| set_prompt_alias() | Create or update alias pointers |
| delete_prompt_alias() | Remove aliases (versions remain) |
| delete_prompt() / delete_prompt_version() | Delete entire prompts or specific versions |
register_prompt()
Creates a new prompt or adds a new version to an existing prompt.
mlflow.genai.register_prompt(
    name: str,
    template: str,
    commit_message: str = None,
    tags: dict = None
) -> PromptVersion
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name | str | Yes | Unity Catalog name (catalog.schema.prompt_name) |
| template | str | Yes | Prompt template with {{variable}} placeholders |
| commit_message | str | No | Description of changes (like git commit messages) |
| tags | dict | No | Key-value tags for this version |
Returns
A PromptVersion object containing:
- name: The prompt name
- version: The version number (auto-incremented)
- template: The prompt template
- commit_message: The commit message
- creation_timestamp: When the version was created
Example
prompt = mlflow.genai.register_prompt(
    name="mycatalog.myschema.summarization",
    template="""Summarize the following text in {{num_sentences}} sentences:
Text: {{content}}
Focus on: {{focus_areas}}""",
    commit_message="Added focus areas parameter",
    tags={
        "tested_with": "gpt-4",
        "avg_latency_ms": "1200",
        "team": "content",
        "project": "summarization-v2",
    },
)
load_prompt()
Retrieves a prompt by version number or alias.
mlflow.genai.load_prompt(
    name_or_uri: str,
    version: Optional[Union[str, int]] = None,
    allow_missing: bool = False,
) -> Optional[PromptVersion]
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name_or_uri | str | Yes | The name of the prompt, or a URI in the format "prompts:/name/version" |
| version | Optional[Union[str, int]] | No | The version of the prompt (required when using a name; not allowed when using a URI) |
| allow_missing | bool | No | If True, return None instead of raising an exception when the specified prompt is not found |
URI formats
# Load specific version
prompt = mlflow.genai.load_prompt(name_or_uri="prompts:/mycatalog.myschema.chat_prompt/3")
# Load by alias
prompt = mlflow.genai.load_prompt(name_or_uri="prompts:/mycatalog.myschema.chat_prompt@production")
Example
# Load specific version using URI format
v2 = mlflow.genai.load_prompt(name_or_uri="prompts:/mycatalog.myschema.qa_prompt/2")
# Load specific version using name + version parameter
v3 = mlflow.genai.load_prompt(name_or_uri="mycatalog.myschema.qa_prompt", version=3)
# Load by alias using URI
prod = mlflow.genai.load_prompt(name_or_uri="prompts:/mycatalog.myschema.qa_prompt@production")
# Load with allow_missing flag (returns None if not found)
optional_prompt = mlflow.genai.load_prompt(
    name_or_uri="mycatalog.myschema.qa_prompt",
    version=3,
    allow_missing=True,
)

# Use the prompt
if optional_prompt is not None:
    response = llm.invoke(optional_prompt.render(
        question="What is MLflow?",
        context="MLflow is an open source platform...",
    ))
search_prompts()
Lists prompts in Unity Catalog by catalog and schema.
mlflow.genai.search_prompts(
    filter_string: str,
    max_results: int = None
) -> PagedList[Prompt]
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| filter_string | str | Yes | Must specify catalog and schema: "catalog = 'name' AND schema = 'name'" |
| max_results | int | No | Maximum prompts to return |
Unity Catalog Requirements
For Unity Catalog prompt registries, you must specify both catalog and schema:
# REQUIRED format - list all prompts in a catalog.schema
results = mlflow.genai.search_prompts("catalog = 'mycatalog' AND schema = 'myschema'")
# This is the ONLY supported filter format
results = mlflow.genai.search_prompts("catalog = 'rohit' AND schema = 'default'")
Limitations
The following filters are NOT supported in Unity Catalog:
- Name patterns: name LIKE '%pattern%' ❌
- Tag filtering: tags.field = 'value' ❌
- Exact name matching: name = 'specific.name' ❌
- Combined filters beyond catalog + schema ❌
To find specific prompts, use the returned list and filter programmatically:
# Get all prompts in schema
all_prompts = mlflow.genai.search_prompts("catalog = 'mycatalog' AND schema = 'myschema'")
# Filter programmatically
customer_prompts = [p for p in all_prompts if 'customer' in p.name.lower()]
tagged_prompts = [p for p in all_prompts if p.tags.get('team') == 'support']
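If you apply these client-side filters regularly, a small helper keeps call sites tidy. This is a hypothetical sketch (filter_prompts is not an MLflow API); it assumes each search result exposes a .name string and a .tags dict, as the snippet above does:

```python
from types import SimpleNamespace  # only used for the demo objects below


def filter_prompts(prompts, name_contains=None, tag=None):
    """Hypothetical client-side filter over search_prompts() results.
    Assumes each item exposes a .name string and a .tags dict."""
    matches = []
    for p in prompts:
        if name_contains is not None and name_contains.lower() not in p.name.lower():
            continue  # name filter did not match
        if tag is not None:
            key, value = tag
            if p.tags.get(key) != value:
                continue  # tag filter did not match
        matches.append(p)
    return matches


# Demo with stand-in objects shaped like search results
demo = [
    SimpleNamespace(name="mycatalog.myschema.customer_bot", tags={"team": "support"}),
    SimpleNamespace(name="mycatalog.myschema.summarizer", tags={"team": "content"}),
]
print([p.name for p in filter_prompts(demo, name_contains="customer")])
# ['mycatalog.myschema.customer_bot']
```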
set_prompt_alias()
Creates or updates an alias pointing to a specific version.
mlflow.genai.set_prompt_alias(
    name: str,
    alias: str,
    version: int
) -> None
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name | str | Yes | Unity Catalog name of the prompt |
| alias | str | Yes | Alias name (e.g., "production", "staging") |
| version | int | Yes | Version number to point the alias to |
Example
# Promote version 3 to production
mlflow.genai.set_prompt_alias(
    name="mycatalog.myschema.chat_assistant",
    alias="production",
    version=3,
)

# Set up staging for testing
mlflow.genai.set_prompt_alias(
    name="mycatalog.myschema.chat_assistant",
    alias="staging",
    version=4,
)
delete_prompt_alias()
Removes an alias without affecting the underlying version.
mlflow.genai.delete_prompt_alias(
    name: str,
    alias: str
) -> None
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name | str | Yes | Unity Catalog name of the prompt |
| alias | str | Yes | Alias name to delete |
delete_prompt() and delete_prompt_version()
Delete an entire prompt or specific versions of it.
delete_prompt_version()
Deletes a specific version of a prompt:
from mlflow import MlflowClient
client = MlflowClient()
client.delete_prompt_version(name: str, version: str) -> None
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name | str | Yes | Unity Catalog name of the prompt |
| version | str | Yes | Version to delete (as a string) |
delete_prompt()
Deletes an entire prompt from the registry:
from mlflow import MlflowClient
client = MlflowClient()
client.delete_prompt(name: str) -> None
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name | str | Yes | Unity Catalog name of the prompt |
Important Notes:
- For Unity Catalog registries, delete_prompt() will fail if any versions still exist. All versions must be deleted first using delete_prompt_version().
- For other registry types, delete_prompt() works normally without version checking.
Example
from mlflow import MlflowClient
client = MlflowClient()
# Delete specific versions first (required for Unity Catalog)
client.delete_prompt_version("mycatalog.myschema.chat_assistant", "1")
client.delete_prompt_version("mycatalog.myschema.chat_assistant", "2")
client.delete_prompt_version("mycatalog.myschema.chat_assistant", "3")
# Then delete the entire prompt
client.delete_prompt("mycatalog.myschema.chat_assistant")
# For convenience with Unity Catalog, you can also search and delete all versions:
search_response = client.search_prompt_versions("mycatalog.myschema.chat_assistant")
for version in search_response.prompt_versions:
    client.delete_prompt_version("mycatalog.myschema.chat_assistant", str(version.version))
client.delete_prompt("mycatalog.myschema.chat_assistant")
Data model
The Prompt Registry follows a Git-like model:
- Prompts: Named entities in Unity Catalog
- Versions: Immutable snapshots with auto-incrementing numbers
- Aliases: Mutable pointers to specific versions
- Tags: Version-specific key-value pairs
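The relationships between versions and aliases can be sketched with a toy in-memory registry (an illustration of the model only, not MLflow's implementation):

```python
class ToyRegistry:
    """Toy illustration of the registry's Git-like data model."""

    def __init__(self):
        self._versions = {}  # prompt name -> list of immutable template snapshots
        self._aliases = {}   # (name, alias) -> version number (mutable pointer)

    def register(self, name, template):
        self._versions.setdefault(name, []).append(template)
        return len(self._versions[name])  # auto-incrementing version number

    def set_alias(self, name, alias, version):
        self._aliases[(name, alias)] = version  # repoint without copying anything

    def load(self, name, alias):
        return self._versions[name][self._aliases[(name, alias)] - 1]


reg = ToyRegistry()
v1 = reg.register("demo", "Hello {{name}}")  # version 1
v2 = reg.register("demo", "Hi {{name}}!")    # version 2
reg.set_alias("demo", "production", v1)
print(reg.load("demo", "production"))         # Hello {{name}}
reg.set_alias("demo", "production", v2)       # rollout: just move the pointer
print(reg.load("demo", "production"))         # Hi {{name}}!
```

Because aliases are only pointers, promoting or rolling back a prompt never copies or mutates a version.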
Common patterns
Template variables
Use double-brace syntax for variables:
# Single variable
simple_prompt = mlflow.genai.register_prompt(
    name="mycatalog.myschema.greeting",
    template="Hello {{name}}, how can I help you today?",
    commit_message="Simple greeting",
)

# Multiple variables with formatting
complex_prompt = mlflow.genai.register_prompt(
    name="mycatalog.myschema.analysis",
    template="""Analyze the following {{data_type}} data:
{{data}}
Consider these factors:
{{factors}}
Output format: {{output_format}}""",
    commit_message="Multi-variable analysis template",
)

# Use the prompt
rendered = complex_prompt.render(
    data_type="sales",
    data="Q1: $1.2M, Q2: $1.5M...",
    factors="seasonality, market trends",
    output_format="bullet points",
)
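Conceptually, rendering is a plain substitution of the double-brace placeholders. A minimal sketch of the idea (illustration only; PromptVersion.render() is the real API):

```python
import re


def render_double_brace(template: str, **values) -> str:
    """Minimal sketch of double-brace substitution; not MLflow's implementation."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",               # match {{ variable }} with optional spaces
        lambda m: str(values[m.group(1)]),     # substitute the supplied value
        template,
    )


print(render_double_brace("Hello {{name}}, how can I help?", name="Ada"))
# Hello Ada, how can I help?
```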
Version management workflows
# Development workflow
from datetime import datetime

def develop_prompt(base_name: str, changes: str):
    """Iterate on prompts during development"""
    # Register new version
    new_version = mlflow.genai.register_prompt(
        name=base_name,
        template=changes,
        commit_message=f"Dev iteration: {datetime.now()}",
    )
    # Update dev alias
    mlflow.genai.set_prompt_alias(
        name=base_name,
        alias="dev",
        version=new_version.version,
    )
    return new_version

# Promotion workflow
def promote_prompt(name: str, from_env: str, to_env: str):
    """Promote a prompt from one environment to another"""
    # Get current version in source environment
    source = mlflow.genai.load_prompt(f"prompts:/{name}@{from_env}")
    # Point target environment to the same version
    mlflow.genai.set_prompt_alias(
        name=name,
        alias=to_env,
        version=source.version,
    )
    print(f"Promoted {name} v{source.version} from {from_env} to {to_env}")
Alias strategies
# Standard environment aliases
ENVIRONMENT_ALIASES = ["dev", "staging", "production"]

# Feature branch aliases
def create_feature_alias(prompt_name: str, feature: str, version: int):
    """Create an alias for feature development"""
    mlflow.genai.set_prompt_alias(
        name=prompt_name,
        alias=f"feature-{feature}",
        version=version,
    )

# Regional aliases
REGIONAL_ALIASES = {
    "us": "production-us",
    "eu": "production-eu",
    "asia": "production-asia",
}

# Rollback-ready aliases
def safe_production_update(name: str, new_version: int):
    """Update production with rollback capability"""
    try:
        # Save current production so it can be restored
        current = mlflow.genai.load_prompt(f"prompts:/{name}@production")
        mlflow.genai.set_prompt_alias(name, "production-previous", current.version)
    except Exception:
        pass  # No current production
    # Update production
    mlflow.genai.set_prompt_alias(name, "production", new_version)
Framework integrations
LangChain
from langchain_core.prompts import ChatPromptTemplate
# Load from registry
mlflow_prompt = mlflow.genai.load_prompt("prompts:/mycatalog.myschema.chat@production")
# Convert to LangChain format
langchain_template = mlflow_prompt.to_single_brace_format()
chat_prompt = ChatPromptTemplate.from_template(langchain_template)
# Use in chain
chain = chat_prompt | llm | output_parser
LlamaIndex
from llama_index.core import PromptTemplate
# Load and convert
mlflow_prompt = mlflow.genai.load_prompt("prompts:/mycatalog.myschema.rag@production")
llama_template = PromptTemplate(mlflow_prompt.to_single_brace_format())
# Use in query engine
query_engine.update_prompts({"response_synthesizer:text_qa_template": llama_template})
Next Steps
- Create and edit prompts - Get started with your first prompt
- Evaluate prompts - Compare prompt versions systematically
- Use prompts in deployed apps - Deploy prompts to production with aliases