Create and edit prompts
This feature is in Beta.
This guide shows you how to create new prompts and manage their versions in the MLflow Prompt Registry using the MLflow Python SDK. All of the code on this page is included in the example notebook.
Prerequisites
1. Install MLflow and required packages:

pip install --upgrade "mlflow[databricks]>=3.1.0" openai

2. Create an MLflow experiment by following the set up your environment quickstart.

3. Create or identify a Unity Catalog schema for storing prompts. To view or create prompts, you must have the CREATE FUNCTION, EXECUTE, and MANAGE privileges on the Unity Catalog schema. If you are using a Databricks trial account, you already have the required privileges on the workspace.default schema.
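If you need to grant these privileges on a schema you own, you (or a metastore admin) can do so from a Databricks notebook. The following is a minimal sketch; the schema name and principal are placeholders to replace with your own:

# Minimal sketch: grant the privileges required by the Prompt Registry.
# Run as the schema owner or a metastore admin in a Databricks notebook,
# where `spark` is available as a global.
# "workspace.default" and the group email below are placeholder values.
for privilege in ["CREATE FUNCTION", "EXECUTE", "MANAGE"]:
    spark.sql(
        f"GRANT {privilege} ON SCHEMA workspace.default "
        "TO `data-science-team@company.com`"
    )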
Step 1. Create a new prompt
Create prompts programmatically with mlflow.genai.register_prompt(). Prompt templates use double-brace syntax ({{variable}}) for variables that are filled in at runtime.
import mlflow

# Replace with a Unity Catalog schema where you have CREATE FUNCTION, EXECUTE, and MANAGE privileges
uc_schema = "workspace.default"
# This prompt will be created in the UC schema specified in the previous line
prompt_name = "summarization_prompt"

# Define the prompt template with variables
initial_template = """\
Summarize content you are provided with in {{num_sentences}} sentences.
Content: {{content}}
"""

# Register a new prompt
prompt = mlflow.genai.register_prompt(
    name=f"{uc_schema}.{prompt_name}",
    template=initial_template,
    # All parameters below are optional
    commit_message="Initial version of summarization prompt",
    tags={
        "author": "data-science-team@company.com",
        "use_case": "document_summarization",
        "task": "summarization",
        "language": "en",
        "model_compatibility": "gpt-4",
    },
)

print(f"Created prompt '{prompt.name}' (version {prompt.version})")
Step 2. Use the prompt in your application
The following steps create a simple application that uses your prompt template.
Load the prompt from the registry.
# Load a specific version using URI syntax
prompt = mlflow.genai.load_prompt(name_or_uri=f"prompts:/{uc_schema}.{prompt_name}/1")
# Alternative syntax without URI
prompt = mlflow.genai.load_prompt(name_or_uri=f"{uc_schema}.{prompt_name}", version="1")
Use the prompt in your application.
1. Initialize an OpenAI client to connect to either Databricks-hosted LLMs or LLMs hosted by OpenAI.

Databricks-hosted LLMs

Use MLflow to get an OpenAI client that connects to Databricks-hosted LLMs. Select a model from the available foundation models.
import mlflow
from databricks.sdk import WorkspaceClient
# Enable MLflow's autologging to instrument your application with Tracing
mlflow.openai.autolog()
# Set up MLflow tracking to Databricks
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Shared/docs-demo")
# Create an OpenAI client that is connected to Databricks-hosted LLMs
w = WorkspaceClient()
client = w.serving_endpoints.get_open_ai_client()
# Select an LLM
model_name = "databricks-claude-sonnet-4"Use the native OpenAI SDK to connect to OpenAI-hosted models. Select a model from the available OpenAI models.
import mlflow
import os
import openai
# Ensure your OPENAI_API_KEY is set in your environment
# os.environ["OPENAI_API_KEY"] = "<YOUR_API_KEY>" # Uncomment and set if not globally configured
# Enable auto-tracing for OpenAI
mlflow.openai.autolog()
# Set up MLflow tracking to Databricks
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Shared/docs-demo")
# Create an OpenAI client connected to the OpenAI API
client = openai.OpenAI()
# Select an LLM
model_name = "gpt-4o-mini" -
Define your application:
Python# Use the trace decorator to capture the application's entry point
@mlflow.trace
def my_app(content: str, num_sentences: int):
    # Fill in the template variables
    formatted_prompt = prompt.format(
        content=content,
        num_sentences=num_sentences,
    )

    response = client.chat.completions.create(
        # This example uses a Databricks-hosted LLM. You can replace model_name
        # with any AI Gateway or Model Serving endpoint, or with a valid OpenAI
        # model (for example, gpt-4o) if you provide your own OpenAI credentials.
        model=model_name,
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant.",
            },
            {
                "role": "user",
                "content": formatted_prompt,
            },
        ],
    )
    return response.choices[0].message.content


result = my_app(
    content=(
        "This guide shows you how to integrate prompts from the MLflow Prompt "
        "Registry into your GenAI applications. You'll learn to load prompts, "
        "format them with dynamic data, and ensure complete lineage by linking "
        "prompt versions to your MLflow Models."
    ),
    num_sentences=1,
)
print(result)
Step 3. Edit the prompt
Prompt versions are immutable after they are created. To edit a prompt, you must create a new version. This Git-like versioning ensures complete history and enables rollbacks.
Create a new version by calling mlflow.genai.register_prompt()
with an existing prompt name:
import mlflow

# Define the improved template
new_template = """\
You are an expert summarizer. Condense the following content into exactly {{num_sentences}} clear and informative sentences that capture the key points.
Content: {{content}}
Your summary should:
- Contain exactly {{num_sentences}} sentences
- Include only the most important information
- Be written in a neutral, objective tone
- Maintain the same level of formality as the original text
"""

# Register a new version
updated_prompt = mlflow.genai.register_prompt(
    name=f"{uc_schema}.{prompt_name}",
    template=new_template,
    commit_message="Added detailed instructions for better output quality",
    tags={
        "author": "data-science-team@company.com",
        "improvement": "Added specific guidelines for summary quality",
    },
)

print(f"Created version {updated_prompt.version} of '{updated_prompt.name}'")
Step 4. Use the new prompt
The following code shows how to load the new version of the prompt.
# Load a specific version using URI syntax
prompt = mlflow.genai.load_prompt(name_or_uri=f"prompts:/{uc_schema}.{prompt_name}/2")
# Alternative syntax without URI
prompt = mlflow.genai.load_prompt(name_or_uri=f"{uc_schema}.{prompt_name}", version="2")
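Because my_app reads the module-level prompt variable, re-running the application after reloading picks up version 2 automatically. For example, with an illustrative input:

# Re-run the application; it now formats requests with version 2 of the prompt
result = my_app(
    content="MLflow Prompt Registry versions prompts so teams can iterate safely.",
    num_sentences=1,
)
print(result)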
Step 5. Search and discover prompts
To find prompts in your Unity Catalog schema:
# REQUIRED format for Unity Catalog - specify catalog and schema
results = mlflow.genai.search_prompts("catalog = 'workspace' AND schema = 'default'")
# Using variables for your schema
catalog_name = uc_schema.split('.')[0] # 'workspace'
schema_name = uc_schema.split('.')[1] # 'default'
results = mlflow.genai.search_prompts(f"catalog = '{catalog_name}' AND schema = '{schema_name}'")
# Limit results
results = mlflow.genai.search_prompts(
    filter_string=f"catalog = '{catalog_name}' AND schema = '{schema_name}'",
    max_results=50,
)
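search_prompts() returns a list of prompt entities. A short sketch that prints the name of each result, assuming each entity exposes a name attribute like the prompt objects used above:

# Print the name of each prompt found in the schema
for p in results:
    print(p.name)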
Example notebook
Create and edit prompts example notebook
Next steps
- Evaluate prompt versions - Compare different prompt versions to identify the best performer.
- Track prompts with app versions - Link prompt versions to your application versions.
- Use prompts in deployed apps - Deploy prompts to production with aliases.