Use prompts in deployed applications

Beta

This feature is in Beta.

This guide shows you how to use prompts from the MLflow Prompt Registry in your production GenAI applications.

When deploying GenAI applications, configure them to load prompts from the MLflow Prompt Registry using aliases rather than hard-coded versions. This approach enables dynamic updates without redeployment.

Prerequisites

  1. Install MLflow and required packages

    Bash
    pip install --upgrade "mlflow[databricks]>=3.1.0"
  2. Create an MLflow experiment by following the set up your environment quickstart.

  3. Access to a Unity Catalog schema with CREATE FUNCTION permissions

    • Why? Prompts are stored in Unity Catalog as functions
note

A Unity Catalog schema with CREATE FUNCTION permissions is required to use the prompt registry. If you are using a Databricks trial account, you have CREATE FUNCTION permissions on the Unity Catalog schema workspace.default.

Step 1. Create a new prompt

Create prompts programmatically with the Python SDK using mlflow.genai.register_prompt(). Prompt templates use double-brace syntax ({{variable}}) for template variables.

Python
import mlflow

# Replace with a Unity Catalog schema where you have CREATE FUNCTION permission
uc_schema = "workspace.default"
# The prompt will be registered under this name in the UC schema above
prompt_name = "summarization_prompt"

# Define the prompt template with variables
initial_template = """\
Summarize content you are provided with in {{num_sentences}} sentences.

Content: {{content}}
"""

# Register a new prompt
prompt = mlflow.genai.register_prompt(
    name=f"{uc_schema}.{prompt_name}",
    template=initial_template,
    # All parameters below are optional
    commit_message="Initial version of summarization prompt",
    version_metadata={
        "author": "data-science-team@company.com",
        "use_case": "document_summarization",
    },
    tags={
        "task": "summarization",
        "language": "en",
        "model_compatibility": "gpt-4",
    },
)

print(f"Created prompt '{prompt.name}' (version {prompt.version})")
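At render time, each double-brace placeholder is replaced by a value you supply. As a rough local illustration of that substitution (a plain-Python stand-in, not the MLflow rendering code):

```python
import re

# Hypothetical stand-in for template rendering; MLflow performs the
# substitution for you when you render a registered prompt.
def render(template: str, **values) -> str:
    # Replace each {{name}} placeholder with the matching keyword value
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(values[m.group(1)]), template)

template = "Summarize content you are provided with in {{num_sentences}} sentences.\n\nContent: {{content}}"
print(render(template, num_sentences=2, content="MLflow is an open source MLOps platform."))
```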

Step 2. Add an alias to the prompt version

Aliases allow you to assign a static string tag to a specific prompt version, making it easier to reference prompts in production applications. Instead of hardcoding version numbers, you can use meaningful aliases like production, staging, or development. When you need to update your production prompt, simply reassign the production alias to point to a newer version without changing or redeploying your application code.

Python
import mlflow

mlflow.genai.set_prompt_alias(
    name=f"{uc_schema}.{prompt_name}",
    alias="production",
    version=1,
)
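Conceptually, an alias is a mutable pointer from a (prompt name, alias) pair to a version number. A toy in-memory model of these semantics (illustrative only, not the MLflow API) shows why reassigning the alias updates production without a redeploy:

```python
# Toy in-memory model of alias semantics; names are illustrative only.
aliases: dict[tuple[str, str], int] = {}

def set_prompt_alias(name: str, alias: str, version: int) -> None:
    # Point the alias at the given version, replacing any earlier target
    aliases[(name, alias)] = version

name = "workspace.default.summarization_prompt"
set_prompt_alias(name, "production", 1)
# Later: promote version 2; apps that resolve "production" pick it up
set_prompt_alias(name, "production", 2)
print(aliases[(name, "production")])  # 2
```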

Step 3. Reference the prompt in your app

Once you've registered your prompt and assigned an alias, you can reference it in your deployed applications using the prompt URI format. The recommended approach is to use environment variables to make your application flexible and avoid hardcoding prompt references.

The prompt URI format is: prompts:/{catalog}.{schema}.{prompt_name}@{alias}

Using the prompt we registered in Step 1, the URI would be:

  • prompts:/workspace.default.summarization_prompt@production

Here's how to reference the prompt in your application:

Python
import os

import mlflow


class ProductionApp:
    def __init__(self):
        # Use environment variables for flexibility
        self.prompt_alias = os.getenv("PROMPT_ALIAS", "production")
        self.prompt_name = os.getenv("PROMPT_NAME", "workspace.default.summarization_prompt")

    def get_prompt(self):
        """Load the prompt version from the registry using its alias."""
        uri = f"prompts:/{self.prompt_name}@{self.prompt_alias}"
        return mlflow.genai.load_prompt(uri)

# Rest of your application's code

# Example usage
app = ProductionApp()
prompt = app.get_prompt()
print(f"Loaded prompt: {prompt.template}")
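Because the URI is assembled from plain strings, you can sanity-check it locally without contacting the registry. A minimal sketch, mirroring the values used above:

```python
# Check the URI format locally; values mirror the example above.
prompt_name = "workspace.default.summarization_prompt"
prompt_alias = "production"
uri = f"prompts:/{prompt_name}@{prompt_alias}"
print(uri)  # prompts:/workspace.default.summarization_prompt@production
```

Once loaded, the prompt's template can typically be rendered by passing values for its template variables (for this prompt, num_sentences and content) to the prompt object's format method.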

Next Steps