Query an embedding model
In this article, you learn how to write query requests for foundation models that are optimized for embeddings tasks and send them to your model serving endpoint.
The examples in this article apply to querying foundation models that are made available using either:
- Foundation Model APIs, which are referred to as Databricks-hosted foundation models.
- External models, which are referred to as foundation models hosted outside of Databricks.
Requirements
- See Requirements.
- Install the appropriate package to your cluster based on the querying client option you choose.
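For example, depending on the querying client you choose, the install is typically one of the following (these are the standard PyPI package names, not taken from this article; verify against your client's documentation):

```shell
pip install openai          # OpenAI client
pip install mlflow          # MLflow Deployments SDK
pip install databricks-sdk  # Databricks Python SDK
```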
Query examples
The following examples show embeddings requests for an endpoint that serves an embeddings model made available using external models. The code uses cohere-embeddings-endpoint as the endpoint name.
- OpenAI client
- REST API
- MLflow Deployments SDK
- Databricks Python SDK
To use the OpenAI client, specify the model serving endpoint name as the `model` input.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
openai_client = w.serving_endpoints.get_open_ai_client()

response = openai_client.embeddings.create(
    model="cohere-embeddings-endpoint",
    input="what is databricks"
)
```
To query foundation models outside your workspace, you must use the OpenAI client directly, as demonstrated below. The following example assumes you have a Databricks API token and the `openai` package installed on your compute. You also need your Databricks workspace instance URL to connect the OpenAI client to Databricks.
```python
from openai import OpenAI

client = OpenAI(
    api_key="dapi-your-databricks-token",
    base_url="https://example.staging.cloud.databricks.com/serving-endpoints"
)

response = client.embeddings.create(
    model="cohere-embeddings-endpoint",
    input="what is databricks"
)
```
The following example uses REST API parameters for querying serving endpoints that serve external models. These parameters are in Public Preview and the definition might change. See POST /serving-endpoints/{name}/invocations.
```shell
curl \
  -u token:$DATABRICKS_TOKEN \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{ "input": "Embed this sentence!"}' \
  https://<workspace_host>.databricks.com/serving-endpoints/<your-embedding-model-endpoint>/invocations
```
The following example uses the `predict()` API from the MLflow Deployments SDK. Set the `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables before running the Python code:

```shell
export DATABRICKS_HOST="https://<workspace_host>.databricks.com"
export DATABRICKS_TOKEN="dapi-your-databricks-token"
```

```python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

embeddings_response = client.predict(
    endpoint="cohere-embeddings-endpoint",
    inputs={
        "input": "Here is some text to embed"
    }
)
```
The following example uses the `WorkspaceClient` from the Databricks Python SDK to query the endpoint.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

response = w.serving_endpoints.query(
    name="cohere-embeddings-endpoint",
    input="Embed this sentence!"
)
print(response.data[0].embedding)
```
The following is the expected request format for an embeddings model. For external models, you can include additional parameters that are valid for a given provider and endpoint configuration. See Additional query parameters.
```json
{
  "input": [
    "embedding text"
  ]
}
```
The following is the expected response format:
```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": []
    }
  ],
  "model": "text-embedding-ada-002-v2",
  "usage": {
    "prompt_tokens": 2,
    "total_tokens": 2
  }
}
```
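Given a response in this shape, extracting the embedding vectors is straightforward. A minimal sketch (the `response` dict here is hand-built to mirror the format above, with a made-up three-value embedding):

```python
# Hand-built example response mirroring the documented format.
response = {
    "object": "list",
    "data": [
        {"object": "embedding", "index": 0, "embedding": [0.1, 0.2, 0.3]},
    ],
    "model": "text-embedding-ada-002-v2",
    "usage": {"prompt_tokens": 2, "total_tokens": 2},
}

# Collect vectors in index order; each entry in "data" corresponds
# to one input string from the request.
vectors = [
    item["embedding"]
    for item in sorted(response["data"], key=lambda d: d["index"])
]
print(vectors[0])  # [0.1, 0.2, 0.3]
```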
Supported models
See Foundation model types for supported embedding models.
Check whether embeddings are normalized
Use the following to check if the embeddings generated by your model are normalized.
```python
import numpy as np

def is_normalized(vector: list[float], tol=1e-3) -> bool:
    # An embedding is normalized when its L2 norm (magnitude) is 1.
    magnitude = np.linalg.norm(vector)
    return abs(magnitude - 1) < tol
```
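If the check reports that your embeddings are not normalized, you can normalize them yourself before workloads that assume unit vectors, such as cosine similarity. A minimal sketch (the `normalize` helper is an illustration, not part of any SDK):

```python
import numpy as np

def normalize(vector: list[float]) -> list[float]:
    # Divide by the L2 norm so the resulting vector has magnitude 1.
    arr = np.asarray(vector, dtype=float)
    return (arr / np.linalg.norm(arr)).tolist()

unit = normalize([3.0, 4.0])  # [0.6, 0.8]
```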