Generative AI models maintenance policy

This article describes the model maintenance policy for the Foundation Model APIs pay-per-token and Foundation Model Fine-tuning offerings.

To continue supporting state-of-the-art models, Databricks might update supported models or retire older models in the Foundation Model APIs pay-per-token and Foundation Model Fine-tuning offerings.

Model retirement policy

The following retirement policy applies only to supported chat and completion models in the Foundation Model APIs pay-per-token and Foundation Model Fine-tuning offerings.

When a model is retired, it is no longer available for use and is removed from the indicated feature offerings. Databricks takes the following steps to notify customers about a model that is set for retirement:

  • A warning message on the model card on the Serving page of your Databricks workspace indicates that the model is planned for retirement.

  • A warning message in the Foundation Model Fine-tuning dropdown menu on the Experiments tab indicates that the model is planned for retirement.

  • The applicable documentation contains a notice that indicates the model is planned for retirement and the date after which it is no longer supported.

After users are notified of an upcoming model retirement, Databricks retires the model three months later. During this three-month period, customers can either:

  • Migrate to a provisioned throughput endpoint to continue using the model past its end-of-life date (a sketch follows this list).

  • Migrate existing workflows to use recommended replacement models.
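
Migrating to provisioned throughput typically means creating a new serving endpoint backed by the same model. The following is a minimal sketch using the Databricks serving endpoints REST API; the endpoint name, Unity Catalog model path, model version, and throughput values are illustrative assumptions, not prescribed values.

```python
import requests

# Illustrative placeholders; replace with your workspace host and token.
WORKSPACE_URL = "https://<your-workspace-host>"
TOKEN = "<databricks-personal-access-token>"

# Create a provisioned throughput serving endpoint for a model you want
# to keep using past its pay-per-token end-of-life date.
resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/serving-endpoints",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "my-llama-provisioned",  # hypothetical endpoint name
        "config": {
            "served_entities": [
                {
                    # Hypothetical Unity Catalog path and version of the model.
                    "entity_name": "system.ai.meta_llama_v3_1_70b_instruct",
                    "entity_version": "1",
                    # Throughput range in tokens per second; example values only.
                    "min_provisioned_throughput": 0,
                    "max_provisioned_throughput": 9500,
                }
            ]
        },
    },
)
resp.raise_for_status()
print(resp.json())
```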

On the retirement date, the model is removed from the product, and applicable documentation is updated to recommend using a replacement model.

See Retired models for a list of currently retired models and planned retirement dates.

Model updates

Databricks might ship incremental updates to pay-per-token models to deliver optimizations. When a model is updated, the endpoint URL stays the same, but the model name in the response object changes to reflect the date of the update. For example, if an update ships to meta-llama/Meta-Llama-3.1-405B on 3/4/2024, the model name in the response object becomes meta-llama/Meta-Llama-3.1-405B-030424. Databricks maintains a version history of these updates that you can refer to.
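
To see which model build served a request, you can inspect the model field of the response. The following is a minimal sketch assuming an OpenAI-compatible client pointed at a Databricks workspace; the workspace host, token, and prompt are placeholders, and the endpoint name stands in for whatever pay-per-token endpoint you query.

```python
from openai import OpenAI

# Illustrative placeholders; replace with your workspace host and token.
client = OpenAI(
    api_key="<databricks-personal-access-token>",
    base_url="https://<your-workspace-host>/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-meta-llama-3-1-405b-instruct",  # example endpoint name
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=32,
)

# The model field identifies the exact build that served the request,
# including any date suffix added by an incremental update.
print(response.model)
```

If your client logs or pins model names, account for this date suffix when comparing versions.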

Retired models

The following sections summarize current and upcoming model retirements for the Foundation Model APIs pay-per-token and Foundation Model Fine-tuning offerings.

Foundation Model Fine-tuning retirements

The following table shows retired model families, their retirement dates, and recommended replacement model families to use for Foundation Model Fine-tuning workloads. Databricks recommends that you migrate your applications to use replacement models before the indicated retirement date.

| Model family | Retirement date | Recommended replacement model family |
| --- | --- | --- |
| Meta-Llama-3 | January 7, 2025 | Meta-Llama-3.1 |
| Meta-Llama-2 | January 7, 2025 | Meta-Llama-3.1 |
| Code Llama | January 7, 2025 | Meta-Llama-3.1 |

Foundation Model APIs pay-per-token retirements

The following table shows model retirements, their retirement dates, and recommended replacement models to use for Foundation Model APIs pay-per-token serving workloads. Databricks recommends that you migrate your applications to use replacement models before the indicated retirement date.

Important

On December 11, 2024, Meta-Llama-3.3-70B-Instruct replaced Meta-Llama-3.1-70B-Instruct in Foundation Model APIs pay-per-token endpoints.

| Model | Retirement date | Recommended replacement model |
| --- | --- | --- |
| Meta-Llama-3.1-70B-Instruct | December 11, 2024 | Meta-Llama-3.3-70B-Instruct |
| Meta-Llama-3-70B-Instruct | July 23, 2024 | Meta-Llama-3.1-70B-Instruct |
| Meta-Llama-2-70B-Chat | October 30, 2024 | Meta-Llama-3.1-70B-Instruct |
| MPT 7B Instruct | August 30, 2024 | Mixtral-8x7B |
| MPT 30B Instruct | August 30, 2024 | Mixtral-8x7B |

If you require long-term support for a specific model version, Databricks recommends using Foundation Model APIs provisioned throughput for your serving workloads.