Foundation Model Fine-tuning

Important

This feature is in Public Preview in the following regions: centralus, eastus, eastus2, northcentralus, and westus.

With Foundation Model Fine-tuning (now part of Mosaic AI Model Training), you can use your own data to customize a foundation model to optimize its performance for your specific application. By conducting full parameter fine-tuning or continuing training of a foundation model, you can train your own model using significantly less data, time, and compute resources than training a model from scratch.

With Databricks you have everything in a single platform: your own data to use for training, the foundation model to train, checkpoints saved to MLflow, and the model registered in Unity Catalog and ready to deploy.

See Tutorial: Create and deploy a Foundation Model Fine-tuning run to learn how to create a run using the Foundation Model Fine-tuning API, and then review the results and deploy the model using the Databricks UI and Mosaic AI Model Serving.

What is Foundation Model Fine-tuning?

Foundation Model Fine-tuning lets you use the Databricks API or UI to tune or further train a foundation model.

Using Foundation Model Fine-tuning, you can:

  • Train a model with your custom data, with the checkpoints saved to MLflow. You retain complete control of the trained model.
  • Automatically register the model to Unity Catalog, allowing easy deployment with model serving.
  • Further train a completed, proprietary model by loading the weights of a previously trained model.

Databricks recommends that you try Foundation Model Fine-tuning if:

  • You have tried few-shot learning and want better results.
  • You have tried prompt engineering on an existing model and want better results.
  • You want full ownership over a custom model for data privacy.
  • You are latency-sensitive or cost-sensitive and want to use a smaller, cheaper model with your task-specific data.

Supported tasks

Foundation Model Fine-tuning supports the following use cases (example training records for each are sketched after this list):

  • Chat completion: Recommended task. Train your model on chat logs between a user and an AI assistant. This format can be used both for actual chat logs and as a standard format for question answering and conversational text. The text is automatically converted into the appropriate chat format for the specific model. See example chat templates in the Hugging Face documentation for more information on templating.
  • Supervised fine-tuning: Train your model on structured prompt-response data. Use this to adapt your model to a new task, change its response style, or add instruction-following capabilities. This task does not automatically apply any formatting to your data and is only recommended when custom data formatting is required.
  • Continued pre-training: Train your model with additional text data. Use this to add new knowledge to a model or focus a model on a specific domain.
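
As a sketch of what training records for these tasks can look like, the following hypothetical snippet writes one JSONL record for chat completion and one for supervised fine-tuning. The field names follow the formats described in Prepare data for Foundation Model Fine-tuning; the file names and record contents are placeholders, not part of the product API.

import json

# Chat completion: each record is a list of role/content messages.
chat_record = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Unity Catalog?"},
        {"role": "assistant", "content": "Unity Catalog is the Databricks governance layer for data and AI assets."},
    ]
}

# Supervised fine-tuning: each record is a structured prompt-response pair.
sft_record = {
    "prompt": "Summarize this support ticket: ...",
    "response": "The customer reports ...",
}

# Placeholder output files; in practice, upload JSONL files to a Unity Catalog volume.
with open("train_chat.jsonl", "w") as f:
    f.write(json.dumps(chat_record) + "\n")

with open("train_sft.jsonl", "w") as f:
    f.write(json.dumps(sft_record) + "\n")

# Continued pre-training uses raw text (for example, .txt files in a Unity
# Catalog volume) rather than JSONL records.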

Requirements

  • A Databricks workspace in one of the following Azure regions: centralus, eastus, eastus2, northcentralus, or westus.
  • Foundation Model Fine-tuning APIs installed using pip install databricks_genai, as shown after this list.
  • Databricks Runtime 12.2 LTS ML or above if your data is in a Delta table.
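
In a Databricks notebook, a minimal install might look like the following sketch. %pip and dbutils.library.restartPython() are standard notebook commands; restarting Python ensures the newly installed package is picked up in the current session.

%pip install databricks_genai
dbutils.library.restartPython()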

See Prepare data for Foundation Model Fine-tuning for information about required input data formats.

Recommended data size for model training

Databricks recommends initially training for one to four epochs. After evaluating your fine-tuned model, if you want the model outputs to be more similar to your training data, you can continue training for one to two more epochs.

If the model performance significantly decreases on tasks not represented in your fine-tuning data, or if the model appears to output exact copies of your fine-tuning data, Databricks recommends reducing the number of training epochs.

For supervised fine-tuning and chat completion, you should provide enough tokens for at least one full context length of the model. For example, 4096 tokens for meta-llama/Llama-2-7b-chat-hf or 32768 tokens for mistralai/Mistral-7B-v0.1.

For continued pre-training, Databricks recommends a minimum of 1.5 million tokens to get a higher quality model that learns your custom data.
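
As a rough way to check these guidelines, the following sketch counts the approximate number of tokens in a chat-formatted JSONL file. It assumes the transformers package is installed and that you have access to the base model's tokenizer on Hugging Face; the model name and file path are placeholders for your own base model and training data.

import json

from transformers import AutoTokenizer

# Placeholder tokenizer and file path; substitute your base model and data.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

total_tokens = 0
with open("/Volumes/main/mydirectory/ift/train.jsonl") as f:
    for line in f:
        record = json.loads(line)
        text = " ".join(message["content"] for message in record["messages"])
        total_tokens += len(tokenizer.encode(text))

print(f"Approximate training tokens: {total_tokens}")
# Compare this total against the base model's context length for chat completion
# or supervised fine-tuning, or against the ~1.5 million token guideline for
# continued pre-training.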

Supported models

The following table lists the supported models. For the latest supported models and their associated context lengths, use the get_models() function.


from databricks.model_training import foundation_model

# List the currently supported models and their maximum context lengths
foundation_model.get_models()

Important

Meta Llama 3.2 is licensed under the LLAMA 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring their compliance with the terms of this license and the Llama 3.2 Acceptable Use Policy.

Meta Llama 3.1 is licensed under the LLAMA 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring compliance with applicable model licenses.

Llama 3 is licensed under the LLAMA 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring compliance with applicable model licenses.

Llama 2 and Code Llama models are licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring compliance with applicable model licenses.

DBRX is provided under and subject to the Databricks Open Model License, Copyright © Databricks, Inc. All rights reserved. Customers are responsible for ensuring compliance with applicable model licenses, including the Databricks Acceptable Use policy.

Model | Maximum context length | Notes
databricks/dbrx-base | 32768 |
databricks/dbrx-instruct | 32768 |
meta-llama/Llama-3.2-1B | 131072 |
meta-llama/Llama-3.2-1B-Instruct | 131072 |
meta-llama/Llama-3.2-3B | 131072 |
meta-llama/Llama-3.2-3B-Instruct | 131072 |
meta-llama/Meta-Llama-3.1-405B | 131072 |
meta-llama/Meta-Llama-3.1-405B-Instruct | 131072 |
meta-llama/Meta-Llama-3.1-70B | 131072 |
meta-llama/Meta-Llama-3.1-70B-Instruct | 131072 |
meta-llama/Meta-Llama-3.1-8B | 131072 |
meta-llama/Meta-Llama-3.1-8B-Instruct | 131072 |
meta-llama/Meta-Llama-3-70B | 8192 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
meta-llama/Meta-Llama-3-70B-Instruct | 8192 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
meta-llama/Meta-Llama-3-8B | 8192 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
meta-llama/Meta-Llama-3-8B-Instruct | 8192 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
meta-llama/Llama-2-7b-hf | 4096 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
meta-llama/Llama-2-13b-hf | 4096 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
meta-llama/Llama-2-70b-hf | 4096 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
meta-llama/Llama-2-7b-chat-hf | 4096 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
meta-llama/Llama-2-13b-chat-hf | 4096 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
meta-llama/Llama-2-70b-chat-hf | 4096 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
codellama/CodeLlama-7b-hf | 16384 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
codellama/CodeLlama-13b-hf | 16384 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
codellama/CodeLlama-34b-hf | 16384 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
codellama/CodeLlama-7b-Instruct-hf | 16384 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
codellama/CodeLlama-13b-Instruct-hf | 16384 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
codellama/CodeLlama-34b-Instruct-hf | 16384 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
codellama/CodeLlama-7b-Python-hf | 16384 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
codellama/CodeLlama-13b-Python-hf | 16384 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
codellama/CodeLlama-34b-Python-hf | 16384 | After December 13, 2024, this model will no longer be supported. See Retired models for the recommended replacement.
mistralai/Mistral-7B-v0.1 | 32768 |
mistralai/Mistral-7B-Instruct-v0.2 | 32768 |
mistralai/Mixtral-8x7B-v0.1 | 32768 |

Use Foundation Model Fine-tuning

Foundation Model Fine-tuning is accessible using the databricks_genai SDK. The following example creates and launches a training run that uses data from Unity Catalog Volumes. See Create a training run using the Foundation Model Fine-tuning API for configuration details.

from databricks.model_training import foundation_model as fm

model = 'meta-llama/Meta-Llama-3.1-8B-Instruct'
# UC Volume with JSONL-formatted training data
train_data_path = 'dbfs:/Volumes/main/mydirectory/ift/train.jsonl'
# Unity Catalog catalog.schema where the fine-tuned model is registered
register_to = 'main.mydirectory'
run = fm.create(
  model=model,
  train_data_path=train_data_path,
  register_to=register_to,
)
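
After the run is created, you can track its progress from the same SDK. The following is a minimal sketch; it assumes the run object returned by fm.create above exposes a name attribute and that the SDK provides the get_events and list helpers described in the Foundation Model Fine-tuning API documentation.

# Unique name of the training run
print(run.name)

# Lifecycle events for this run (created, started, completed, and so on)
fm.get_events(run)

# All training runs visible to you
fm.list()

When the run completes, the trained model is registered to the Unity Catalog location given in register_to and can be deployed with Mosaic AI Model Serving.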

See the Instruction fine-tuning: Named Entity Recognition demo notebook for an instruction fine-tuning example that walks through data preparation, fine-tuning training run configuration, and deployment.

Limitations

  • Large datasets (10B+ tokens) are not supported due to compute availability.

  • For continued pre-training, workloads are limited to 60-256 MB files. Files larger than 1 GB may cause longer processing times.

  • Databricks strives to make the latest state-of-the-art models available for customization using Foundation Model Fine-tuning. As new models become available, access to older models from the API or UI might be removed, older models might be deprecated, or supported models updated. See Generative AI models maintenance policy.

  • Foundation Model Fine-tuning only supports model training for Azure workspaces using storage behind Private Link.

    • Only reading data from storage behind Private Link in eastus2 is currently supported.
  • If you have firewalls enabled on the Azure Data Lake Storage account that stores your data in Unity Catalog, you need to allowlist traffic from the Databricks serverless data plane clusters in order to use Foundation Model Fine-tuning. Reach out to your Databricks account team for more information and possible custom solutions.