This article introduces resources that help you learn how to build AI and LLM solutions natively on Databricks. Topics cover key steps of the end-to-end AI lifecycle, from data preparation and model building to deployment, monitoring, and MLOps.
Learn how to load and process your data for AI workloads, including preparing data for model training and for fine-tuning LLMs.
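For example, a common preparation step for fine-tuning is converting raw question-and-answer pairs into the chat-style JSONL format that many fine-tuning APIs expect. A minimal sketch in plain Python; the `messages`/`role`/`content` field names follow the widely used OpenAI-style chat schema, and your fine-tuning target may expect a different layout:

```python
import json

def to_chat_jsonl(records, path):
    # Sketch: convert raw {"instruction": ..., "answer": ...} pairs into
    # chat-style JSONL, one training example per line.
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            example = {
                "messages": [
                    {"role": "user", "content": rec["instruction"]},
                    {"role": "assistant", "content": rec["answer"]},
                ]
            }
            f.write(json.dumps(example) + "\n")
```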
With feature engineering in Unity Catalog, learn how to create feature tables, track feature lineage, and discover features that others have already built.
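As an illustrative sketch (not runnable outside a Databricks workspace), the `FeatureEngineeringClient` from the `databricks-feature-engineering` package can register a Spark DataFrame as a feature table; the table name, key column, and description below are placeholders:

```python
def create_feature_table(df, table_name: str):
    # Hypothetical sketch: register a DataFrame of per-customer features
    # as a feature table in Unity Catalog. Requires a Databricks runtime,
    # so the import lives inside the function.
    from databricks.feature_engineering import FeatureEngineeringClient

    fe = FeatureEngineeringClient()
    fe.create_table(
        name=table_name,                # e.g. "main.ml.customer_features" (placeholder)
        primary_keys=["customer_id"],   # placeholder key column
        df=df,
        description="Per-customer aggregate features",
    )
```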
Learn how to use AutoML for efficient training and tuning of your ML models, and MLflow for experiment tracking.
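A minimal MLflow experiment-tracking sketch, assuming `mlflow` and `scikit-learn` are available (imports are kept inside the function so the snippet loads without them); the model choice and logged names are illustrative:

```python
def train_and_log(X, y):
    # Sketch: fit a simple model and record its parameters, a metric,
    # and the model artifact itself in an MLflow run.
    import mlflow
    import mlflow.sklearn
    from sklearn.linear_model import LogisticRegression

    with mlflow.start_run():
        model = LogisticRegression(max_iter=200)
        model.fit(X, y)
        mlflow.log_param("max_iter", 200)
        mlflow.log_metric("train_accuracy", model.score(X, y))
        mlflow.sklearn.log_model(model, "model")
    return model
```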
Get started with Model Serving for real-time workloads, or deploy MLflow models for offline inference.
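Querying a serving endpoint is a REST call. The sketch below builds the `dataframe_records` payload used by the serving-endpoints invocation API; the workspace URL, endpoint name, and token are placeholders you would supply:

```python
import json
import urllib.request

def build_invocation_payload(rows):
    # Rows are plain dicts of feature name -> value.
    return {"dataframe_records": rows}

def query_endpoint(workspace_url: str, endpoint_name: str, token: str, rows):
    # Sketch of invoking a Model Serving endpoint over REST.
    # workspace_url, endpoint_name, and token are placeholders.
    req = urllib.request.Request(
        f"{workspace_url}/serving-endpoints/{endpoint_name}/invocations",
        data=json.dumps(build_invocation_payload(rows)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```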
Learn how to securely and cost-effectively host open source LLMs within your Databricks environment.
Learn how to monitor your AI models in production. Inference Tables continuously capture Model Serving endpoint inputs and predictions and log them into a Delta table, so you can stay on top of model performance metrics. Lakehouse Monitoring then helps you check whether your models meet desired benchmarks.
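To make the idea concrete, the toy function below computes the kind of windowed metric that monitoring surfaces, over rows shaped like inference-table records (a timestamp, the served prediction, and the ground-truth label once it arrives). This is illustrative only; Inference Tables and Lakehouse Monitoring do this work in the platform:

```python
from datetime import timedelta

def windowed_accuracy(rows, window: timedelta):
    # rows: dicts with "ts" (datetime), "prediction", and optional "label".
    # Returns accuracy over the most recent `window` of labeled rows,
    # or None if no labeled rows fall in the window.
    cutoff = max(r["ts"] for r in rows) - window
    recent = [r for r in rows if r["ts"] >= cutoff and r.get("label") is not None]
    if not recent:
        return None
    return sum(r["prediction"] == r["label"] for r in recent) / len(recent)
```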
Learn how to use Databricks Asset Bundles for efficient packaging and deployment of all data and AI assets.
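A bundle is defined by a `databricks.yml` file at the project root and deployed with the Databricks CLI (`databricks bundle deploy`). A minimal hypothetical layout, in which the bundle, job, notebook path, and target names are all placeholders:

```yaml
# databricks.yml (sketch): one job that runs a training notebook.
bundle:
  name: my_ml_project

resources:
  jobs:
    train_model:
      name: train-model
      tasks:
        - task_key: train
          notebook_task:
            notebook_path: ./notebooks/train

targets:
  dev:
    default: true
```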
See how you can use Databricks to combine DataOps, ModelOps and DevOps for end-to-end ML and LLM operations for your AI application.
Learn how to create LLM-powered applications that leverage your data. Use retrieval augmented generation (RAG) with LLMs to build Q&A chatbots that provide more accurate answers.
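The core RAG loop is: embed the question, retrieve the most similar document chunks, and prepend them to the prompt. The toy sketch below uses bag-of-words cosine similarity in place of a real embedding model and vector index, purely to show the flow:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, docs, k: int = 2):
    # Toy retriever: bag-of-words stands in for an embedding model;
    # in practice you'd query a vector index over embedded chunks.
    q = Counter(question.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str, docs) -> str:
    # Ground the LLM's answer in the retrieved context.
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```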
If the steps outlined above don't cover your needs, a wealth of information is available in the Machine Learning documentation.