Phase 9: Design observability strategy
In this phase, you design observability and monitoring strategies to ensure operational excellence and proactive issue resolution.
Databricks provides built-in observability capabilities to monitor platform operations, workload performance, data quality, and model serving. Design your observability strategy to balance operational insights with monitoring costs and complexity.
Design system tables strategy
System tables are a Databricks-hosted analytical store of your account's operational data. They provide historical observability across your account for usage, performance, costs, security, and compliance monitoring.
System tables capabilities
- Billing and usage: Monitor costs, DBU consumption, and usage patterns across workspaces.
- Audit logs: Track workspace activities, access patterns, and compliance events.
- Query history: Analyze query performance, execution patterns, and optimization opportunities.
- Job runs: Monitor job execution history, success rates, and failure patterns.
- Data lineage: Track data dependencies and understand impact of schema changes.
- Cluster events: Monitor cluster creation, termination, and resource utilization.
System tables use cases
- Cost optimization: Identify expensive queries, underutilized clusters, and opportunities to reduce costs.
- Security monitoring: Audit access patterns, identify anomalies, and enforce compliance.
- Performance analysis: Analyze query patterns, identify bottlenecks, and optimize workloads.
- Capacity planning: Forecast resource needs based on historical usage trends.
- Data governance: Track data lineage, monitor access patterns, and ensure compliance.
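The capacity-planning use case above amounts to trend projection over historical usage. Below is a minimal, self-contained sketch that fits a linear trend to daily DBU totals and projects it forward; the numbers are hypothetical, and in practice the history would come from a query against the billing system tables.

```python
# Hypothetical capacity-planning sketch: fit a least-squares linear trend
# to daily DBU usage and project it forward. Real history would come from
# the billing system tables; the values here are illustrative only.

def forecast_dbus(daily_dbus: list[float], days_ahead: int) -> float:
    """Least-squares linear trend over day index -> DBUs, projected forward."""
    n = len(daily_dbus)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_dbus) / n
    # slope = cov(x, y) / var(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_dbus))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + days_ahead)

# Usage: project 30 days ahead for a steadily growing workload.
history = [100 + 2 * day for day in range(14)]  # perfectly linear history
print(round(forecast_dbus(history, 30)))  # 186
```

A linear fit is deliberately simple; for seasonal workloads you would fit per-weekday trends or use a proper forecasting library instead.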
Best practices for system tables
- Enable system tables for all metastores to capture comprehensive usage data.
- Create dashboards and alerts based on system tables for proactive monitoring.
- Query system tables regularly to identify optimization opportunities.
- Combine system tables with audit logs for comprehensive governance reporting.
- Document key metrics and thresholds for operational monitoring.
- Use system tables to identify unused resources and reduce costs.
Example monitoring queries
- Top 10 most expensive queries by cluster cost.
- Failed jobs by workspace and user.
- Unused clusters running for more than 24 hours.
- Most frequently accessed tables and volumes.
- Data lineage for critical production tables.
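The "top N most expensive queries" example reduces to a group-and-sum over usage records. The sketch below shows that aggregation in plain Python with hypothetical record fields, standing in for rows you would actually pull from the billing and query-history system tables with SQL.

```python
from collections import defaultdict

# Hypothetical usage rows standing in for system-table output:
# (query_id, dbu_cost). Field names are illustrative, not a real schema.
usage_rows = [
    ("q1", 12.5), ("q2", 3.0), ("q1", 7.5), ("q3", 40.0), ("q2", 1.0),
]

def top_expensive_queries(rows, n=10):
    """Sum cost per query and return the n most expensive, descending."""
    totals = defaultdict(float)
    for query_id, cost in rows:
        totals[query_id] += cost
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_expensive_queries(usage_rows, n=2))  # [('q3', 40.0), ('q1', 20.0)]
```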
For comprehensive system tables documentation and query examples, see Monitor account activity with system tables.
Design job and pipeline monitoring strategy
Monitor job and pipeline execution to ensure data pipelines run successfully and identify failures quickly. Design your monitoring strategy based on workload criticality, SLAs, and operational requirements.
Job monitoring patterns
- Real-time alerting: Configure email notifications or webhooks for critical job failures.
- Trend analysis: Use the Workflows and Pipelines monitoring pages to track job run history and identify patterns.
- Anomaly detection: Set up SQL alerts to monitor job duration anomalies or repeated failures.
- SLA monitoring: Define SLAs for critical jobs and alert when jobs exceed expected runtimes.
- Dependency tracking: Monitor job dependencies and upstream failures that impact downstream workloads.
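The SLA-monitoring pattern above can be sketched as a simple threshold check over run records. The job names, run records, and SLA values below are hypothetical; in practice the runs would come from the job-run system tables and the SLAs from your operational config.

```python
# Hypothetical SLA check: flag job runs whose duration exceeded the
# job's SLA. Records and SLA values are illustrative only.

SLA_MINUTES = {"nightly_etl": 60, "hourly_sync": 10}

runs = [
    {"job": "nightly_etl", "run_id": 1, "duration_min": 45},
    {"job": "nightly_etl", "run_id": 2, "duration_min": 75},
    {"job": "hourly_sync", "run_id": 3, "duration_min": 12},
]

def sla_breaches(runs, slas):
    """Return (job, run_id) pairs for runs that exceeded their SLA."""
    return [
        (r["job"], r["run_id"])
        for r in runs
        if r["duration_min"] > slas.get(r["job"], float("inf"))
    ]

print(sla_breaches(runs, SLA_MINUTES))  # [('nightly_etl', 2), ('hourly_sync', 3)]
```

Jobs with no configured SLA default to "never breaches", which keeps the check safe to roll out incrementally.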
Pipeline monitoring considerations
- Lakeflow Spark Declarative Pipelines observability: Monitor pipeline run status, data quality expectations, and lineage.
- Incremental processing: Track checkpoint information and incremental processing metrics.
- Data freshness: Monitor pipeline latency and ensure data arrives within SLA windows.
- Error handling: Design retry strategies and dead-letter queues for failed records.
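The retry-plus-dead-letter pattern in the last bullet can be sketched as follows. The processing function and records are hypothetical; the shape of the logic (bounded retries, then quarantine for review) is the point.

```python
# Sketch of retries with a dead-letter queue for failed records.
# `process` and the input records are hypothetical stand-ins.

def process_with_dlq(records, process, max_retries=3):
    """Try each record up to max_retries times; route failures to a DLQ."""
    succeeded, dead_letter = [], []
    for record in records:
        for attempt in range(max_retries):
            try:
                succeeded.append(process(record))
                break
            except ValueError:
                if attempt == max_retries - 1:
                    dead_letter.append(record)  # give up: quarantine for review
    return succeeded, dead_letter

# Usage: parsing strings to ints; non-numeric records land in the DLQ.
ok, dlq = process_with_dlq(["1", "2", "bad", "4"], int)
print(ok, dlq)  # [1, 2, 4] ['bad']
```

In a real pipeline the dead-letter list would be a quarantine table that is itself monitored, so bad records surface as a data quality signal rather than silently disappearing.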
Best practices for job monitoring
- Configure email notifications or webhooks for critical job failures.
- Monitor jobs using system tables (system.workflow.job_runs, system.workflow.task_runs).
- Set up SQL alerts to monitor job duration anomalies or repeated failures.
- Define SLAs for critical jobs based on business requirements.
- Implement runbook automation for common failure scenarios.
- Review job performance trends regularly and optimize slow-running jobs.
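The duration-anomaly alert mentioned above can be expressed as a z-score check: flag the latest run if it sits far above the historical mean. A minimal sketch with hypothetical durations (a SQL alert against the job-run system tables would encode the same logic):

```python
import statistics

# Sketch of a duration-anomaly check: flag the latest run if it is more
# than z standard deviations above the historical mean. Durations (in
# minutes) are hypothetical.

def is_duration_anomaly(history: list[float], latest: float, z: float = 3.0) -> bool:
    """True if `latest` exceeds mean(history) + z * stdev(history)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return latest > mean + z * stdev

history = [30, 32, 31, 29, 33, 30, 31]
print(is_duration_anomaly(history, 31))  # False: within normal range
print(is_duration_anomaly(history, 90))  # True: well above mean + 3*stdev
```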
For detailed job monitoring configuration, see Monitoring and observability for Lakeflow Jobs.
For Lakeflow Spark Declarative Pipelines observability, see Monitor pipelines.
Design Spark performance monitoring strategy
Monitor Spark job performance to identify bottlenecks such as skew, spill, long-running tasks, and memory or I/O issues. Design your Spark monitoring approach based on compute type and performance requirements.
Query profile for serverless and SQL warehouses
For serverless compute and SQL warehouses, use query profile to analyze and optimize query performance. Query profile provides detailed execution plans, stage-level metrics, and optimization recommendations.
Query profile capabilities
- Visualize query execution plans with stage-level metrics.
- Identify expensive operations (for example, sorts, joins, aggregations).
- Analyze data skew and partition imbalance.
- Review optimization recommendations from the query optimizer.
- Compare query performance across runs.
Best practices for query profile
- Review query profile for slow queries to identify optimization opportunities.
- Focus on stages with high execution time or data skew.
- Implement suggested optimizations such as partition pruning and broadcast joins.
- Monitor query performance after optimization to measure improvement.
Spark UI for classic compute
For classic compute clusters, use the Spark UI to identify performance bottlenecks and resource constraints. The Spark UI provides detailed metrics on executors, stages, tasks, and storage.
Spark UI capabilities
- Monitor stage execution time and task distribution.
- Identify data skew by analyzing task duration variance.
- Track memory usage and spill metrics.
- Review executor metrics (for example, CPU, memory, disk I/O).
- Analyze shuffle read/write patterns.
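The skew check in the second bullet can be sketched numerically: if the slowest task in a stage takes far longer than the median task, the stage is likely skewed. The task durations and ratio threshold below are hypothetical; in practice you would read the durations from the Spark UI's stage detail view.

```python
import statistics

# Sketch of skew detection from per-task durations in a stage. A single
# straggler far above the median suggests data skew. Values are
# hypothetical; the 5x ratio is an illustrative threshold.

def looks_skewed(task_durations_s: list[float], ratio: float = 5.0) -> bool:
    """True if the max task duration is more than `ratio` times the median."""
    return max(task_durations_s) > ratio * statistics.median(task_durations_s)

balanced = [10, 11, 9, 10, 12, 10]
skewed = [10, 11, 9, 10, 12, 95]  # one straggler task dominates the stage
print(looks_skewed(balanced))  # False
print(looks_skewed(skewed))    # True
```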
Best practices for Spark UI
- Enable cluster log delivery to cloud storage for long-term log retention.
- Monitor cluster metrics (for example, CPU, memory, disk I/O) to identify resource constraints.
- Review Spark event logs to troubleshoot slow jobs and optimize configurations.
- Focus on stages with high shuffle or spill to reduce memory pressure.
- Optimize partition sizes to reduce task skew.
For query profile documentation, see Query profile.
For Spark UI troubleshooting guidance, see Apache Spark overview.
Design data quality monitoring strategy
Data quality monitoring ensures production tables meet quality standards and identifies data drift over time. Design your data quality monitoring strategy based on table criticality, data freshness requirements, and regulatory compliance needs.
Lakehouse Monitoring capabilities
- Time-series monitors: Track data quality metrics across time-based windows for tables with temporal data.
- Snapshot monitors: Calculate data quality metrics over all data at a point in time.
- Statistical profiling: Monitor column statistics (for example, min, max, mean, stddev, null counts).
- Data drift detection: Identify changes in data distributions over time.
- Anomaly detection: Alert on unexpected changes in data quality metrics.
Data quality monitoring patterns
- Gold layer monitoring: Create monitors for all business-critical gold layer tables.
- Silver layer validation: Monitor silver layer tables for schema compliance and data quality.
- Bronze layer checks: Validate data ingestion completeness and format compliance.
- Real-time alerting: Set up alerts for data quality violations or anomalies.
- Data freshness monitoring: Monitor pipeline latency and ensure data arrives within SLA windows.
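Drift detection of the kind described above boils down to comparing a current window's column statistics against a baseline. The sketch below checks two simple signals, null-rate change and mean shift, over hypothetical column values and thresholds; Lakehouse Monitoring computes richer profiles, but the comparison logic is of this shape.

```python
import statistics

# Sketch of simple drift checks: compare a current window's null rate and
# mean against a baseline window. Values and thresholds are hypothetical.

def drift_report(baseline: list, current: list,
                 null_rate_delta=0.05, mean_shift_stdevs=2.0) -> dict:
    """Flag drift if the null rate or mean moves beyond the thresholds."""
    def null_rate(xs):
        return sum(x is None for x in xs) / len(xs)
    base_vals = [x for x in baseline if x is not None]
    cur_vals = [x for x in current if x is not None]
    mean_shift = abs(statistics.mean(cur_vals) - statistics.mean(base_vals))
    return {
        "null_rate_drift": abs(null_rate(current) - null_rate(baseline)) > null_rate_delta,
        "mean_drift": mean_shift > mean_shift_stdevs * statistics.stdev(base_vals),
    }

baseline = [10, 11, 9, 10, 12, None, 10, 11]
current = [20, 22, 19, 21, None, None, None, 20]  # mean and nulls both jumped
print(drift_report(baseline, current))  # {'null_rate_drift': True, 'mean_drift': True}
```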
Best practices for data quality monitoring
- Create monitors for critical production tables (especially gold layer tables).
- Use time-series monitors for tables with time-based data to track quality trends.
- Use snapshot monitors for tables without time dimensions.
- Set up alerts for data quality violations or anomalies.
- Monitor data freshness to ensure pipelines are running on schedule.
- Document data quality thresholds and escalation procedures.
For comprehensive Lakehouse Monitoring documentation, see Data quality monitoring.
Design model monitoring strategy
Monitor deployed ML models to track performance, health, and request metrics. Design your model monitoring strategy based on model criticality, SLA requirements, and compliance needs.
Model Serving observability capabilities
- Endpoint health: Monitor endpoint availability and health status.
- Invocation metrics: Track request counts, latency, and throughput.
- Inference tables: Log predictions and analyze model behavior over time.
- Model version tracking: Monitor model version usage and deployment history.
- Error monitoring: Track error rates and failure patterns.
Model monitoring patterns
- Real-time alerting: Set up alerts for anomalies such as high latency or error rates.
- SLA monitoring: Define latency and availability SLAs for production models.
- Inference analysis: Use inference tables to analyze prediction distributions and detect drift.
- A/B testing: Monitor performance across model versions to validate improvements.
- Rollback procedures: Define automated rollback triggers based on performance thresholds.
Best practices for model monitoring
- Monitor endpoint health and invocation metrics to identify performance issues.
- Track latency and request throughput to ensure SLA compliance.
- Use inference tables to log predictions and analyze model behavior.
- Set up alerts for anomalies such as high latency or error rates.
- Monitor model version usage to track deployments and rollbacks.
- Document model performance baselines and acceptable thresholds.
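The rollback-trigger pattern described above can be sketched as a threshold check over a recent request window. The request records and thresholds below are hypothetical; in practice the window would come from inference tables or endpoint metrics.

```python
# Sketch of an automated rollback trigger for a serving endpoint: roll
# back when the recent error rate or p95-style latency breaches its
# threshold. Request records and thresholds are hypothetical.

def should_roll_back(requests, max_error_rate=0.05, max_latency_ms=500):
    """requests: list of (latency_ms, ok) pairs for the current window."""
    if not requests:
        return False
    error_rate = sum(not ok for _, ok in requests) / len(requests)
    latencies = sorted(lat for lat, _ in requests)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]  # nearest-rank-style p95
    return error_rate > max_error_rate or p95 > max_latency_ms

healthy = [(120, True)] * 20
degraded = [(120, True)] * 17 + [(900, False)] * 3  # errors plus slow requests
print(should_roll_back(healthy))   # False
print(should_roll_back(degraded))  # True
```

Pairing this check with documented baselines (the last bullet above) keeps the thresholds grounded in observed behavior rather than guesses.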
For Model Serving observability documentation, see Monitor model quality and endpoint health.
Design third-party monitoring integration strategy
Integrate Databricks with external monitoring solutions for centralized observability across your entire infrastructure. Design your integration strategy based on existing monitoring tools, operational requirements, and team expertise.
Third-party integration patterns
- Centralized monitoring: Forward Databricks metrics and logs to centralized monitoring platforms.
- Multi-cloud observability: Use cloud-agnostic tools to monitor Databricks across multiple clouds.
- Custom dashboards: Build unified dashboards combining Databricks and external system metrics.
- Alerting integration: Route Databricks alerts through existing incident management systems.
- Compliance reporting: Aggregate logs for compliance and audit requirements.
Integration options
- Datadog: Monitor cluster metrics, job runs, and application logs with Datadog integration.
- Prometheus: Export cluster metrics to Prometheus for time-series monitoring and alerting.
- Google Cloud Monitoring: Forward metrics and logs to Google Cloud Monitoring.
- Google Cloud Logging: Aggregate logs in Cloud Logging for analysis and alerting.
Best practices for third-party integrations
- Use standard integrations (for example, Datadog, Prometheus) where available.
- Forward logs to centralized logging platforms for long-term retention.
- Correlate Databricks metrics with infrastructure metrics for root cause analysis.
- Implement consistent tagging across Databricks and external systems.
- Test alert routing and escalation procedures regularly.
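As one concrete integration example, a small exporter can publish metrics derived from system tables in the Prometheus text exposition format. The metric name, labels, and sample values below are hypothetical; only the output format follows the Prometheus specification.

```python
# Sketch of rendering custom metrics in the Prometheus text exposition
# format, e.g. for an exporter that periodically queries system tables.
# Metric names, labels, and values are hypothetical.

def prometheus_lines(name: str, help_text: str, samples: dict) -> str:
    """Render {labels-as-tuple-of-pairs: value} samples as Prometheus text."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples.items():
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

samples = {
    (("workspace", "prod"), ("job", "nightly_etl")): 3,
    (("workspace", "dev"), ("job", "nightly_etl")): 0,
}
print(prometheus_lines("databricks_job_failures", "Failed job runs in window", samples))
```

Consistent label keys (the tagging practice above) are what make these metrics joinable with infrastructure metrics during root cause analysis.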
Observability recommendations
Recommended
- Enable system tables for all metastores to capture comprehensive usage data.
- Create dashboards based on system tables for cost, performance, and security monitoring.
- Configure job and pipeline monitoring with alerts for critical failures.
- Enable Spark monitoring (query profile, Spark UI) for performance troubleshooting.
- Create Lakehouse Monitoring for critical production tables (gold layer).
- Monitor model serving endpoints for latency, throughput, and error rates.
- Integrate with third-party monitoring solutions for centralized observability.
- Define SLAs and alert thresholds for critical workloads.
- Document runbooks for common operational scenarios.
Evaluate based on requirements
- Balance monitoring granularity with operational overhead and costs.
- Consider third-party integrations only if centralized monitoring is required.
- Evaluate real-time alerting versus batch monitoring based on SLA requirements.
- Consider data quality monitoring costs (for example, storage, compute) for large tables.
- Avoid alert fatigue by starting with conservative thresholds and refining them over time.
Phase 9 outcomes
After completing Phase 9, you should have:
- System tables strategy defined with key monitoring queries and dashboards.
- Job and pipeline monitoring configured with alerts for critical failures.
- Spark performance monitoring approach designed (query profile, Spark UI).
- Data quality monitoring strategy defined with Lakehouse Monitoring for critical tables.
- Model monitoring strategy designed for ML endpoints.
- Third-party monitoring integration approach defined (if applicable).
- SLAs and alert thresholds documented for critical workloads.
- Operational runbooks created for common monitoring scenarios.
Next phase: Phase 10: Design high availability and disaster recovery
Implementation guidance: For step-by-step instructions to implement your observability strategy, see Monitor account activity with system tables and Monitoring and observability for Lakeflow Jobs.