December 2019

These features and Databricks platform improvements were released in December 2019.

Note

Releases are staged. Your Databricks account may not be updated until a week or more after the initial release date.

Databricks Connect now supports Databricks Runtime 6.2

December 17, 2019

Databricks Connect now supports Databricks Runtime 6.2.

Databricks Runtime 6.2 for Genomics GA

December 3, 2019

Databricks Runtime 6.2 for Genomics is built on top of Databricks Runtime 6.2. It includes many improvements and upgrades over Databricks Runtime 6.1 for Genomics, including:

  • Firth logistic regression

  • User-defined sample quality control metrics

  • Pipe transformer performance improvement

  • More robust joint genotyping

  • Simplified integration with LOFTEE

  • Hail 0.26.0

  • Samtools 1.9

Databricks Runtime 5.3 and 5.4 support ends

December 3, 2019

Support for Databricks Runtime 5.3 and 5.4 ended on December 3, 2019. See Databricks runtime support lifecycles.

Databricks Runtime 6.2 ML GA

December 3, 2019

Databricks Runtime 6.2 ML GA brings many library upgrades, including:

  • TensorFlow and TensorBoard: 1.14.0 to 1.15.0.

  • PyTorch: 1.2.0 to 1.3.0.

  • tensorboardX: 1.8 to 1.9.

  • MLflow: 1.3.0 to 1.4.0.

  • Hyperopt: 0.2-db1 with Databricks MLflow integrations.

  • mleap-databricks-runtime: upgraded to 0.15.0; now includes mleap-xgboost-runtime.

For more information, see the complete Databricks Runtime 6.2 for ML (unsupported) release notes.

Databricks Runtime 6.2 GA

December 3, 2019

Databricks Runtime 6.2 GA brings new features, improvements, and many bug fixes, including:

  • Optimized Delta Lake insert-only merge

  • Multi-region support for Redshift connector reads

For more information, see the complete Databricks Runtime 6.2 (unsupported) release notes.
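The insert-only merge optimization applies to Delta Lake merges that contain only `whenNotMatched` clauses. A minimal sketch of such a merge, using the Delta Lake Python API (the table paths and the `eventId` join key are illustrative assumptions, and a running Spark session with Delta Lake is assumed):

```python
# Sketch of an insert-only Delta Lake merge (hypothetical paths and schema).
# Because no whenMatched* clauses are present, the merge qualifies for the
# insert-only optimization introduced in Databricks Runtime 6.2.
from delta.tables import DeltaTable

target = DeltaTable.forPath(spark, "/mnt/delta/events")          # existing Delta table
updates = spark.read.format("json").load("/mnt/raw/new_events")  # new rows to append

(target.alias("t")
    .merge(updates.alias("s"), "t.eventId = s.eventId")
    .whenNotMatchedInsertAll()  # insert-only: rows already in the target are untouched
    .execute())
```

Because the merge never updates or deletes existing rows, the optimized path can avoid rewriting files that contain only matched data.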

Databricks Connect now supports Databricks Runtime 6.1

December 3, 2019

Databricks Connect now supports Databricks Runtime 6.1. Databricks Connect allows you to connect your favorite IDE (IntelliJ, Eclipse, PyCharm, RStudio, Visual Studio), notebook server (Zeppelin, Jupyter), and other custom applications to Databricks clusters and run Apache Spark code.
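Once Databricks Connect is installed and configured (for example, via `databricks-connect configure`), code in a local IDE builds a `SparkSession` as usual, and the work runs on the remote cluster. A minimal sketch, assuming a configured connection to a Databricks Runtime 6.1 cluster:

```python
# Minimal Databricks Connect sketch: this script runs locally, but the
# Spark job executes on the configured remote Databricks cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # connects to the remote cluster

# A trivial job to verify the connection end to end.
print(spark.range(100).count())
```

The same session object can then be used for DataFrame, SQL, and streaming workloads exactly as it would be inside a Databricks notebook.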