This section shows how to work with data in Databricks. You can:
- Create tables directly from imported data. The table schema is stored in the default Databricks internal metastore; you can also configure and use external metastores.
- Use a wide variety of Apache Spark data sources.
- Import data into the Databricks File System (DBFS), a distributed file system mounted into a Databricks workspace and available on Databricks clusters. You can access that data using the DBFS CLI, the DBFS API, Databricks file system utilities (dbutils.fs), Spark APIs, and local file APIs.
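One detail worth keeping in mind when mixing these access methods: on a Databricks cluster, Spark APIs and dbutils.fs address DBFS with `dbfs:/...` URIs, while local file APIs see the same files through a local mount under `/dbfs/...`. The sketch below illustrates that path convention with two small helpers; the function names are hypothetical (not part of any Databricks API), but the `dbfs:/` vs `/dbfs/` convention is the documented behavior.

```python
# Hypothetical helpers illustrating the two path styles for one DBFS file:
#   Spark APIs / dbutils.fs  -> "dbfs:/tmp/sales.csv"
#   local file APIs          -> "/dbfs/tmp/sales.csv"

def dbfs_uri_to_local(uri: str) -> str:
    """Convert a dbfs:/ URI to the /dbfs local mount path."""
    if not uri.startswith("dbfs:/"):
        raise ValueError(f"not a DBFS URI: {uri}")
    return "/dbfs/" + uri[len("dbfs:/"):].lstrip("/")

def local_to_dbfs_uri(path: str) -> str:
    """Convert a /dbfs local mount path back to a dbfs:/ URI."""
    if not path.startswith("/dbfs/"):
        raise ValueError(f"not a DBFS mount path: {path}")
    return "dbfs:/" + path[len("/dbfs/"):]

print(dbfs_uri_to_local("dbfs:/tmp/sales.csv"))  # /dbfs/tmp/sales.csv
print(local_to_dbfs_uri("/dbfs/tmp/sales.csv"))  # dbfs:/tmp/sales.csv
```

For example, a file uploaded with the DBFS CLI to `dbfs:/tmp/sales.csv` can be read on a cluster with `spark.read.csv("dbfs:/tmp/sales.csv")` or opened with ordinary Python file I/O at `/dbfs/tmp/sales.csv`.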
This section covers: