HOW TO: Download Files From DBFS to Local Machine (2024)

Databricks integrates seamlessly with cloud storage solutions like AWS S3, Azure Blob Storage, and Google Cloud Storage, enabling scalable, high-performance data management for analytics and machine learning workloads. At the very core of Databricks is the Databricks File System (DBFS), which acts as an abstraction layer over cloud storage, providing a unified mount point that allows programmatic and transparent access to the underlying object storage resources. That being said, there are situations where you need to download files from DBFS to your local machine. For instance, you might need to validate data locally, analyze it offline, comply with regulations, archive your data, or share data across different systems in an efficient way.

In this article, we will walk through the step-by-step process of downloading files from DBFS to your local machine, covering various techniques and best practices for seamless data transfer.

What is the Databricks File System (DBFS)?

Databricks File System (DBFS) is a virtual file system abstraction layer provided by Databricks. It allows you to interact with cloud object storage services like AWS S3, Azure Blob Storage, and Google Cloud Storage as if they were a single, unified file system. DBFS abstracts the complexities of different cloud storage APIs, presenting a consistent interface for data access.

Databricks DBFS is tightly integrated with Databricks. You can perform standard file system operations like creating, reading, writing, deleting files and directories, as well as listing directory contents. Data stored in Databricks DBFS is persisted in the chosen cloud storage service. DBFS translates file system operations into the appropriate API calls for the underlying storage.

Databricks DBFS offers several benefits. First, it simplifies data access, allowing you to work with data across different cloud storage providers using familiar file system commands. Second, your Databricks code becomes less dependent on a specific cloud storage provider, making it easier to migrate between platforms if needed. Finally, DBFS inherits the security and compliance features of the underlying cloud storage service, including access control and encryption.

Databricks File System (DBFS) offers several features and capabilities to simplify data management and integration within the Databricks environment:

  • Databricks File System acts as a bridge to cloud storage services like Azure Blob Storage or Amazon S3.
  • Databricks File System provides a Unix-like file system interface. You can use commands like ls, cp, and mv to manage your files easily (see the quick example after this list).
  • Databricks File System supports various file formats, including Parquet, Avro, JSON, ORC, CSV, binary formats, and more, making it easy to work with diverse data types.
  • Databricks File System is optimized for high-performance workloads, such as ETL, machine learning, and ad-hoc analytics.
  • Databricks File System is designed to handle massive volumes of data (structured and unstructured) efficiently.
  • Data stored in Databricks DBFS benefits from the redundancy and high availability of the underlying cloud storage service.
  • Data in DBFS is organized into roots and directories. You can create custom directories within the DBFS root to categorize data logically.
  • Previously, you could mount external cloud storage into the DBFS namespace. However, mounting is now deprecated. Instead, you can use Databricks Unity Catalog Volumes to manage data access.
  • Databricks File System supports customer-managed keys for encryption and provides robust access control measures to ensure authorized data access.
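
Since several of these capabilities boil down to simple file operations, here is a minimal sketch of what they look like in practice, assuming the code runs inside a Databricks notebook (where dbutils and display are available automatically) and using hypothetical placeholder paths:

# Minimal sketch of basic DBFS file operations, assuming this runs inside a
# Databricks notebook (dbutils and display are injected automatically).
# All paths below are hypothetical placeholders.

# List the contents of a DBFS directory (the equivalent of "ls")
display(dbutils.fs.ls("dbfs:/FileStore/"))

# Create a directory and write a small text file into it
dbutils.fs.mkdirs("dbfs:/FileStore/demo/")
dbutils.fs.put("dbfs:/FileStore/demo/hello.txt", "hello from DBFS", True)

# Copy ("cp") and move ("mv") the file
dbutils.fs.cp("dbfs:/FileStore/demo/hello.txt", "dbfs:/FileStore/demo/hello_copy.txt")
dbutils.fs.mv("dbfs:/FileStore/demo/hello_copy.txt", "dbfs:/FileStore/demo/hello_moved.txt")

# Remove a file when you are done
dbutils.fs.rm("dbfs:/FileStore/demo/hello_moved.txt")

If you prefer shell-style syntax, the %fs magic commands (%fs ls, %fs cp, and so on) cover the same operations in a notebook cell.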

What File Formats Does Databricks DBFS Support?

Databricks File System (DBFS) supports a wide range of file formats. Here’s a quick rundown of the key file formats supported by DBFS:

➥ Structured and Semi-Structured Formats: CSV, JSON, and Avro for row-oriented and semi-structured data.

➥ Optimized Formats for Analytics: Columnar formats like Parquet and ORC, built for fast analytical queries.

➥ Unstructured Data: Binary files such as images, PDFs, and other raw files.

What Are the Key Components of Databricks DBFS?

Databricks DBFS consists of two key components:

  • DBFS Root
  • DBFS Mount (🚨 Deprecated)

What is DBFS Root?

Databricks DBFS Root is a storage location set up when you create a Databricks workspace in your cloud account. It serves as the default file system location for your workspace and is often referred to as the top-level directory in DBFS. This root directory is where you typically start when interacting with data in your Databricks workspace. You can organize your data within the DBFS root by creating subdirectories and files as needed.

DBFS root is accessible to all users in a workspace, which means any data stored here can be accessed by everyone. Because of this, Databricks recommends against storing sensitive or production data in the DBFS root. Instead, use Unity Catalog volumes for more secure and managed storage solutions.
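
If you follow that recommendation, a Unity Catalog volume exposes a path under /Volumes/ that you can browse exactly like a DBFS path. A minimal sketch, assuming a Unity Catalog-enabled workspace and a hypothetical catalog, schema, and volume name:

# Hypothetical Unity Catalog volume path (catalog "main", schema "default", volume "landing")
volume_path = "/Volumes/main/default/landing/"

# DBFS root path, which is accessible to every user in the workspace
dbfs_root_path = "dbfs:/FileStore/"

# Both locations can be browsed the same way from a notebook
display(dbutils.fs.ls(volume_path))
display(dbutils.fs.ls(dbfs_root_path))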

DBFS has several default directories used for various actions in the Databricks workspace. Here are some of the key directories:

/FileStore: This directory stores data and libraries uploaded through the Databricks UI. Generated plots are also stored here by default.

/databricks-datasets: This directory contains open-source datasets provided by Databricks. 

/databricks-results: This directory stores files generated by downloading the full results of a query. 

/databricks/init: This directory contains legacy global init scripts, which should not be used.

/user/hive/warehouse: This is the default location for managed Hive tables defined in the Hive metastore. Databricks stores data for these tables here by default.

These directories help organize and manage different types of data and actions within the Databricks workspace.
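
To see these default directories in your own workspace, you can list the DBFS root from a notebook. A quick sketch, assuming dbutils is available and that your workspace still exposes the sample datasets:

# List the top-level DBFS directories; you should see /FileStore, /databricks-datasets, etc.
for entry in dbutils.fs.ls("dbfs:/"):
    print(entry.path)

# Peek inside the sample datasets that Databricks ships with most workspaces
display(dbutils.fs.ls("dbfs:/databricks-datasets/"))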

Check out this documentation to learn more about DBFS root.

What is DBFS Mount? (🚨 Deprecated)

Databricks DBFS mount allows you to connect your Databricks workspace to cloud object storage. This feature simplifies how you access data stored in the cloud, making it behave like a local file system. With DBFS mount, you create a link between your Databricks environment and the cloud storage, enabling you to work with files using familiar paths.

When you mount storage, it creates a local alias under the /mnt directory. This alias includes:

  • The location of the cloud object storage.
  • Driver specifications for connecting to the storage.
  • Security credentials needed to access the data.

You can use the dbutils.fs.mount() function to create a mount point. The syntax looks like this:

dbutils.fs.mount(
    source="s3a://<aws-bucket-name>",
    mount_point="/mnt/<mount-name>",
    extra_configs={"<key>": "<value>"}
)

Here, source specifies the URI of your cloud storage, while mount_point indicates where in DBFS you want to access that data. You can also include additional configurations as needed.
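
For completeness, here is a brief sketch of how you would inspect and remove mounts with the same dbutils.fs utilities. The mount name below is a placeholder, and, as the note that follows explains, mounts are deprecated:

# List all existing mount points and the storage locations they point to
for mount in dbutils.fs.mounts():
    print(mount.mountPoint, "->", mount.source)

# Read data through the mount as if it were a local path (placeholder mount name)
df = spark.read.csv("/mnt/<mount-name>/data.csv", header=True)

# Unmount when the mount is no longer needed
dbutils.fs.unmount("/mnt/<mount-name>")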

Note: DBFS mounts are considered a legacy feature and are now deprecated. Databricks recommends using Unity Catalog for managing data access. Unity Catalog provides a more secure and efficient way to handle data governance and access control. If you are starting a new project or migrating existing data, consider using Unity Catalog instead of DBFS mounts.

What are the Benefits of Using Databricks DBFS?

Databricks DBFS, or Databricks File System, offers several benefits that enhance your data management and analytics capabilities. Here are some of its key benefits:

Databricks File System provides a consistent interface to access data stored in various cloud storage systems (e.g., AWS S3, Azure Blob Storage, Google Cloud Storage). You can interact with data using familiar file system commands, abstracting away the complexities of the underlying storage technology.

Databricks File System implements POSIX-like file system semantics, which means:

  • Standard file path operations are supported
  • Compatibility with existing data processing frameworks
  • Simplified path handling for distributed computing environments

Databricks File System integrates tightly with Databricks' tools, allowing you to use its file system utilities (dbutils.fs) for common operations like listing, copying, and deleting files.

Databricks File System is optimized for Spark-based workloads. It supports high-speed read and write operations—ideal for ETL pipelines, real-time analytics, and large-scale data processing.

You can store various data formats in DBFS, including Parquet, Avro, JSON, ORC, CSV, binary formats, and more.

Databricks File System works with different programming environments, including Spark APIs, Python libraries, and shell commands.

Databricks File System helps you manage costs effectively. When you shut down a Databricks cluster, you stop incurring charges for the compute resources while still retaining access to your data stored in Databricks DBFS.

Mounted storage locations in Databricks File System are accessible across Databricks clusters, reducing duplication and simplifying collaboration between teams working on shared datasets.

Databricks File System facilitates secure data sharing across different clusters and users within your organization. It includes access controls that help protect sensitive information while allowing collaboration among team members.

Now that you have a clear understanding of Databricks DBFS, its features, characteristics, and benefits, let's dive into the main focus of this article: how to download files from DBFS to your local machine.

Step-by-Step Guide to Download Files From DBFS to Local Machine

Let's begin with a step-by-step guide on how to download files from DBFS to your local machine. We will explore five different techniques:

  1. Technique 1—Download Files from DBFS to Local Machine Using Databricks CLI
  2. Technique 2—Download Files from DBFS to Local Machine Using Web URL Access
  3. Technique 3—Download Files from DBFS to Local Machine Using Databricks Notebooks
  4. Technique 4—Download Files from DBFS to Local Machine Using Databricks Display Option
  5. Technique 5—Download Files from DBFS to Local Machine Using Databricks REST API

Prerequisites

Before you get started, make sure you have the following:

  • Databricks account and workspace access: You need access to your Databricks workspace and the necessary permissions to interact with DBFS.
  • Databricks CLI installed and configured: Install the Databricks CLI on your local machine and set it up to authenticate with your Databricks workspace (Technique 1 below walks through the installation and setup).
  • Permissions for accessing file paths in DBFS: Make sure your Databricks account has the necessary permissions to access the specific files stored in DBFS.
  • Exact/Accurate file path details in DBFS: You need the correct file path to the data stored in DBFS. You can retrieve this using the databricks fs ls command.
  • Access to a local machine for saving files: You should have write permissions to a folder on your local machine to store the downloaded files.

🔮 Technique 1—Download File From DBFS to Local Machine Using Databricks CLI

Step 1—Installing the Databricks CLI

If you haven’t installed the Databricks CLI yet, follow the installation steps below for your operating system.

For this article, we will be using Windows. If you are using Windows, follow these instructions carefully to install the Databricks CLI.

First, start by opening Windows PowerShell as an administrator. Check if you already have Winget or Chocolatey installed by running:

winget --version  
choco --version  

There are several options for installing the Databricks CLI on Windows, including using Winget, Chocolatey, Windows Subsystem for Linux (WSL), or installing manually from the source code.

Using WinGet: 

As a built-in package manager in Windows 10 and 11, Winget makes installation simple. Open Command Prompt and run the following command to confirm the Databricks CLI package is available (the install command itself comes a little later in this step):

winget search databricks

Using Chocolatey:

Chocolatey is a third-party package manager for Windows. Just open Command Prompt and execute the following command. Chocolatey will handle the rest for you.

choco install databricks-cli

Using Windows Subsystem for Linux:

If you’re comfortable with Linux commands, you can use WSL to download the Databricks CLI via curl. The process is identical to installing it on a Linux system.

Manual Installation

You can also download the CLI directly from its GitHub releases page. Before downloading, identify your system’s architecture by running $env:PROCESSOR_ARCHITECTURE in PowerShell or echo %PROCESSOR_ARCHITECTURE% in Command Prompt. Then, grab the appropriate zip file for your architecture, extract it, and locate the CLI executable.

Manually download Databricks CLI from source - Download file from Databricks DBFS

If you installed the CLI via Chocolatey or WSL in the steps above, you already have it and can skip ahead to verifying the installation.

For Winget:

If you’re using Winget, you can install the Databricks CLI by opening Command Prompt and running:

winget install Databricks.DatabricksCLI  

For manual installation from source:

By now, you should have downloaded and extracted the CLI zip file. Inside the extracted folder, you’ll find the Databricks CLI executable. Simply run the file to get started.

Once installed, it’s a good idea to confirm everything is working properly. Open a new terminal or Command Prompt window and run:

databricks --version  

If the Databricks CLI is correctly installed, this command will display the version number, like this:

Verifying Databricks CLI Installation - Download file from Databricks File System

If you see an error like command not found, it means the Databricks CLI isn’t properly installed or isn’t on your system PATH. Double-check your installation steps, especially whether the executable has been added to your PATH.

With the CLI successfully installed, you’re ready to move on to the next step: setting up the Databricks CLI.

Step 2—Setting up the Databricks CLI

After installing the Databricks CLI, you’ll need to set up authentication to connect it to your Databricks workspace. The simplest approach is to generate a Personal Access Token and then configure a profile with that token.

First, let's generate a Personal Access Token. A Personal Access Token acts like a password to authenticate the CLI with your Databricks workspace. Start by navigating to your Databricks workspace and then access the User Settings section under your username. 

From there:

Go to the Developer section and generate a new token under Access Token settings.

Generating a new Personal Access Token - Databricks DBFS - Databricks CLI

Add a comment to describe the token’s purpose and set its expiration. Then click Generate and copy the token. Make sure to save it securely because it won’t be retrievable later.

Generating a new Personal Access Token - Download file from Databricks DBFS

Next, let’s configure a profile with the token. To set up the CLI for token-based authentication, you’ll need to create a configuration profile:

Open a terminal or command prompt on your machine and run the command:

databricks configure

When prompted, enter your Databricks workspace URL (e.g., https://dbc-123456789.cloud.databricks.com). Then, paste the access token you generated earlier. This process automatically creates a configuration profile in your .databrickscfg file.

If preferred, you can manually set up the .databrickscfg file in your home directory. The file should follow this format:

[DEFAULT]  
host = https://xxxx.cloud.databricks.com  
token = XXXXXXXXXXXXXXXXXXXX  

[<profile-name>]  
host = https://yyyy.cloud.databricks.com  
token = XXXXXXXXXXXXXXXXXXXX  

You can also customize the location of this file by setting the DATABRICKS_CONFIG_FILE environment variable.

After setting up, verify that authentication is working by running the following command to list available profiles: 

databricks auth profiles
Verifying authentication - Databricks CLI - Download file from DBFS (Databricks File System)

Alternatively, to confirm that authentication is set up correctly, you can execute the following command:

databricks clusters spark-versions
Listing all Databricks Runtime versions for the Databricks workspace associated with DEFAULT profile - Databricks CLI

With authentication configured, the CLI is ready to interact with your Databricks workspace!

Step 3—Using DBFS LS Command to List All Files

Now that we’ve successfully installed and configured the Databricks CLI, it’s time to interact with your files in DBFS. Let’s start by listing all the files in the directory where your target file is stored. This step helps you confirm the file's exact path and ensures it’s available for download.

To list files in a specific directory within DBFS, use the following command:

databricks fs ls dbfs:/<path_to_folder>

Replace <path_to_folder> with the folder path in DBFS where the file is stored. 

You can also explore other commands available in the Databricks CLI utility. For example, to list the files in your workspace, you can use the ls command.

databricks workspace ls
Exploring and listing all the workspace commands available in the Databricks CLI utility

Step 4—Using DBFS CP Command to Copy All Files

Once you’ve identified the file path, it’s time to copy it to your local machine. The databricks fs cp command allows you to download files from Databricks DBFS directly.

Run the following command:

databricks fs cp dbfs:/<path_to_file> <local_destination_path>

Replace <path_to_file> with the exact file path you identified earlier. Also, replace <local_destination_path> with the local directory where you want to save the file. Make sure the destination directory exists and you have write permissions.

For instance, to download data.csv from dbfs:/<some_path>/ to your local Downloads folder, use:

databricks fs cp dbfs:/<some_path>/data.csv ~/Downloads/

If you need to download every file in a directory, copy the directory recursively with the --recursive flag. For example:

databricks fs cp --recursive dbfs:/<some_path>/ ~/Downloads/

This command downloads all files in the directory (including subdirectories) to your Downloads folder.

Step 5—Verifying the Downloaded File

After running the cp command, check your local machine to confirm the file transfer was successful. Navigate to the destination folder (~/Downloads in this example) and list the files:

ls ~/Downloads
Verifying the downloaded file from DBFS - Download file from DBFS (Databricks File System)

You should see your downloaded file (data.csv) listed. To confirm the contents of the file, open it using a text editor or other relevant software. For example:

cat ~/Downloads/<your_file>

If the file isn’t present or doesn’t open correctly:

  • Double-check the file path in your databricks fs cp command.
  • Make sure you have read permissions for the file in DBFS.
  • Verify that your local machine has enough storage space.

If you carefully follow these steps, you will have successfully downloaded files from DBFS to your local machine using the Databricks CLI. If you encounter any issues, go back and review the earlier steps to ensure your setup is correct.

🔮 Technique 2—Download Files From DBFS to Local Machine Using Web URL Access (via Databricks FileStore)

Now that we’ve covered how to use the Databricks CLI to download files from DBFS, let’s explore another method: download files from DBFS directly using a web URL (via Databricks FileStore). This approach is simpler and ideal when files are stored in the /FileStore/ path in DBFS or a path mounted to DBFS. Without these conditions, this method won’t work.

Step 1—Locating the Databricks Instance URL

The Databricks instance URL is the base URL for your Databricks workspace. It includes the unique tenant ID (the o parameter), which identifies your workspace and is needed to construct the download link. Follow these steps to locate it:

First, open your Databricks workspace in your browser. Look at the address bar. The URL should resemble one of these:

  • Azure: https://<region>.azuredatabricks.net/?o=<tenant-id>
  • AWS: https://<region>.cloud.databricks.com/?o=<tenant-id>
  • Community Edition: https://community.cloud.databricks.com/?o=<tenant-id>

Copy and note the URL:

  • Base URL: the part before the ?
  • Tenant ID: the value of the o parameter after the ?
Databricks instance URL - Download file from DBFS (Databricks File System)

You’ll use these components to create the web URL for downloading the file.

Step 2—Locating the File in Databricks DBFS

Next, you need to identify the file you want to download from DBFS. To do this, navigate to the DBFS tab in Databricks or use the Databricks UI to explore the file system. Make sure the file is located in the /FileStore/ directory. 

Navigating to the DBFS tab in Databricks - Databricks FileStore

As you can see, your file path might look like this: /FileStore/data.csv. If the file is stored outside of the /FileStore/ directory, you might need to move it there or mount the required path to DBFS. Otherwise, this method won’t work.
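
If your file does live outside /FileStore/, one simple workaround is to copy it into /FileStore/ from a notebook before building the URL. A minimal sketch, assuming a Databricks notebook and a hypothetical source path:

# Hypothetical source path outside /FileStore/
source_path = "dbfs:/tmp/exports/data.csv"

# Copy the file into /FileStore/ so it becomes reachable via the /files/ web URL
dbutils.fs.cp(source_path, "dbfs:/FileStore/data.csv")

# Confirm the copy landed where expected
display(dbutils.fs.ls("dbfs:/FileStore/"))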

Step 3—Modifying the File Path for Web Access

The /FileStore/ path is a special folder that allows web access. To create a valid web-accessible URL:

  1. Replace /FileStore/ in the path with /files/. For example:
  • Original: /FileStore/data.csv
  • Updated: /files/data.csv
  2. Combine the base URL, updated file path, and tenant ID to construct the full URL:
  • Example: https://community.cloud.databricks.com/files/data.csv?o=1234567890123456
Modifying the file path for web access - Download file from DBFS - Databricks FileStore

This URL directly points to your file and makes it accessible for download.
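
If you build these URLs often, a tiny helper can assemble them for you. This is purely an illustrative sketch; build_download_url is a hypothetical function, and the base URL, path, and tenant ID below are placeholders:

def build_download_url(base_url: str, dbfs_path: str, tenant_id: str) -> str:
    """Turn a /FileStore/ DBFS path into a web-accessible download URL."""
    if not dbfs_path.startswith("/FileStore/"):
        raise ValueError("Only files under /FileStore/ can be served via web URLs")
    web_path = dbfs_path.replace("/FileStore/", "/files/", 1)
    return f"{base_url.rstrip('/')}{web_path}?o={tenant_id}"

# Example with placeholder values
print(build_download_url(
    "https://community.cloud.databricks.com",
    "/FileStore/data.csv",
    "1234567890123456",
))
# -> https://community.cloud.databricks.com/files/data.csv?o=1234567890123456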

Step 4—Downloading the File via Web Browser

With the web URL ready:

  • Paste it into your browser’s address bar.
  • Press Enter. Your browser will prompt you to download the file.
  • Save the file to your preferred location on your local machine.
Downloading the file via web browser - Download file from DBFS - Databricks FileStore

Step 5—Verifying the Downloaded File

After downloading, open the file to make sure it's the correct one. Thoroughly check its contents and format to confirm that the download was successful.

Note that this method is efficient for files in the /FileStore/ directory or paths mounted to DBFS. For other paths, you might need to use the technique described above or the one that will be covered below.

🔮 Technique 3—Download Files From DBFS to Local Machine Utilizing Databricks Notebooks

We’ve already explored two ways to download files from Databricks DBFS (Databricks File System) to your local machine: the Databricks CLI and web URL access via the FileStore.

Now, let’s dive into the next method—using a Databricks Notebook to directly handle file downloads from Databricks File System to your local machine. This approach is great when you want to integrate file download operations within your data processing workflows. Here’s how you can do it:

Step 1—Set Up Your Databricks Cluster

Before you start, make sure you have an active Databricks cluster. If you don't have one running, head over to the "Compute" section in your Databricks workspace. From there, you can either create a new compute or start an existing one.

Setting up and configuring Databricks Clusters - Download file from DBFS

Make sure to attach the necessary permissions to access your DBFS files. Having a running cluster is essential because Databricks Notebooks execute on the cluster.

Step 2—Open a Databricks Notebook

Navigate to the "Workspace" section in Databricks. From there, you can either create a new Notebook or open an existing one. Make sure to set the language to Python, or choose another supported language based on your needs. This Notebook will serve as your workspace for downloading files.

Step 3—Locate Your File in Databricks DBFS

To download a file from Databricks DBFS, you need its full path. Here’s how to find it: Go to the "Catalog" section and select the "DBFS" tab located next to Database Tables. Click on the FileStore section and browse to the file you want to download. Right-click on the file and copy its full path, which will look something like this:

dbfs:/FileStore/data.csv
Selecting and locating DBFS file path - Download file from DBFS - Databricks Notebooks

We will use this path in our code later on.

Step 4—Attach the Notebook to Your Cluster

At the top of the Notebook, you’ll see an option to attach the Notebook to a cluster. Select the cluster you set up in Step 1.

Attaching the Databricks Notebook to clusters - Download file from DBFS

Step 5—Write Code to Process and Download the File

In the Notebook, add the following code to read the file from Databricks DBFS and create a downloadable link:

import base64
from IPython.display import HTML

# Reading CSV from DBFS into Spark DataFrame
df = spark.read.csv("dbfs:/FileStore/data.csv", header=True, inferSchema=True)

# Converting to a Pandas DataFrame
pandas_df = df.toPandas()

# Creating downloadable link
csv_data = pandas_df.to_csv(index=False)
b64 = base64.b64encode(csv_data.encode()).decode()
href = f'<a href="data:file/csv;base64,{b64}" download="data.csv">Click here to download data.csv</a>'

# Rendering the link in HTML format
HTML(href)

As you can see, we start by importing the base64 library for encoding and the HTML class from IPython.display for rendering HTML content. Next, we read a CSV file from DBFS into a Spark DataFrame using spark.read.csv(), specifying the file path and options to include the header and infer the schema. We then convert this Spark DataFrame into a Pandas DataFrame for easier manipulation.

After that, we generate CSV data from the Pandas DataFrame and encode it in base64 format. We then create an HTML anchor tag that lets you download the CSV file directly by clicking on it. Finally, we use HTML(href) to render this download link in the Notebook, making it easy to download the file from DBFS to your local machine. Keep in mind that this approach embeds the entire file, base64-encoded, in the notebook output, so it is best suited to relatively small files.

Step 6—Verifying the Downloaded File

After running the cell containing your code, look for a clickable link that says "Click here to download data.csv". Click this link, and it should prompt you to download data.csv directly onto your local machine.

Verifying the downloaded file - Download file from DBFS - Databricks Notebooks

🔮 Technique 4—Download Files From DBFS to Local Machine Using Databricks Display Option

Now that we've explored several techniques to download files from DBFS—including Databricks CLI, web URL/UI, and Databricks Notebooks—let's focus on a powerful method using the display() option. This technique offers a user-friendly approach to transferring data directly from your Databricks environment to your local machine.

Step 1—Configuring Databricks Workspace

Log into your Databricks account and navigate to your workspace dashboard. Make sure you have the necessary permissions to access and download files.

Step 2—Open a Databricks Notebook

Launch a new Databricks Notebook or open an existing one where you'll perform your data operations. Choose a language compatible with your workflow—Python is typically the most versatile for Spark DataFrame manipulations.

Step 3—Locate Your File in Databricks DBFS

Finding the exact file path is crucial for successful data retrieval. Navigate to the "Catalog" section and select the "DBFS" tab located next to Database Tables. Click on the FileStore section and browse to the file you want to download. Right-click on the file and copy its full path.

Selecting and locating Databricks DBFS file path

Copy this path precisely, as you'll need it for DataFrame creation and download operations.

Step 4—Attach the Notebook to Your Cluster

Select an active cluster or create a new one to process your data. Attaching your Notebook to a cluster ensures you have the computational resources necessary for reading and manipulating your DataFrame.

Step 5—Prepare Your Spark DataFrame

Create a Spark DataFrame using the file path you identified earlier. Use the appropriate Spark reading method for your file type. For CSV files, you might use spark.read.csv(), while other formats require different readers. Here's sample Python code to read a CSV:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("dbfs:/FileStore/data.csv", header=True)
Preparing Spark DataFrame

Step 6—Verify DataFrame Size

Before downloading, check the total number of records in your DataFrame. The display() method has a strict limit of 1 million records. Use the count() method to assess your dataset's size:

record_count = df.count()
print(f"Total records in DataFrame: {record_count}")

if record_count > 1000000:
    print("Warning: Dataset exceeds download limits")
Verifying DataFrame size

Step 7—Using display() to Display the DataFrame

Invoke the display() function to render your DataFrame and prepare it for download:

display(df)
Using display() to Display the DataFrame

As you can see, this command generates an interactive table view of your data within the Databricks Notebook interface.

Step 8—Download from UI

In the displayed DataFrame view, look for the "Download All Rows" 📥 button. This is typically located in the bottom-left corner of the data table. Click the button and Databricks will automatically convert your Spark DataFrame to a CSV file and initiate the download to your local machine.

Downloading all data from DataFrame view
Note that display() method works well for small datasets, but it's not suitable for massive files. If your data exceeds 1 million records, consider alternative methods like using Databricks CLI or writing the entire DataFrame to a location accessible for download.
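
As a rough sketch of that last alternative, you could write the full DataFrame to a /FileStore/ path and then pull it down with the CLI or a /files/ URL. This assumes a Spark DataFrame named df already exists and uses a hypothetical output path:

# Write the full DataFrame as a single CSV file under /FileStore/
# (coalesce(1) forces one output file; avoid it for truly huge datasets)
(df.coalesce(1)
   .write
   .mode("overwrite")
   .option("header", True)
   .csv("dbfs:/FileStore/exports/full_dataset"))

# Spark writes a part-*.csv file inside that folder; list it to find the exact name,
# then download it with the Databricks CLI or a /files/ web URL
display(dbutils.fs.ls("dbfs:/FileStore/exports/full_dataset/"))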

Limitations of this technique:

  • Limited to 1 million records
  • Potential performance issues with large datasets
  • Requires stable internet connection

If you follow these steps, you can efficiently download files from DBFS to your local machine using the Databricks display() option. The method is very straightforward and user-friendly. Now let's move on to the final technique.

🔮 Technique 5—Download Files From DBFS to Local Machine Using Databricks REST API

Now that we’ve explored techniques like the Databricks CLI, UI, and even display options within Databricks to download files from DBFS to your local machine, let’s turn our focus to another powerful method: the Databricks REST API. This technique provides flexibility and can be particularly useful for automation or working within custom scripts. Let’s walk through this step-by-step process in detail.

Step 1—Setting Up Your Databricks Cluster

Before diving into API usage, make sure you have an active Databricks cluster. The cluster serves as the compute backbone of your workspace and is essential for accessing and managing Databricks DBFS files. If you haven’t already, log into your Databricks workspace and start your cluster or create a new one. You’ll find this under the Compute section. Remember, this cluster doesn’t directly handle the download but enables API-based interactions with Databricks DBFS.

Step 2—Generating a Personal Access Token

The REST API requires authentication, and a personal access token is the simplest way to provide it. Navigate to the User Settings section under your username in your Databricks workspace, then head over to the Developer section and generate a new token under the Access Token settings.

Generating a new Personal Access Token

Next, add a meaningful comment to describe the token’s purpose and set its expiration. Then click Generate and copy the token. Databricks displays the token only once, so store it securely. If you lose it, you’ll need to create a new one.

Generating a new Personal Access Token

Step 3—Identifying the Full File Path in Databricks DBFS

Next, determine the full path of the file you want to download. You can browse Databricks DBFS paths using the Databricks UI or dbutils.fs commands in a Notebook. For instance, a typical path might look like /<your-file-path>/data.csv. Note this path carefully because you’ll need it to construct the API request.
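
If you prefer to look the path up with the API itself rather than the UI, the DBFS list endpoint can do it. A short sketch, assuming the requests library is installed and using the same placeholder host and token as the script below:

import requests

# Placeholders: use the same workspace URL and token as in the script below
host = "https://<your-databricks-instance>.cloud.databricks.com"
token = "<your-api-token>"

# List a DBFS directory to find the exact file path
response = requests.get(
    f"{host}/api/2.0/dbfs/list",
    headers={"Authorization": f"Bearer {token}"},
    params={"path": "/FileStore/"},
)
response.raise_for_status()

for entry in response.json().get("files", []):
    print(entry["path"], entry["file_size"])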

Step 4—Using the REST API to Access Files

The Databricks REST API provides a /api/2.0/dbfs/read endpoint specifically for reading files. This endpoint returns file data in chunks, which is essential for handling large files: files are read up to 1 MB at a time, requiring repeated calls for anything larger. You’ll use tools like Python (with the requests library) or curl to interact with this endpoint.

Step 5—Crafting the API Request

Let’s craft a custom Python script for this step. Using the requests library, you’ll authenticate with the API using your personal access token and send requests to download the file. Construct the API URL using your workspace URL (e.g., https://<workspace-url>), and set the file path, offset, and chunk size. Here’s an example snippet:

import requests
import base64

# Replace these with your actual details
host = "https://<your-databricks-instance>.cloud.databricks.com"
token = "<your-api-token>"  
dbfs_path = "/dbfs/FileStore/data.csv"
local_file = "output.csv"  # Local file path to save

# API endpoint to read the file
read_endpoint = f"{host}/api/2.0/dbfs/read"

# Headers for authentication
headers = {
    "Authorization": f"Bearer {token}"
}

# Download file in chunks (max 1 MB/read)
offset = 0
chunk_size = 1048576  # 1 MB
with open(local_file, "wb") as file:
    while True:
        # Requesting file chunk
        response = requests.get(read_endpoint, headers=headers, params={"path": dbfs_path, "offset": offset, "length": chunk_size})
        
        if response.status_code != 200:
            print(f"Error: {response.status_code} - {response.text}")
            break
        
        # Decoding base64 content
        content = base64.b64decode(response.json()["data"])
        
        # Write to file
        file.write(content)
        
        # Stop if less than chunk_size data is received
        if len(content) < chunk_size:
            print("Download complete.")
            break
        
        # Update offset for the next chunk
        offset += chunk_size

Or, you can even use curl. This example assumes the jq and base64 utilities are available, and note that a single call returns at most 1 MB, so use the Python script above for larger files:

curl -X GET "https://<workspace-url>/api/2.0/dbfs/read?path=/FileStore/data.csv&offset=0&length=1048576" \
    -H "Authorization: Bearer <your-token>" \
    | jq -r .data | base64 --decode > output.csv

The endpoint returns JSON with the file contents base64-encoded in a data field, which is why the response is piped through jq and base64 before being written to output.csv.

Step 6—Saving and Verifying the Downloaded File

Once the data is downloaded, save it to your local system. If you use the above script, the file will be saved as output.csv. Open the file to confirm that the contents match what you expected from DBFS.
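
For a quick local sanity check, you can load the downloaded file and inspect it. A small sketch, assuming pandas is installed on your local machine and that output.csv is the file produced by the script above:

import pandas as pd

# Load the downloaded file and inspect its shape and first few rows
local_df = pd.read_csv("output.csv")
print(f"Rows: {len(local_df)}, Columns: {len(local_df.columns)}")
print(local_df.head())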

Limitations on Downloading Files From Databricks DBFS

Downloading files from the Databricks File System (DBFS) has its challenges, especially with large datasets or certain Databricks editions. Here are the key limitations:

1) Databricks Community Edition does not support the Databricks Command Line Interface (CLI).

2) You cannot download files directly from Databricks DBFS through the user interface unless the DBFS File Browser is enabled in your workspace settings. If this feature is disabled, you won't see options to download files directly from the Databricks DBFS root.

Enabling DBFS file browser option - Download file from Databricks DBFS

3) Databricks display() method can only export datasets up to 1 million records as a CSV file. Larger datasets cannot be downloaded using this approach, which makes it impractical for substantial ETL or analysis tasks.

4) Files must be located in specific paths within Databricks DBFS, such as /FileStore/, to generate accessible HTTP links. Mounting non-standard paths for such operations requires additional configuration.

5) Large file downloads via DBFS may lead to timeouts or interruptions when using web URLs or manual exports.

6) Bulk transfers are possible with the Databricks CLI or Databricks REST API, but there's a catch: they require token-based authentication and some upfront configuration, so the setup isn't straightforward for everyone.

7) Publicly accessible Databricks DBFS file path URLs are convenient, but they can also bring some security risks. Proper access control is necessary to avoid unauthorized access.

8) Files stored in the Databricks DBFS root are accessible by all users in the workspace. This broad access can pose security risks, especially for sensitive data. Databricks recommends using Unity Catalog volumes for better access control and security.

Conclusion

And that’s a wrap! Downloading files from Databricks DBFS to your local machine can seem overwhelming at first. But with techniques like using the Databricks CLI, Databricks REST API, web URL access, or Databricks Notebooks, it's definitely doable. Each method works well for different situations—like managing big datasets, backing up important data, or running analysis locally. Just remember the limitations, such as file size limits and specific path requirements, to pick the best approach for your workflow. Now, you're all set to transfer data between DBFS and your local environment with ease.

In this article, we have covered:

  • What the Databricks File System (DBFS) is, its key components (the DBFS root and the deprecated DBFS mounts), and its benefits
  • How to download files from DBFS using the Databricks CLI
  • How to download files via web URL access through /FileStore/
  • How to download files using Databricks Notebooks and the display() option
  • How to download files programmatically with the Databricks REST API
  • Limitations to keep in mind when downloading files from DBFS

… and so much more!

FAQs

What is DBFS Databricks?

DBFS stands for Databricks File System; it provides a distributed file system interface over cloud storage solutions used by Databricks.

What is the full form of DBFS?

DBFS stands for Databricks File System.

What is the difference between Databricks DBFS and HDFS?

DBFS is specifically designed for use within Databricks environments leveraging cloud storage; HDFS (Hadoop Distributed File System) is used primarily within Hadoop ecosystems on-premises or on cloud infrastructure but lacks some of the abstraction layers provided by DBFS.

Is DB and DBFS the same?

No, DB refers generally to databases, while DBFS refers specifically to Databricks File System within data management contexts.

Where is Databricks DBFS data stored?

Data in DBFS is stored within cloud object storage systems like AWS S3 or Azure Blob Storage depending on where your Databricks workspace is hosted.

Can I use the DBFS REST API to download files?

Yes, you can use REST API calls along with authentication tokens to programmatically download files from DBFS directly into local environments or other applications/services as needed.

Can you automate the process of downloading multiple files from DBFS?

Yes. You can automate the process by scripting Databricks CLI commands or REST API calls, which lets you transfer multiple files between environments on a schedule without manual intervention.

Is there a way to download files from DBFS without using external tools?

Yes. You can use web URL access or Databricks Notebook functionality directly within Databricks itself, with no third-party tools required, unless your workflow needs capabilities beyond what the platform provides natively.

How can I download a large file from DBFS?

To download large files, use the REST API, which reads data in chunks of up to 1 MB, and add proper error handling and retries so that long-running transfers complete without timeouts.