
How to load data from SFTP to Databricks Lakehouse

Learn how to use Airbyte to synchronize your SFTP data into Databricks Lakehouse within minutes.


TL;DR

This can be done by building a data pipeline manually, usually with a Python script (you can leverage a tool such as Apache Airflow for orchestration). This process can take more than a full week of development. Or it can be done in minutes with Airbyte in three easy steps:

  1. set up SFTP as a source connector (authenticating with a username/password or an SSH private key)
  2. set up Databricks Lakehouse as a destination connector
  3. define which data you want to transfer and how frequently

You can choose to self-host the pipeline using Airbyte Open Source or have it managed for you with Airbyte Cloud.

This tutorial’s purpose is to show you how.

What is SFTP

SFTP (SSH File Transfer Protocol) is a secure way to transfer files between two computers over the internet. It uses encryption to protect the data being transferred, making it more secure than traditional FTP (File Transfer Protocol). SFTP is commonly used by businesses and organizations to transfer sensitive data such as financial information, medical records, and personal data. It requires authentication with a username and password or a public key, ensuring that only authorized users can access the files. SFTP is also platform-independent, meaning it can be used on any operating system, making it a versatile and reliable option for secure file transfers.

What is Databricks Lakehouse

Databricks is an American enterprise software company founded by the creators of Apache Spark. Databricks combines data warehouses and data lakes into a lakehouse architecture.

Integrate SFTP with Databricks Lakehouse in minutes

Try for free now

Prerequisites

  1. An SFTP account from which to automatically transfer your customer data.
  2. A Databricks Lakehouse account.
  3. An active Airbyte Cloud account, or you can choose to run Airbyte Open Source locally. You can follow the instructions to set up Airbyte on your system using docker-compose.

Airbyte is an open-source data integration platform that consolidates and streamlines the process of extracting and loading data from multiple data sources to data warehouses. It offers pre-built connectors, including SFTP and Databricks Lakehouse, for seamless data migration.

When using Airbyte to move data from SFTP to Databricks Lakehouse, it extracts data from SFTP using the source connector, converts it into a format Databricks Lakehouse can ingest using the provided schema, and then loads it into Databricks Lakehouse via the destination connector. This allows businesses to leverage their SFTP data for advanced analytics and insights within Databricks Lakehouse, simplifying the ETL process and saving significant time and resources.

Methods to Move Data From SFTP to Databricks Lakehouse

  • Method 1: Connecting SFTP to Databricks Lakehouse using Airbyte.
  • Method 2: Connecting SFTP to Databricks Lakehouse manually.

Method 1: Connecting SFTP to Databricks Lakehouse using Airbyte

Step 1: Set up SFTP as a source connector

1. Open the Airbyte platform and navigate to the "Sources" tab on the left-hand side of the screen.
2. Click on the "Create a new connection" button and select "SFTP" as the source connector.
3. Enter a name for the connection and click "Next".
4. In the "Connection Configuration" section, enter the hostname or IP address of the SFTP server, as well as the port number (usually 22).
5. Enter the username and password for the SFTP server in the "Authentication" section.
6. If your SFTP server requires a private key for authentication, select the "Private Key" option and enter the path to the key file.
7. In the "Advanced" section, you can specify additional options such as the path to the remote directory and the file pattern to use for selecting files.
8. Click "Test" to verify that the connection is working correctly.
9. If the test is successful, click "Create" to save the connection and start syncing data from the SFTP server.
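
For reference, the information collected in the UI steps above boils down to a small configuration payload. The sketch below is a Python dict with illustrative field names and placeholder values; the SFTP connector form in the Airbyte UI is the authoritative list of fields:

```python
# Illustrative SFTP source configuration; field names are examples, not the exact connector spec
sftp_source_config = {
    "host": "sftp.example.com",       # hostname or IP address of the SFTP server
    "port": 22,                       # default SFTP port
    "username": "airbyte_user",
    "credentials": {
        "auth_method": "password",    # or switch to private-key authentication
        "password": "********",
    },
    "folder_path": "/exports",        # remote directory to read files from
    "file_pattern": "*.csv",          # only pick up CSV exports
}
```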

Step 2: Set up Databricks Lakehouse as a destination connector

1. First, navigate to the Airbyte website and log in to your account.
2. Once you are logged in, click on the "Destinations" tab on the left-hand side of the screen.
3. Scroll down until you find the "Databricks Lakehouse" connector and click on it.
4. You will be prompted to enter your Databricks Lakehouse credentials, including your account name, personal access token, and workspace ID.
5. Once you have entered your credentials, click on the "Test" button to ensure that the connection is successful.
6. If the test is successful, click on the "Save" button to save your Databricks Lakehouse destination connector settings.
7. You can now use the Databricks Lakehouse connector to transfer data from your source connectors to your Databricks Lakehouse destination.
8. To set up a data transfer, navigate to the "Sources" tab and select the source connector that you want to use.
9. Follow the prompts to enter your source connector credentials and configure your data transfer settings.
10. Once you have configured your source connector, select the Databricks Lakehouse connector as your destination and follow the prompts to configure your data transfer settings.
11. Click on the "Run" button to initiate the data transfer.
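
Similarly, the Databricks Lakehouse destination setup comes down to a handful of workspace connection details. The values below are placeholders and the field names are illustrative; check the connector form in the Airbyte UI for the exact fields it requires:

```python
# Illustrative Databricks Lakehouse destination configuration; all values are placeholders
databricks_destination_config = {
    "server_hostname": "your-workspace.cloud.databricks.com",  # workspace hostname
    "http_path": "/sql/1.0/warehouses/abc123",                 # SQL warehouse or cluster HTTP path
    "personal_access_token": "********",                       # generated under User Settings
    "database": "default",                                      # target database/schema for synced tables
}
```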

Step 3: Set up a connection to sync your SFTP data to Databricks Lakehouse

Once you've successfully connected SFTP as a data source and Databricks Lakehouse as a destination in Airbyte, you can set up a data pipeline between them with the following steps:

  1. Create a new connection: On the Airbyte dashboard, navigate to the 'Connections' tab and click the '+ New Connection' button.
  2. Choose your source: Select SFTP from the dropdown list of your configured sources.
  3. Select your destination: Choose Databricks Lakehouse from the dropdown list of your configured destinations.
  4. Configure your sync: Define the frequency of your data syncs based on your business needs. Airbyte allows both manual and automatic scheduling for your data refreshes.
  5. Select the data to sync: Choose the specific SFTP files or streams you want to import into Databricks Lakehouse. You can sync all data or select specific streams and fields.
  6. Select the sync mode for your streams: Choose between full refresh and incremental syncs (with deduplication if you want), either for all streams at once or per stream. Incremental syncs are only available for streams with a cursor field, and deduplication additionally requires a primary key.
  7. Test your connection: Click the 'Test Connection' button to make sure that your setup works. If the connection test is successful, save your configuration.
  8. Start the sync: If the test passes, click 'Set Up Connection'. Airbyte will start moving data from SFTP to Databricks Lakehouse according to your settings.

Remember, Airbyte keeps your data in sync at the frequency you determine, ensuring your Databricks Lakehouse is always up to date with your SFTP data.

Method 2: Connecting SFTP to Databricks Lakehouse manually

Moving data from an SFTP server to Databricks Lakehouse involves several steps. Databricks Lakehouse is a data management platform that combines the capabilities of a data lake and a data warehouse. Below is a step-by-step guide to achieve the transfer without using third-party connectors or integrations.

Prerequisites

1. Access to an SFTP server with the data you wish to transfer.

2. A Databricks workspace and the necessary permissions to create clusters and jobs.

3. Knowledge of Python, Scala, or R programming languages, which are supported by Databricks notebooks.

Step 1: Set Up Your Databricks Environment

1. Log in to your Databricks workspace.

2. Create a new cluster or start an existing one that you wish to use for the data transfer process.

3. Once the cluster is running, create a new notebook in the workspace.

Step 2: Install Required Libraries

In your Databricks notebook, you may need to install additional libraries to work with the SFTP protocol, such as `paramiko` for Python. Use the following command to install the required library:

```python
%pip install paramiko
```

Step 3: Establish SFTP Connection

In the notebook, write a script to establish a connection to your SFTP server. Here's an example in Python using the `paramiko` library:

```python
import paramiko

sftp_hostname = 'your_sftp_server.com'
sftp_port = 22  # or the port your SFTP server uses
sftp_username = 'your_username'
sftp_password = 'your_password'  # or use key-based authentication

# Initialize the SSH client
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Connect to the SFTP server
ssh_client.connect(sftp_hostname, port=sftp_port, username=sftp_username, password=sftp_password)

# Create an SFTP session
sftp_client = ssh_client.open_sftp()
```
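
If your server uses key-based authentication instead of a password (as the comment in the snippet above hints), a minimal variation looks like this; the key path is illustrative and should point to wherever you securely store the private key:

```python
import paramiko

# Load the private key (path is illustrative; store keys securely, e.g. via Databricks secrets)
private_key = paramiko.RSAKey.from_private_key_file('/dbfs/tmp/keys/id_rsa')

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Authenticate with the key instead of a password
ssh_client.connect('your_sftp_server.com', port=22, username='your_username', pkey=private_key)
sftp_client = ssh_client.open_sftp()
```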

Step 4: Download Data from SFTP Server

Identify the files you want to transfer and download them to the Databricks file system (DBFS).

```python
remote_file_path = '/path/to/remote/file.csv'
local_file_path = '/dbfs/tmp/my_data.csv'  # Temporary storage in DBFS

# Download the file from SFTP to local DBFS
sftp_client.get(remote_file_path, local_file_path)

# Close the SFTP client
sftp_client.close()
ssh_client.close()
```
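
If you need to pull several files rather than a single one, you can list the remote directory and filter by a filename pattern. This sketch (directory, landing path, and pattern are illustrative) should run before the SFTP session is closed:

```python
import fnmatch
import os

remote_dir = '/path/to/remote'        # illustrative remote directory
local_dir = '/dbfs/tmp/sftp_import'   # illustrative landing directory on DBFS
pattern = '*.csv'                     # only download CSV files

os.makedirs(local_dir, exist_ok=True)

# List the remote directory and download every file matching the pattern
for filename in sftp_client.listdir(remote_dir):
    if fnmatch.fnmatch(filename, pattern):
        sftp_client.get(f'{remote_dir}/{filename}', f'{local_dir}/{filename}')
```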

Step 5: Load Data into Databricks DataFrame

Load the downloaded data into a DataFrame for further processing or direct storage into the Databricks Lakehouse.

```python
# Use PySpark to read the data into a DataFrame.
# Note: files written through the local /dbfs FUSE mount are addressed as dbfs:/ paths in Spark.
dbfs_file_path = 'dbfs:/tmp/my_data.csv'
df = spark.read.csv(dbfs_file_path, header=True, inferSchema=True)

# Perform any necessary data transformations here
```
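
Relying on inferSchema makes Spark scan the file an extra time and can guess column types incorrectly, so for recurring loads you may prefer to declare the schema explicitly. The column names below are purely illustrative:

```python
from pyspark.sql.types import StructType, StructField, IntegerType, StringType, TimestampType

# Illustrative schema; replace the fields with the columns your file actually contains
schema = StructType([
    StructField("id", IntegerType(), nullable=False),
    StructField("name", StringType(), nullable=True),
    StructField("created_at", TimestampType(), nullable=True),
])

df = spark.read.csv(dbfs_file_path, header=True, schema=schema)
```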

Step 6: Write Data to Databricks Lakehouse

Now, write the DataFrame to the Databricks Lakehouse, which is backed by a Delta Lake on top of your data storage.

```python
# Define the path to the Delta Lake
delta_lake_path = '/mnt/delta_lakehouse/my_data'

# Write the DataFrame to the Delta Lake
df.write.format("delta").mode("overwrite").save(delta_lake_path)
```
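
If you also want the data queryable by name rather than only by path, you can register the Delta location as a table in the metastore; the table name here is illustrative:

```python
# Register the Delta location as a table so it can be queried by name
spark.sql(f"""
    CREATE TABLE IF NOT EXISTS my_sftp_data
    USING DELTA
    LOCATION '{delta_lake_path}'
""")

# Query it like any other table
spark.sql("SELECT COUNT(*) FROM my_sftp_data").show()
```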

Step 7: Schedule Regular Data Transfers (Optional)

If you need to transfer data regularly, you can schedule the notebook as a job in Databricks.

1. Go to the 'Jobs' tab in your Databricks workspace.

2. Create a new job, and select the notebook you've created as the task.

3. Configure the schedule to run as often as needed.
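
If you prefer to define the schedule as configuration rather than through the UI, a job definition roughly takes the shape below. The field names follow the Databricks Jobs API as an assumption, and the cluster ID and notebook path are placeholders; consult the Jobs API documentation for the authoritative schema:

```python
# Illustrative job definition; field names are assumed from the Databricks Jobs API,
# and all IDs/paths are placeholders.
job_definition = {
    "name": "sftp-to-lakehouse-sync",
    "tasks": [
        {
            "task_key": "ingest_sftp_files",
            "notebook_task": {"notebook_path": "/Users/you@example.com/sftp_to_delta"},
            "existing_cluster_id": "1234-567890-abcde123",
        }
    ],
    # Quartz cron syntax: run every day at 02:00 in the given timezone
    "schedule": {
        "quartz_cron_expression": "0 0 2 * * ?",
        "timezone_id": "UTC",
    },
}
```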

Step 8: Clean Up (Optional)

After the data transfer is complete, you may want to delete the temporary files from DBFS to free up space.

```python
# dbutils.fs expects DBFS paths, so reference the file as dbfs:/ rather than the local /dbfs mount
dbutils.fs.rm('dbfs:/tmp/my_data.csv')
```

Use Cases to transfer your SFTP data to Databricks Lakehouse

Integrating data from SFTP to Databricks Lakehouse provides several benefits. Here are a few use cases:

  1. Advanced Analytics: Databricks Lakehouse’s powerful data processing capabilities enable you to perform complex queries and data analysis on your SFTP data, extracting insights that wouldn't be possible within SFTP alone.
  2. Data Consolidation: If you're using multiple other sources along with SFTP, syncing to Databricks Lakehouse allows you to centralize your data for a holistic view of your operations, and to set up a change data capture process so you never have any discrepancies in your data again.
  3. Historical Data Analysis: An SFTP server only retains the files currently stored on it, and older exports are often rotated or archived away. Syncing data to Databricks Lakehouse allows for long-term data retention and analysis of historical trends over time.
  4. Data Security and Compliance: Databricks Lakehouse provides robust data security features. Syncing SFTP data to Databricks Lakehouse ensures your data is secured and allows for advanced data governance and compliance management.
  5. Scalability: Databricks Lakehouse can handle large volumes of data without affecting performance, providing an ideal solution for growing businesses with expanding SFTP data.
  6. Data Science and Machine Learning: By having SFTP data in Databricks Lakehouse, you can apply machine learning models to your data for predictive analytics, customer segmentation, and more.
  7. Reporting and Visualization: While SFTP itself offers no reporting capabilities, data visualization tools like Tableau, Power BI, and Looker can connect to Databricks Lakehouse, providing more advanced business intelligence options. If you have SFTP files that need to land as a Databricks Lakehouse table, Airbyte can do that automatically.

Wrapping Up

To summarize, this tutorial has shown you how to:

  1. Configure an SFTP server as an Airbyte data source connector.
  2. Configure Databricks Lakehouse as a data destination connector.
  3. Create an Airbyte data pipeline that automatically moves data from SFTP to Databricks Lakehouse on the schedule you set.

With Airbyte, creating data pipelines takes minutes, and the data integration possibilities are endless. Airbyte supports the largest catalog of API tools, databases, and files, among other sources. Airbyte's connectors are open-source, so you can add custom objects to a connector, or even build a new connector from scratch in about 10 minutes with the no-code Connector Builder, with no local dev environment and no dedicated data engineer required.

We look forward to seeing you make use of it! We invite you to join the conversation on our community Slack Channel, or sign up for our newsletter. You should also check out other Airbyte tutorials, and Airbyte’s content hub!

What should you do next?

We hope you enjoyed the read. Here are three ways we can help you on your data journey:

Easily address your data movement needs with Airbyte Cloud
Take the first step towards extensible data movement infrastructure that will give a ton of time back to your data team. 
Get started with Airbyte for free
Talk to a data infrastructure expert
Get a free consultation with an Airbyte expert to significantly improve your data movement infrastructure. 
Talk to sales
Improve your data infrastructure knowledge
Subscribe to our monthly newsletter and get the community's latest content along with updates on Airbyte's progress in its mission to solve data integration once and for all.
Subscribe to newsletter

Frequently Asked Questions

What data can you extract from SFTP?

SFTP is a protocol rather than an application with its own API, so the data you can extract is whatever files and metadata your server exposes. Common categories include:

1. File data: SFTP lets users access and transfer files securely, including uploading, downloading, and managing files of any format.
2. User data: SFTP servers maintain usernames, passwords or keys, and permissions, which control who can access which files and folders.
3. Server data: Server logs, configurations, and status information allow users to monitor and manage server resources.
4. Security data: Encryption keys, certificates, and security policies help keep data protected from unauthorized access.
5. Network data: IP addresses, network configurations, and traffic information support monitoring and managing network resources.

What data can you transfer to Databricks Lakehouse?

You can transfer a wide variety of data to Databricks Lakehouse. This usually includes structured, semi-structured, and unstructured data like transaction records, log files, JSON data, CSV files, and more, allowing robust, scalable data integration and analysis.

What are top ETL tools to transfer data from SFTP to Databricks Lakehouse?

The most prominent ETL tools to transfer data from SFTP to Databricks Lakehouse include:

  • Airbyte
  • Fivetran
  • Stitch
  • Matillion
  • Talend Data Integration

These tools help in extracting data from SFTP and various sources (APIs, databases, and more), transforming it efficiently, and loading it into Databricks Lakehouse and other databases, data warehouses and data lakes, enhancing data management capabilities.