TL;DR
This can be done by building a data pipeline manually, usually with a Python script (you can leverage a tool such as Apache Airflow for orchestration). This process can take more than a full week of development. Or it can be done in minutes on Airbyte in three easy steps:
- set up S3 as a source connector (authenticating with your AWS credentials, typically an access key ID and secret access key)
- set up DuckDB as a destination connector
- define which data you want to transfer and how frequently
You can choose to self-host the pipeline using Airbyte Open Source or have it managed for you with Airbyte Cloud.
This tutorial’s purpose is to show you how.
What is S3
Amazon S3 (Simple Storage Service) is a cloud-based object storage service that provides developers and IT teams with secure, durable, and scalable storage for their data. It allows users to store and retrieve any amount of data from anywhere on the web, making it easy to build and scale applications, back up and archive data, and analyze data. S3 is designed to provide high availability and durability, with data automatically replicated across multiple availability zones within a region. It also offers a range of features such as versioning, lifecycle policies, and access control to help users manage their data effectively.
What is DuckDB
DuckDB is an in-process SQL OLAP database management system with strong SQL support. It borrows SQLite's shell implementation, and each database is a single file on disk. It is often described as "SQLite for analytical (OLAP) workloads" (the SQLite vs DuckDB paper offers a direct comparison), whereas SQLite targets OLTP workloads. Despite its small footprint, DuckDB can handle vast amounts of data locally, making it a smaller, lighter alternative to Apache Druid and other OLAP technologies.
Prerequisites
- An AWS account with access to the S3 bucket you want to transfer your data from.
- A DuckDB database file. DuckDB runs in-process, so there is no separate server or account to set up.
- An active Airbyte Cloud account, or Airbyte Open Source running locally. You can follow the instructions to set up Airbyte on your system using docker-compose.
Airbyte is an open-source data integration platform that consolidates and streamlines the process of extracting and loading data from multiple data sources to data warehouses. It offers pre-built connectors, including S3 and DuckDB, for seamless data migration.
When using Airbyte to move data from S3 to DuckDB, it extracts data from S3 using the source connector, converts it into a format DuckDB can ingest using the provided schema, and then loads it into DuckDB via the destination connector. This allows businesses to leverage their S3 data for advanced analytics and insights within DuckDB, simplifying the ETL process and saving significant time and resources.
Methods to Move Data From S3 to DuckDB
- Method 1: Connecting S3 to DuckDB using Airbyte.
- Method 2: Connecting S3 to DuckDB manually.
Method 1: Connecting S3 to DuckDB using Airbyte
Step 1: Set up S3 as a source connector
1. Open the Airbyte dashboard and click on "Sources" from the left-hand menu.
2. Click on the "Create Source" button and select "S3" from the list of available connectors.
3. Enter a name for your S3 source and click on "Next".
4. Enter your AWS access key ID and secret access key in the respective fields. You can find these credentials in your AWS account under "Security Credentials".
5. Select the AWS region where your S3 bucket is located from the dropdown menu.
6. Enter the name of your S3 bucket in the "Bucket Name" field.
7. If your S3 bucket is not in the root directory, enter the path to the directory in the "Path Prefix" field.
8. If you want to include only certain files in your data sync, you can enter a file pattern in the "File Pattern" field. For example, "*.csv" will only include CSV files.
9. Click on "Test" to verify your credentials and connection to the S3 bucket.
10. If the test is successful, click on "Create Source" to save your S3 source connector.
Once your S3 source connector is set up, you can use it to create a new Airbyte pipeline and sync data from your S3 bucket to your destination of choice.
Step 2: Set up DuckDB as a destination connector
1. Open the Airbyte platform and navigate to the "Destinations" tab on the left-hand side of the screen.
2. Click on the "Add Destination" button located in the top right corner of the screen.
3. Scroll down the list of available destinations until you find "DuckDB" and click on it.
4. Fill in the required information for your DuckDB destination. Because DuckDB is an in-process database rather than a client-server one, this is typically the path to the database file (for example, /local/destination.duckdb) rather than a host, port, username, and password.
5. Test the connection to ensure that the information you provided is correct and that Airbyte can successfully connect to your DuckDB database.
6. If the connection is successful, click on the "Save" button to save your DuckDB destination connector.
7. You can now use this connector to transfer data from your source connectors to your DuckDB database. Simply select the DuckDB destination connector when setting up your data integration pipelines in Airbyte.
Step 3: Set up a connection to sync your S3 data to DuckDB
Once you've successfully connected S3 as a data source and DuckDB as a destination in Airbyte, you can set up a data pipeline between them with the following steps:
- Create a new connection: On the Airbyte dashboard, navigate to the 'Connections' tab and click the '+ New Connection' button.
- Choose your source: Select S3 from the dropdown list of your configured sources.
- Select your destination: Choose DuckDB from the dropdown list of your configured destinations.
- Configure your sync: Define the frequency of your data syncs based on your business needs. Airbyte allows both manual and automatic scheduling for your data refreshes.
- Select the data to sync: Choose the specific S3 objects whose data you want to import into DuckDB. You can sync all data or select specific tables and fields.
- Select the sync mode for your streams: Choose between full refreshes or incremental syncs (with deduplication if you want), either for all streams at once or per stream. Incremental sync is only available for streams that have a cursor field.
- Test your connection: Click the 'Test Connection' button to make sure that your setup works. If the connection test is successful, save your configuration.
- Start the sync: If the test passes, click 'Set Up Connection'. Airbyte will start moving data from S3 to DuckDB according to your settings.
Remember, Airbyte keeps your data in sync at the frequency you determine, ensuring your DuckDB data warehouse is always up-to-date with your S3 data.
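If you prefer to drive the same pipeline from code rather than the UI, PyAirbyte (Airbyte's Python library, installed with `pip install airbyte`) can run the S3 source and land the records in a local DuckDB-backed cache. The sketch below is illustrative only: the exact config keys for the S3 source should be taken from the connector's documentation, and the credentials shown are placeholders.

```python
import airbyte as ab  # pip install airbyte

# Configure the S3 source. The config keys shown here are an assumption
# for illustration; consult the source-s3 connector docs for the full schema.
source = ab.get_source(
    "source-s3",
    config={
        "bucket": "your-bucket-name",
        "aws_access_key_id": "...",
        "aws_secret_access_key": "...",
        "streams": [
            # one entry per file pattern you want to sync
        ],
    },
    install_if_missing=True,
)
source.check()               # verify credentials and connectivity
source.select_all_streams()  # or select_streams([...]) for a subset

# PyAirbyte's default local cache is itself a DuckDB file,
# so reading into it lands your S3 data in DuckDB directly.
cache = ab.new_local_cache()
source.read(cache=cache)
```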
Method 2: Connecting S3 to DuckDB manually
To move data from Amazon S3 to DuckDB without using third-party connectors or integrations, you can follow these steps. This guide assumes you have AWS CLI installed and configured with the necessary permissions to access your S3 bucket, and you have DuckDB installed on your local machine or server.
Step 1: Install DuckDB and AWS CLI
If you haven't already, install DuckDB and the AWS Command Line Interface (CLI) on your local machine.
DuckDB:
- You can download the DuckDB binary or use a package manager to install it. For Python, you can install it using pip:
```shell
pip install duckdb
```
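To confirm the package installed correctly, you can print its version (the `duckdb` Python module exposes a standard `__version__` attribute):

```shell
python -c "import duckdb; print(duckdb.__version__)"
```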
AWS CLI:
- Follow the instructions on the AWS website to install the AWS CLI for your operating system.
- Configure the AWS CLI by running `aws configure` and entering your AWS access key ID, secret access key, and default region, as shown below.
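A typical configuration session looks like this (the key values are placeholders, not real credentials). The `aws sts get-caller-identity` call is an optional sanity check that your credentials work:

```shell
aws configure
# AWS Access Key ID [None]: AKIA................
# AWS Secret Access Key [None]: ....................
# Default region name [None]: us-east-1
# Default output format [None]: json

# Optional: verify that the credentials are valid
aws sts get-caller-identity
```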
Step 2: Download Data from S3
Use the AWS CLI to download the data from S3 to your local system. Replace `s3://your-bucket-name/your-data-file` with the path to your S3 data file.
```shell
aws s3 cp s3://your-bucket-name/your-data-file /local/path/to/your-data-file
```
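If you want to pull down every object under a prefix rather than a single file, `aws s3 sync` (a standard AWS CLI command) copies a whole prefix recursively; the bucket and prefix names below are placeholders:

```shell
# Download everything under a prefix into a local directory
aws s3 sync s3://your-bucket-name/your-prefix/ /local/path/to/data/
```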
Step 3: Prepare Your Data
Before importing the data into DuckDB, ensure that it's in a format compatible with DuckDB. Common formats include CSV, Parquet, and JSON. If necessary, convert your data to one of these formats using a tool like `pandas` in Python or a command-line utility like `awk` or `sed`.
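As one example, here is a minimal pandas sketch that converts a tab-separated file to Parquet. The file names are placeholders, and writing Parquet requires the `pyarrow` (or `fastparquet`) package to be installed alongside pandas:

```python
import pandas as pd

# Read the original file (adjust sep/encoding to match your data)
df = pd.read_csv("/local/path/to/your-data-file.tsv", sep="\t")

# Write it back out as Parquet, which DuckDB ingests efficiently
df.to_parquet("/local/path/to/your-data-file.parquet", index=False)
```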
Step 4: Import Data into DuckDB
Start the DuckDB command-line interface or use the DuckDB client in your programming language of choice. The following example uses the DuckDB CLI. Passing a file name makes DuckDB create (or open) a persistent database file rather than an in-memory one; DuckDB has no separate `CREATE DATABASE` step, since each database is a single file.
```shell
duckdb mydatabase.duckdb
```
Once in the DuckDB CLI, you can create a table to store your data:
```sql
CREATE TABLE mytable (...); -- Replace with the appropriate table schema
```
Now, you can import the data into DuckDB. If your data is in CSV format, you can use the following command:
```sql
COPY mytable FROM '/local/path/to/your-data-file' (FORMAT CSV, HEADER);
```
For other formats like Parquet or JSON, you can adjust the `FORMAT` parameter accordingly.
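Alternatively, DuckDB can infer the schema for you. The `read_csv_auto` and `read_parquet` table functions are built into DuckDB, so you can skip the explicit `CREATE TABLE` and let a `CREATE TABLE ... AS SELECT` define the columns; the table and file names are the placeholders used above:

```sql
-- Let DuckDB infer column names and types from the file itself
CREATE TABLE mytable AS
SELECT * FROM read_csv_auto('/local/path/to/your-data-file');

-- The equivalent for Parquet files:
-- CREATE TABLE mytable AS SELECT * FROM read_parquet('/local/path/to/your-data-file.parquet');
```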
Step 5: Verify Data Import
After importing the data, you can run some queries to ensure that the data has been imported correctly:
```sql
SELECT * FROM mytable LIMIT 10;
```
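A quick row-count comparison against the source file is another useful sanity check:

```sql
-- Should match the number of data rows in the source file
-- (e.g. the output of `wc -l` minus one for the CSV header)
SELECT COUNT(*) AS row_count FROM mytable;
```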
Step 6: Cleanup
Once you've verified that the data is correctly imported into DuckDB, you can remove the local copy of the data file if it's no longer needed:
```shell
rm /local/path/to/your-data-file
```
Step 7: Use the Data
Now that your data is in DuckDB, you can use it for your analysis or application. DuckDB is designed for analytical queries, so you can start running your aggregation, join, and analytical queries on the data.
```sql
SELECT COUNT(*), some_column FROM mytable GROUP BY some_column;
```
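Since the `duckdb` Python package was installed in Step 1, you can also run the same queries from Python instead of the CLI. This is a minimal sketch using the database file, table, and column placeholders from the steps above; fetching results as a DataFrame requires pandas:

```python
import duckdb

# Connect to the same persistent database file created in Step 4
con = duckdb.connect("mydatabase.duckdb")

# Run an analytical query and fetch the results as a pandas DataFrame
df = con.execute(
    "SELECT some_column, COUNT(*) AS n FROM mytable GROUP BY some_column"
).fetchdf()
print(df)

con.close()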
Tips:
- Always validate the data schema and ensure that it matches the schema defined in your DuckDB table.
- If you're dealing with large datasets, consider compressing the file before downloading it from S3 to save bandwidth and time.
- Make sure you have enough storage space on your local machine for the downloaded data.
- If you're automating this process, you can write a script that encapsulates these steps and handles errors or retries as needed; a minimal sketch follows below.
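For instance, a bare-bones automation sketch might chain the download and import steps in one shell script. The bucket, file, and table names are placeholders, `set -e` aborts on the first failure, and the SQL is piped into the DuckDB CLI:

```shell
#!/usr/bin/env bash
set -e  # stop on the first failed command

# 1. Download the file from S3
aws s3 cp s3://your-bucket-name/your-data-file /local/path/to/your-data-file

# 2. Import it into DuckDB, letting read_csv_auto infer the schema
echo "CREATE OR REPLACE TABLE mytable AS
      SELECT * FROM read_csv_auto('/local/path/to/your-data-file');" \
  | duckdb mydatabase.duckdb

# 3. Remove the local copy once imported
rm /local/path/to/your-data-file
```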
By following these steps, you can move data from S3 to DuckDB without using third-party connectors or integrations. Remember to handle data securely and comply with data governance policies applicable to your organization or project.
Use Cases for Transferring Your S3 Data to DuckDB
Integrating data from S3 to DuckDB provides several benefits. Here are a few use cases:
- Advanced Analytics: DuckDB’s powerful data processing capabilities enable you to perform complex queries and data analysis on your S3 data, extracting insights that wouldn't be possible within S3 alone.
- Data Consolidation: If you're using multiple other sources along with S3, syncing to DuckDB allows you to centralize your data for a holistic view of your operations, and to set up a change data capture process so you never have any discrepancies in your data again.
- Historical Data Analysis: Syncing data to DuckDB allows for long-term data retention and efficient analysis of historical trends over time.
- Data Security and Compliance: DuckDB provides robust data security features. Syncing S3 data to DuckDB ensures your data is secured and allows for advanced data governance and compliance management.
- Scalability: DuckDB can handle large volumes of data without affecting performance, providing an ideal solution for growing businesses with expanding S3 data.
- Data Science and Machine Learning: By having S3 data in DuckDB, you can apply machine learning models to your data for predictive analytics, customer segmentation, and more.
- Reporting and Visualization: While S3 data can be queried in place with other tools, data visualization tools like Tableau, Power BI, and Looker (Google Data Studio) can connect to DuckDB, providing more advanced business intelligence options. If you have S3 data that needs to be converted to a DuckDB table, Airbyte can do that automatically.
Wrapping Up
To summarize, this tutorial has shown you how to:
- Configure an S3 account as an Airbyte data source connector.
- Configure DuckDB as a data destination connector.
- Create an Airbyte data pipeline that automatically moves data directly from S3 to DuckDB on the schedule you set.
With Airbyte, creating data pipelines takes minutes, and the data integration possibilities are endless. Airbyte supports the largest catalog of API tools, databases, and files, among other sources. Airbyte's connectors are open-source, so you can add custom objects to a connector, or even build a new connector from scratch in under 10 minutes with the no-code connector builder, without a local dev environment or a data engineer.
We look forward to seeing you make use of it! We invite you to join the conversation on our community Slack Channel, or sign up for our newsletter. You should also check out other Airbyte tutorials, and Airbyte’s content hub!
Frequently Asked Questions
Amazon S3's API provides access to a wide range of data types, including the following (a short CLI example follows the list):
1. Object data: This includes the actual files stored in S3 buckets, such as images, videos, documents, and other types of files.
2. Metadata: S3 stores metadata about each object, including information such as the object's size, creation date, and last modified date.
3. Access control data: S3 provides access control mechanisms to restrict access to objects in a bucket. The API provides access to information about access control policies and permissions.
4. Bucket data: S3 buckets are containers for objects. The API provides access to information about buckets, such as their names, creation dates, and region.
5. Logging data: S3 can log access requests to objects in a bucket. The API provides access to these logs, which can be used for auditing and compliance purposes.
6. Inventory data: S3 can generate inventory reports that provide information about the objects stored in a bucket. The API provides access to these reports.
7. Metrics data: S3 can generate metrics about the usage of a bucket, such as the number of requests and the amount of data transferred. The API provides access to these metrics.
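As a quick illustration of the first two categories, the AWS CLI can retrieve both an object and its metadata; these are standard `aws s3api` and `aws s3` commands, with placeholder bucket and key names:

```shell
# Fetch an object's metadata (size, last modified, etc.) without downloading it
aws s3api head-object --bucket your-bucket-name --key your-data-file

# Download the object itself
aws s3 cp s3://your-bucket-name/your-data-file ./your-data-file
```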