

How to Move Data from Elasticsearch to an AWS Data Lake

Step 1: Prepare Your AWS Environment

1. Set up an S3 bucket: Create an Amazon S3 bucket that will serve as the primary storage for your data lake.
- Log in to the AWS Management Console.
- Navigate to the S3 service and create a new bucket.
- Configure the bucket settings according to your requirements (e.g., versioning, access permissions).
2. Set up IAM permissions: Ensure that the AWS account or IAM role/user that will perform these operations has the necessary permissions to access S3 and any other AWS services you plan to use. A minimal CLI sketch of both steps follows.
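These two steps can be scripted with the AWS CLI. A minimal sketch, assuming a hypothetical bucket name (my-data-lake-bucket), region (us-east-1), and IAM user (datalake-loader); substitute your own names and scope the policy to your needs:

```bash
# Create the data lake bucket (name and region are illustrative; outside
# us-east-1, also pass --create-bucket-configuration LocationConstraint=<region>)
aws s3api create-bucket \
  --bucket my-data-lake-bucket \
  --region us-east-1

# Optionally enable versioning, as suggested above
aws s3api put-bucket-versioning \
  --bucket my-data-lake-bucket \
  --versioning-configuration Status=Enabled

# Grant the transfer user minimal S3 access (user and policy are illustrative)
aws iam put-user-policy \
  --user-name datalake-loader \
  --policy-name s3-datalake-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-data-lake-bucket",
        "arn:aws:s3:::my-data-lake-bucket/*"
      ]
    }]
  }'
```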
Step 2: Export Data from Elasticsearch

1. Access Elasticsearch: Log in to your Elasticsearch cluster.
2. Create a snapshot repository (if not already done):
- Define a file system repository on a shared file system accessible to all Elasticsearch nodes.
- Register this repository with Elasticsearch.
3. Create a snapshot:
- Use the Elasticsearch `_snapshot` API to create a snapshot of the data you wish to export.
- Ensure that the snapshot is complete and successful.
4. Retrieve the snapshot data:
- Access the file system where the snapshot is stored.
- Locate the snapshot files. The sketch after this list shows the repository and snapshot calls with curl.
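Steps 2 and 3 map to two calls against Elasticsearch's `_snapshot` API. A minimal sketch with curl, assuming a hypothetical cluster at localhost:9200, a shared mount at /mnt/es-snapshots, and an index named my-index:

```bash
# Register a shared file system repository; the location must be listed
# under path.repo in elasticsearch.yml on every node
curl -X PUT "localhost:9200/_snapshot/my_fs_repo" \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "fs",
    "settings": { "location": "/mnt/es-snapshots" }
  }'

# Snapshot the index to export and block until it finishes
curl -X PUT "localhost:9200/_snapshot/my_fs_repo/snapshot_1?wait_for_completion=true" \
  -H 'Content-Type: application/json' \
  -d '{ "indices": "my-index", "include_global_state": false }'

# Confirm the snapshot state is SUCCESS before retrieving the files
curl -X GET "localhost:9200/_snapshot/my_fs_repo/snapshot_1"
```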
Step 3: Transfer the Snapshot Files to Amazon S3

1. Install the AWS CLI: If it is not already installed, download the AWS Command Line Interface (CLI) and configure it with your credentials.
2. Transfer files using AWS CLI:
- Use the `aws s3 cp` or `aws s3 sync` command to transfer the snapshot files from your local system to the S3 bucket.
- Verify that the transfer completed successfully (an example follows below).
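For example, reusing the hypothetical repository path and bucket from the sketches above:

```bash
# Sync the snapshot repository to S3; sync is resumable and only
# uploads files that are missing or have changed
aws s3 sync /mnt/es-snapshots s3://my-data-lake-bucket/elasticsearch-snapshots/

# Spot-check that the files arrived
aws s3 ls s3://my-data-lake-bucket/elasticsearch-snapshots/ --recursive
```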
Step 4: Set Up the AWS Data Lake

1. AWS Glue or Amazon Athena: Choose an AWS service to catalog and query the data.
   - For AWS Glue:
     - Set up a Glue crawler that points to your S3 bucket.
     - Run the crawler to catalog the data.
     - Use Glue jobs to transform and load the data as needed.
   - For Amazon Athena:
     - Set up a database and table pointing to the data location in S3.
     - Use standard SQL to query and transform the data in place.
2. Amazon Redshift Spectrum: If you're using Amazon Redshift, you can use Redshift Spectrum to query the data directly from S3. A sketch of the Athena route follows below.
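As a sketch of the Athena route, the commands below create a database and an external table over newline-delimited JSON in S3. The database, table, columns, and S3 paths are hypothetical, and they assume the data has already been converted into a queryable format (see the notes below):

```bash
# Create a database for the exported data
aws athena start-query-execution \
  --query-string "CREATE DATABASE IF NOT EXISTS es_datalake" \
  --result-configuration OutputLocation=s3://my-data-lake-bucket/athena-results/

# Create an external table over newline-delimited JSON documents
aws athena start-query-execution \
  --query-string "CREATE EXTERNAL TABLE IF NOT EXISTS es_datalake.documents (
      id string,
      created_at timestamp,
      message string
    )
    ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
    LOCATION 's3://my-data-lake-bucket/elasticsearch-export/'" \
  --result-configuration OutputLocation=s3://my-data-lake-bucket/athena-results/
```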
Step 5: Verify the Data

1. Validate the import: Once the data is in AWS, run a few queries to confirm that it was imported correctly (see the sketch below).
2. Perform data quality checks: Check for any inconsistencies or issues that may have arisen during the transfer.
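A quick validation sketch, assuming the hypothetical Athena table and index from the earlier examples, is to compare row counts on both sides:

```bash
# Count rows in the data lake (fetch the result with
# `aws athena get-query-results --query-execution-id <id>`)
aws athena start-query-execution \
  --query-string "SELECT COUNT(*) FROM es_datalake.documents" \
  --result-configuration OutputLocation=s3://my-data-lake-bucket/athena-results/

# Count documents on the Elasticsearch side for comparison
curl -X GET "localhost:9200/my-index/_count"
```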
Step 6: Clean Up and Secure

1. Delete the snapshot: If you no longer need the snapshot in Elasticsearch, delete it to free up space.
2. Secure your S3 data: Apply proper access controls and encryption to keep your S3 data secure.
3. Monitor and maintain: Set up monitoring and alerting for your AWS data lake to keep track of costs, performance, and data integrity. A sketch of the cleanup and hardening commands follows.
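A sketch of the cleanup and hardening steps, reusing the hypothetical names from above:

```bash
# Delete the snapshot from Elasticsearch once the data is safely in S3
curl -X DELETE "localhost:9200/_snapshot/my_fs_repo/snapshot_1"

# Turn on default server-side encryption for the bucket
aws s3api put-bucket-encryption \
  --bucket my-data-lake-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" }
    }]
  }'

# Block all public access to the bucket
aws s3api put-public-access-block \
  --bucket my-data-lake-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```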
Notes:
- The steps above assume a basic knowledge of AWS services, Elasticsearch, and the command line.
- Data exported from Elasticsearch snapshots is stored in Elasticsearch's internal snapshot format, which AWS services cannot query directly. Additional processing may be required to convert the data into a queryable format such as newline-delimited JSON, CSV, or Parquet (see the export sketch after these notes).
- Depending on the size of your Elasticsearch snapshot, the data transfer to S3 may take a significant amount of time and could incur AWS transfer costs.
- Always ensure that you comply with data governance and compliance requirements when moving data between systems.
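On the second note: one common workaround is to export documents through the search API as newline-delimited JSON, which Athena can query directly. A minimal sketch with curl and jq, assuming the hypothetical index from earlier; it fetches only the first page of results, so for a full export of a large index use the scroll or search_after APIs instead:

```bash
# Export documents as newline-delimited JSON (one _source object per line)
curl -s -X GET "localhost:9200/my-index/_search?size=1000" \
  -H 'Content-Type: application/json' \
  -d '{ "query": { "match_all": {} } }' \
  | jq -c '.hits.hits[]._source' > my-index.ndjson

# Upload the queryable export alongside the raw snapshot files
aws s3 cp my-index.ndjson s3://my-data-lake-bucket/elasticsearch-export/
```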
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is Elasticsearch?
Elasticsearch is a distributed search and analytics engine for all types of data, and the central component of the ELK Stack (Elasticsearch, Logstash, and Kibana).
Elasticsearch's API provides access to a wide range of data types, including:
1. Textual data: Elasticsearch can index and search through large volumes of textual data, including documents, emails, and web pages.
2. Numeric data: Elasticsearch can store and search through numeric data, including integers, floats, and dates.
3. Geospatial data: Elasticsearch can store and search through geospatial data, including latitude and longitude coordinates.
4. Structured data: Elasticsearch natively stores and searches JSON documents; XML and CSV data is typically converted to JSON (for example, via Logstash) before indexing.
5. Unstructured data: Elasticsearch can store binary data such as images, videos, and audio files (for example, base64-encoded) along with searchable metadata, though it does not search inside the media content itself.
6. Log data: Elasticsearch can store and search through log data, including server logs, application logs, and system logs.
7. Metrics data: Elasticsearch can store and search through metrics data, including performance metrics, network metrics, and system metrics.
8. Machine learning data: Elasticsearch can store and search through machine learning data, including training data, model data, and prediction data.
Overall, this breadth of supported data types makes Elasticsearch a powerful tool for search and analytics.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, which suits structured data and strict pre-load validation. In contrast, ELT (Extract, Load, Transform) loads data before transformation, which suits large, diverse data sets in modern data warehouses. ELT has become increasingly common because it gives data analysts more flexibility and autonomy.