Building your pipeline or using Airbyte
Airbyte is an open-source solution that empowers data teams to meet their growing custom business demands in the AI era.

Building your own pipeline:
- Inconsistent and inaccurate data
- Laborious and expensive
- Brittle and inflexible

Using Airbyte:
- Reliable and accurate
- Extensible and scalable for all your needs
- Deployed and governed your way
Start syncing with Airbyte in three easy steps, within 10 minutes
Setup complexities, simplified!
Simple & easy-to-use interface
Airbyte is built to get out of your way. Our clean, modern interface walks you through setup, so you can go from zero to sync in minutes—without deep technical expertise.
Guided tour: helping you build connections
Whether you’re setting up your first connection or managing complex syncs, Airbyte’s UI and documentation help you move with confidence. No guesswork. Just clarity.
Airbyte AI Assistant: your sidekick for building data pipelines in minutes
Airbyte’s built-in assistant helps you choose sources, set destinations, and configure syncs quickly. It’s like having a data engineer on call—without the overhead.
What sets Airbyte apart
Modern GenAI Workflows
Move Large Volumes, Fast
An Extensible Open-Source Standard
Full Control & Security
Fully Featured & Integrated
Enterprise Support with SLAs
What our users say

Andre Exner: "For TUI Musement, Airbyte cut development time in half and enabled dynamic customer experiences."

Chase Zieman: "Airbyte helped us accelerate our progress by years, compared to our competitors. We don't need to worry about connectors and can focus on creating value for our users instead of building infrastructure. That's priceless. The time and energy saved allows us to disrupt and grow faster."

Rupak Patel: "With Airbyte, we could just push a few buttons, allow API access, and bring all the data into Google BigQuery. By blending all the different marketing data sources, we can gain valuable insights."
Step 1: Export Your Data from Fauna
To begin, extract your data from Fauna using its query language, FQL. Write queries to fetch the desired collections or documents and export them in a structured format like JSON. You can use a script or a command-line tool to execute the queries and save the output to files on your local machine.
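As an illustration, a minimal export script in Python might look like the sketch below. It assumes the FQL v4 `faunadb` driver, a collection named `orders`, and a `FAUNA_SECRET` environment variable; all of these are placeholder choices, not part of the original guide.

```python
import json
import os

from faunadb import query as q
from faunadb.client import FaunaClient

# Placeholder: adjust the secret source and collection name for your database.
client = FaunaClient(secret=os.environ["FAUNA_SECRET"])


def export_collection(collection, out_path):
    """Page through every document in a collection and write the data to a JSON file."""
    docs = []
    after = None
    while True:
        opts = {"size": 100}
        if after is not None:
            opts["after"] = after
        page = client.query(
            q.map_(
                q.lambda_("ref", q.get(q.var("ref"))),
                q.paginate(q.documents(q.collection(collection)), **opts),
            )
        )
        # Keep only the user data of each document; drop refs and timestamps.
        docs.extend(doc["data"] for doc in page["data"])
        after = page.get("after")
        if after is None:
            break
    with open(out_path, "w") as f:
        json.dump(docs, f, default=str)  # default=str handles Fauna-specific types


export_collection("orders", "orders.json")
```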
Step 2: Prepare the Data for Upload
Once you've exported the data, prepare it for upload to S3. Organize your JSON files or convert them into a format compatible with AWS Glue, and consider compressing them to reduce storage costs and upload time. Ensure the data is properly structured and validated to avoid issues during the ETL process.
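For instance, converting the exported JSON array into gzipped newline-delimited JSON (a layout Glue's built-in classifiers handle well) could be sketched as follows; the file names carry over from the hypothetical export step above.

```python
import gzip
import json

# Placeholder file name from the export step.
with open("orders.json") as f:
    records = json.load(f)

# One JSON object per line (NDJSON); gzip keeps storage and transfer costs down.
with gzip.open("orders.ndjson.gz", "wt", encoding="utf-8") as out:
    for record in records:
        out.write(json.dumps(record) + "\n")
```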
Step 3: Create an S3 Bucket
Log in to your AWS Management Console and navigate to the S3 service. Create a new S3 bucket to store the data exported from Fauna. Choose a globally unique bucket name and configure the required settings, such as region and access permissions. Keep the bucket private and grant access through specific IAM roles; only make it publicly accessible if your use case genuinely requires it.
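If you prefer to script this step, a boto3 sketch might look like the following; the bucket name and region are placeholders.

```python
import boto3

REGION = "us-west-2"        # placeholder region
BUCKET = "my-fauna-export"  # placeholder; bucket names must be globally unique

s3 = boto3.client("s3", region_name=REGION)

# us-east-1 is the default region and rejects an explicit LocationConstraint.
if REGION == "us-east-1":
    s3.create_bucket(Bucket=BUCKET)
else:
    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )

# Block public access; grant access through IAM roles instead.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```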
Step 4: Upload the Data to S3
Use the AWS CLI or the AWS Management Console to upload your prepared data files to the bucket you created. If using the CLI, make sure your access credentials are configured, then run commands like `aws s3 cp` to transfer files from your local machine to the bucket. Verify the upload by checking the S3 console for the presence of your files.
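A typical CLI invocation is `aws s3 cp <local-file> s3://<bucket>/<prefix>/`. The boto3 equivalent, again with placeholder file, bucket, and key names, is:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder names; the key prefix groups objects for the Glue crawler later.
s3.upload_file("orders.ndjson.gz", "my-fauna-export", "fauna/orders/orders.ndjson.gz")

# Verify the object landed where expected.
response = s3.list_objects_v2(Bucket="my-fauna-export", Prefix="fauna/orders/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```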
Step 5: Create an AWS Glue Crawler
Go to the AWS Glue service in your AWS Management Console and create a new crawler to catalog your data in S3. Define the data source as the S3 bucket where your data is stored, and set the crawler to run on a schedule or manually, depending on your requirements. The crawler scans your data and creates a metadata table in the AWS Glue Data Catalog.
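The same crawler can be created programmatically. The sketch below uses boto3 with placeholder names; the IAM role is assumed to already exist with Glue and S3 read permissions.

```python
import boto3

glue = boto3.client("glue")

# All names below are placeholders for illustration.
glue.create_crawler(
    Name="fauna-export-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical role
    DatabaseName="fauna_export",
    Targets={"S3Targets": [{"Path": "s3://my-fauna-export/fauna/"}]},
)

# Run the crawler once; attach a Schedule to create_crawler for recurring runs.
glue.start_crawler(Name="fauna-export-crawler")
```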
Step 6: Create an AWS Glue Job
Once your data is cataloged, create an AWS Glue job to transform and move it as needed. Define the job script in either Python or Scala, specifying your S3 data as the source and the destination for the transformed data. Configure the job settings, including the IAM role, allocated resources, and any job parameters.
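As a sketch, a minimal PySpark job script for Glue might look like this. The catalog database and table names, the dropped field, and the output path are all placeholders rather than values from the original guide.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Placeholder catalog database/table names created by the crawler step.
source = glue_context.create_dynamic_frame.from_catalog(
    database="fauna_export", table_name="orders"
)

# Example transformation: drop a field you do not want downstream (hypothetical name).
cleaned = source.drop_fields(["_internal_id"])

# Write the result back to S3 as Parquet for efficient querying.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://my-fauna-export/processed/orders/"},
    format="parquet",
)

job.commit()
```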
Step 7: Run and Monitor the Glue Job
Execute the Glue job and monitor its progress through the AWS Glue console or CloudWatch logs. If the job fails or produces errors, review the logs to identify and fix the issues. Once the job runs successfully, verify that the output data is correctly transformed and stored as intended. Repeat or automate the process as needed to keep your data up to date.
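To automate the run-and-monitor loop, a boto3 sketch like the following can start the job and poll until it reaches a terminal state; the job name is a placeholder.

```python
import time

import boto3

glue = boto3.client("glue")

# Placeholder job name from the previous step.
run = glue.start_job_run(JobName="fauna-export-job")
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    status = glue.get_job_run(JobName="fauna-export-job", RunId=run_id)
    state = status["JobRun"]["JobRunState"]
    print("Job state:", state)
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)
```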
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is Fauna?
Fauna merges the flexibility of NoSQL with the relational querying capabilities and ACID consistency of SQL systems. It implements a semi-structured, schema-free, object-relational data model that is a strict superset of the relational, document, object-oriented, and graph models.
What data can you extract from Fauna?
Fauna's API gives access to various types of data, including:
1. Documents: This includes JSON documents that can be stored, retrieved, and queried using Fauna's API.
2. Collections: Collections are groups of documents that share a common schema. They can be used to organize data and make it easier to query.
3. Indexes: Indexes are used to speed up queries by precomputing results. They can be created on any field in a collection.
4. Functions: Functions are reusable blocks of code that can be called from within queries. They can be used to perform complex calculations or manipulate data.
5. Roles: Roles are used to control access to data. They can be used to define permissions for different types of users or applications.
6. Keys: Keys are used to authenticate requests to Fauna's API. They can be used to control access to data and to track usage.
Overall, Fauna's API provides a flexible and powerful way to store, retrieve, and manipulate data. It can be used for a wide range of applications, from simple data storage to complex data analysis and processing.
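To make these concepts concrete, here is a sketch using the FQL v4 Python driver that touches documents, indexes, and functions; every name below is illustrative rather than taken from a real database.

```python
import os

from faunadb import query as q
from faunadb.client import FaunaClient

client = FaunaClient(secret=os.environ["FAUNA_SECRET"])

# Documents: fetch one document by reference (collection and ID are illustrative).
doc = client.query(q.get(q.ref(q.collection("orders"), "1234567890")))

# Indexes: match an index to look up documents by a precomputed field value.
match = client.query(q.paginate(q.match(q.index("orders_by_status"), "shipped")))

# Functions: call a user-defined function stored in the database.
total = client.query(q.call(q.function("order_total"), "1234567890"))
```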
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers a lot more flexibility and autonomy to data analysts.
What should you do next?
We hope you enjoyed the reading. Here are the 3 ways we can help you in your data journey: