

The following steps walk through a manual approach to moving data from Amplitude to Google BigQuery.

Step 1: Set up your Google Cloud project and BigQuery dataset
1. Create a Google Cloud Project:
- Go to the Google Cloud Console (https://console.cloud.google.com/).
- Click on the project dropdown and then on "New Project".
- Enter a project name, select a billing account, and click "Create".
2. Enable the BigQuery API:
- In the Cloud Console, go to the API Library.
- Search for "BigQuery API" and enable it for your project.
3. Set up a BigQuery Dataset:
- Go to the BigQuery console.
- Click on your project name, then click "Create Dataset".
- Enter a dataset ID, set a default table expiration if needed, and click "Create dataset".
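If you prefer to script the dataset creation, a minimal sketch using the google-cloud-bigquery Python client is shown below; the project ID `my-gcp-project` and dataset name `amplitude_export` are placeholders, and the snippet assumes your Google credentials are already configured.

```python
# Minimal sketch: create the destination dataset with the google-cloud-bigquery
# client. "my-gcp-project" and "amplitude_export" are placeholder names.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # uses your default Google credentials
dataset = bigquery.Dataset("my-gcp-project.amplitude_export")
dataset.location = "US"  # choose the region where the data should live

client.create_dataset(dataset, exists_ok=True)  # no-op if the dataset already exists
print("Dataset ready:", dataset.dataset_id)
```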
Step 2: Get your Amplitude API credentials
1. Log in to your Amplitude dashboard.
2. Go to Settings > Projects, and select the project you want to export data from.
3. Under the General tab, you'll find your API Key and Secret Key. Take note of these as you'll need them for API requests.
Step 3: Export data from Amplitude
1. Identify the data you want to export:
Determine the events or properties you need from Amplitude.
2. Use Amplitude's Export API:
- Write a script (e.g., in Python) that uses the `requests` library to call Amplitude's Export API; a minimal sketch follows this list.
- The API endpoint is typically `https://amplitude.com/api/2/export`.
- You'll need to pass the API Key and Secret Key for authentication.
- Specify the start and end times for the data export.
- The response is a zip archive containing gzipped, newline-delimited JSON event files.
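Here is a rough Python sketch of that Export API call using `requests`; the key values, time range, and output file name are placeholders, so adjust them to your project and consult Amplitude's current API docs for the exact parameters.

```python
# Minimal sketch of an Amplitude Export API call. Key values, time range,
# and output path are placeholders; verify the parameters against Amplitude's docs.
import requests

API_KEY = "YOUR_AMPLITUDE_API_KEY"        # placeholder
SECRET_KEY = "YOUR_AMPLITUDE_SECRET_KEY"  # placeholder

response = requests.get(
    "https://amplitude.com/api/2/export",
    params={"start": "20240101T00", "end": "20240101T23"},  # YYYYMMDDTHH range
    auth=(API_KEY, SECRET_KEY),                              # HTTP basic auth
    timeout=300,
)
response.raise_for_status()

# The body is a zip archive of gzipped, newline-delimited JSON event files.
with open("amplitude_export.zip", "wb") as f:
    f.write(response.content)
```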
Step 4: Prepare the data for BigQuery
1. Unzip the exported data:
Use a tool or script to unzip the downloaded file.
2. Transform the JSON data:
Write a script to parse the JSON data and convert it into a format BigQuery can load, such as CSV, newline-delimited JSON, Avro, or Parquet; a sketch covering this step and the previous one follows this list.
3. Ensure data types match:
Make sure that the data types in your transformed data match the schema you intend to use in BigQuery.
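As a rough illustration of steps 1 and 2, the sketch below unpacks the export and rewrites it as newline-delimited JSON, which BigQuery can load directly; the selected fields and file names are assumptions, not a fixed schema.

```python
# Sketch: unpack the Amplitude export and flatten it into newline-delimited JSON
# that BigQuery can load. The selected fields are illustrative, not a fixed schema.
import gzip
import json
import zipfile

with zipfile.ZipFile("amplitude_export.zip") as archive, \
        open("amplitude_events.ndjson", "w") as out:
    for name in archive.namelist():
        if not name.endswith(".gz"):
            continue  # skip anything that is not a gzipped event file
        # Each archive member is a gzipped file with one JSON event per line.
        with archive.open(name) as member, gzip.open(member, "rt") as lines:
            for line in lines:
                event = json.loads(line)
                out.write(json.dumps({
                    "event_type": event.get("event_type"),
                    "user_id": event.get("user_id"),
                    "event_time": event.get("event_time"),
                    # Nested properties kept as a JSON string for simplicity.
                    "event_properties": json.dumps(event.get("event_properties") or {}),
                }) + "\n")
```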
Step 5: Load the data into BigQuery
1. Create a BigQuery table:
- Define the schema that corresponds to the data you've formatted.
- Use the BigQuery web UI, the `bq` command-line tool, or the BigQuery API to create a new table with this schema.
2. Upload the data:
- You can upload the data files to Google Cloud Storage and then use the BigQuery Data Transfer Service or `bq` command-line tool to load the data into BigQuery.
- Alternatively, you can use the BigQuery client libraries or API to load or stream the data directly into BigQuery; a load-job sketch follows this list.
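A load job via the google-cloud-bigquery client might look like the sketch below; the table ID and schema mirror the fields produced in the transformation sketch above and are placeholders.

```python
# Sketch: load the newline-delimited JSON file into BigQuery with an explicit
# schema matching the fields produced above. Project, dataset, and table are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")
table_id = "my-gcp-project.amplitude_export.events"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    schema=[
        bigquery.SchemaField("event_type", "STRING"),
        bigquery.SchemaField("user_id", "STRING"),
        bigquery.SchemaField("event_time", "TIMESTAMP"),
        bigquery.SchemaField("event_properties", "STRING"),
    ],
    write_disposition="WRITE_APPEND",  # append new rows on each run
)

with open("amplitude_events.ndjson", "rb") as f:
    load_job = client.load_table_from_file(f, table_id, job_config=job_config)
load_job.result()  # block until the load job completes
print("Loaded rows:", client.get_table(table_id).num_rows)
```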
Step 6: Verify the data
1. Check the data:
After the data is loaded, run a few queries to confirm that it arrived correctly and completely (see the sketch after this list).
2. Validate data types and counts:
Confirm that the data types are correct and the counts of rows match what you exported from Amplitude.
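A quick sanity check could look like this; the table ID is the placeholder used earlier.

```python
# Sketch: quick row-count and date-range check after the load completes.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")
query = """
    SELECT COUNT(*) AS row_count,
           MIN(event_time) AS first_event,
           MAX(event_time) AS last_event
    FROM `my-gcp-project.amplitude_export.events`
"""
for row in client.query(query).result():
    print(row.row_count, row.first_event, row.last_event)
```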
Step 7: Automate the process
1. Create a script or application:
Combine all the steps into a script or application that automates the extract, transform, and load (ETL) process; a skeleton follows this list.
2. Schedule the ETL process:
Use a scheduler like `cron` on a Unix-like system or the Google Cloud Scheduler to run your ETL process at regular intervals.
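A skeleton of such a script might look like the following; the three helper functions are hypothetical wrappers around the export, transform, and load sketches above, and the cron line in the comment is only an example schedule.

```python
# Skeleton of a daily ETL entry point. The three helpers are hypothetical
# wrappers around the export, transform, and load sketches shown earlier.
import datetime

def export_from_amplitude(start: str, end: str) -> None:
    ...  # call the Export API and save amplitude_export.zip (see Step 3 sketch)

def transform_to_ndjson() -> None:
    ...  # unpack the zip and write amplitude_events.ndjson (see Step 4 sketch)

def load_to_bigquery() -> None:
    ...  # run the BigQuery load job (see Step 5 sketch)

def run_etl(day: datetime.date) -> None:
    # Export one full day of data, hour 00 through 23, then load it.
    export_from_amplitude(day.strftime("%Y%m%dT00"), day.strftime("%Y%m%dT23"))
    transform_to_ndjson()
    load_to_bigquery()

if __name__ == "__main__":
    # Example cron entry: 0 3 * * * /usr/bin/python3 /opt/etl/amplitude_to_bigquery.py
    run_etl(datetime.date.today() - datetime.timedelta(days=1))
```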
Step 8: Monitor and maintain the pipeline
1. Set up logging and monitoring:
Implement logging in your ETL script and monitor each run so issues can be identified and fixed quickly (see the sketch after this list).
2. Regularly check for API changes:
Keep an eye on any updates to the Amplitude API or BigQuery API that might require changes to your ETL script.
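A minimal logging wrapper, assuming the entry point from the previous sketch and a placeholder log file location, could look like this:

```python
# Sketch: a minimal logging wrapper so a failed run leaves a clear trace.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
    filename="amplitude_etl.log",  # placeholder log location
)

def run_with_logging(etl_callable) -> None:
    """Run the ETL entry point (e.g. run_etl from the previous sketch) with logging."""
    logging.info("Starting Amplitude -> BigQuery ETL run")
    try:
        etl_callable()
        logging.info("ETL run finished successfully")
    except Exception:
        logging.exception("ETL run failed")  # records the full traceback
        raise
```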
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is Amplitude?
Amplitude is a cross-platform product intelligence solution that helps companies accelerate growth by leveraging customer data to build better product experiences. Advertised as the digital optimization system that “helps companies build better products,” it enables companies to make informed decisions by showing how their digital products drive business. Amplitude uses its proprietary Behavioral Graph to show customers the impact of various combinations of features and actions on business outcomes.
What data can you extract from Amplitude?
Amplitude's API provides access to a wide range of data about user behavior and engagement on digital platforms. The following categories of data can be accessed through the API:
1. User data: This includes information about individual users such as their demographics, location, and device type.
2. Event data: This includes data related to user actions such as clicks, page views, and purchases.
3. Session data: This includes information about user sessions such as the duration of the session and the number of events that occurred during the session.
4. Funnel data: This includes data related to user behavior in a specific sequence of events, such as a checkout funnel.
5. Retention data: This includes data related to user retention, such as the percentage of users who return to the platform after a certain period of time.
6. Revenue data: This includes data related to revenue generated by the platform, such as the total revenue and revenue per user.
7. Cohort data: This includes data related to groups of users who share a common characteristic, such as the date they signed up for the platform.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
What is the difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers a lot more flexibility and autonomy to data analysts.