How to load CSV data into BigQuery
If you haven't already, sign up for a Google Cloud Platform account. You will need to provide billing information to activate your account.
Create a new project:
1. Go to the GCP Console (https://console.cloud.google.com/).
2. Click on the project dropdown at the top of the page.
3. Click on "New Project".
4. Enter a project name and select a billing account.
5. Click "Create".
Enable the BigQuery API:
1. Navigate to the "APIs & Services" dashboard.
2. Click "Enable APIs and Services".
3. Search for "BigQuery API" and enable it.
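The same can be done in a single gcloud command:
gcloud services enable bigquery.googleapis.com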
Prepare your CSV file:
Ensure your CSV file is properly formatted and clean. The first row should contain column headers, and the data types should be consistent within each column.
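For illustration, a well-formed file might look like this (the columns are hypothetical):
id,name,signup_date
1,Alice,2023-01-15
2,Bob,2023-02-20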
Upload the CSV file to Google Cloud Storage:
1. Go to the GCP Console and navigate to "Storage" in the left-hand menu.
2. Click "Create bucket", enter a name for your bucket, and follow the prompts to create your bucket.
3. Once the bucket is created, click on its name to open it.
4. Click "Upload files" and select your CSV file to upload it to the bucket.
Create a BigQuery dataset:
1. Go to the BigQuery console.
2. In the left-hand menu, click on your project name.
3. Click "Create Dataset" on the right side.
4. Enter a Dataset ID and choose a data location.
5. Click "Create dataset".
Create a table from the CSV file:
1. With the dataset selected, click "Create Table".
2. In the "Create table from" dropdown, select "Google Cloud Storage".
3. In the "Select file" field, browse or enter the path to your CSV file in the bucket.
4. Choose the file format as "CSV".
5. Enter a Table name.
6. Under "Schema", you can either select "Auto-detect" to let BigQuery infer the schema from the CSV header or manually define the schema.
7. Configure other settings if necessary (e.g., partitioning).
8. Click "Create table".
After the table creation process is complete, you should verify that the data has been imported correctly:
1. In the BigQuery console, navigate to your new table.
2. Click on "Preview" to see a sample of the data.
3. Optionally, run a few queries to ensure the data looks as expected.
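A quick spot check is also possible from the command line; the project, dataset, and table names below are placeholders:
bq head -n 10 your_dataset.your_table
bq query --use_legacy_sql=false 'SELECT COUNT(*) AS row_count FROM `your_project.your_dataset.your_table`'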
Notes:
1. Make sure your CSV file is not too large, as there are limits on how much data you can import to BigQuery at once. If the file is too large, consider splitting it into smaller chunks.
2. Ensure that the CSV file is encoded in UTF-8 if it contains special characters.
3. If you're automating this process, consider using the Google Cloud SDK (the bq command-line tool, covered below) or the BigQuery API in your preferred programming language.
4. Watch out for any errors during the import process, and refer to the BigQuery documentation for troubleshooting tips.
Alternatively, you can load the CSV using the bq command-line tool. First, some setup:
- Install the Google Cloud SDK, which includes the bq tool.
- Authenticate using: gcloud auth login
- Set your project: gcloud config set project YOUR_PROJECT_ID
- Ensure your CSV file is ready and accessible.
- If using Cloud Storage, upload your file: gsutil cp your_data.csv gs://your-bucket/
Optionally, create a schema file (e.g., schema.json) describing your columns. The names and types below are examples; adjust them to match your data:
[
{"name": "column1", "type": "STRING"},
{"name": "column2", "type": "INTEGER"},
{"name": "column3", "type": "DATE"}
]
Then run the load job. Note that --autodetect and an explicit schema file are mutually exclusive, so this command uses the schema file:
bq load \
--source_format=CSV \
--skip_leading_rows=1 \
dataset_name.table_name \
gs://your-bucket/your_data.csv \
schema.json
Key options:
- --source_format=CSV: Specifies the file type.
- --skip_leading_rows=1: Skips the header row.
- --autodetect: Tells BigQuery to infer the schema automatically; use it instead of the schema file, not together with it.
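For comparison, the autodetect variant of the same load job omits the schema file:
bq load \
--source_format=CSV \
--skip_leading_rows=1 \
--autodetect \
dataset_name.table_name \
gs://your-bucket/your_data.csv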
The command will display job progress. Once complete, verify using:
bq show dataset_name.table_name
This method is excellent for scripting and automation, especially in data pipeline scenarios.
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is a CSV file?
A CSV (Comma-Separated Values) file is a plain text format for storing and exchanging tabular data: each line represents a record, and fields within a record are separated by commas. The first line typically holds column headers. Because the format is so simple, CSV files can be opened and edited in any text editor or spreadsheet application, such as Microsoft Excel or Google Sheets, and virtually every database and data tool can import and export them. This makes CSV a popular choice for data migration, data exchange between applications, and loading data into analysis and visualization tools, whether the contents are customer records, product catalogs, financial figures, or scientific measurements.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
What is the difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, which suits structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, which suits large, diverse data sets in modern data warehouses. ELT is becoming the new standard because it offers far more flexibility and autonomy to data analysts.