

Building your pipeline or Using Airbyte
Airbyte is the only open source solution empowering data teams to meet all their growing custom business demands in the new AI era.
Building your own pipeline:
- Inconsistent and inaccurate data
- Laborious and expensive
- Brittle and inflexible

Using Airbyte:
- Reliable and accurate
- Extensible and scalable for all your needs
- Deployed and governed your way
Start syncing with Airbyte in 3 easy steps within 10 minutes



What Sets Airbyte Apart
- Modern GenAI Workflows
- Move Large Volumes, Fast
- An Extensible Open-Source Standard
- Full Control & Security
- Fully Featured & Integrated
- Enterprise Support with SLAs
What our users say


"The intake layer of Datadog’s self-serve analytics platform is largely built on Airbyte.Airbyte’s ease of use and extensibility allowed any team in the company to push their data into the platform - without assistance from the data team!"


“Airbyte helped us accelerate our progress by years, compared to our competitors. We don’t need to worry about connectors and focus on creating value for our users instead of building infrastructure. That’s priceless. The time and energy saved allows us to disrupt and grow faster.”


“We chose Airbyte for its ease of use, its pricing scalability and its absence of vendor lock-in. Having a lean team makes these our top criteria. The value of being able to scale and execute at a high level by maximizing resources is immense.”
Step 1: Export Data from Teradata
1. Prepare Teradata Environment: Make sure you have the necessary permissions to read the data from the tables you want to export.
2. Select Data to Export: Identify the data you want to move to Databricks Lakehouse. It could be specific tables or the result of a query.
3. Export Data to a File:
- Use the `BTEQ` (Basic Teradata Query) utility to export data. Write a BTEQ script that selects the data and writes it to a flat file (CSV or TSV).
- Example BTEQ script to export data:
```sql
.LOGON your_teradata_server/your_username,your_password;

/* Report mode with a comma separator writes a delimited text file */
.SET SEPARATOR ',';
.SET TITLEDASHES OFF;
.EXPORT REPORT FILE = /path/to/export/file.csv;

SELECT * FROM your_database.your_table;

.EXPORT RESET;
.LOGOFF;
```
- Run the BTEQ script on the Teradata system.
4. Compress the Data: To optimize the transfer, compress the exported files using a tool like `gzip` or `zip`.
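- For example, a minimal Python sketch for gzip compression (the file path is the hypothetical one used in the BTEQ example above):
```python
import gzip
import shutil

# Path to the file exported by the BTEQ script above (hypothetical; adjust as needed).
src = "/path/to/export/file.csv"

# Write a gzip-compressed copy alongside the original (file.csv.gz).
with open(src, "rb") as f_in, gzip.open(src + ".gz", "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)
```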
Step 2: Upload the Data to Cloud Storage
1. Choose a Cloud Storage Solution: Select a cloud storage provider (AWS S3, Azure Blob Storage, or Google Cloud Storage) that is compatible with Databricks.
2. Upload Data:
- Use the cloud provider's CLI, SDK, or web interface to upload the compressed data files to the chosen cloud storage (a Python sketch follows this list).
- Ensure the storage bucket or container is secured and has the correct permissions set up.
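- If you script the upload, a minimal sketch using the AWS SDK for Python (boto3) could look like the following; the bucket name and object key are hypothetical placeholders, and AWS credentials are assumed to be configured in your environment:
```python
import boto3

# Create an S3 client using credentials from the environment or AWS config.
s3 = boto3.client("s3")

# Upload the compressed export; bucket name and object key are placeholders.
s3.upload_file(
    Filename="/path/to/export/file.csv.gz",
    Bucket="your-bucket-name",
    Key="teradata-export/file.csv.gz",
)
```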
Step 3: Load the Data into Databricks Lakehouse
1. Set Up Databricks Environment:
- Create a Databricks workspace if you don't already have one.
- Start a Databricks cluster with the required configuration.
2. Mount Cloud Storage:
- Mount the cloud storage bucket to Databricks using DBFS (Databricks File System) to make the data accessible from Databricks notebooks.
- Use the following commands in a Databricks notebook to mount the storage:
```python
# Mount the S3 bucket so its contents appear under /mnt/your-mount-name in DBFS.
# In practice, read the keys from a secret scope (dbutils.secrets.get) instead of hardcoding them.
dbutils.fs.mount(
    source="s3a://your-bucket-name",
    mount_point="/mnt/your-mount-name",
    extra_configs={
        "fs.s3a.access.key": "your-access-key",
        "fs.s3a.secret.key": "your-secret-key",
    },
)
```
3. Read Data into Databricks:
- Use the `spark.read` function to read the data from the mounted cloud storage into a Spark DataFrame.
- For example, to read a CSV file:
```python
df = spark.read.csv("/mnt/your-mount-name/file.csv", header=True, inferSchema=True)
```
4. Transform Data (Optional):
- Perform any necessary data transformations using Spark DataFrame operations.
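- For example, a sketch of a few common clean-up transformations (the column names here are hypothetical):
```python
from pyspark.sql import functions as F

# Illustrative clean-up on the DataFrame read in the previous step; adapt to your schema.
df_clean = (
    df.dropDuplicates()                               # remove exact duplicate rows
      .filter(F.col("your_key_column").isNotNull())   # drop rows missing a key value
      .withColumn("load_date", F.current_date())      # add an audit column
)
```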
5. Write Data to Databricks Lakehouse:
- Use the `df.write` function to save the DataFrame to the Databricks Lakehouse in the desired format (Delta, Parquet, etc.).
- For example, to write data as a Delta table:
```python
df.write.format("delta").saveAsTable("your_table_name")
```
6. Verify Data:
- Query the data using a Databricks notebook to ensure it was imported correctly.
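- For example, a couple of quick checks (the table name matches the placeholder used above):
```python
# Count the rows that landed in the Delta table and preview a few of them.
spark.sql("SELECT COUNT(*) AS row_count FROM your_table_name").show()
spark.table("your_table_name").limit(5).show()
```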
Step 4: Clean Up
1. Unmount Cloud Storage:
- Once the data transfer is complete, unmount the cloud storage to prevent unauthorized access.
- Use the following command in a Databricks notebook:
```python
dbutils.fs.unmount("/mnt/your-mount-name")
```
2. Delete Temporary Files:
- Remove any temporary files from Teradata and the cloud storage to maintain security and reduce costs.
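- For example, a minimal boto3 sketch for removing the staged object from S3 (the bucket and key are the hypothetical names used earlier); files exported on the Teradata host can be deleted with the operating system's own tools:
```python
import boto3

# Delete the staged export object once the load has been verified.
s3 = boto3.client("s3")
s3.delete_object(Bucket="your-bucket-name", Key="teradata-export/file.csv.gz")
```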
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is Teradata?
Teradata is a data management and analytics platform that helps businesses to collect, store, and analyze large amounts of data. It provides a range of tools and services that enable organizations to make data-driven decisions and gain insights into their operations. Teradata's platform is designed to handle complex data sets and support advanced analytics, including machine learning and artificial intelligence. It also offers cloud-based solutions that allow businesses to scale their data management and analytics capabilities as needed. Overall, Teradata helps businesses to unlock the value of their data and drive better outcomes across their operations.
What data can you extract from Teradata?
Teradata's API provides access to a wide range of data types, including:
1. Structured data: This includes data that is organized into tables with defined columns and rows, such as customer information, sales data, and financial records.
2. Unstructured data: This includes data that is not organized in a predefined manner, such as social media posts, emails, and documents.
3. Semi-structured data: This includes data that has some structure, but not as much as structured data. Examples include XML files and JSON data.
4. Time-series data: This includes data that is organized by time, such as stock prices, weather data, and sensor readings.
5. Geospatial data: This includes data that is related to geographic locations, such as maps, GPS coordinates, and location-based services.
6. Machine-generated data: This includes data that is generated by machines, such as log files, sensor data, and telemetry data.
Overall, Teradata's API provides access to a wide range of data types, allowing developers and data analysts to work with diverse data sets and extract insights from them.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers a lot more flexibility and autonomy to data analysts.
What should you do next?
We hope you enjoyed the read. Here are three ways we can help you in your data journey: