On a Linux Jenkins server, you can install the AWS CLI using the system package manager. For example, on Ubuntu:
```
sudo apt-get update
sudo apt-get install awscli
```
On Windows, you can download and run the AWS CLI MSI installer from the AWS website.
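Either way, a quick way to confirm the install succeeded is to print the CLI version:
```
# Prints the installed AWS CLI version; any output means the CLI is on PATH
aws --version
```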
Run `aws configure` to set up your AWS credentials (Access Key ID and Secret Access Key), default region, and output format.
Enter the required information when prompted. You can get these credentials from your AWS IAM (Identity and Access Management) user.
Make sure the IAM user has the necessary permissions to access the S3 bucket.
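The interactive session looks like this (the key values shown are placeholders, not real credentials):
```
$ aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEYID
AWS Secret Access Key [None]: exampleSecretAccessKey
Default region name [None]: us-east-1
Default output format [None]: json
```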
You can create a bucket using the AWS Management Console or the CLI:
```
aws s3 mb s3://your-bucket-name
```
Attach an IAM policy to the Jenkins server's IAM role or user that allows it to put objects in the S3 bucket. The policy should look something like this:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```
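If the Jenkins server authenticates as an IAM user, one way to attach this is as an inline policy via the CLI; the user name, policy name, and file path below are placeholders (for an EC2 instance role, use `aws iam put-role-policy` instead):
```
# Attach the policy above (saved as jenkins-s3-policy.json) to the
# hypothetical IAM user that Jenkins uses; adjust names to your setup.
aws iam put-user-policy \
  --user-name jenkins-user \
  --policy-name JenkinsS3Upload \
  --policy-document file://jenkins-s3-policy.json
```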
Ensure that the data you want to move to S3 is available on the Jenkins server, either as individual files or as a directory that can be transferred as-is.
Use the `aws s3 cp` command to copy data from the Jenkins server to the S3 bucket.
For a single file, the command is:
```
aws s3 cp /path/to/your/file s3://your-bucket-name/path/in/bucket/
```
For an entire directory, use the `--recursive` option:
```
aws s3 cp /path/to/your/directory s3://your-bucket-name/path/in/bucket/ --recursive
```
You can add a build step or a post-build step in your Jenkins job to execute the AWS CLI command.
Use the "Execute shell" or "Execute Windows batch command" step, depending on your Jenkins server OS, and paste the AWS CLI command you used earlier.
After the Jenkins job runs, verify that the files have been successfully transferred to the S3 bucket.
You can check the S3 bucket using the AWS Management Console or by running:
```
aws s3 ls s3://your-bucket-name/path/in/bucket/
```
- If the transfer fails, check the Jenkins console output for errors.
- Verify that the AWS CLI is correctly installed and configured (see the sanity check below).
- Ensure that the IAM user or role has the necessary permissions to access the S3 bucket.
- Check for typos in the bucket name or file paths.
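A quick sanity check for the CLI and credentials is to ask AWS which identity they resolve to:
```
# Succeeds only if the CLI is installed and credentials are valid; the
# returned ARN is the IAM user or role that needs the S3 permissions.
aws sts get-caller-identity
```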
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is Jenkins?
Jenkins is an open-source automation server. It automates the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. It is a server-based system that runs in servlet containers such as Apache Tomcat. It supports version control tools including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase, and RTC, and can execute arbitrary shell scripts and Windows batch commands as well as Apache Ant and Apache Maven build steps.
What data can you extract from Jenkins?
Jenkins is an open-source automation server that provides a wide range of APIs to access data related to the build process. The Jenkins API provides access to various types of data, including:
1. Build Data: Information about the build process, such as build status, build duration, build logs, and build artifacts.
2. Job Data: Information about the jobs, such as job status, job configuration, job parameters, and job history.
3. Node Data: Information about the nodes, such as node status, node configuration, and node availability.
4. User Data: Information about the users, such as user details, user permissions, and user activity.
5. Plugin Data: Information about the plugins, such as plugin details, plugin configuration, and plugin compatibility.
6. System Data: Information about the Jenkins system, such as system configuration, system logs, and system health.
7. Queue Data: Information about the build queue, such as queued jobs, queue status, and queue history.
Overall, the Jenkins API provides a comprehensive set of data that can be used to monitor, analyze, and optimize the build process.
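As a sketch of what this looks like in practice, Jenkins exposes most of this data as JSON under `/api/json` endpoints; the host, job name, and credentials below are placeholders:
```
# Fetch summary data for the whole Jenkins instance (jobs, views, health).
curl -u "$JENKINS_USER:$JENKINS_API_TOKEN" \
  "https://jenkins.example.com/api/json?pretty=true"

# Fetch build data (status, duration, artifacts) for a job's last build.
curl -u "$JENKINS_USER:$JENKINS_API_TOKEN" \
  "https://jenkins.example.com/job/my-job/lastBuild/api/json"
```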
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers a lot more flexibility and autonomy to data analysts.