

Building your pipeline or Using Airbyte
Airbyte is the only open source solution empowering data teams to meet all their growing custom business demands in the new AI era.
Building your pipeline:
- Inconsistent and inaccurate data
- Laborious and expensive
- Brittle and inflexible

Using Airbyte:
- Reliable and accurate
- Extensible and scalable for all your needs
- Deployed and governed your way
Start syncing with Airbyte in 3 easy steps within 10 minutes



Setup complexities, simplified!
Simple & Easy-to-Use Interface
Airbyte is built to get out of your way. Our clean, modern interface walks you through setup, so you can go from zero to sync in minutes—without deep technical expertise.
Guided Tour: Assisting you in building connections
Whether you’re setting up your first connection or managing complex syncs, Airbyte’s UI and documentation help you move with confidence. No guesswork. Just clarity.
Airbyte AI Assistant: your sidekick for building data pipelines in minutes
Airbyte’s built-in assistant helps you choose sources, set destinations, and configure syncs quickly. It’s like having a data engineer on call—without the overhead.
What sets Airbyte Apart
Modern GenAI Workflows
Move Large Volumes, Fast
An Extensible Open-Source Standard
Full Control & Security
Fully Featured & Integrated
Enterprise Support with SLAs
What our users say


"For TUI Musement, Airbyte cut development time in half and enabled dynamic customer experiences."


“Airbyte helped us accelerate our progress by years, compared to our competitors. We don’t need to worry about connectors and focus on creating value for our users instead of building infrastructure. That’s priceless. The time and energy saved allows us to disrupt and grow faster.”

"With Airbyte, we could just push a few buttons, allow API access, and bring all the data into Google BigQuery. By blending all the different marketing data sources, we can gain valuable insights."
First, you need to access the Pipedrive API to fetch data. Log into your Pipedrive account and navigate to the settings. Under the "API" section, you will find your API token. This token is essential for authenticating your API requests.
Set up a local development environment where you can write and execute scripts. Install Node.js, as it provides convenient packages for handling HTTP requests and MongoDB operations. Ensure you have a code editor like Visual Studio Code or Sublime Text.
Use Node.js to create a script that makes HTTP requests to the Pipedrive API. Install the `axios` package (`npm install axios`) to handle HTTP requests. Use the API token to authenticate and fetch the required data (such as deals, contacts, or organizations). Here's a basic example:
```javascript
const axios = require('axios');

// Fetch deals from the Pipedrive API, authenticating with your API token.
async function fetchData() {
  const response = await axios.get(
    'https://api.pipedrive.com/v1/deals?api_token=YOUR_API_TOKEN'
  );
  // Pipedrive wraps the returned records in a top-level "data" field.
  return response.data.data;
}
```
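Pipedrive's list endpoints return results in pages, so a single request may not cover every record. Below is a minimal sketch of looping over pages; it assumes the v1 `start`/`limit` query parameters and the `additional_data.pagination` fields that Pipedrive includes in list responses, so verify them against the API docs for your account.
```javascript
const axios = require('axios');

// Sketch: fetch all deals by following Pipedrive's pagination metadata.
// Assumes the v1 start/limit parameters and additional_data.pagination fields.
async function fetchAllDeals(apiToken) {
  const all = [];
  let start = 0;
  while (true) {
    const response = await axios.get('https://api.pipedrive.com/v1/deals', {
      params: { api_token: apiToken, start, limit: 100 },
    });
    all.push(...(response.data.data || []));
    const pagination = response.data.additional_data?.pagination;
    if (!pagination || !pagination.more_items_in_collection) break;
    start = pagination.next_start;
  }
  return all;
}
```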
Once you have fetched the data, process and transform it as needed to fit the schema of your MongoDB database. This may involve renaming fields, changing data types, or filtering out unnecessary information. You can use JavaScript to iterate over the fetched data and apply the necessary transformations.
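As an illustration, the sketch below renames a few deal fields and drops records without a value. The source field names (`title`, `value`, `status`, `add_time`) follow Pipedrive's deal schema but should be treated as assumptions to adapt to the payload you actually receive.
```javascript
// Sketch: map Pipedrive deal records onto the shape we want to store in MongoDB.
// The source field names are assumptions based on Pipedrive's deal schema.
function transformDeals(deals) {
  return deals
    .filter((deal) => deal.value != null)   // drop deals without a value
    .map((deal) => ({
      pipedriveId: deal.id,
      title: deal.title,
      amount: Number(deal.value),            // ensure a numeric type
      status: deal.status,
      createdAt: new Date(deal.add_time),    // convert the timestamp string to a Date
    }));
}
```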
If you haven’t already, install MongoDB on your local machine or server. Ensure that the MongoDB service is running. Create a new database and collection where you will store the Pipedrive data. You can use the MongoDB shell or a GUI tool like MongoDB Compass to set this up.
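If you prefer the shell over Compass, a minimal mongosh session for this step might look like the following. Note that MongoDB also creates the database and collection implicitly on first insert, so this is optional.
```javascript
// In mongosh: switch to (and implicitly create) the target database,
// then create the collection that will hold the Pipedrive deals.
use pipedrive_data
db.createCollection('deals')
```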
Use the `mongodb` Node.js package (`npm install mongodb`) to insert the processed data into your MongoDB collection. Connect to your MongoDB instance and use the `insertMany` method to add the data. Here's a basic example:
```javascript
const { MongoClient } = require('mongodb');

// Connection string for a local MongoDB instance; adjust for your deployment.
const uri = 'mongodb://localhost:27017';
const client = new MongoClient(uri);

// Insert the transformed Pipedrive records into the "deals" collection.
async function insertData(data) {
  try {
    await client.connect();
    const database = client.db('pipedrive_data');
    const collection = database.collection('deals');
    await collection.insertMany(data);
    console.log('Data inserted successfully');
  } finally {
    // Always close the connection, even if the insert throws.
    await client.close();
  }
}
```
To keep the MongoDB database updated with the latest data from Pipedrive, consider automating the process. You can use a task scheduler like `cron` on Linux or Task Scheduler on Windows to run your script at regular intervals. Ensure that your script handles errors gracefully and logs any issues for troubleshooting.
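Putting the pieces together, a runner script along the lines of the sketch below can be invoked on a schedule. It assumes the hypothetical `fetchAllDeals`, `transformDeals`, and `insertData` helpers from the earlier snippets, and it logs failures so a scheduled run never fails silently.
```javascript
// Sketch of a scheduled entry point: fetch, transform, load, and log errors.
// Assumes the fetchAllDeals, transformDeals, and insertData helpers shown above.
async function syncPipedriveToMongo() {
  try {
    const deals = await fetchAllDeals(process.env.PIPEDRIVE_API_TOKEN);
    const documents = transformDeals(deals);
    await insertData(documents);
    console.log(`Synced ${documents.length} deals at ${new Date().toISOString()}`);
  } catch (error) {
    console.error('Pipedrive sync failed:', error.message);
    process.exitCode = 1;
  }
}

syncPipedriveToMongo();

// Example crontab entry (Linux) to run the script hourly:
// 0 * * * * /usr/bin/node /path/to/sync.js >> /var/log/pipedrive_sync.log 2>&1
```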
By following these steps, you'll be able to move data from Pipedrive to MongoDB manually and efficiently without relying on third-party connectors or integrations.
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
Pipedrive is a customer relationship management (CRM) platform built with the needs of the salesperson in mind. The data it provides helps teams and individual salespeople discover their most effective strategies to close deals and make them repeatable. The pipeline delivers detailed, accurate, timely sales reports and revenue projections that help users monitor deals, plan sales events and support financial decisions.
Pipedrive's API provides access to a wide range of data related to sales and customer relationship management. The following are the categories of data that can be accessed through Pipedrive's API:
1. Deals: Information related to deals such as deal name, deal value, deal stage, deal owner, and deal activities.
2. Contacts: Information related to contacts such as contact name, contact email, contact phone number, and contact activities.
3. Organizations: Information related to organizations such as organization name, organization address, organization phone number, and organization activities.
4. Activities: Information related to activities such as activity type, activity date, activity duration, and activity participants.
5. Users: Information related to users such as user name, user email, user role, and user activities.
6. Products: Information related to products such as product name, product price, product description, and product activities.
7. Pipelines: Information related to pipelines such as pipeline name, pipeline stages, pipeline activities, and pipeline owner.
8. Notes: Information related to notes such as note content, note date, note author, and note activities.
Overall, Pipedrive's API provides access to a comprehensive set of data that can be used to improve sales and customer relationship management processes.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers a lot more flexibility and autonomy to data analysts.
What should you do next?
We hope you enjoyed this guide. Here are three ways we can help you on your data journey: