Building your own pipeline vs. using Airbyte
Airbyte is the only open source solution empowering data teams to meet all their growing custom business demands in the new AI era.

Building your own pipeline:
- Inconsistent and inaccurate data
- Laborious and expensive
- Brittle and inflexible

Using Airbyte:
- Reliable and accurate
- Extensible and scalable for all your needs
- Deployed and governed your way
Start syncing with Airbyte in 3 easy steps, within 10 minutes
Setup complexities, simplified!
Simple & easy-to-use interface
Airbyte is built to get out of your way. Our clean, modern interface walks you through setup, so you can go from zero to sync in minutes—without deep technical expertise.
Guided tour: assistance in building your connections
Whether you’re setting up your first connection or managing complex syncs, Airbyte’s UI and documentation help you move with confidence. No guesswork. Just clarity.
AI Assistant: a sidekick that helps you build your data pipelines in minutes
Airbyte’s built-in assistant helps you choose sources, set destinations, and configure syncs quickly. It’s like having a data engineer on call—without the overhead.
What sets Airbyte apart
Modern GenAI Workflows
Move Large Volumes, Fast
An Extensible Open-Source Standard
Full Control & Security
Fully Featured & Integrated
Enterprise Support with SLAs
What our users say

Andre Exner
"For TUI Musement, Airbyte cut development time in half and enabled dynamic customer experiences."

Chase Zieman
"Airbyte helped us accelerate our progress by years, compared to our competitors. We don't need to worry about connectors and can focus on creating value for our users instead of building infrastructure. That's priceless. The time and energy saved allows us to disrupt and grow faster."

Rupak Patel
"With Airbyte, we could just push a few buttons, allow API access, and bring all the data into Google BigQuery. By blending all the different marketing data sources, we can gain valuable insights."
How to move your data from PersistIQ to Databricks Lakehouse

Step 1: Export your data from PersistIQ
Begin by exporting the data from PersistIQ. Log in to your PersistIQ account, navigate to the relevant data section (contacts, campaigns, or emails), and use the export feature to download the data. PersistIQ typically exports data in CSV format, which is widely compatible.
Step 2: Inspect and clean the exported data
Once you have your CSV file(s) from PersistIQ, inspect the data to ensure it is clean and ready for transformation. Check for inconsistencies or missing values that need addressing, and standardize the data as necessary using a tool like Excel or a short script in Python or another language you are comfortable with.
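If you prefer scripting the cleanup, here is a minimal sketch using pandas; the file name and the email and company column names are placeholders for whatever your PersistIQ export actually contains.

```python
import pandas as pd

# Load the PersistIQ export; the file name is a placeholder.
df = pd.read_csv("persistiq_contacts.csv")

# Drop fully empty rows and exact duplicates.
df = df.dropna(how="all").drop_duplicates()

# Standardize common text fields (column names are assumptions).
df["email"] = df["email"].str.strip().str.lower()
df["company"] = df["company"].str.strip()

df.to_csv("persistiq_contacts_clean.csv", index=False)
```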
Step 3: Prepare your Databricks environment
Access your Databricks workspace. If it's your first time, you may need to set up a new cluster; choose a cluster configuration appropriate for your data size and processing needs. Spark reads CSV files natively, so this step rarely requires installing extra libraries or packages.
Step 4: Upload the CSV to DBFS
Upload the cleaned CSV file to Databricks. In the Databricks UI, navigate to the "Data" tab, select "Add Data", choose "Upload File", and pick your CSV file. This uploads the file to the Databricks File System (DBFS).
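To confirm the upload landed where you expect, you can list the target directory from a notebook cell; `dbutils` is available in Databricks notebooks, and the path below assumes the default /FileStore upload location.

```python
# List files under /FileStore to verify the uploaded CSV is present in DBFS.
display(dbutils.fs.ls("/FileStore/"))
```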
Step 5: Load the CSV into a table
With your data in DBFS, use a Databricks notebook to create a table. Write a Spark SQL or PySpark command to read the CSV file and create a table; for example (replace the placeholder file name with the actual path of your upload):
```python
# Read the uploaded CSV into a DataFrame; the file name is a placeholder.
df = (spark.read.format("csv")
      .option("header", "true")  # first row holds the column names
      .load("/FileStore/persistiq_export.csv"))
df.write.saveAsTable("persistiq_data")  # persist it as a managed table
```
This reads the CSV file into a DataFrame and then saves it as a managed table within your Databricks environment.
Step 6: Transform the data
Once the data is loaded into a Databricks table, you can perform any necessary transformations. Use Spark SQL or PySpark to manipulate the data, such as filtering, aggregating, or joining with other tables. This step is crucial if you need to reshape the data for specific analytics tasks.
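As an illustration, here is a minimal PySpark sketch that deduplicates contacts and counts them per company; the email and company column names are assumptions about what your export contains.

```python
from pyspark.sql import functions as F

# Deduplicate on email (assumed column) and count contacts per company.
contacts = spark.table("persistiq_data").dropDuplicates(["email"])
per_company = (contacts.groupBy("company")
               .agg(F.count("*").alias("contact_count"))
               .orderBy(F.desc("contact_count")))
per_company.show(10)
```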
Step 7: Persist the data in Delta Lake format
After transforming the data, ensure it is persisted in the Databricks Lakehouse. Confirm that the data is stored in Delta Lake format to leverage features like ACID transactions, scalable metadata handling, and unified streaming and batch processing. You can do this using:
```python
# Rewrite the DataFrame as a Delta table to enable ACID transactions,
# scalable metadata handling, and unified batch/streaming access.
df.write.format("delta").saveAsTable("persistiq_data_delta")
```
This command saves the DataFrame in a Delta Lake table, ensuring that your data is stored efficiently and is readily available for further processing or analysis.
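Once the Delta table exists, it is queryable like any other table. For example, a quick row-count sanity check from the same notebook:

```python
# Sanity-check the Delta table with a quick row count.
spark.sql("SELECT COUNT(*) AS row_count FROM persistiq_data_delta").show()
```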
By following these steps, you can successfully move your data from PersistIQ to the Databricks Lakehouse without the need for third-party connectors or integrations.
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is PersistIQ?
PersistIQ is a wonderfully lean sales engagement tool that makes outbound sales outreach swift and easy. It is a sales intelligence and salesforce automation solution that integrates with Salesforce as well as marketing automation platforms, helping sales teams improve their outbound sales. Many sales tools promise this, but few deliver technology that actually helps you work more efficiently and sell more effectively.
What data can you extract through PersistIQ's API?
PersistIQ's API provides access to a variety of data related to sales and marketing activities. The following categories of data can be accessed through the API:
1. Contacts: The API provides access to contact information such as name, email address, phone number, job title, and company name.
2. Activities: The API allows users to retrieve data related to sales and marketing activities such as emails sent, calls made, and meetings scheduled.
3. Campaigns: The API provides access to data related to marketing campaigns such as email campaigns, social media campaigns, and advertising campaigns.
4. Leads: The API allows users to retrieve data related to leads such as lead source, lead status, and lead score.
5. Opportunities: The API provides access to data related to sales opportunities such as deal size, stage, and probability of closing.
6. Analytics: The API allows users to retrieve data related to sales and marketing performance such as open rates, click-through rates, and conversion rates.
Overall, PersistIQ's API provides a comprehensive set of data that can be used to optimize sales and marketing activities and improve overall business performance.
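As a rough sketch of what pulling this data programmatically could look like: the snippet below assumes PersistIQ's REST API lives at api.persistiq.com and authenticates with an x-api-key header, and the leads endpoint and response shape are assumptions; check the current API documentation before relying on them.

```python
import requests

API_KEY = "your-api-key"  # found in your PersistIQ account settings
BASE_URL = "https://api.persistiq.com/v1"  # assumed base URL

# Fetch the first page of leads (endpoint name and response shape are assumptions).
resp = requests.get(f"{BASE_URL}/leads", headers={"x-api-key": API_KEY})
resp.raise_for_status()
for lead in resp.json().get("leads", []):
    print(lead.get("id"), lead.get("email"))
```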
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers a lot more flexibility and autonomy to data analysts.