

Building your pipeline or using Airbyte
Airbyte is the only open source solution empowering data teams to meet all their growing custom business demands in the new AI era.
Building your own pipeline:
- Inconsistent and inaccurate data
- Laborious and expensive
- Brittle and inflexible

Using Airbyte:
- Reliable and accurate
- Extensible and scalable for all your needs
- Deployed and governed your way
Start syncing with Airbyte in three easy steps, within 10 minutes



Take a virtual tour
Watch demo videos of Airbyte Cloud and the AI Connector Builder.
Setup complexities, simplified!
A simple, easy-to-use interface
Airbyte is built to get out of your way. Our clean, modern interface walks you through setup, so you can go from zero to sync in minutes—without deep technical expertise.
Guided tour: assistance in building connections
Whether you’re setting up your first connection or managing complex syncs, Airbyte’s UI and documentation help you move with confidence. No guesswork. Just clarity.
An AI Assistant that acts as your sidekick, helping you build data pipelines in minutes
Airbyte’s built-in assistant helps you choose sources, set destinations, and configure syncs quickly. It’s like having a data engineer on call—without the overhead.
What sets Airbyte apart
Modern GenAI Workflows
Move Large Volumes, Fast
An Extensible Open-Source Standard
Full Control & Security
Fully Featured & Integrated
Enterprise Support with SLAs
What our users say

Andre Exner

"For TUI Musement, Airbyte cut development time in half and enabled dynamic customer experiences."

Chase Zieman

“Airbyte helped us accelerate our progress by years, compared to our competitors. We don’t need to worry about connectors and focus on creating value for our users instead of building infrastructure. That’s priceless. The time and energy saved allows us to disrupt and grow faster.”

Rupak Patel
"With Airbyte, we could just push a few buttons, allow API access, and bring all the data into Google BigQuery. By blending all the different marketing data sources, we can gain valuable insights."
Step 1: Access the Freshdesk API
Begin by accessing the Freshdesk API. Freshdesk provides a RESTful API that lets you retrieve data programmatically. Generate an API key from your Freshdesk account and use it to authenticate your requests; this key is used to make GET requests to endpoints such as tickets, contacts, and companies to extract the data you need.
Step 2: Extract the data with a script
Use a scripting language like Python to send HTTP GET requests to Freshdesk’s API endpoints; Python’s `requests` library is well suited for this. Write scripts that loop through paginated results so that you extract all the required data, and save it in a structured format such as JSON or CSV for easier processing later.
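As a concrete illustration of this step, here is a minimal sketch that pages through a Freshdesk collection endpoint with Python’s `requests` library. The subdomain, API key, and output file name are placeholders; adjust them and the endpoint list to the data you actually need.

```python
import json
import requests

API_KEY = "your_freshdesk_api_key"   # generated under Profile Settings in Freshdesk
DOMAIN = "yourcompany"               # placeholder Freshdesk subdomain
BASE_URL = f"https://{DOMAIN}.freshdesk.com/api/v2"


def fetch_all(resource: str) -> list:
    """Page through a Freshdesk collection endpoint (tickets, contacts, companies, ...)."""
    records, page = [], 1
    while True:
        resp = requests.get(
            f"{BASE_URL}/{resource}",
            auth=(API_KEY, "X"),     # Freshdesk uses the API key as the basic-auth username
            params={"page": page, "per_page": 100},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:                # an empty page means everything has been read
            break
        records.extend(batch)
        page += 1
    return records


if __name__ == "__main__":
    tickets = fetch_all("tickets")
    with open("tickets.json", "w") as f:
        json.dump(tickets, f, indent=2)
    print(f"Extracted {len(tickets)} tickets")
```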
Step 3: Prepare your Apache Iceberg environment
Ensure your Apache Iceberg environment is set up properly. Apache Iceberg requires a compatible compute engine such as Apache Spark, Flink, or Hive. Install the necessary dependencies for the engine you choose, and configure Apache Iceberg following its documentation so your environment is ready to receive data.
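If you choose Spark as the compute engine, the session can be configured for Iceberg roughly as follows. This is a sketch assuming Spark 3.5 with Scala 2.12 and a local file-based (Hadoop) catalog named `local`; adjust the runtime package version, catalog type, and warehouse path to your environment.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("freshdesk-to-iceberg")
    # Iceberg Spark runtime; pick the artifact matching your Spark/Scala build.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # A file-based Hadoop catalog is the simplest option for local testing.
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "file:///tmp/iceberg-warehouse")
    .getOrCreate()
)
```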
Step 4: Convert the extracted data to Parquet
Apache Iceberg works efficiently with Parquet files. Convert the extracted JSON or CSV data into Parquet format using Python libraries such as `pandas` and `pyarrow`: load the data into a pandas DataFrame, then use `pyarrow` to write the DataFrame to a Parquet file.
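A minimal conversion sketch, assuming the `tickets.json` file from the extraction step and a subset of scalar ticket fields (nested fields such as custom fields would need flattening first):

```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Load the tickets extracted earlier.
df = pd.read_json("tickets.json")

# Keep a set of scalar columns that map cleanly onto an Iceberg schema.
columns = ["id", "subject", "status", "priority",
           "requester_id", "created_at", "updated_at"]
df = df[[c for c in columns if c in df.columns]]

# Use nullable integers and proper timestamps so the Parquet types line up
# with the Iceberg table definition in the next step.
for col in ("id", "status", "priority", "requester_id"):
    if col in df.columns:
        df[col] = df[col].astype("Int64")
for col in ("created_at", "updated_at"):
    if col in df.columns:
        df[col] = pd.to_datetime(df[col], utc=True)

table = pa.Table.from_pandas(df, preserve_index=False)
pq.write_table(table, "tickets.parquet", compression="snappy")
```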
Step 5: Create the Iceberg table
Define a schema and partitioning strategy suitable for your data in Iceberg, and use the compute engine’s interface (e.g., Spark SQL) to create a new Iceberg table. Consider the types of queries you will run when deciding on the schema and partitioning, so the table is optimized for your workload.
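For example, with the Spark session configured above, a tickets table partitioned by creation day could be created like this. The catalog, namespace, table name, and column set are assumptions carried over from the earlier sketches:

```python
spark.sql("CREATE NAMESPACE IF NOT EXISTS local.freshdesk")

spark.sql("""
    CREATE TABLE IF NOT EXISTS local.freshdesk.tickets (
        id            BIGINT,
        subject       STRING,
        status        BIGINT,
        priority      BIGINT,
        requester_id  BIGINT,
        created_at    TIMESTAMP,
        updated_at    TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(created_at))   -- hidden partitioning on creation date
""")
```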
Step 6: Load the data into the Iceberg table
Use your compute engine to load the transformed Parquet files into the Iceberg table. With Spark, for example, you can use Spark SQL or the DataFrame APIs to write the Parquet data to the table. Ensure that the data types and schema in your Parquet files match the Iceberg table’s schema.
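Continuing the sketch, the Parquet file can be appended to the table with Spark’s DataFrameWriterV2 API; the file and table names are the ones assumed in the previous steps:

```python
# Read the Parquet file produced earlier and align columns with the table schema.
parquet_df = spark.read.parquet("tickets.parquet")

(
    parquet_df
    .select("id", "subject", "status", "priority",
            "requester_id", "created_at", "updated_at")
    .writeTo("local.freshdesk.tickets")
    .append()
)
```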
Step 7: Validate the migration
After loading the data, perform checks to ensure data integrity. Run queries to verify row counts and check for discrepancies between the source data and what’s in the Iceberg table. Confirm that all fields were imported correctly, and run sample queries to validate the setup.
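A couple of quick checks in the same Spark session, again using the assumed file and table names:

```python
# Compare the source record count with what landed in the Iceberg table.
source_count = spark.read.parquet("tickets.parquet").count()
target_count = spark.table("local.freshdesk.tickets").count()
assert source_count == target_count, (
    f"Row count mismatch: {source_count} source vs {target_count} target"
)

# Spot-check the distribution of a field with a sample query.
spark.sql("""
    SELECT status, COUNT(*) AS tickets
    FROM local.freshdesk.tickets
    GROUP BY status
    ORDER BY tickets DESC
""").show()
```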
By following these steps, you can successfully move data from Freshdesk to Apache Iceberg without relying on third-party connectors or integrations, ensuring a custom and controlled data migration process.
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is Freshdesk?
Freshdesk is a service provided by Freshworks for handling the entire spectrum of customer engagement. A cloud-based customer support platform, Freshdesk provides a scalable solution for managing customer support simply and efficiently. Freshdesk enables teams to track incoming tickets from a variety of channels; provide support across multiple platforms, including phone, chat, and other messaging apps; categorize, prioritize, and assign tickets; prepare preformatted answers to common customer support questions; and much more.
What data can you extract from Freshdesk?
Freshdesk's API provides access to a wide range of data related to customer support and service management. The following categories of data can be accessed through Freshdesk's API:
1. Tickets: Information related to customer support tickets, including ticket ID, status, priority, and requester details.
2. Contacts: Data related to customer contacts, including contact ID, name, email address, and phone number.
3. Agents: Information about support agents, including agent ID, name, email address, and role.
4. Companies: Data related to companies that use Freshdesk for customer support, including company ID, name, and domain.
5. Conversations: Information related to customer conversations, including conversation ID, status, and participants.
6. Knowledge base: Data related to the knowledge base, including articles, categories, and folders.
7. Surveys: Information related to customer satisfaction surveys, including survey ID, status, and responses.
8. Time entries: Data related to time entries for support agents, including time spent on tickets and activities.
9. Custom fields: Information related to custom fields created in Freshdesk, including field ID, name, and value.
Overall, Freshdesk's API provides access to a comprehensive set of data that can be used to improve customer support and service management.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, which is ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, which suits processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers far more flexibility and autonomy to data analysts.
What should you do next?
We hope you enjoyed the read. Here are three ways we can help you on your data journey: