

Building Your Pipeline or Using Airbyte
Airbyte is the only open-source solution that empowers data teams to meet their growing custom business demands in the new AI era.
Building your own pipeline:
- Inconsistent and inaccurate data
- Laborious and expensive
- Brittle and inflexible
Using Airbyte:
- Reliable and accurate
- Extensible and scalable for all your needs
- Deployed and governed your way
Start syncing with Airbyte in 3 easy steps, within 10 minutes.
Take a virtual tour with demo videos of Airbyte Cloud and the AI Connector Builder.
Setup complexities, simplified!
A Simple, Easy-to-Use Interface
Airbyte is built to get out of your way. Our clean, modern interface walks you through setup, so you can go from zero to sync in minutes—without deep technical expertise.
Guided Tour: Assistance in Building Your Connections
Whether you’re setting up your first connection or managing complex syncs, Airbyte’s UI and documentation help you move with confidence. No guesswork. Just clarity.
AI Assistant: Your Sidekick for Building Data Pipelines in Minutes
Airbyte’s built-in assistant helps you choose sources, set destinations, and configure syncs quickly. It’s like having a data engineer on call—without the overhead.
What Sets Airbyte Apart
Modern GenAI Workflows
Move Large Volumes, Fast
An Extensible Open-Source Standard
Full Control & Security
Fully Featured & Integrated
Enterprise Support with SLAs
What our users say

Andre Exner

"For TUI Musement, Airbyte cut development time in half and enabled dynamic customer experiences."

Chase Zieman

“Airbyte helped us accelerate our progress by years, compared to our competitors. We don’t need to worry about connectors and focus on creating value for our users instead of building infrastructure. That’s priceless. The time and energy saved allows us to disrupt and grow faster.”

Rupak Patel
"With Airbyte, we could just push a few buttons, allow API access, and bring all the data into Google BigQuery. By blending all the different marketing data sources, we can gain valuable insights."
How to Move Data from Kyriba to Google Pub/Sub Manually
Step 1: Understand Kyriba's Data Export Options
First, familiarize yourself with Kyriba's data export options. Kyriba typically allows data export in formats such as CSV, XML, or Excel. Check the platform's documentation or contact their support to understand how to configure and schedule exports of the specific data you need.
Step 2: Set Up an SFTP Server
Kyriba often supports exporting data to an SFTP server. Set up an SFTP server where Kyriba can send the exported files. This could be a server you manage, or a cloud-based SFTP service. Ensure that the SFTP server is secure and accessible only by trusted parties.
Step 3: Schedule Automated Exports
Use Kyriba's scheduling tools to automate the export of data to your SFTP server at regular intervals. Configure the exports to match the frequency and timing that suit your business needs, ensuring that the data is always up to date.
Step 4: Create a Google Cloud Storage Bucket
In your Google Cloud Platform (GCP) account, create a new GCS bucket. This will serve as the staging area for data before it is published to Pub/Sub. Ensure the bucket has the appropriate permissions to allow access for data transfer processes.
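For illustration, here is a minimal sketch of creating that staging bucket with the google-cloud-storage client; the project ID, bucket name, and location below are placeholders to replace with your own values.

```python
# Sketch: create the GCS staging bucket for Kyriba export files.
# Requires: pip install google-cloud-storage
from google.cloud import storage

client = storage.Client(project="your-gcp-project")  # assumes default credentials

# Bucket names are globally unique; this one is illustrative.
bucket = client.create_bucket("kyriba-staging-bucket", location="US")
print(f"Created bucket {bucket.name} in {bucket.location}")
```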
Step 5: Transfer Data from SFTP to GCS
Write a script in a language like Python or Bash to download the data files from your SFTP server and upload them to the GCS bucket. Use libraries such as Paramiko for SFTP operations and the Google Cloud client libraries for uploading files to GCS. Schedule this script to run automatically using a cron job or a similar scheduling tool.
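Below is a minimal sketch of such a transfer script, assuming key-based SFTP authentication and default GCP credentials; the host, user, key path, remote directory, and bucket name are illustrative placeholders.

```python
# Sketch: download Kyriba export files from SFTP and upload them to GCS.
# Requires: pip install paramiko google-cloud-storage
import os

import paramiko
from google.cloud import storage

SFTP_HOST = "sftp.example.com"          # placeholder host
SFTP_USER = "kyriba-export"             # placeholder user
SFTP_KEY = "/path/to/private_key"       # assumes key-based auth
REMOTE_DIR = "/exports"                 # directory Kyriba writes to
BUCKET_NAME = "kyriba-staging-bucket"   # bucket from the previous step


def transfer_exports():
    # Connect to the SFTP server where Kyriba drops its export files.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(SFTP_HOST, username=SFTP_USER, key_filename=SFTP_KEY)
    sftp = ssh.open_sftp()

    bucket = storage.Client().bucket(BUCKET_NAME)
    try:
        for filename in sftp.listdir(REMOTE_DIR):
            local_path = os.path.join("/tmp", filename)
            sftp.get(f"{REMOTE_DIR}/{filename}", local_path)          # download
            bucket.blob(filename).upload_from_filename(local_path)    # upload
            os.remove(local_path)
            print(f"Transferred {filename} to gs://{BUCKET_NAME}/{filename}")
    finally:
        sftp.close()
        ssh.close()


if __name__ == "__main__":
    transfer_exports()
```

A crontab entry such as `0 * * * * /usr/bin/python3 /opt/pipeline/transfer_exports.py` would run the script hourly; match the schedule to the export frequency you configured in Kyriba.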
Step 6: Publish Data to Pub/Sub with a Cloud Function
Develop a Google Cloud Function that triggers on new file uploads to your GCS bucket. This function should read the data, process it as needed, and publish it to a Google Pub/Sub topic. Use the Google Cloud SDK and Pub/Sub client libraries to handle data processing and publishing tasks.
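As one possible shape for that function, here is a sketch of a first-generation background Cloud Function that fires when an object is finalized in the staging bucket and republishes its contents to Pub/Sub. The project ID and topic name are placeholders, and a real deployment would likely parse the CSV/XML export into per-record messages rather than forwarding whole files.

```python
# Sketch: GCS-triggered Cloud Function that publishes new files to Pub/Sub.
# Requires: google-cloud-pubsub and google-cloud-storage in requirements.txt
from google.cloud import pubsub_v1, storage

PROJECT_ID = "your-gcp-project"   # placeholder project
TOPIC_ID = "kyriba-data"          # placeholder topic

# Clients are created at module load so they are reused across invocations.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
storage_client = storage.Client()


def publish_export(event, context):
    """Triggered by google.storage.object.finalize on the staging bucket."""
    blob = storage_client.bucket(event["bucket"]).blob(event["name"])
    data = blob.download_as_bytes()

    # Any parsing or transformation of the export would happen here
    # before publishing; this sketch forwards the raw bytes.
    future = publisher.publish(topic_path, data=data, source_file=event["name"])
    print(f"Published {event['name']} as message {future.result()}")
```

Deployed with something like `gcloud functions deploy publish_export --runtime python311 --trigger-resource kyriba-staging-bucket --trigger-event google.storage.object.finalize`, the function runs once per uploaded file.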
Step 7: Monitor and Maintain the Pipeline
Implement monitoring and logging for each stage of the data pipeline to ensure smooth operations. Use Google Cloud's monitoring tools to keep track of job successes and failures. Regularly review logs and alerts to address any issues promptly and maintain the integrity of your data flow from Kyriba to Google Pub/Sub.
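For the script from Step 5, one lightweight option is to route its standard Python logging into Cloud Logging so failures can drive log-based alerts; this sketch assumes the google-cloud-logging package and default credentials.

```python
# Sketch: surface transfer successes and failures in Cloud Logging.
# Requires: pip install google-cloud-logging
import logging

import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()  # routes standard logging calls to Cloud Logging

try:
    transfer_exports()  # the transfer routine sketched in Step 5
    logging.info("Kyriba export transfer completed successfully")
except Exception:
    # Recorded at ERROR severity, so a log-based alert can notify the team.
    logging.exception("Kyriba export transfer failed")
    raise
```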
By following these steps, you can establish a customized data pipeline from Kyriba to Google Pub/Sub, ensuring seamless data movement without relying on third-party connectors or integrations.
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse, or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is Kyriba?
Kyriba is a global leader in cloud treasury and finance solutions, providing mission-critical capabilities for cash and risk management, payments, and working capital. More than 2,500 clients worldwide rely on Kyriba to view, protect, and grow their liquidity. Connectivity is in Kyriba's DNA: the company is driven by research and innovation to uncover new ways to use APIs, artificial intelligence, and predictive analytics to support its customers, uniting its cloud offerings with a global community of customers, partners, and employees across more than 100 countries.
What data can you extract from Kyriba?
Kyriba's API provides access to a wide range of financial data, including:
1. Cash Management Data: This includes information on cash balances, bank accounts, and transactions.
2. Payment Data: This includes details on payments made and received, including payment method, amount, and date.
3. FX Data: This includes exchange rates and currency conversion information.
4. Risk Management Data: This includes data on financial risks such as market risk, credit risk, and liquidity risk.
5. Treasury Management Data: This includes information on treasury operations such as cash forecasting, cash positioning, and cash pooling.
6. Compliance Data: This includes data on regulatory compliance, such as anti-money laundering (AML) and know your customer (KYC) requirements.
7. Reporting Data: This includes data on financial reporting, such as balance sheets, income statements, and cash flow statements.
Overall, Kyriba's API provides a comprehensive set of financial data that can be used to manage cash, payments, risk, compliance, and reporting.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard, as it offers far more flexibility and autonomy to data analysts.
What should you do next?
We hope you enjoyed the read. Here are three ways we can help you on your data journey:





