

Prerequisites
1. Install Oracle Database Utilities: Ensure that you have Oracle Database utilities like SQL*Plus and Data Pump (expdp and impdp) installed on your Oracle server.
2. Install PostgreSQL: Make sure PostgreSQL is installed on the target server.
3. Access Credentials: Have the necessary credentials (username, password, database names, hostnames, port numbers) for both the Oracle and PostgreSQL databases at hand; a quick connectivity check is sketched after this list.
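Before going further, it helps to confirm that both servers are reachable with those credentials. A minimal sanity check, assuming you can open a SQL*Plus session on Oracle and a psql session on PostgreSQL, is to query each server's version string:
```sql
-- In SQL*Plus (Oracle): confirms the connection works and shows the server version.
SELECT banner FROM v$version;

-- In psql (PostgreSQL): the equivalent check.
SELECT version();
```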
Step 1: Assess and Plan the Migration
1. Identify Data to Migrate: Determine which schemas, tables, or data need to be migrated.
2. Data Type Mapping: Analyze the data types used in Oracle and determine the equivalent PostgreSQL data types; a sample mapping is sketched after this list.
3. Character Set Considerations: Ensure that the character sets are compatible between Oracle and PostgreSQL or plan for conversion if they are not.
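To make the mapping concrete, here is how a hypothetical Oracle table might translate to PostgreSQL. The table and column names are invented for illustration, and the mappings shown are common defaults rather than the only valid choices:
```sql
-- Oracle source definition (shown as a comment for reference):
--   CREATE TABLE hr.employees (
--     id       NUMBER(10)    NOT NULL,
--     name     VARCHAR2(100),
--     hired_on DATE,
--     resume   CLOB,
--     photo    BLOB
--   );

-- One possible PostgreSQL equivalent:
CREATE TABLE hr.employees (
    id       NUMERIC(10)  NOT NULL, -- NUMBER(10) -> NUMERIC(10); BIGINT also works for integer keys
    name     VARCHAR(100),          -- VARCHAR2 -> VARCHAR
    hired_on TIMESTAMP,             -- Oracle DATE carries a time component, so TIMESTAMP is safer
    resume   TEXT,                  -- CLOB -> TEXT
    photo    BYTEA                  -- BLOB -> BYTEA
);
```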
Step 2: Export Data from Oracle
1. Prepare for Export: Disable any foreign keys, triggers, or other dependencies that might interfere with the export process.
2. Export with SQL*Plus:
- Connect to Oracle using SQL*Plus:
```shell
sqlplus username/password@//hostname:port/SID
```
- Spool the data to a delimited flat file. Plain `SELECT *` output is page-formatted and space-padded by default, so the extra `SET` commands below matter; a more robust, NULL-safe variant is sketched after this list:
```sql
SET ECHO OFF
SET FEEDBACK OFF
SET HEADING OFF
SET PAGESIZE 0
SET TRIMSPOOL ON
SET COLSEP '|'
SPOOL /path/to/exported_data.txt
SELECT * FROM schema_name.table_name;
SPOOL OFF
EXIT
```
- Repeat the above step for each table you wish to export.
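Even with those settings, `SELECT *` keeps column padding and leaves NULLs empty, while the `COPY` import below expects the literal string `NULL`. A more reliable pattern, sketched here against the hypothetical `hr.employees` table from earlier, builds each delimited line explicitly:
```sql
SET ECHO OFF
SET FEEDBACK OFF
SET HEADING OFF
SET PAGESIZE 0
SET TRIMSPOOL ON
SET LINESIZE 32767
SPOOL /path/to/employees.txt
-- Concatenate columns with explicit '|' delimiters and emit the literal
-- string NULL for missing values, matching the COPY options used later.
SELECT NVL(TO_CHAR(id), 'NULL') || '|' ||
       NVL(name, 'NULL') || '|' ||
       NVL(TO_CHAR(hired_on, 'YYYY-MM-DD HH24:MI:SS'), 'NULL')
  FROM hr.employees;
SPOOL OFF
EXIT
```
If a text column can itself contain the delimiter, choose a separator that cannot occur in the data or quote the fields before export.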
Step 3: Prepare the PostgreSQL Database
1. Create Database and Schema: If not already present, create the database and schema in PostgreSQL.
2. Create Tables: Based on the data type mapping, create the corresponding tables in PostgreSQL with appropriate data types.
3. Adjust PostgreSQL Settings: Modify `postgresql.conf` (or use session-level settings) if necessary, increasing values such as `maintenance_work_mem` and `max_wal_size` to speed up large data imports; example session settings are shown after this list.
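As an illustration, these session-level settings are commonly raised for bulk loads. The values are placeholders to adapt to your hardware, not recommendations:
```sql
-- Run in the psql session that will perform the import.
SET maintenance_work_mem = '1GB'; -- speeds up index and constraint rebuilds after the load
SET synchronous_commit = off;     -- fewer WAL flushes; acceptable for a re-runnable bulk import
```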
Step 4: Import Data into PostgreSQL
1. Prepare for Import: Disable triggers, foreign keys, and indexes in PostgreSQL to speed up the import process; an example is sketched after this list.
2. Import Using psql:
- Connect to PostgreSQL using psql:
```shell
psql -U username -d database_name -h hostname -p port
```
- Use the COPY command to import data:
```sql
\COPY schema_name.table_name FROM '/path/to/exported_data.txt' WITH (FORMAT csv, DELIMITER '|', NULL 'NULL');
```
- Repeat the above step for each exported file corresponding to a table.
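For the preparation step above, one way to suspend triggers and drop a secondary index looks like the following; the trigger and index names are hypothetical:
```sql
-- Disabling ALL triggers also suspends foreign-key enforcement on this
-- table (disabling internal constraint triggers requires superuser rights).
ALTER TABLE hr.employees DISABLE TRIGGER ALL;

-- Secondary indexes slow down bulk inserts; drop them and recreate afterwards.
DROP INDEX IF EXISTS hr.employees_name_idx;
```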
Step 5: Verify the Migration
1. Check Row Counts: Compare the row counts in both Oracle and PostgreSQL to ensure they match; a sample check appears after this list.
2. Check Data Consistency: Run sample queries on both databases to verify that the data is consistent.
3. Re-enable Constraints: Re-enable foreign keys, triggers, and indexes in PostgreSQL and validate them.
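A minimal verification pass, again using the hypothetical table, compares row counts on both sides and then restores the objects disabled for the import:
```sql
-- Run on both Oracle and PostgreSQL; the counts should match.
SELECT COUNT(*) FROM hr.employees;

-- PostgreSQL only: re-enable triggers (note this does not re-validate
-- existing rows against foreign keys, so spot-check joins manually).
ALTER TABLE hr.employees ENABLE TRIGGER ALL;

-- Recreate any index dropped before the load (hypothetical definition).
CREATE INDEX employees_name_idx ON hr.employees (name);
```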
Step 6: Post-Migration Tasks
1. Performance Tuning: Analyze the imported tables and run `VACUUM ANALYZE` to update statistics for the PostgreSQL query planner (a one-line example follows this list).
2. Test Applications: Update your application connection strings and thoroughly test to ensure that the applications work as expected with the new PostgreSQL database.
3. Backup: Take a backup of the PostgreSQL database after the migration is confirmed to be successful.
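The statistics refresh in step 1 is a single command per table; running `VACUUM ANALYZE` with no arguments covers the whole database instead:
```sql
-- Reclaim space and refresh planner statistics for the newly loaded table.
VACUUM ANALYZE hr.employees;
```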
Additional Tips
- Always perform the migration first on a test environment before applying it to production.
- For large datasets, consider dedicated migration tooling such as ora2pg or the oracle_fdw foreign data wrapper, which automate data type conversion and bulk transfer more reliably than hand-written export scripts.
- Thoroughly document the migration process, including any data type transformations and issues encountered.
Remember, this is a high-level guide, and an actual migration may involve additional complexities depending on the specific use case and data involved. Always ensure you have a backup and recovery strategy in place before beginning any migration.
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is Oracle DB?
Oracle DB is a fully scalable, integrated cloud application and platform service built around a relational database architecture. It manages and processes data across both local and wide area networks. Through software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS) offerings, it provides a wide variety of enterprise IT solutions that help companies streamline business processes, lower costs, and increase productivity.
What data can you extract from Oracle DB?
Oracle DB provides access to a wide range of data types, including:
• Relational data: This includes tables, views, and indexes that are used to store and organize data in a structured manner.
• Spatial data: This includes data that is related to geographic locations, such as maps, satellite imagery, and GPS coordinates.
• Time-series data: This includes data that is related to time, such as stock prices, weather data, and sensor readings.
• Multimedia data: This includes data that is related to images, videos, and audio files.
• XML data: This includes data that is stored in XML format, such as web pages, documents, and other structured data.
• JSON data: This includes data that is stored in JSON format, such as web APIs, mobile apps, and other data sources.
• Graph data: This includes data that is related to relationships between entities, such as social networks, supply chains, and other complex systems.
Overall, Oracle DB's API provides access to a wide range of data types that can be used for a variety of applications, from business intelligence and analytics to machine learning and artificial intelligence.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers a lot more flexibility and autonomy to data analysts.