

Set up the development environment
1. Install the Java Development Kit (JDK) and set up your IDE of choice (e.g., IntelliJ IDEA or Eclipse).
2. Add the Kafka client library and the Oracle JDBC driver to your project dependencies. If you're using Maven, add the following dependencies to your `pom.xml` file:
```xml
<dependencies>
  <!-- Kafka client -->
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>YourKafkaVersion</version>
  </dependency>
  <!-- Oracle JDBC driver -->
  <dependency>
    <groupId>com.oracle.database.jdbc</groupId>
    <artifactId>ojdbc8</artifactId>
    <version>YourDriverVersion</version>
  </dependency>
</dependencies>
```
Implement the Kafka consumer
1. Create a Kafka consumer configuration with the necessary parameters, such as bootstrap servers, group ID, and key and value deserializers.
2. Instantiate a Kafka consumer and subscribe to the desired topic.
3. Implement a loop that continuously polls for new records.
Connect to the Oracle database
1. Load the Oracle JDBC driver class (with JDBC 4.0+ drivers, this happens automatically once the driver JAR is on the classpath).
2. Create a database connection string with the appropriate URL, username, and password.
3. Establish a connection to the Oracle database using the `DriverManager.getConnection()` method.
Transform and insert the data
1. For each record consumed from Kafka, transform the data as needed to match the schema of the Oracle database table.
2. Create a SQL INSERT statement with placeholders for the data.
3. Use `PreparedStatement` to set the values from the Kafka record into the INSERT statement.
4. Execute the `PreparedStatement` to insert the data into the Oracle database.
5. Implement proper exception handling and transaction management: commit the transaction after a successful batch of inserts, and roll it back if an error occurs (see the batching sketch after this list).
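Where throughput matters, JDBC batching pairs naturally with the transaction handling described above. The following is only a sketch under stated assumptions: the `OracleBatchWriter` class, the table and column names, and the CSV parsing are illustrative placeholders, not part of any prescribed API, and it assumes the connection has auto-commit disabled.
```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class OracleBatchWriter {
    // Inserts one poll() batch inside a single transaction; rolls back on any failure.
    // Table/column names and the CSV parsing are illustrative placeholders.
    public static void writeBatch(Connection connection,
                                  ConsumerRecords<String, String> records) throws SQLException {
        String insertSql = "INSERT INTO your_table (column1, column2) VALUES (?, ?)";
        try (PreparedStatement statement = connection.prepareStatement(insertSql)) {
            for (ConsumerRecord<String, String> record : records) {
                String[] values = record.value().split(",");
                statement.setString(1, values[0]);
                statement.setString(2, values[1]);
                statement.addBatch();
            }
            statement.executeBatch();   // send all rows to Oracle in one round trip
            connection.commit();        // requires connection.setAutoCommit(false)
        } catch (SQLException e) {
            connection.rollback();      // discard the partial batch and let the caller retry
            throw e;
        }
    }
}
```
Calling this once per `poll()` keeps one database transaction per Kafka batch, which also simplifies the offset handling described next.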
Manage offsets and shutdown
1. Handle Kafka offsets carefully. Committing offsets only after the data has been successfully inserted into the Oracle database gives you at-least-once delivery; combined with idempotent inserts (for example, a primary key plus upsert logic), this approaches exactly-once behavior.
2. Implement graceful shutdown logic for the Kafka consumer so that connections are closed and resources are cleaned up when the application stops, as sketched below.
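One common way to implement that shutdown logic is a JVM shutdown hook that calls `KafkaConsumer.wakeup()`, which makes a blocked `poll()` throw `WakeupException` so the loop can exit and clean up. This is a minimal sketch; the class name, property values, and loop body are illustrative assumptions, and a real pipeline would also close its JDBC connection in the cleanup block.
```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class GracefulShutdownSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "your_kafka_broker:9092");
        props.put("group.id", "your_group_id");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("your_topic"));

        final Thread mainThread = Thread.currentThread();
        // On SIGTERM/SIGINT, wake the consumer so the blocking poll() exits promptly
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try {
                mainThread.join(); // let the poll loop finish its cleanup
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));

        try {
            while (true) {
                consumer.poll(Duration.ofMillis(100));
                // ... transform, insert into Oracle, and commit, as in the full example below ...
                consumer.commitSync();
            }
        } catch (WakeupException e) {
            // Expected when the shutdown hook calls wakeup(); fall through to cleanup
        } finally {
            consumer.close(); // close JDBC resources here as well in a real pipeline
        }
    }
}
```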
Test, monitor, and tune
1. Test the application thoroughly to ensure that data is correctly consumed from Kafka and inserted into the Oracle database.
2. Monitor the application and tune its performance, adjusting consumer configurations, batch sizes, and commit strategies as needed (illustrative settings follow this list).
3. Ensure proper logging and error handling so you can troubleshoot any issues that arise.
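The consumer settings below are illustrative starting points for that tuning, not recommendations for your specific workload; the class name and the numeric values are assumptions to be validated against your own throughput and latency measurements.
```java
import java.util.Properties;

public class ConsumerTuningExample {
    // Illustrative starting points only; tune against your own measurements.
    public static Properties tunedConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "your_kafka_broker:9092");
        props.put("group.id", "your_group_id");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false"); // commit offsets only after the Oracle transaction succeeds
        props.put("max.poll.records", "500");     // upper bound on records per poll(), i.e. per DB transaction
        props.put("fetch.min.bytes", "1048576");  // wait for ~1 MB per fetch to favor throughput over latency
        props.put("fetch.max.wait.ms", "500");    // ...but never wait longer than 500 ms for a fetch
        return props;
    }
}
```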
Package and deploy
1. Package your application into an executable JAR or another suitable format for deployment.
2. Deploy the application to a server or container that has network access to both Kafka and the Oracle database.
3. Monitor the application in a production environment to ensure stability and performance.
Example Code Snippet (Java)
```java
// Example code to illustrate the process. This is not a complete application.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaToOracle {
    public static void main(String[] args) {
        // Kafka consumer setup
        Properties props = new Properties();
        props.put("bootstrap.servers", "your_kafka_broker:9092");
        props.put("group.id", "your_group_id");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Disable auto-commit so offsets are committed only after the database transaction succeeds
        props.put("enable.auto.commit", "false");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("your_topic"));

        // Oracle DB setup
        String url = "jdbc:oracle:thin:@your_oracle_db_host:1521:your_sid";
        String user = "your_username";
        String password = "your_password";

        try (Connection connection = DriverManager.getConnection(url, user, password)) {
            // Manage transactions manually so inserts and offset commits stay in sync
            connection.setAutoCommit(false);
            String insertSql = "INSERT INTO your_table (column1, column2) VALUES (?, ?)";
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // Transform and insert data
                    try (PreparedStatement statement = connection.prepareStatement(insertSql)) {
                        // Assuming the record value is a CSV string
                        String[] values = record.value().split(",");
                        statement.setString(1, values[0]);
                        statement.setString(2, values[1]);
                        statement.executeUpdate();
                    }
                }
                // Commit the database transaction, then the Kafka offsets
                connection.commit();
                consumer.commitSync();
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }
}
```
Remember to replace the placeholders (`your_kafka_broker`, `your_group_id`, `your_topic`, the Oracle connection URL, `your_username`, `your_password`, and `your_table`) with actual values from your environment.
Using Kafka Connect with the JDBC Sink Connector
Ensure that you have Kafka installed and running. Kafka Connect ships with the Kafka distribution, so no additional installation is needed if Kafka is already installed. You can verify that Kafka Connect is available by running `bin/connect-standalone.sh` or `bin/connect-distributed.sh`.
- Choose a connector: several connectors are available, including Confluent's Oracle CDC Source Connector and the JDBC Sink Connector.
- For streaming data from Kafka into Oracle DB, use the JDBC Sink Connector.
- Download the Kafka Connect JDBC plugin, which includes the JDBC Sink Connector.
- Unzip the connector archive and move it to your Kafka Connect plugins directory (e.g., /usr/local/share/kafka/plugins/).
Create JDBC Sink Connector Properties File: You’ll need to configure a properties file for the JDBC Sink Connector. This will specify details about the Kafka topic, Oracle DB connection, and the specific table in Oracle.
Example configuration file (oracle-sink.properties):
```properties
name=oracle-sink-connector
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=kafka_topic_name
connection.url=jdbc:oracle:thin:@//hostname:port/service_name
connection.user=your_oracle_username
connection.password=your_oracle_password
table.name.format=your_table_name
insert.mode=insert
auto.create=true
auto.evolve=true
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
```
Important parameters:
- connection.url: Oracle JDBC URL (with hostname, port, and service name).
- insert.mode: Controls whether to insert, upsert, or update data in Oracle.
- auto.create: Automatically create the Oracle DB table if it doesn’t exist.
- auto.evolve: Allow Kafka Connect to update the table schema when schema changes in Kafka occur.
- Install the Oracle JDBC driver:
  - Download the Oracle JDBC driver (for example, ojdbc8.jar) from Oracle's website.
  - Place the JAR file in the Kafka Connect libs directory:
```bash
cp /path_to_jdbc_driver/ojdbc8.jar /path_to_kafka/libs/
```
- Create the Oracle database table:
  - Before starting the Kafka Connect process, ensure that the target Oracle table exists (if auto.create=true, this step is optional):
```sql
CREATE TABLE your_table_name (
  id NUMBER PRIMARY KEY,
  data_column VARCHAR2(255)
);
```
Run Kafka Connect in distributed mode so that connectors can scale out and tolerate task failures:
```bash
bin/connect-distributed.sh config/connect-distributed.properties
```
Deploy the Oracle Sink Connector by submitting the configuration to the Kafka Connect REST API:
```bash
curl -X POST -H "Content-Type: application/json" --data '{
  "name": "oracle-sink-connector",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "kafka_topic_name",
    "connection.url": "jdbc:oracle:thin:@//hostname:port/service_name",
    "connection.user": "your_oracle_username",
    "connection.password": "your_oracle_password",
    "table.name.format": "your_table_name",
    "insert.mode": "insert",
    "auto.create": "true",
    "auto.evolve": "true",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter"
  }
}' http://localhost:8083/connectors
```
- Check Connector Status:
  - To verify that the connector is running:
```bash
curl -X GET http://localhost:8083/connectors/oracle-sink-connector/status
```
- Review Logs:
  - Check the logs in connect-distributed.log to confirm that data is being written correctly to Oracle.
- Produce Data to Kafka:
  - Produce some test data to the Kafka topic using the Kafka console producer or any application:
```bash
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic kafka_topic_name
```
- Verify Data in Oracle DB:
  - Check that the records have been correctly inserted into Oracle DB by querying the table:
```sql
SELECT * FROM your_table_name;
```
- Failed Task Recovery: Monitor the Kafka Connect logs for task failures and restart tasks if necessary. Use the Kafka Connect REST API to inspect and restart failed connectors and tasks.
- Data Format Issues: Ensure that the data produced to Kafka matches the schema expected by the Oracle table. With the JsonConverter configured above, the JDBC Sink Connector needs schema information, so each message must embed its schema (a "schema"/"payload" envelope) unless you switch to a schema-aware format such as Avro with a schema registry.
By following these steps, you will have a functional pipeline streaming data from Kafka into Oracle DB via Kafka Connect.
FAQs
What is ETL?
ETL, an acronym for Extract, Transform, Load, is a vital data integration process. It involves extracting data from diverse sources, transforming it into a usable format, and loading it into a database, data warehouse or data lake. This process enables meaningful data analysis, enhancing business intelligence.
What is Apache Kafka?
Apache Kafka is an open-source distributed event streaming platform used to handle real-time data feeds. It is designed to ingest high volumes of data and to support real-time processing and analysis of data streams, which makes it a popular choice for data integration, real-time analytics, and messaging at scale. Kafka follows a publish-subscribe model in which producers publish data to topics and consumers subscribe to those topics to receive the data. It is highly scalable and fault-tolerant, and it provides features such as data retention, replication, and partitioning to ensure data reliability and availability.
What data can you extract from Kafka?
Kafka's API gives access to various types of data, including:
1. Event data: Kafka is primarily used for streaming event data, such as user actions, sensor readings, and log data.
2. Metadata: Kafka provides metadata about the topics, partitions, and brokers in a cluster.
3. Consumer offsets: Kafka tracks the offset of each message consumed by a consumer, allowing for reliable message delivery.
4. Producer metrics: Kafka provides metrics on the performance of producers, such as message send rate and error rate.
5. Consumer metrics: Kafka provides metrics on the performance of consumers, such as message consumption rate and lag.
6. Log data: Kafka stores log data for a configurable amount of time, allowing for historical analysis and debugging.
7. Administrative data: Kafka provides APIs for managing topics, partitions, and consumer groups.
Overall, Kafka's API gives access to a wide range of data related to event streaming, metadata, performance metrics, and administrative tasks.
What is ELT?
ELT, standing for Extract, Load, Transform, is a modern take on the traditional ETL data integration process. In ELT, data is first extracted from various sources, loaded directly into a data warehouse, and then transformed. This approach enhances data processing speed, analytical flexibility and autonomy.
Difference between ETL and ELT?
ETL and ELT are critical data integration strategies with key differences. ETL (Extract, Transform, Load) transforms data before loading, ideal for structured data. In contrast, ELT (Extract, Load, Transform) loads data before transformation, perfect for processing large, diverse data sets in modern data warehouses. ELT is becoming the new standard as it offers a lot more flexibility and autonomy to data analysts.