Connector catalog

Kafka Connect is a framework for connecting Apache Kafka to external systems. It uses source connectors to move data into Apache Kafka, and sink connectors to move data out of Apache Kafka.

The connector catalog contains a list of connectors that are supported either by IBM or the relevant community:

  • IBM supported: Each IBM supported connector is subject to its own license terms. Support is provided by IBM for customers who have a license and active software subscription and support entitlement for IBM Event Automation or IBM Cloud Pak for Integration. Raise any issues through the official IBM support channel. In accordance with IBM's standard support terms, IBM will investigate, identify, and provide a fix when possible.
  • Community supported: Where not identified as IBM supported, each community supported connector is subject to its own license terms and is not supported by IBM. Raise issues through the community support links provided for each connector.

Support is provided by IBM

Source connector: produces data into Kafka or Event Streams

Oracle (Debezium)

The Oracle (Debezium) source connector monitors Oracle database tables and writes all change events to Kafka topics.

Note: IBM Support is only available for the connector when it is used with the LogMiner adapter.
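The LogMiner adapter is the Debezium Oracle connector's default, but it can also be selected explicitly through the connector's database.connection.adapter property. The fragment below is a sketch of that setting only, not part of the sample configuration shown later in these steps:

    config:
       database.connection.adapter: logminer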

To use the Oracle (Debezium) source connector, complete the following steps:

  1. Ensure that an Oracle database is set up for use with the LogMiner adapter.

  2. Create a KafkaConnect custom resource to define your Kafka Connect runtime and include the Oracle (Debezium) source connector by following the instructions in setting up and running connectors:

    When adding the connector to your Kafka Connect runtime, click Get connector to obtain the pre-built connector JAR file, or click Source code if you want to build the connector JAR file yourself.

    To obtain the connector JAR file, go to the connector Maven repository, open the directory for the latest version, and download the file ending with -plugin.tar.gz.

  3. Apply the configured KafkaConnect custom resource to start the Kafka Connect runtime and verify that the connector is available for use.

  4. Create a KafkaConnector custom resource to define your connector configuration:

    Specify the class name and configuration properties for the connector in the KafkaConnector custom resource as described in the connector documentation.

    See the following sample KafkaConnector custom resource for a basic username and password connection:

    apiVersion: eventstreams.ibm.com/v1beta2
    kind: KafkaConnector
    metadata:
      name: <connector_name>
      labels:
        eventstreams.ibm.com/cluster: <kafka_connect_name>
    spec:
      class: io.debezium.connector.oracle.OracleConnector
      tasksMax: 1
      config:
        database.server.name: <name_of_the_oracle_server_or_cluster>
        database.hostname: <ip_address_or_hostname_of_the_oracle_database_server>
        database.port: <port_number_for_database_server>
        database.dbname: <database_name>
        database.user: <database_user_name>
        database.password: <database_user_password>
        schema.history.internal.kafka.topic: <name_of_the_kafka_topic>
        schema.history.internal.kafka.bootstrap.servers: <bootstrap_server_address>
    
  5. Apply the configured KafkaConnector custom resource to start the connector and verify that it is running.
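Step 2 refers to a KafkaConnect custom resource without showing one. The following is a minimal sketch only; the replica count, bootstrap address placeholder, and the use-connector-resources annotation are assumptions based on typical Event Streams setups, not values taken from this catalog. Refer to the setting up and running connectors instructions for the full resource definition, including how the connector plugin is added to the runtime:

    apiVersion: eventstreams.ibm.com/v1beta2
    kind: KafkaConnect
    metadata:
      name: <kafka_connect_name>
      annotations:
        # Assumed annotation: allows connectors in this runtime to be managed
        # through KafkaConnector custom resources such as the sample in step 4
        eventstreams.ibm.com/use-connector-resources: "true"
    spec:
      replicas: 1
      bootstrapServers: <bootstrap_server_address>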
