Connector catalog

Kafka Connect is a framework for connecting Kafka to external systems. It uses source connectors to move data into Kafka, and sink connectors to move data out of Kafka.

The connector catalog contains a list of connectors that are supported either by IBM or the community:

  • IBM supported: Support is provided by IBM for customers who have a license for IBM Event Automation or IBM Cloud Pak for Integration. Raise any issues through the official IBM support channel. IBM will investigate, identify, and provide a fix when possible.
  • Community supported: Each community connector is subject to its own set of license terms and not supported by IBM. Raise issues through the community support links provided for each connector.

Oracle (Debezium)

IBM-supported source connector: produces data into Kafka or Event Streams.

The Oracle (Debezium) source connector monitors Oracle database tables and writes all change events to Kafka topics.

Note: IBM Support is only available for the connector when it is used with the LogMiner adapter.

  1. Download (or build from source) the connector plugin archive.

    Go to the connector's Maven repository, open the directory for the latest version, and download the file ending with -plugin.tar.gz.

  2. Add the connector plugin to your Kafka Connect environment.

    To add a connector, ensure you have your connector plugin directory or JAR files in the location specified in the plugin.path property of your Kafka Connect worker configuration (for example, <kafka>/config/connect-distributed.properties). The plugin.path property is a comma-separated list of paths to directories that contain connector plugins.

    For example:

     plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors
    
  3. Restart Kafka Connect to make the new connector available in the environment.

    For more information about how to set up a Kafka Connect environment, add connectors to it, and start the connectors, see how to set up and run connectors.

  4. Ensure that an Oracle database is set up for use with the LogMiner adapter.

  5. Configure the connector properties for Event Streams in the spec.config section of the connector configuration YAML and then start your connector.
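
Steps 1 to 3 can be sketched as a short shell session. The paths and file names below are assumptions; adjust them to your installation:

```shell
# Sketch of steps 1-3 (paths and file names are assumptions).

# 1. Download the plugin archive from the connector's Maven repository and
#    extract it into a plugin directory (shown for reference):
# curl -LO https://repo1.maven.org/maven2/io/debezium/debezium-connector-oracle/<version>/debezium-connector-oracle-<version>-plugin.tar.gz
# tar -xzf debezium-connector-oracle-<version>-plugin.tar.gz -C ./plugins

mkdir -p ./plugins/debezium-connector-oracle

# 2. Point the worker's plugin.path at the directory containing the plugin:
printf 'plugin.path=%s\n' "$(cd ./plugins && pwd)" >> connect-distributed.properties
grep '^plugin.path=' connect-distributed.properties

# 3. Restart the Kafka Connect worker so it discovers the new plugin:
# <kafka>/bin/connect-distributed.sh connect-distributed.properties
```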

The following example provides the configuration required for a basic username and password connection.

apiVersion: eventstreams.ibm.com/v1beta2
kind: KafkaConnector
metadata:
  name: <connector_name>
  labels:
    # The eventstreams.ibm.com/cluster label identifies the KafkaConnect instance
    # in which to create this connector. That KafkaConnect instance
    # must have the eventstreams.ibm.com/use-connector-resources annotation
    # set to true.
    eventstreams.ibm.com/cluster: <kafka_connect_name>
spec:
  class: io.debezium.connector.oracle.OracleConnector
  config:
    database.server.name: <name_of_the_oracle_server_or_cluster>
    database.hostname: <ip_address_or_hostname_of_the_oracle_database_server>
    database.dbname: <database_name>
    database.user: <database_user_name>
    database.password: <database_user_password>
    database.port: <port_number_for_database_server>
    schema.history.internal.kafka.topic: <name_of_the_kafka_topic>
    schema.history.internal.kafka.bootstrap.servers: <bootstrap_server_address>
  tasksMax: 1
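
Before applying the resource (for example with kubectl apply -f), it is worth checking that every placeholder value has been replaced. A minimal sketch, assuming the resource is saved as oracle-connector.yaml (an assumed file name):

```shell
# check_placeholders: verify that no <placeholder> values remain in a YAML
# file before the resource is applied to the cluster.
check_placeholders() {
  ! grep -q '<[a-z_.]*>' "$1"
}

# Example usage (file name and contents are assumptions):
printf 'name: my-oracle-connector\n' > oracle-connector.yaml
if check_placeholders oracle-connector.yaml; then
  echo "ready to apply"
  # kubectl apply -f oracle-connector.yaml
fi
```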