Support is provided by IBM
- Download or build the connector plugin JAR file.
  Go to the connector Maven repository, open the directory for the latest version, and download the file ending with `-plugin.tar.gz`.
- Add the connector plugin to your Kafka Connect environment.
  Ensure that your connector plugin directory or JAR files are in a location listed in the `plugin.path` property of your Kafka Connect worker configuration (for example, `<kafka>/config/connect-distributed.properties`). The `plugin.path` property is a comma-separated list of paths to directories that contain connector plugins. For example:

  plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors
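The download and installation steps above can be sketched as a shell session. This is a hedged sketch, not the official install script: the `2.5.0.Final` version, the Maven Central URL layout, and the `./plugins` target directory are assumptions to adapt to your environment.

```shell
# Hypothetical release; check the Maven repository for the actual latest version.
VERSION=2.5.0.Final
ARCHIVE="debezium-connector-oracle-${VERSION}-plugin.tar.gz"
URL="https://repo1.maven.org/maven2/io/debezium/debezium-connector-oracle/${VERSION}/${ARCHIVE}"

# Target directory: must be one of the entries in plugin.path.
PLUGIN_DIR=./plugins
mkdir -p "${PLUGIN_DIR}"

# Fetch the plugin archive and unpack it into the plugin directory.
curl -fsSLO "${URL}" || echo "download failed; fetch ${ARCHIVE} manually"
if [ -f "${ARCHIVE}" ]; then
  tar -xzf "${ARCHIVE}" -C "${PLUGIN_DIR}"
fi
```

After unpacking, restarting the Kafka Connect worker makes the plugin visible; the worker's REST API (`GET /connector-plugins`) can confirm that `io.debezium.connector.oracle.OracleConnector` is listed.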
- Restart Kafka Connect to make the new connector available in the environment.
  For more information about how to set up a Kafka Connect environment, add connectors to it, and start the connectors, see how to set up and run connectors.
- Ensure that an Oracle database is set up for use with the LogMiner adapter.
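The database-side setup is not detailed here; as a sketch following the Debezium Oracle documentation, the LogMiner adapter requires archive log mode and supplemental logging to be enabled. The connection details (`<sys_password>`, `<host>`, `<port>`, `<database_name>`) are placeholders, and the statements are simply printed if `sqlplus` is not available on the current machine.

```shell
# Prerequisite statements for the LogMiner adapter (a sketch based on the
# Debezium Oracle documentation); the bracketed values are placeholders.
SQL_SETUP='
CONNECT sys/<sys_password>@//<host>:<port>/<database_name> AS SYSDBA
-- LogMiner mines the archived redo logs, so archive log mode must be enabled.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Record the additional column values the connector needs for captured rows.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
'
if command -v sqlplus >/dev/null 2>&1; then
  printf '%s\n' "${SQL_SETUP}" | sqlplus /nolog
else
  # sqlplus is not installed here; run the statements from any Oracle client.
  printf '%s\n' "${SQL_SETUP}"
fi
```

The account configured in `database.user` additionally needs the connector-specific grants listed in the Debezium Oracle documentation.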
- Configure the connector properties for Event Streams in the `spec.config` section of the connector configuration YAML, and then start your connector.
  The following example provides the configuration required for a basic username and password connection.
  apiVersion: eventstreams.ibm.com/v1beta2
  kind: KafkaConnector
  metadata:
    name: <connector_name>
    labels:
      # The eventstreams.ibm.com/cluster label identifies the KafkaConnect instance
      # in which to create this connector. That KafkaConnect instance
      # must have the eventstreams.ibm.com/use-connector-resources annotation
      # set to true.
      eventstreams.ibm.com/cluster: <kafka_connect_name>
  spec:
    class: io.debezium.connector.oracle.OracleConnector
    tasksMax: 1
    config:
      database.server.name: <name_of_the_oracle_server_or_cluster>
      database.hostname: <ip_address_or_hostname_of_the_oracle_database_server>
      database.dbname: <database_name>
      database.user: <database_user_name>
      database.password: <database_user_password>
      database.port: <port_number_for_database_server>
      schema.history.internal.kafka.topic: <name_of_the_kafka_topic>
      schema.history.internal.kafka.bootstrap.servers: <bootstrap_server_address>
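Once the placeholders are filled in, starting the connector amounts to creating the resource in the cluster. A minimal sketch, assuming the YAML is saved as a file named `oracle-connector.yaml` (a hypothetical name) and that `kubectl` is logged in to the cluster and namespace containing the Kafka Connect instance:

```shell
# Hypothetical resource name; match metadata.name from the YAML.
CONNECTOR=my-oracle-connector
if command -v kubectl >/dev/null 2>&1; then
  # Create (or update) the KafkaConnector resource.
  kubectl apply -f oracle-connector.yaml
  # The READY column and the status conditions show whether the connector started.
  kubectl get kafkaconnector "${CONNECTOR}"
else
  echo "kubectl not found; apply the YAML from a machine with cluster access"
fi
```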