Connect catalog

Kafka Connect is a framework for connecting Apache Kafka to external systems. It uses source connectors to move data into Apache Kafka, and sink connectors to move data out of Apache Kafka.
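
In configuration terms, the difference between the two is only which side of Kafka the connector reads from and writes to. The following sketch uses the FileStream example connectors that ship with Apache Kafka; the file paths and topic name are placeholders:

    # Source connector: reads an external system (a file here) and writes to a Kafka topic
    connector.class: org.apache.kafka.connect.file.FileStreamSourceConnector
    file: /tmp/input.txt
    topic: example-topic

    # Sink connector: reads Kafka topics and writes to an external system (a file here)
    connector.class: org.apache.kafka.connect.file.FileStreamSinkConnector
    file: /tmp/output.txt
    topics: example-topic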

The Connect catalog contains a list of connectors, converters, and transformations that are supported either by IBM or by the relevant community:

  • IBM supported: Each IBM-supported connector is subject to its own license terms. Support is provided by IBM for customers who have a license and an active software subscription and support entitlement for IBM Event Automation or IBM Cloud Pak for Integration. Raise any issues through the official IBM support channel. In accordance with IBM's standard support terms, IBM will investigate, identify, and provide a fix when possible.
  • Community supported: Where not identified as IBM supported, each community-supported connector is subject to its own license terms and is not supported by IBM. Raise issues through the community support links provided for each connector.

Didn't find what you were looking for, or have a connector you want added to our catalog?
Let us know by sending an email to eventstreams@uk.ibm.com or by submitting your request on the IBM Ideas portal.

Amazon S3

Support is provided by IBM

Sink connector: consumes data from Kafka or Event Streams

The Amazon S3 sink connector stores Apache Kafka messages in an Amazon Simple Storage Service (Amazon S3) bucket.

To use the Amazon S3 sink connector, complete the following steps:

  1. Create a KafkaConnect custom resource to define your Kafka Connect runtime and include the Amazon S3 sink connector by following the instructions in setting up and running connectors (a sketch of such a resource follows these steps):

    When adding the connector to your Kafka Connect runtime, click Get connector to obtain the pre-built connector JAR file, or click Source code if you want to build the connector JAR file yourself.

    Note: The Releases page contains JAR files for multiple connectors, so ensure that you download the latest Amazon S3 sink connector JAR file.

  2. Apply the configured KafkaConnect custom resource to start the Kafka Connect runtime and verify that the connector is available for use.

  3. Create a KafkaConnector custom resource to define your connector configuration (a sketch follows these steps):

    Specify the class name and configuration properties for the connector in the KafkaConnector custom resource as described in the connector documentation.

  4. Apply the configured KafkaConnector custom resource to start the connector and verify that it is running.
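
As an illustration of steps 1 and 2, the following is a minimal sketch of a KafkaConnect custom resource that uses the Kafka Connect build mechanism to pull in the connector JAR file. The apiVersion, annotation, addresses, and JAR URL below are assumptions or placeholders; take the exact values from setting up and running connectors and from the connector's Releases page:

    apiVersion: eventstreams.ibm.com/v1beta2                   # assumption: check the API version used by your operator
    kind: KafkaConnect
    metadata:
      name: my-connect-cluster
      annotations:
        eventstreams.ibm.com/use-connector-resources: "true"   # manage connectors with KafkaConnector custom resources
    spec:
      replicas: 1
      bootstrapServers: my-kafka-bootstrap:9093                # placeholder: your Kafka or Event Streams bootstrap address
      build:
        output:
          type: docker
          image: my-registry.example.com/my-connect:latest     # placeholder: image built with the connector included
        plugins:
          - name: amazon-s3-sink
            artifacts:
              - type: jar
                url: <URL of the latest Amazon S3 sink connector JAR from the Releases page>
      config:
        group.id: my-connect-cluster
        config.storage.topic: my-connect-configs
        offset.storage.topic: my-connect-offsets
        status.storage.topic: my-connect-status

Applying this resource starts the Kafka Connect runtime (step 2).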

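For steps 3 and 4, a matching KafkaConnector custom resource might look like the following sketch. The connector class name and most configuration properties are placeholders here; use the names documented for the Amazon S3 sink connector:

    apiVersion: eventstreams.ibm.com/v1beta2                   # assumption: check the API version used by your operator
    kind: KafkaConnector
    metadata:
      name: amazon-s3-sink
      labels:
        eventstreams.ibm.com/cluster: my-connect-cluster       # must match the name of the KafkaConnect resource
    spec:
      class: <Amazon S3 sink connector class from the connector documentation>
      tasksMax: 1
      config:
        topics: my-topic                                       # Kafka topics to consume from
        # S3 bucket, region, credentials, and other properties come from the connector documentation

After you apply the resource, the status section of the KafkaConnector resource shows whether the connector and its tasks are running.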