Attention: This version of Event Streams has reached End of Support. For more information about supported versions, see the support matrix.

Configuring

Enabling persistent storage

If you want your data to be preserved in the event of a restart, enable persistent storage for Kafka, ZooKeeper, and schemas in your Event Streams installation.

To enable persistent storage for Kafka:

  1. Go to the Kafka persistent storage settings section.
  2. Select the Enable persistent storage for Apache Kafka check box.
  3. Optional: To have the Persistent Volumes created dynamically, select the Use dynamic provisioning for Apache Kafka check box and provide a storage class name.

To enable persistent storage for ZooKeeper:

  1. Go to the ZooKeeper settings section.
  2. Select the Enable persistent storage for ZooKeeper servers check box.
  3. Optional: To have the Persistent Volumes created dynamically, select the Use dynamic provisioning for ZooKeeper servers check box and provide a storage class name.

To enable persistent storage for schemas:

  1. Go to the Schema Registry settings section.
  2. Select the Enable persistent storage for Schema Registry API servers check box.
  3. Optional: To have the Persistent Volumes created dynamically, select the Use dynamic provisioning for Schema Registry API servers check box and provide a storage class name.

Important: If membership of a specific group is required to access the file system used for persistent volumes, ensure you specify the GID of the group that owns the file system in the File system group ID field.
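
If you prefer to install with the Helm CLI rather than the catalog UI, the same storage settings can be passed as value overrides. The following is a minimal sketch only: the release name, namespace, and parameter keys (persistence.enabled and so on) are assumptions made for illustration, not the chart's confirmed values, so verify the actual keys for your chart version (for example, with helm inspect values) before using them.

    # Hypothetical Helm overrides mirroring the persistence check boxes above;
    # verify the real keys with: helm inspect values <chart>
    helm install <chart> --name my-event-streams --namespace event-streams \
      --set persistence.enabled=true \
      --set persistence.useDynamicProvisioning=true \
      --set persistence.storageClassName=<storage-class>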

Enabling encryption between pods

To enable TLS encryption for communication between Event Streams pods, set the Pod to pod encryption field of the Global install settings section to Enabled. By default, encryption between pods is disabled.

Specifying a ConfigMap for Kafka configuration

If you have a ConfigMap for Kafka configuration settings, you can provide it to your Event Streams installation to use. Enter the name in the Cluster configuration ConfigMap field of the Kafka broker settings section.

Important: The ConfigMap must be in the same namespace where you intend to install the Event Streams release.
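
For reference, you can create such a ConfigMap with kubectl before installing. This is a minimal sketch under the assumption that Kafka server.properties settings are supplied as literal key-value pairs; the ConfigMap name and the property values shown are placeholders rather than recommendations.

    # Create a ConfigMap of Kafka server.properties overrides in the
    # namespace that the release will be installed into (assumed key format)
    kubectl create configmap my-kafka-config \
      --namespace event-streams \
      --from-literal=log.retention.hours=48 \
      --from-literal=log.segment.bytes=536870912

You would then enter my-kafka-config in the Cluster configuration ConfigMap field.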

Setting geo-replication nodes

When installing Event Streams as an instance intended for geo-replication, set the number of geo-replication worker nodes required in the Geo-replicator workers field of the Geo-replication settings section.

Note: If you want to set up a cluster as a destination for geo-replication, set a minimum of 2 geo-replication worker nodes for high availability.

Consider how many geo-replication worker nodes you need to run on a destination cluster. You can also set up destination clusters later, and configure the number of geo-replication worker nodes for an existing installation.

Note: Geo-replication is only available in the paid-for version of Event Streams (not available in the Community Edition).

Configuring external access

By default, external Kafka client applications connect directly to the IBM Cloud Private master node, with no configuration required: simply leave the External hostname/IP address field of the External access settings section blank.

If you want clients to connect through a different route such as a load balancer, use the field to specify the host name or IP address of the endpoint.

Also ensure you configure security for your cluster by setting certificate details in the Secure connection settings section. By default, a self-signed certificate is created during installation, and the Private key, TLS certificate, and CA certificate fields can be left blank. If you want to use an existing certificate, select provided under Certificate type, and supply the key and certificate values as base64-encoded strings. Alternatively, you can generate your own certificates.

After installation, set up external access by checking the port number to use for external connections and ensuring the necessary certificates are configured within your client environment.
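
After the endpoint is configured, one way to confirm which certificate it presents is to open a TLS connection to it with openssl. The host name and port below are placeholders for your own external access values:

    # Inspect the certificate presented by the external endpoint
    # (replace the host and port with your configured values)
    openssl s_client -connect <external-host>:<port> -servername <external-host> </dev/null | openssl x509 -noout -subject -issuer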

Configuring external monitoring tools

You can use third-party monitoring tools to monitor the deployed Event Streams Kafka cluster by connecting to the JMX port on the Kafka brokers and reading Kafka metrics. To set this up, you need to:

  • Have a third-party monitoring tool set up to be used within your IBM Cloud Private cluster.
  • Enable access to the broker JMX port by selecting the Enable secure JMX connections check box in the Kafka broker settings section.
  • Provide any configuration settings required by your monitoring tool to be applied to Event Streams. For example, Datadog requires you to deploy an agent on your IBM Cloud Private system, and that agent needs specific configuration settings to work with Event Streams.
  • Configure your applications to connect to a secure JMX port.
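
As an illustration of the last point, a generic JMX client such as jconsole can be pointed at a broker's JMX port once secure connections are enabled. This is a sketch only: the broker address, port, and truststore path are placeholders, and the exact credentials and certificates required depend on how the secure JMX port is configured in your cluster.

    # Hypothetical jconsole invocation against a secured JMX endpoint;
    # the truststore must contain the certificate that the JMX port uses
    jconsole -J-Djavax.net.ssl.trustStore=/path/to/truststore.jks \
      -J-Djavax.net.ssl.trustStorePassword=<password> \
      service:jmx:rmi:///jndi/rmi://<broker-address>:<jmx-port>/jmxrmi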

Configuration reference

Configure your Event Streams installation by setting the following parameters as needed.

Global install settings

The following table describes the parameters for setting global installation options.

| Field | Description | Default |
| --- | --- | --- |
| Docker image registry | Docker images are fetched from this registry. The format is <cluster_name>:<port>/<namespace>. | ibmcom |
| Image pull secret | If using a registry that requires authentication, the name of the secret containing credentials. | None |
| Image pull policy | Controls when Docker images are fetched from the registry. | IfNotPresent |
| File system group ID | Specify the ID of the group that owns the file system intended to be used for persistent volumes. Volumes that support ownership management must be owned and writable by this group ID. | None |
| Architecture | The worker node architecture on which to deploy Event Streams. | amd64 |
| Pod to pod encryption | Select whether you want to enable TLS encryption for communication between pods. | Disabled |
| Kubernetes internal DNS domain name | If you have changed the default DNS domain name from cluster.local in your Kubernetes installation, then this field must be set to the same value. You cannot change this value after installation. | cluster.local |

Insights - help us improve our product

The following table describes the options for product improvement analytics.

| Field | Description | Default |
| --- | --- | --- |
| Share my product usage data | Select to enable transmission of product usage data to IBM for business reporting and to help IBM understand how the product is used. | Not selected (false) |

Note: The data gathered helps IBM understand how Event Streams is used, and can help build knowledge about typical deployment scenarios and common user preferences. The aim is to improve the overall user experience, and the data could influence decisions about future enhancements. For example, information about the configuration options used the most often could help IBM provide better default values for making the installation process easier. The data is only used by IBM and is not shared outside of IBM.
If you enable analytics but want to opt out later, or if you want more information, contact us.

Kafka broker settings

The following table describes the options for configuring Kafka brokers.

| Field | Description | Default |
| --- | --- | --- |
| CPU request for Kafka brokers | The minimum CPU required for each Kafka broker. Specify integers, fractions (for example, 0.5), or millicore values (for example, 100m, where 100m is equivalent to 0.1 core). | 1000m |
| CPU limit for Kafka brokers | The maximum CPU allocated to each Kafka broker when the broker is heavily loaded. Specify integers, fractions (for example, 0.5), or millicore values (for example, 100m, where 100m is equivalent to 0.1 core). | 1000m |
| Memory request for Kafka brokers | The minimum amount of memory required for each Kafka broker, in bytes. Specify integers with one of these suffixes: E, P, T, G, M, K, or their power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. | 2Gi |
| Memory limit for Kafka brokers | The maximum amount of memory, in bytes, allocated to each Kafka broker when the broker is heavily loaded. Specify integers with one of these suffixes: E, P, T, G, M, K, or their power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. | 2Gi |
| Kafka brokers | Number of brokers in the Kafka cluster. | 3 |
| Cluster configuration ConfigMap | Provide the name of a ConfigMap containing Kafka configuration to apply changes to Kafka's server.properties. See how to create a ConfigMap for your installation. | None |
| Enable secure JMX connections | Select to make each Kafka broker's JMX port accessible to secure connections from applications running inside the IBM Cloud Private cluster. When access is enabled, you can configure your applications to connect to a secure JMX port and read Kafka metrics. Also see External monitoring settings for application-specific configuration requirements. | Not selected (false) |

Kafka persistent storage settings

The following table describes the options for configuring persistent storage.

| Field | Description | Default |
| --- | --- | --- |
| Enable persistent storage for Apache Kafka | Set whether to store Apache Kafka data on a persistent volume. Enabling storage ensures the data is preserved if the pod is stopped. | Not selected (false) |
| Use dynamic provisioning for Apache Kafka | Set whether to use a Storage Class when provisioning Persistent Volumes for Apache Kafka. Selecting this option dynamically creates Persistent Volume Claims for the Kafka brokers. | Not selected (false) |
| Name | Prefix for the name of the Persistent Volume Claims used for the Apache Kafka brokers. | datadir |
| Storage class name | Storage Class to use for Kafka brokers if dynamically provisioning Persistent Volume Claims. | None |
| Size | Size of the Persistent Volume Claims created for the Kafka brokers. | 4Gi |

ZooKeeper settings

The following table describes the options for configuring ZooKeeper.

| Field | Description | Default |
| --- | --- | --- |
| CPU request for ZooKeeper servers | The minimum CPU required for each ZooKeeper server. Specify integers, fractions (for example, 0.5), or millicore values (for example, 100m, where 100m is equivalent to 0.1 core). | 100m |
| CPU limit for ZooKeeper servers | The maximum CPU allocated to each ZooKeeper server when the server is heavily loaded. Specify integers, fractions (for example, 0.5), or millicore values (for example, 100m, where 100m is equivalent to 0.1 core). | 100m |
| Enable persistent storage for ZooKeeper servers | Set whether to store Apache ZooKeeper data on a persistent volume. Enabling storage ensures the data is preserved if the pod is stopped. | Not selected (false) |
| Use dynamic provisioning for ZooKeeper servers | Set whether to use a Storage Class when provisioning Persistent Volumes for Apache ZooKeeper. Selecting this option dynamically creates Persistent Volume Claims for the ZooKeeper servers. | Not selected (false) |
| Name | Prefix for the name of the Persistent Volume Claims used for Apache ZooKeeper. | datadir |
| Storage class name | Storage Class to use for Apache ZooKeeper if dynamically provisioning Persistent Volume Claims. | None |
| Size | Size of the Persistent Volume Claims created for Apache ZooKeeper. | 2Gi |

External access settings

The following table describes the options for configuring external access to Kafka.

| Field | Description | Default |
| --- | --- | --- |
| External hostname/IP address | The external hostname or IP address to be used by external clients. Leave blank to default to the IP address of the cluster master node. | None |

Secure connection settings

The following table describes the options for configuring secure connections.

| Field | Description | Default |
| --- | --- | --- |
| Certificate type | Select whether you want to have a self-signed certificate generated during installation, or if you will provide your own certificate details. | selfsigned |
| Private key | If you set Certificate type to provided, this is the base64-encoded TLS key or private key. | None |
| TLS certificate | If you set Certificate type to provided, this is the base64-encoded TLS certificate or public key certificate. | None |
| CA certificate | If you set Certificate type to provided, this is the base64-encoded TLS cacert or Certificate Authority root certificate. | None |

Message indexing settings

The following table describes the options for configuring message indexing.

| Field | Description | Default |
| --- | --- | --- |
| Enable message indexing | Set whether to enable message indexing to enhance browsing the messages on topics. | Selected (true) |
| CPU request for Elastic Search nodes | The minimum CPU required for each Elastic Search node. Specify integers, fractions (for example, 0.5), or millicore values (for example, 100m, where 100m is equivalent to 0.1 core). | 500m |
| CPU limit for Elastic Search nodes | The maximum CPU allocated to each Elastic Search node. Specify integers, fractions (for example, 0.5), or millicore values (for example, 100m, where 100m is equivalent to 0.1 core). | 1000m |
| Memory request for Elastic Search nodes | The minimum amount of memory required for each Elastic Search node, in bytes. Specify integers with one of these suffixes: E, P, T, G, M, K, or their power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. | 2Gi |
| Memory limit for Elastic Search nodes | The maximum amount of memory allocated to each Elastic Search node, in bytes. Specify integers with one of these suffixes: E, P, T, G, M, K, or their power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. | 4Gi |

Geo-replication settings

The following table describes the options for configuring geo-replicating topics between clusters.

| Field | Description | Default |
| --- | --- | --- |
| Geo-replicator workers | Number of workers to support geo-replication. | 0 |

Note: Geo-replication is only available in the paid-for version of Event Streams (not available in the Community Edition).

Schema Registry settings

The following table describes the options for configuring storage for schemas.

| Field | Description | Default |
| --- | --- | --- |
| Enable persistent storage for Schema Registry API servers | Set whether to store Schema Registry data on a persistent volume. Enabling storage ensures the schema data is preserved if the pod is stopped. | Not selected (false) |
| Use dynamic provisioning for Schema Registry API servers | Set whether to use a Storage Class when provisioning Persistent Volumes for schemas. Selecting this option dynamically creates Persistent Volume Claims for schemas. | Not selected (false) |
| Name | Prefix for the name of the Persistent Volume Claims used for schemas. | datadir |
| Storage class name | Storage Class to use for schemas if dynamically provisioning Persistent Volume Claims. | None |
| Size | Size of the Persistent Volume Claims created for schemas. | 100Mi |

External monitoring

The following table describes the options for configuring external monitoring tools.

| Field | Description | Default |
| --- | --- | --- |
| Datadog - Autodiscovery annotation check templates for Kafka brokers | YAML object that contains the Datadog Autodiscovery annotations for configuring the Kafka JMX checks. The Datadog prefix and container identifier are applied automatically to the annotation, so use only the template name as the object's keys (for example, check_names). For more information about setting up monitoring with Datadog, see the Datadog tutorial. | None |

Generating your own certificates

You can create your own certificates for configuring external access. When prompted, answer all questions with the appropriate information.

  1. Create the certificate to use for the Certificate Authority (CA):
    openssl req -newkey rsa:2048 -nodes -keyout ca.key -x509 -days 365 -out ca.pem
  2. Generate an RSA 2048-bit private key:
    openssl genrsa -out es.key 2048
    Other key lengths and algorithms are also supported. The following cipher suites are supported, using TLS 1.2 and later only:
    • TLS_RSA_WITH_AES_128_GCM_SHA256
    • TLS_RSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

    Note: The string “TLS” is interchangeable with “SSL” and vice versa. For example, where TLS_RSA_WITH_AES_128_CBC_SHA is specified, SSL_RSA_WITH_AES_128_CBC_SHA also applies. For more information about each cipher suite, go to the Internet Assigned Numbers Authority (IANA) site, and search for the selected cipher suite ID.

  3. Create a certificate signing request for the key generated in the previous step:
    openssl req -new -key es.key -out es.csr
  4. Sign the request with the CA certificate created in step 1:
    openssl x509 -req -in es.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out es.pem
  5. Encode each generated file to a base64 string. You can use command-line tools such as base64; for example, to encode the file created in step 1:
    cat ca.pem | base64 > ca.b64

Completing these steps creates the following files, which, after being encoded to base64 strings, can be used to configure your installation:

  • ca.pem: CA public certificate
  • es.pem: Release public certificate
  • es.key: Release private key
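
If your client applications are Java-based, they typically need the CA certificate in a truststore rather than as a PEM file. As a minimal sketch (the alias, truststore name, and password are placeholders):

    # Import the CA certificate from step 1 into a JKS truststore for
    # Java Kafka clients (alias, file name, and password are placeholders)
    keytool -importcert -alias es-ca -file ca.pem \
      -keystore truststore.jks -storepass <password> -noprompt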