Event Streams provides samples to help you get started with deployments, as described in the planning section. Choose one of the samples suited to your requirements to get started:
- Lightweight without security
- Development
- Minimal production
- Production 3 brokers
- Production 6 brokers
- Production 9 brokers
You can modify the samples, save them, and apply custom configuration settings as well. See the following sections for guidance about configuring your instance of Event Streams.
Note: The Production 6 brokers and Production 9 brokers samples are only available on GitHub (download and extract the resources for your Event Streams version, then go to `/cr-examples/eventstreams` to access the samples). You can configure and apply them by using the command line, or by dragging and dropping them onto the OpenShift Container Platform web console and editing them.
Checking configuration settings
This page gives information about many configuration options. To see further information about specific configuration options, or to see what options are available, you can use the `oc explain` command. To see information about a specific field, run the following:

```shell
oc explain eventstreams.<path-of-field>
```

Where `<path-of-field>` is the JSON path of the field of interest.
For example, if you want to see more information about configuring external listeners for Kafka, you can run the following command:

```shell
oc explain eventstreams.spec.strimziOverrides.kafka.listeners.external
```
Enabling persistent storage
If you want your data to be preserved in the event of a restart, configure persistent storage for Kafka, ZooKeeper, and Schema Registry in your Event Streams installation.
Note: Ensure you have sufficient disk space for persistent storage.
These settings are specified in the YAML configuration document that defines an instance of the EventStreams
custom resource and can be applied when defining a new Event Streams instance under the “Event Streams” operator in the OpenShift Container Platform web console.
- To enable persistent storage for Kafka, add the `storage` property under `spec.strimziOverrides.kafka`.
- To enable persistent storage for ZooKeeper, add the `storage` property under `spec.strimziOverrides.zookeeper`.
- To enable persistent storage for Schema Registry, add the `storage` property under `spec.schemaRegistry`.
Complete the configuration by adding additional fields to these storage properties as follows:
- Specify the storage type in `storage.type` (for example, `"ephemeral"` or `"persistent-claim"`).
  Note: When using ephemeral storage, ensure you set retention limits for Kafka topics so that you do not run out of disk space. If message retention is set to long periods and the message volume is high, the storage requirements for the topics could impact the OpenShift nodes that host the Kafka pods, and cause the nodes to run out of allocated disk space, which could impact normal operation.
- Specify the storage size in `storage.size` (for example, `"100Gi"`).
- Optionally, specify the storage class in `storage.class` (for example, `"rook-ceph-block-internal"`).
- Optionally, specify whether the storage claim is deleted when the cluster is deleted in `storage.deleteClaim` (for example, `"true"`).
An example of these configuration options:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
# ...
spec:
  schemaRegistry:
    # ...
    storage:
      type: "persistent-claim"
      size: "10Gi"
      class: "cephfs"
  strimziOverrides:
    kafka:
      # ...
      storage:
        type: "persistent-claim"
        size: "100Gi"
        class: "ceph-block"
    zookeeper:
      # ...
      storage:
        type: "persistent-claim"
        size: "100Gi"
        class: "ceph-block"
# ...
```
If present, existing persistent volumes with the specified storage class are used after installation, or if a dynamic provisioner is configured for the specified storage class, new persistent volumes are created.
Where optional values are not specified:
- If no storage class is specified and a default storage class has been defined in the OpenShift Container Platform settings, the default storage class will be used.
- If no storage class is specified and no default storage class has been defined in the OpenShift Container Platform settings, the deployment will use any persistent volume claims that have at least the set size value.
- If no retention setting is provided, the storage will be retained when the cluster is deleted.
The following example YAML document shows an example `EventStreams` custom resource with dynamically allocated storage provided using CephFS for Kafka and ZooKeeper. To try this deployment, set the required `namespace` and accept the license by changing the `spec.license.accept` value to `true`.
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
metadata:
  name: example-storage
  namespace: myproject
spec:
  license:
    accept: false
  version: 10.0.0
  adminApi: {}
  adminUI: {}
  collector: {}
  restProducer: {}
  schemaRegistry:
    storage:
      type: ephemeral
  strimziOverrides:
    kafka:
      replicas: 1
      config:
        interceptor.class.names: com.ibm.eventstreams.interceptors.metrics.ProducerMetricsInterceptor
        offsets.topic.replication.factor: 1
        transaction.state.log.min.isr: 1
        transaction.state.log.replication.factor: 1
      listeners:
        external:
          type: route
        plain: {}
        tls: {}
      storage:
        type: persistent-claim
        size: 100Gi
        class: rook-ceph-block-internal
        deleteClaim: true
      metrics: {}
    zookeeper:
      replicas: 1
      storage:
        type: persistent-claim
        size: 100Gi
        class: rook-ceph-block-internal
        deleteClaim: true
      metrics: {}
```
Configuring encryption between pods
Pod-to-pod encryption is enabled by default for all Event Streams pods. Unless explicitly overridden in an `EventStreams` custom resource, the configuration option `spec.security.internalTls` is set to `TLSv1.2`. Set this value to `NONE` to disable pod-to-pod encryption.
For example, the following YAML snippet disables encryption between pods:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
metadata:
  name: example-internal-disabled
  namespace: myproject
spec:
  # ...
  security:
    # ...
    internalTls: NONE
# ...
```
Configuring UI security
By default, accessing the Event Streams UI requires an IBM Cloud Platform Common Services Identity and Access Management (IAM) user that has been assigned access to Event Streams (see managing access for details).
The login requirement for the UI is disabled when all Kafka authentication and authorization is disabled. This is demonstrated by the proof-of-concept lightweight without security sample.
Important: This configuration will limit UI capability due to the security requirements of other components. The following features will be disabled:
- Geo-replication
- Metrics panel
- Producers panel
- Connect to schema
- Connect to cluster
- Connect to topic
Applying Kafka broker configuration settings
Kafka supports a number of broker configuration settings, typically provided in a properties file.
When creating an instance of Event Streams, these settings are defined in an `EventStreams` custom resource under the `spec.strimziOverrides.kafka.config` property.
The following example uses Kafka broker settings to configure replication for system topics:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
metadata:
  name: example-broker-config
  namespace: myproject
spec:
  # ...
  strimziOverrides:
    # ...
    kafka:
      # ...
      config:
        offsets.topic.replication.factor: 1
        transaction.state.log.min.isr: 1
        transaction.state.log.replication.factor: 1
```
This custom resource can be created by using the `oc` command or the OpenShift Container Platform web console under the Event Streams operator page.
You can specify all the broker configuration options supported by Kafka except those managed directly by Event Streams. For further information, see the list of supported configuration options.
After deployment, these settings can be modified by updating the EventStreams
custom resource.
Applying Kafka rack awareness
Kafka rack awareness is configured by setting the `rack` property in the `EventStreams` custom resource, using the zone label as the topology key in the `spec.strimziOverrides.kafka.rack` field. This key must match the zone label name applied to the nodes.
Note: Before this is applied, ensure the Kafka cluster role for rack awareness has been applied.
The following example sets the rack `topologyKey` to `failure-domain.beta.kubernetes.io/zone`:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
metadata:
  name: example-broker-config
  namespace: myproject
spec:
  # ...
  strimziOverrides:
    # ...
    kafka:
      # ...
      rack:
        topologyKey: failure-domain.beta.kubernetes.io/zone
# ...
```
Setting geo-replication nodes
You can install geo-replication in a cluster to enable messages to be automatically synchronized between local and remote topics. A cluster can be a geo-replication origin or destination. Origin clusters send messages to a remote system, while destination clusters receive messages from a remote system. A cluster can be both an origin and a destination cluster at the same time.
To enable geo-replication, create an `EventStreamsGeoReplicator` custom resource alongside the `EventStreams` custom resource. This can be defined in a YAML configuration document under the Event Streams operator in the OpenShift Container Platform web console.
When setting up geo-replication, consider the number of geo-replication worker nodes (replicas) to deploy, and configure this in the `spec.replicas` property.
Ensure that the following properties match the name of the Event Streams instance:
- `metadata.name`
- `metadata.labels["eventstreams.ibm.com/cluster"]`
For example, to configure geo-replication with `2` replicas for an Event Streams instance called `sample-three` in the namespace `myproject`, create the following `EventStreamsGeoReplicator` configuration:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreamsGeoReplicator
metadata:
  labels:
    eventstreams.ibm.com/cluster: sample-three
  name: sample-three
  namespace: myproject
spec:
  # ...
  replicas: 2
```
Note: Geo-replication can be deployed or reconfigured at any time. For more information, see Setting up geo-replication.
Configuring access
External access using OpenShift Container Platform routes is automatically configured for the following services if they are included in the Event Streams installation:
- The Event Streams UI
- The Schema Registry
- The Admin API
- The REST Producer
REST services access
The REST services for Event Streams are configured with defaults for the container port, type, TLS version, certificates, and authentication mechanisms. If the Kafka listeners have been configured without authentication requirements then the authentication mechanisms are automatically removed from the REST endpoints.
The schema for REST endpoint configuration is described in the following table, followed by an example of an endpoint configuration for the Admin API. In the example, the potential values for `<component>` in `spec.<component>.endpoints` are:
- `adminApi` for the Admin API
- `restProducer` for the REST Producer
- `schemaRegistry` for the Schema Registry
| Key | Type | Description |
| --- | --- | --- |
| `name` | String | Name to uniquely identify the endpoint among other endpoints in the list for a component. |
| `containerPort` | Integer | A unique port to open on the container that this endpoint will serve requests on. Restricted ranges are 0-1000 and 7000-7999. |
| `type` | String [`internal`, `route`] | Event Streams REST components support `internal` type endpoints and OpenShift Container Platform routes. |
| `tlsVersion` | String [`TLSv1.2`, `NONE`] | Specifies the TLS version, where `NONE` disables HTTPS. |
| `authenticationMechanisms` | List of Strings | List of authentication mechanisms to be supported at this endpoint. By default, all authentication mechanisms (`iam-bearer`, `tls`, `scram-sha-512`) are enabled. Optionally, a subset or even none (`[]`) can be configured. |
| `certOverrides.certificate` | String | The name of the key in the provided `certOverrides.secretName` secret that contains the base64-encoded certificate. |
| `certOverrides.key` | String | The name of the key in the provided `certOverrides.secretName` secret that contains the base64-encoded key. |
| `certOverrides.secretName` | String | The name of the secret in the instance namespace that contains the encoded certificate and key to secure the endpoint with. |
| `host` | String (DNS rules apply) | An optional override for the default host that an OpenShift Container Platform route generates. |
```yaml
# ...
spec:
  # ...
  adminApi:
    # ...
    endpoints:
      - name: example
        containerPort: 9080
        type: route
        tlsVersion: TLSv1.2
        authenticationMechanisms:
          - iam-bearer
          - tls
          - scram-sha-512
        certOverrides:
          certificate: mycert.crt
          key: mykey.key
          secretName: custom-endpoint-cert
        host: example-host.apps.example-domain.com
```
Note: Changing an endpoint in isolation might have adverse effects if Kafka is configured to require authentication and the configured endpoint has no authentication mechanisms specified. In such cases, a warning message might be displayed in the instance status conditions.
The Event Streams REST components also allow the default set of cipher suites to be overridden. Though not a recommended practice, it is possible to enable alternative cipher suites to facilitate connectivity of legacy systems. This capability is provided through the `CIPHER_SUITES` environment variable, as shown in the following example:
```yaml
# ...
spec:
  # ...
  restProducer:
    # ...
    env:
      - name: CIPHER_SUITES
        value: >-
          TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
```
Kafka access
All examples provided for Event Streams include an external listener for Kafka and varying internal listener types by default. The supported external listener is of type `route`. This indicates the use of an OpenShift Container Platform route, and it can have either `tls` or `scram-sha-512` configured as the authentication mechanism.
The following example snippet defines an external listener that exposes the Kafka brokers using an OpenShift Container Platform route with SCRAM-SHA-512 authentication enabled.
```yaml
# ...
spec:
  # ...
  strimziOverrides:
    # ...
    kafka:
      listeners:
        external:
          type: route
          authentication:
            type: scram-sha-512
```
Internal listeners for Kafka can also be configured. In addition to the `external` listener, there are `plain` and `tls` internal listeners. Each of these can be configured to have an authentication mechanism, as shown in the following example.
```yaml
# ...
spec:
  # ...
  strimziOverrides:
    # ...
    kafka:
      listeners:
        plain:
          authentication:
            type: scram-sha-512
        tls:
          authentication:
            type: tls
```
The Kafka listener security protocols are mapped to the internal listener configurations as shown in the following table:
| Security protocol | Listener configuration |
| --- | --- |
| PLAINTEXT | `spec.strimziOverrides.kafka.listeners.plain: {}` |
| SSL (no authentication) | `spec.strimziOverrides.kafka.listeners.tls: {}` |
| SSL (mutual authentication) | `spec.strimziOverrides.kafka.listeners.tls.authentication.type: tls` |
| SASL_PLAINTEXT | `spec.strimziOverrides.kafka.listeners.plain.authentication.type: scram-sha-512` |
| SASL_SSL | `spec.strimziOverrides.kafka.listeners.tls.authentication.type: scram-sha-512` |
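As an illustration, a Kafka client connecting through the SASL_SSL mapping above (a `route` listener with SCRAM-SHA-512 authentication) might use client properties like the following sketch. The bootstrap address, user name, password, and truststore password are placeholders, not values from this document:

```
# Placeholder values: substitute your route bootstrap address and credentials.
bootstrap.servers=<external-route-bootstrap-address>:443
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<kafka-user>" password="<password>";
# The PKCS12 truststore downloaded from the Event Streams UI.
ssl.truststore.location=ca.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=<truststore-password>
```

The same properties map onto the other rows of the table by changing `security.protocol` and the authentication settings accordingly.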
Configuring external monitoring through Prometheus
Metrics provide information about the health and operation of the Event Streams instance.
Metrics can be enabled for Kafka, ZooKeeper, geo-replicator, and Kafka Connect pods.
Note: Kafka metrics can also be exposed externally through JMX by configuring external monitoring tools.
Kafka metrics can be enabled by setting `spec.strimziOverrides.kafka.metrics` to `{}` in the `EventStreams` custom resource. For example:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
# ...
spec:
  strimziOverrides:
    kafka:
      # ...
      metrics: {}
# ...
```
ZooKeeper metrics can be enabled by setting `spec.strimziOverrides.zookeeper.metrics` to `{}` in the `EventStreams` custom resource. For example:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
# ...
spec:
  strimziOverrides:
    zookeeper:
      # ...
      metrics: {}
# ...
```
Geo-replicator metrics can be enabled by setting `spec.metrics` to `{}` in the `KafkaMirrorMaker2` custom resource. For example:
```yaml
apiVersion: eventstreams.ibm.com/v1alpha1
kind: KafkaMirrorMaker2
# ...
spec:
  # ...
  metrics: {}
# ...
```
Note: The Event Streams operator automatically applies a `KafkaMirrorMaker2` custom resource when an `EventStreamsGeoReplicator` custom resource is created. Metrics can then be enabled by editing the generated `KafkaMirrorMaker2` custom resource.
Kafka Connect metrics can be enabled by setting `spec.metrics` to `{}` in the `KafkaConnectS2I` custom resource. For example:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: KafkaConnectS2I
# ...
spec:
  # ...
  metrics: {}
# ...
```
To complement the default Kafka metrics, Event Streams can be configured to publish additional information about the Event Streams instance by setting the `interceptor.class.names` property under `spec.strimziOverrides.kafka.config` to `com.ibm.eventstreams.interceptors.metrics.ProducerMetricsInterceptor`, for example:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
# ...
spec:
  strimziOverrides:
    kafka:
      # ...
      config:
        # ...
        interceptor.class.names: com.ibm.eventstreams.interceptors.metrics.ProducerMetricsInterceptor
# ...
```
Note: For details about viewing metrics information, see the cluster health and topic health sections.
Configuring external monitoring through JMX
You can use third-party monitoring tools to monitor the deployed Event Streams Kafka cluster by collecting Kafka metrics. To set this up, you need to:
- Have a third-party monitoring tool set up to be used within your OpenShift Container Platform cluster.
- Enable access to the broker JMX port by setting `spec.strimziOverrides.kafka.jmxOptions`:

  ```yaml
  apiVersion: eventstreams.ibm.com/v1beta1
  kind: EventStreams
  # ...
  spec:
    # ...
    strimziOverrides:
      # ...
      kafka:
        jmxOptions: {}
  ```

- Include any configuration settings for Event Streams as required by your monitoring tool. For example, Datadog's autodiscovery requires you to annotate Kafka broker pods (`strimziOverrides.kafka.template.statefulset.metadata.annotations`).
- Configure your monitoring applications to consume JMX metrics.
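For illustration, annotations for the broker pods could be added as in the following sketch. The annotation key and value are placeholders for whatever your monitoring tool expects, not values defined by Event Streams:

```yaml
# ...
spec:
  # ...
  strimziOverrides:
    kafka:
      template:
        statefulset:
          metadata:
            annotations:
              # Placeholder annotation: replace with the key/value
              # required by your monitoring tool's autodiscovery.
              monitoring.example.com/check: kafka
```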
Configuring the Kafka Exporter
You can configure the Kafka Exporter to expose additional metrics to Prometheus on top of the default ones. For example, you can obtain the consumer group lag information for each topic.
The Kafka Exporter can be configured using regular expressions to expose metrics for the collection of topics and consumer groups that match the expressions. For example, to enable metrics collection for the topic `orders` and the group `buyers`, configure the `EventStreams` custom resource as follows:
```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
# ...
spec:
  # ...
  strimziOverrides:
    # ...
    kafkaExporter:
      groupRegex: buyers
      topicRegex: orders
      template:
        pod:
          metadata:
            annotations:
              prometheus.io/port: '9404'
              prometheus.io/scheme: https
              prometheus.io/scrape: 'true'
```
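Once Prometheus is scraping the exporter, the consumer group lag can be queried. Assuming the standard Kafka Exporter metric names (an assumption, as the exact names depend on the exporter version), a PromQL query for the example topic and group might look like:

```
sum(kafka_consumergroup_lag{consumergroup="buyers", topic="orders"}) by (consumergroup, topic)
```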
For more information about configuration options, see configuring the Kafka Exporter.
Configuring the JMX Exporter
You can configure the JMX Exporter to expose JMX metrics from Kafka brokers, ZooKeeper nodes, and Kafka Connect nodes to Prometheus.
To enable the collection of all JMX metrics available on the Kafka brokers and ZooKeeper nodes, configure the `EventStreams` custom resource as follows:

```yaml
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreams
# ...
spec:
  # ...
  strimziOverrides:
    kafka:
      metrics: {}
      # ...
    zookeeper:
      metrics: {}
      # ...
```
For more information about configuration options, see the related metrics documentation.
Using your own certificates
Event Streams offers the capability to provide your own CA certificates and private keys instead of using the ones generated by the operator.
Note: You must complete the process of providing your own certificates before installing an instance of Event Streams.
You must provide your own X.509 certificates and keys in PEM format, with the addition of a PKCS12-formatted certificate and the CA password. If you want to use a CA that is not a root CA, you have to include the whole chain in the certificate file, in the following order:
- The cluster or clients CA
- One or more intermediate CAs
- The root CA
All CAs in the chain should be configured as a CA with the X509v3 Basic Constraints extension.
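You can check whether a certificate in the chain carries the CA Basic Constraints by inspecting it with OpenSSL. This is a sketch; the file name follows the `CA.crt` convention used in the instructions that follow:

```shell
# Show the X509v3 Basic Constraints section of a certificate;
# a certificate configured as a CA should report "CA:TRUE".
openssl x509 -in CA.crt -noout -text | grep -A 1 "Basic Constraints"
```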
Note: In the following instructions, the CA public certificate file is denoted `CA.crt` and the CA private key is denoted `CA.key`.
As Event Streams also serves the truststore in PKCS12 format, generate a `.p12` file containing the relevant CA certificates. When generating your PKCS12 truststore, ensure that it does not contain the CA private key. This is important because the `.p12` file will be available to download from the Event Streams UI and distributed to clients.
The following example shows how to use the Java `keytool` utility to generate a PKCS12 truststore that does not contain a private key:

```shell
keytool -import -file <ca.pem> -keystore ca.jks
keytool -importkeystore -srckeystore ca.jks -srcstoretype JKS -deststoretype PKCS12 -destkeystore ca.p12
```
Note: Using OpenSSL PKCS12 commands to generate a truststore without private keys can break the cluster, because the resulting truststore is not compatible with Java runtimes.
One way to test that the truststore is compatible and contains the correct certificates is to run the following Java `keytool` utility command:

```shell
keytool -list -keystore ca.p12 -storepass <keystore password>
```
The cluster and/or clients certificates and keys must be added to secrets in the namespace that the Event Streams instance will be created in. The naming of the secrets and the required labels must follow the conventions detailed in the following command templates.
The following four commands can be used to create and label the secrets for custom certificates and keys. The templates demonstrate providing cluster certificates, but the same commands can be reused by substituting `cluster` with `clients` in each secret name.
For each command, provide the intended name and namespace for the Event Streams instance.
```shell
oc create --namespace <namespace> secret generic <instance-name>-cluster-ca --from-file=ca.key=CA.key
oc label --namespace <namespace> secret <instance-name>-cluster-ca eventstreams.ibm.com/kind=Kafka eventstreams.ibm.com/cluster=<instance-name>
oc create --namespace <namespace> secret generic <instance-name>-cluster-ca-cert --from-file=ca.crt=CA.crt --from-file=ca.p12=CA.p12 --from-literal=ca.password='<CA_PASSWORD>'
oc label --namespace <namespace> secret <instance-name>-cluster-ca-cert eventstreams.ibm.com/kind=Kafka eventstreams.ibm.com/cluster=<instance-name>
```
To make use of the provided secrets, Event Streams will require the following overrides to be added to the custom resource.
```yaml
spec:
  # ...
  strimziOverrides:
    clusterCa:
      generateCertificateAuthority: false
    # And/Or
    clientsCa:
      generateCertificateAuthority: false
```
It is also possible to configure the `renewalDays` (default 30) and `validityDays` (default 365) settings under the `spec.strimziOverrides.clusterCa` and `spec.strimziOverrides.clientsCa` keys. Validity periods are expressed as a number of days after certificate generation.
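For example, a sketch of overriding these periods for the cluster CA (the values shown are illustrative, not recommendations):

```yaml
spec:
  # ...
  strimziOverrides:
    clusterCa:
      generateCertificateAuthority: false
      renewalDays: 60     # start renewal 60 days before expiry
      validityDays: 730   # generated certificates valid for two years
```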