Upgrade your Event Streams installation as follows. The Event Streams operator handles the upgrade of your Event Streams instance.
Upgrade paths
You can upgrade Event Streams to the latest 11.8.x version directly from any earlier 11.8.x or any 11.7.x version by using the latest 3.8.x operator. The upgrade procedure depends on whether you are upgrading to a major, minor, or patch level version, and what your catalog source is.
If you are upgrading from Event Streams version 11.6.x or earlier, you must first upgrade your installation to 11.7.x and then follow these instructions to upgrade from 11.7.x to 11.8.x.
- On OpenShift, you can upgrade to the latest version by using operator channel v3.8. Review the general upgrade prerequisites before following the instructions to upgrade on OpenShift.
Note: If your operator upgrades are set to automatic, patch level upgrades are completed automatically. This means that the Event Streams operator is upgraded to the latest 3.8.x version when it is available in the catalog, and your Event Streams instance is then also automatically upgraded, unless you set a schedule for the upgrade by pausing the reconciliation.
- On other Kubernetes platforms, you must update the Helm repository for any version change (major, minor, or patch), and then upgrade by using the Helm chart. Review the general upgrade prerequisites before following the instructions to upgrade on other Kubernetes platforms.
Prerequisites
- The images for Event Streams release 11.8.x are available in the IBM Cloud Container Registry. Ensure you redirect your catalog source to use `icr.io/cpopen` as described in Implementing ImageContentSourcePolicy to redirect to the IBM Container Registry.
- Ensure that you have installed a supported container platform and system. For supported container platform versions and systems, see the support matrix.
- To upgrade successfully, your Event Streams instance must include a node pool with the `controller` role and persistent storage enabled. If you upgrade an Event Streams instance that has a single ZooKeeper node with ephemeral storage, all messages and topics will be lost, and the instance will move to an error state. To avoid this issue, when you upgrade an Event Streams instance to use KRaft, ensure that your KRaft configuration includes multiple controller nodes (recommended for quorum) configured with persistent storage.

  Note: From Event Streams version 11.8.0 and later, ZooKeeper is no longer supported, and controller node pools are required. During the upgrade, if your Event Streams instance does not have a defined controller node pool, the upgrade is blocked. For more information about the migration to KRaft, including important prerequisites, see the migration overview.
  For example:

  ```yaml
  apiVersion: eventstreams.ibm.com/v1beta2
  kind: EventStreams
  metadata:
    name: example-pre-upgrade
    namespace: myproject
  spec:
    # ...
    strimziOverrides:
      # ...
      kafka:
        # ...
      nodePools:
        - name: kafka
          replicas: 3
          storage:
            type: persistent-claim
            # ...
          roles:
            - broker
        - name: controller
          replicas: 3
          storage:
            type: persistent-claim
            # ...
          roles:
            - controller
  ```
- If you installed the Event Streams operator to manage instances of Event Streams in any namespace (one per namespace), you might need to control when each of these instances is upgraded to the latest version. You can control the updates by pausing the reconciliation of the instance configuration as described in the following sections.
- If you are running Event Streams as part of IBM Cloud Pak for Integration, ensure you meet the following requirements:
  - Follow the upgrade steps for IBM Cloud Pak for Integration before upgrading Event Streams.
  - If you are planning to configure Event Streams with Keycloak, ensure you have IBM Cloud Pak for Integration 2023.4.1 (operator version 7.2.0) or later installed, including the required dependencies.
- Ensure all applications connecting to your instance of Event Streams that use the schema registry are using Apicurio client libraries version 2.5.0 or later before migrating.

Note: There is no downtime during the Event Streams upgrade. The Kafka pods are rolled one at a time, so a Kafka instance will always be present to serve traffic. However, if the number of brokers you have matches the `min.insync.replicas` value set for any of your topics, then that topic will be unavailable to write to while the Kafka pods are rolling.
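As a hedged illustration of this note (the broker count and `min.insync.replicas` values below are assumptions, not values read from your cluster), a rolling restart takes one broker down at a time, so writes block whenever the remaining brokers fall below the topic's `min.insync.replicas`:

```shell
# Illustrative only: during a rolling restart one broker is down at a time,
# so a topic rejects writes if (brokers - 1) < min.insync.replicas.
brokers=3
min_isr=3   # assumed topic setting; check your own topic configuration
if [ $((brokers - 1)) -lt "$min_isr" ]; then
  echo "writes to this topic will block while a broker pod is rolling"
else
  echo "writes remain available during the roll"
fi
```

Setting `min.insync.replicas` to at most one less than the broker count avoids this write unavailability during rolls.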
Migration to KRaft overview
This section provides an overview of the migration from ZooKeeper to Kafka Raft metadata (KRaft).
From Event Streams version 11.8.x and later, all Kafka clusters in Event Streams must run in KRaft mode. ZooKeeper is no longer supported, removing the dependency on ZooKeeper for metadata management.
To migrate your Kafka cluster to KRaft, a controller node pool is required. The controller node pool is added during the upgrade of your instance. The upgrade process is blocked until you run a patch command that copies configuration settings from the existing ZooKeeper specification to a new node pool with a controller role. After the controller node is added, the Event Streams operator automatically migrates the cluster to KRaft.
Important: KRaft migration is irreversible. After migration, you cannot roll back to the ZooKeeper-based cluster.
Prerequisites for KRaft migration
For a successful migration to KRaft during the upgrade to Event Streams 11.8.x, ensure the following:
- You are running Event Streams version 11.7.0 or later, which includes a node pool with the `broker` role.
- Your cluster has sufficient CPU, memory, and storage to support the temporary increase in workload during migration.
KRaft controllers run in parallel with existing ZooKeeper nodes, resulting in a temporary increase in resource usage. Ensure that your cluster can temporarily support approximately double the current CPU, memory, and storage used by the ZooKeeper nodes.
For example, in a cluster with 3 Kafka brokers and 3 ZooKeeper nodes, you might require enough capacity to run 3 controller nodes with similar CPU and memory settings as the ZooKeeper nodes.
Note: The migration process triggers multiple rolling updates to Kafka brokers and controllers. This can increase the overall migration time and might temporarily reduce cluster capacity.
After the migration to KRaft is complete and ZooKeeper is removed, resource usage will return to the original state, and any temporary increase in resource limits set before the upgrade can be reverted to their previous values.
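As a rough sketch of that sizing (all per-node figures below are assumptions, not product sizing guidance), the temporary overhead is approximately one controller running in parallel for each ZooKeeper node:

```shell
# Assumed per-node figures; substitute the requests/limits from your own
# ZooKeeper spec. Controllers run alongside ZooKeeper during migration.
zk_nodes=3
zk_cpu_m=1000   # millicores per ZooKeeper node (assumed)
zk_mem_mi=1024  # MiB per ZooKeeper node (assumed)
extra_cpu_m=$((zk_nodes * zk_cpu_m))
extra_mem_mi=$((zk_nodes * zk_mem_mi))
echo "temporary extra capacity needed: ${extra_cpu_m}m CPU, ${extra_mem_mi}Mi memory"
```

This extra capacity is only needed for the duration of the migration, and can be reclaimed once ZooKeeper is removed.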
Scheduling the upgrade of an instance
In 11.1.x and later, the Event Streams operator handles the upgrade of your Event Streams instance automatically after the operator is upgraded. No additional step is required to change the instance (product) version.
If your operator manages more than one instance of Event Streams, you can control when each instance is upgraded: pause the reconciliation of the configuration settings for each instance, run the operator upgrade, and then unpause the reconciliation when you are ready to upgrade a selected instance.
Pausing reconciliation by using the CLI
- Log in to your Kubernetes cluster as a cluster administrator by setting your `kubectl` context.
- To apply the annotation first to the `EventStreams` and then to the `Kafka` custom resource, run the following command, where `<type>` is either `EventStreams` or `Kafka`:

  ```shell
  kubectl annotate <type> <instance-name> -n <instance-namespace> eventstreams.ibm.com/pause-reconciliation='true'
  ```
- Follow the steps to upgrade on OpenShift.
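For instance, a small loop can generate the two annotate commands in the required order (the instance name and namespace below are placeholders; review the printed commands before running them):

```shell
# Prints the two annotate commands in order: EventStreams first, then Kafka.
# Pipe the output to a shell only after checking the generated commands.
instance=my-es        # placeholder instance name
namespace=myproject   # placeholder namespace
for type in EventStreams Kafka; do
  echo "kubectl annotate $type $instance -n $namespace eventstreams.ibm.com/pause-reconciliation='true'"
done
```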
Unpausing reconciliation by using the CLI
To unpause the reconciliation and continue with the upgrade of an Event Streams instance, run the following command to first remove the annotations from the `Kafka` custom resource, and then from the `EventStreams` custom resource, where `<type>` is either `Kafka` or `EventStreams`:

```shell
kubectl annotate <type> <instance-name> -n <instance-namespace> eventstreams.ibm.com/pause-reconciliation-
```
When the annotations are removed, the configuration of your instance is updated, and the upgrade to the latest version of Event Streams completes.
Pausing reconciliation by using the OpenShift web console
- Log in to the OpenShift Container Platform web console using your login credentials.
- Expand Operators in the navigation on the left, and click Installed Operators.
- From the Project list, select the namespace (project) the instance is installed in.
- Locate the operator that manages your Event Streams instance in the namespace. It is called Event Streams in the Name column. Click the Event Streams link in the row.
- Select the instance you want to pause and click the YAML tab.
- In the YAML for the custom resource, add `eventstreams.ibm.com/pause-reconciliation: 'true'` to the `metadata.annotations` field as follows:

  ```yaml
  apiVersion: eventstreams.ibm.com/v1beta2
  kind: EventStreams
  metadata:
    name: <instance-name>
    namespace: <instance-namespace>
    annotations:
      eventstreams.ibm.com/pause-reconciliation: 'true'
  spec:
    # ...
  ```
- This annotation also needs to be applied to the corresponding `Kafka` custom resource. Expand Home in the navigation on the left, click API Explorer, and type `Kafka` in the Filter by kind... field. Select Kafka.
- From the Project list, select the namespace (project) the instance is installed in and click the Instances tab.
- Select the instance with the name `<instance-name>` (the same as the Event Streams instance).
- In the YAML for the custom resource, add `eventstreams.ibm.com/pause-reconciliation: 'true'` to the `metadata.annotations` field as follows:

  ```yaml
  apiVersion: eventstreams.ibm.com/v1beta2
  kind: Kafka
  metadata:
    name: <instance-name>
    namespace: <instance-namespace>
    annotations:
      eventstreams.ibm.com/pause-reconciliation: 'true'
  ```
- Follow the steps to upgrade on OpenShift.
Unpausing reconciliation by using the OpenShift web console
To unpause the reconciliation and continue with the upgrade of an Event Streams instance, first remove the annotations from the `Kafka` custom resource, and then from the `EventStreams` custom resource. When the annotations are removed, the configuration of your instance is updated, and the upgrade to the latest version of Event Streams completes.
Upgrading on the OpenShift Container Platform
Upgrade your Event Streams instance running on the OpenShift Container Platform by using the CLI or web console as follows.
Planning your upgrade
Complete the following steps to plan your upgrade on OpenShift.
- Determine which Operator Lifecycle Manager (OLM) channel is used by your existing Subscription. You can check the channel you are subscribed to in the web console (see Update channel section), or by using the CLI as follows (this is the subscription created during installation):

  - Run the following command to check your subscription details:

    ```shell
    oc get subscription
    ```

  - Check the CHANNEL column for the channel you are subscribed to, for example, v3.7 in the following snippet:

    ```
    NAME               PACKAGE            SOURCE                     CHANNEL
    ibm-eventstreams   ibm-eventstreams   ibm-eventstreams-catalog   v3.7
    ```
- If your existing Subscription does not use the v3.8 channel, your upgrade is a change in a minor version. Complete the following steps to upgrade:
  - Ensure the catalog source for the new version is available.
  - Change your Subscription to the `v3.8` channel by using the CLI or the web console. The channel change will upgrade your operator, and then the operator will upgrade your Event Streams instance automatically.
- If your existing Subscription is already on the v3.8 channel, your upgrade is a change to the patch level (third digit) only. Make the catalog source for your new version available to upgrade to the latest level. If you installed by using the IBM Operator Catalog with the `latest` label, new versions are automatically available. The operator will upgrade your Event Streams instance automatically.
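The decision in the steps above can be sketched as a simple comparison (the channel values below are examples):

```shell
# Compare the subscribed channel with the target channel to decide whether
# this is a patch-level or a minor upgrade. Values are illustrative.
current_channel=v3.7
target_channel=v3.8
if [ "$current_channel" = "$target_channel" ]; then
  echo "patch-level upgrade: ensure the new catalog source is available"
else
  echo "minor upgrade: switch the Subscription channel to $target_channel"
fi
```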
Making new catalog source available
Before you can upgrade to the latest version, the catalog source for the new version must be available on your cluster. Whether you have to take action depends on how you set up the catalog sources for your deployment.
- Latest versions: If your catalog source is the IBM Operator Catalog, latest versions are always available when published, and you do not have to make new catalog sources available.
- Specific versions: If you used the CASE bundle to install the catalog source for a specific previous version, you must download and use a new CASE bundle for the version you want to upgrade to.
  - If you previously used the CASE bundle for an online install, apply the new catalog source to update the `CatalogSource` to the new version.
  - If you used the CASE bundle for an offline install that uses a private registry, follow the instructions in installing offline to remirror images and update the `CatalogSource` for the new version.
- In both cases, wait for the `status.installedCSV` field in the `Subscription` to update. It eventually reflects the latest version available in the new `CatalogSource` image for the currently selected channel in the `Subscription`:
  - In the OpenShift Container Platform web console, the current version of the operator is displayed under Installed Operators.
  - If you are using the CLI, check the status of the `Subscription` custom resource; the `status.installedCSV` field shows the current operator version.
Upgrading Subscription by using the CLI
If you are using the OpenShift command-line interface (CLI), the `oc` command, complete the steps in the following sections to upgrade your Event Streams installation.
- Log in to your Red Hat OpenShift Container Platform as a cluster administrator by using the `oc` CLI (`oc login`).
- Ensure the required Event Streams operator upgrade channel is available:

  ```shell
  oc get packagemanifest ibm-eventstreams -o=jsonpath='{.status.channels[*].name}'
  ```

- Change the subscription to move to the required update channel, where `vX.Y` is the required update channel (for example, `v3.8`):

  ```shell
  oc patch subscription -n <namespace> ibm-eventstreams --patch '{"spec":{"channel":"vX.Y"}}' --type=merge
  ```
- Add a controller node pool by running the patch command. The upgrade will be blocked until a controller node pool is added.
All Event Streams pods that need to be updated as part of the upgrade will be gracefully rolled. During the KRaft migration, Kafka controller nodes and Kafka broker nodes will be rolled multiple times as the migration progresses through different phases.
Upgrading Subscription by using the web console
If you are using the web console, complete the steps in the following sections to upgrade your Event Streams installation.
- Log in to the OpenShift Container Platform web console using your login credentials.
- Expand Operators in the navigation on the left, and click Installed Operators.
- From the Project list, select the namespace (project) the instance is installed in.
- Locate the operator that manages your Event Streams instance in the namespace. It is called Event Streams in the Name column. Click the Event Streams link in the row.
- Click the Subscription tab to display the Subscription details for the Event Streams operator.
- Click the version number link in the Update channel section (for example, v3.7). The Change Subscription update channel dialog is displayed, showing the channels that are available to upgrade to.
- Select v3.8 and click the Save button on the Change Subscription Update Channel dialog.
- Add a controller node pool by running the patch command. The upgrade will be blocked until a controller node pool is added.
All Event Streams pods that need to be updated as part of the upgrade will be gracefully rolled. During the KRaft migration, Kafka controller nodes and Kafka broker nodes will be rolled multiple times as the migration progresses through different phases.
Note: The number of containers in each Kafka broker will reduce from 2 to 1 as the TLS-sidecar container will be removed from each broker during the upgrade process.
Upgrading on other Kubernetes platforms by using Helm
If you are running Event Streams on Kubernetes platforms that support the Red Hat Universal Base Images (UBI) containers, you can upgrade Event Streams by using the Helm chart.
Planning your upgrade
Complete the following steps to plan your upgrade on other Kubernetes platforms.
- Determine the chart version for your existing deployment:

  - Change to the namespace where your Event Streams instance is installed:

    ```shell
    kubectl config set-context --current --namespace=<namespace>
    ```

  - Run the following command to check what version is installed:

    ```shell
    helm list
    ```

  - Check the version installed in the CHART column, for example, `<chart-name>-3.7.0` in the following snippet:

    ```
    NAME               NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                             APP VERSION
    ibm-eventstreams   es          1          2023-11-20 11:49:27.221411789 +0000 UTC   deployed   ibm-eventstreams-operator-3.7.0   3.7.0
    ```
- Check the latest chart version that you can upgrade to:

  - Log in to your Kubernetes cluster as a cluster administrator by setting your `kubectl` context.
  - Add the IBM Helm repository:

    ```shell
    helm repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm
    ```

  - Update the Helm repository:

    ```shell
    helm repo update ibm-helm
    ```

  - Check that the version of the chart you will be upgrading to is the intended version:

    ```shell
    helm show chart ibm-helm/ibm-eventstreams-operator
    ```

    Check the `version:` value in the output, for example: `version: 3.8.0`
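A quick sanity check before proceeding is to compare the installed chart version with the one reported by `helm show chart` (the version values below are examples; `sort -V` is the GNU version-sort flag):

```shell
# Compare installed vs available chart versions using version sort.
installed=3.7.0   # from `helm list` (example value)
available=3.8.0   # from `helm show chart` (example value)
newest=$(printf '%s\n%s\n' "$installed" "$available" | sort -V | tail -n 1)
if [ "$newest" = "$available" ] && [ "$installed" != "$available" ]; then
  echo "upgrade available: $installed -> $available"
else
  echo "already on the latest chart version"
fi
```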
- If the chart version for your existing deployment is earlier than 3.7.x, you must first upgrade your installation to 11.7.x and then follow these instructions to upgrade to chart version 3.8.x.
- If your existing installation is in an offline environment, you must carry out the steps in the offline installation instructions to download the CASE bundle and mirror the images for the new version you want to upgrade to, before running any `helm` commands.
- Complete the steps in Helm upgrade to update your Custom Resource Definitions (CRDs) and operator charts to the latest version. The operator will then upgrade your Event Streams instance automatically.
Upgrading by using Helm
You can upgrade your Event Streams on other Kubernetes platforms by using Helm.
To upgrade Event Streams to the latest version, run the following command:

```shell
helm upgrade \
  <release-name> ibm-helm/ibm-eventstreams-operator \
  -n <namespace> \
  --set watchAnyNamespace=<true/false> \
  --set previousVersion=<previous-version>
```

Where:

- `<release-name>` is the name you provide to identify your operator.
- `<namespace>` is the name of the namespace where you want to install the operator.
- `watchAnyNamespace=<true/false>` determines whether the operator manages instances of Event Streams in any namespace or only a single namespace (default is `false` if not specified). For more information, see choosing operator installation mode.
- `<previous-version>` is the version of the Helm chart being upgraded from. For example, if your Helm chart version is 3.7.0, set the field as `--set previousVersion=3.7.0`. You can retrieve the version of your existing Helm chart by running the following command:

  ```shell
  helm list --filter <release-name> -n <namespace> -o json | jq '.[0].app_version'
  ```
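As a worked example of deriving the `previousVersion` value without `jq` (the chart name below is illustrative), the version can also be taken from the CHART column of `helm list` with plain shell parameter expansion:

```shell
# Strip the trailing semver from the chart name reported in the CHART column.
chart="ibm-eventstreams-operator-3.7.0"   # example value from `helm list`
previous="${chart##*-}"                   # keep everything after the last hyphen
echo "--set previousVersion=$previous"
```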
Important: The upgrade will be blocked if your instance does not include a controller node pool. Run the patch command to add a controller node pool and proceed with the upgrade.
Post-upgrade tasks
Initiate KRaft migration
When upgrading to Event Streams version 11.8.x and later, the upgrade will be blocked if your instance does not include a Kafka node pool with the `controller` role. To proceed, you must define a controller-only node pool.

It is recommended to configure a dedicated node pool for the `controller` role, separate from any broker node pools.
Run the following command to add a controller node pool by copying configuration properties from the existing ZooKeeper specification. This step initiates the migration to KRaft mode.
```shell
oc patch eventstreams <instance-name> -n <namespace> --type=json -p='[
  {"op": "add", "path": "/spec/strimziOverrides/nodePools/0", "value": {"name": "controller", "roles": ["controller"]}},
  {"op": "copy", "from": "/spec/strimziOverrides/zookeeper/replicas", "path": "/spec/strimziOverrides/nodePools/0/replicas"},
  {"op": "copy", "from": "/spec/strimziOverrides/zookeeper/storage", "path": "/spec/strimziOverrides/nodePools/0/storage"}
]'
```
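Applied to an instance with three ZooKeeper nodes and persistent storage, the patch command results in a node pool entry similar to the following illustrative fragment (the `replicas` and `storage` values shown here are examples; the patch copies them from your own ZooKeeper specification):

```yaml
spec:
  strimziOverrides:
    nodePools:
      - name: controller
        roles:
          - controller
        replicas: 3
        storage:
          type: persistent-claim
```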
Additionally, if resource configurations are defined in the custom resource under the ZooKeeper component, they must be moved to the `nodePools` section for the controller node pool. Run the following command to copy the resource configurations to the controller node pool:

```shell
oc patch eventstreams <instance-name> -n <namespace> --type=json -p='[
  {"op": "copy", "from": "/spec/strimziOverrides/zookeeper/resources", "path": "/spec/strimziOverrides/nodePools/0/resources"}]'
```
Where:

- `<instance-name>` is the name of your Event Streams instance.
- `<namespace>` is the namespace where the instance is installed.
For guidance about setting up Kafka node pools, see Kafka node pool configuration.
Migration phases
During KRaft migration, the Event Streams operator updates the status of the `EventStreams` custom resource to reflect the current state of the migration. The following `kafkaMetadataState` transitions occur during this process:

| Phase | State value | Description |
| --- | --- | --- |
| Initial | `ZooKeeper` | Initial state where the Kafka cluster uses ZooKeeper for metadata. |
| Migration start | `KRaftMigration` | Metadata transfer from ZooKeeper to KRaft begins. The migration can take some time depending on the number of topics and partitions in the cluster. |
| Dual writing | `KRaftDualWriting` | The cluster writes metadata to both ZooKeeper and KRaft to ensure consistency. |
| Post-migration | `KRaftPostMigration` | KRaft mode is enabled for Kafka brokers. Metadata is still stored in both Kafka and ZooKeeper. |
| ZooKeeper cleanup | `PreKRaft` | ZooKeeper is no longer used and all ZooKeeper-related resources are deleted. |
| Final | `KRaft` | The Kafka cluster operates fully in KRaft mode. |
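The expected sequence of `kafkaMetadataState` values, simulated here so the ordering is explicit (a real migration reports one value at a time in the custom resource status), is:

```shell
# The state sequence from the table above, in order; a migration progresses
# through these values until it reaches the final KRaft state.
states="ZooKeeper KRaftMigration KRaftDualWriting KRaftPostMigration PreKRaft KRaft"
for s in $states; do
  echo "kafkaMetadataState: $s"
done
```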
Clean up ZooKeeper PVCs after KRaft migration
After migrating to KRaft mode, ZooKeeper is no longer used and the associated persistent volume claims (PVCs) are removed automatically. Ensure that no ZooKeeper PVCs remain in the cluster. If any PVCs remain, delete them manually.
- To check for leftover ZooKeeper PVCs, run the following command:

  ```shell
  oc get pvc -n <namespace> | grep zookeeper
  ```

- If any ZooKeeper PVCs are still present, delete them manually by running the following command for each PVC:

  ```shell
  oc delete pvc <zookeeper-pvc-name>
  ```

Where:

- `<namespace>` is the namespace of your Event Streams instance.
- `<zookeeper-pvc-name>` is the name of the ZooKeeper PVC to delete.
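As a hedged sketch (the PVC names and namespace below are placeholders; actual PVC names depend on your instance name), the per-PVC delete commands can be generated and reviewed before running:

```shell
# Generate delete commands for leftover ZooKeeper PVCs. The names below are
# placeholders; use the names returned by the `oc get pvc` check above.
namespace=myproject
pvcs="data-my-es-zookeeper-0 data-my-es-zookeeper-1 data-my-es-zookeeper-2"
for pvc in $pvcs; do
  echo "oc delete pvc $pvc -n $namespace"
done
```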
Enable collection of producer metrics
In Event Streams version 11.0.0 and later, a Kafka Proxy handles gathering metrics from producing applications. The information is displayed in the Producers dashboard. The proxy is optional and is not enabled by default. To enable metrics gathering and have the information displayed in the dashboard, enable the Kafka Proxy.
Enable metrics for monitoring
To display metrics in the monitoring dashboards of the Event Streams UI, ensure that you enable the Monitoring dashboard.
Update SCRAM Kafka User permissions
Event Streams 11.5.0 and later uses `KafkaTopic` custom resources (CRs) and the topic operator for managing topics through the Event Streams UI and CLI. If access to the Event Streams UI and CLI has been configured with SCRAM authentication, see managing access to update the `KafkaUser` permissions accordingly.
Verifying the upgrade
After the upgrade, verify the status of Event Streams by using the CLI or the UI.