Preparing for multizone clusters

Event Streams supports multiple availability zones for your clusters. Multizone clusters add resilience to your Event Streams installation.

For guidance about handling outages in a multizone setup, see managing a multizone setup.

Zone awareness

Kubernetes uses zone labels to determine which zone each node in the cluster is located in, so that pod replicas can be scheduled across different zones.

Some clusters, such as those provisioned on AWS, are already zone aware. For clusters that are not zone aware, each Kubernetes node needs to be labeled with the zone it is in.

To determine if your cluster is zone aware:

  1. Log in to your Red Hat OpenShift Container Platform as a cluster administrator by using the oc CLI (oc login).
  2. Run the following command as cluster administrator:

    oc get nodes --show-labels

If your Kubernetes cluster is zone aware, one of the following labels is displayed for each node, depending on your OpenShift version:

  • topology.kubernetes.io/zone if using OpenShift 4.5 or later
  • failure-domain.beta.kubernetes.io/zone if using an earlier version of OpenShift

The value of the label is the zone the node is in, for example, es-zone-1.
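To see only the zone assignment for each node, you can list the label as its own column (the -L option is standard oc behavior; the label name below assumes OpenShift 4.5 or later):

oc get nodes -L topology.kubernetes.io/zone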

If your Kubernetes cluster is not zone aware, label each cluster node with a value that identifies the zone the node is in. For example, run the following command to allocate a node to es-zone-1:

oc label node <node-name> topology.kubernetes.io/zone=es-zone-1
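Label the remaining nodes in the same way, assigning each node the value of the zone it belongs to. For example, for a hypothetical three-zone cluster (node names are placeholders for your own worker nodes):

oc label node <node-name-2> topology.kubernetes.io/zone=es-zone-2
oc label node <node-name-3> topology.kubernetes.io/zone=es-zone-3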

The zone label is needed to set up rack awareness when installing for multizone.

Kafka rack awareness

In addition to zone awareness, Kafka rack awareness spreads the Kafka broker pods and Kafka topic replicas across the available zones, and sets the broker.rack configuration property for each Kafka broker.
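As an illustration, rack awareness is enabled by pointing Kafka at the zone label through a topology key. The following sketch assumes the Strimzi-style rack setting exposed under spec.strimziOverrides.kafka.rack in the EventStreams custom resource; check the configuration reference for your Event Streams version for the exact field path and API version:

apiVersion: eventstreams.ibm.com/v1beta2   # assumed API version
kind: EventStreams
metadata:
  name: my-eventstreams
spec:
  strimziOverrides:
    kafka:
      rack:
        # Use the same zone label that your nodes carry
        topologyKey: topology.kubernetes.io/zone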

To set up Kafka rack awareness, the Kafka brokers require a cluster role that grants them permission to view which Kubernetes node they are running on.

Before enabling Kafka rack awareness in an Event Streams installation, apply the required cluster role:

  1. Download the cluster role YAML file from GitHub.
  2. Log in to your Red Hat OpenShift Container Platform as a cluster administrator by using the oc CLI (oc login).
  3. Apply the cluster role by using the following command and the downloaded file:

    oc apply -f eventstreams-kafka-broker.yaml
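To confirm that the cluster role exists, you can query it by name. The name eventstreams-kafka-broker is assumed here from the file name; adjust it if the downloaded file defines a different name:

oc get clusterrole eventstreams-kafka-broker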