Prerequisites

Ensure your environment meets the following prerequisites before installing Event Streams.

Container environment

Event Streams 11.1.x is supported on the Red Hat OpenShift Container Platform.

  • Version 11.1.6 is installed by the Event Streams operator version 3.1.6, and includes Kafka version 3.4.0.
  • Version 11.1.5 is installed by the Event Streams operator version 3.1.5, and includes Kafka version 3.4.0.
  • Version 11.1.4 is installed by the Event Streams operator version 3.1.4, and includes Kafka version 3.3.1.
  • Version 11.1.3 is installed by the Event Streams operator version 3.1.3, and includes Kafka version 3.2.3.
  • Version 11.1.2 is installed by the Event Streams operator version 3.1.2, and includes Kafka version 3.2.3.
  • Version 11.1.1 is installed by the Event Streams operator version 3.1.1, and includes Kafka version 3.2.3.
  • Version 11.1.0 is installed by the Event Streams operator version 3.1.0, and includes Kafka version 3.2.3.

For an overview of supported component and platform versions, see the support matrix.

Ensure you have the following set up for your environment:

  • A supported version of OpenShift Container Platform installed. See the support matrix for supported versions.
  • The OpenShift Container Platform CLI installed.
  • The IBM Cloud Pak CLI (cloudctl) installed.
  • A supported version of the IBM Cloud Pak foundational services installed.

Hardware requirements

Ensure your hardware can accommodate the resource requirements for your planned deployment.

Kubernetes manages the allocation of containers within your cluster. This ensures that resources remain available for other Event Streams components that might need to reside on the same node.

For production systems, configure Event Streams with at least 3 Kafka brokers and provide one worker node for each broker. This means a minimum of 3 worker nodes must be available for use by Event Streams. Ensure each worker node runs on a separate physical server. See the guidance about Kafka high availability for more information.

IBM Cloud Pak foundational services

Ensure you have installed a supported version of IBM Cloud Pak foundational services before installing Event Streams, as described in the foundational services documentation. Event Streams supports foundational services version 3.19.0 or later 3.x releases.

Event Streams supports both the Continuous Delivery (CD) and the Long Term Service Release (LTSR) versions of foundational services (for more information, see release types). This provides more flexibility for compatibility with other Cloud Pak components (for more information, see deploying with other Cloud Paks on the same cluster).

By default, the starterset profile is requested for new installations. If you are preparing for a production deployment, ensure you set a more suitable profile, for example, the medium profile as described in setting the hardware profile.
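
As an illustrative sketch, the hardware profile is typically set in the CommonService custom resource. The API version, resource name, and namespace shown here are assumptions based on common foundational services deployments; check the setting the hardware profile documentation for the authoritative schema.

```yaml
# Illustrative sketch: requesting the "medium" hardware profile for
# IBM Cloud Pak foundational services instead of the default "starterset".
# Field names and namespace are assumptions; verify against your release.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  size: medium    # production-oriented profile
```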

Note: If you are installing Event Streams in an existing IBM Cloud Pak for Integration deployment, the required foundational services might already be installed due to other capabilities, and the dependencies required by Event Streams might already be satisfied with a profile other than the default starterset.

If you plan to install other IBM Cloud Pak for Integration capabilities, ensure you meet the resource requirements for the whole profile. If you only want to deploy Event Streams on the cluster, you can calculate more granular sizing requirements based on the following foundational services components that Event Streams uses:

  • Catalog UI
  • Certificate Manager
  • Common Web UI
  • IAM
  • Ingress NGINX
  • Installer
  • Management ingress
  • MongoDB
  • Platform API

Resource requirements

Event Streams resource requirements depend on several factors. The following sections provide guidance about minimum requirements for a starter deployment, and options for initial production configurations.

Installing Event Streams has two phases:

  1. Install the Event Streams operator. The operator will then be available to install and manage your Event Streams instances.
  2. Install one or more instances of Event Streams by applying configured custom resources. Sample configurations for development and production use cases are provided to get you started.
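
Phase 2 can be sketched as follows. This is a minimal illustration modeled on the development sample; the values shown (instance name, namespace, version, license settings, replica counts) are placeholders, so take the real values from the samples provided with your release.

```yaml
# Minimal sketch of an EventStreams custom resource for a development
# deployment. All values are illustrative, not the official sample.
apiVersion: eventstreams.ibm.com/v1beta2
kind: EventStreams
metadata:
  name: development          # hypothetical instance name
  namespace: event-streams   # hypothetical target namespace
spec:
  version: 11.1.6
  license:
    accept: false            # set to true only after reviewing the license terms
  strimziOverrides:
    kafka:
      replicas: 1            # a single broker is typical for development only
    zookeeper:
      replicas: 1
```

Applying a configured custom resource such as this (for example, with the OpenShift Container Platform CLI or web console) triggers the operator to create and manage the instance.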

Minimum resource requirements are as follows, and are based on the total of requests set for the deployment. You will require more resources to accommodate the limit settings (see more about “requests” and “limits” later in this section). Always ensure you have sufficient resources in your environment to deploy the Event Streams operator together with a development or a production Event Streams instance.

Deployment CPU (cores) Memory (Gi) VPCs (see licensing)
Operator 0.2 1.0 N/A
Development 2.4 5.4 0.5
Production 2.8 5.9 3.0

Note: Event Streams provides sample configurations to help you get started with deployments. The resource requirements for these specific samples are detailed in the planning section. If you do not have an Event Streams installation on your system yet, always ensure you include the resource requirements for the operator together with the intended Event Streams instance requirements (development or production).

Important: Licensing is based on the number of Virtual Processing Cores (VPCs) used by your Event Streams instance. See licensing considerations for more information. For a production installation of Event Streams, the ratio is 1 license required for every 1 VPC being used. For a non-production installation of Event Streams, the ratio is 1 license required for every 2 VPCs being used.

Event Streams is a Kubernetes operator-based release and uses custom resources to define your Event Streams configurations. The Event Streams operator uses the desired state declared in the custom resources to deploy and manage the entire lifecycle of your Event Streams instances. Custom resources are presented as YAML configuration documents that define instances of the EventStreams custom resource type.

The provided samples define typical configuration settings for your Event Streams instance, including broker configurations, security settings, and default values for resources such as CPU and memory defined as “request” and “limit” settings. Requests and limits are Kubernetes concepts for controlling resource types such as CPU and memory.

  • Requests set the minimum amount of a resource that a container requires to be scheduled. If your system cannot satisfy the requested value, the service will not start.
  • Limits set the maximum amount of a resource that a container is allowed to consume. Containers that exceed a CPU resource limit are throttled, and containers that exceed a memory resource limit are terminated by the system.
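
As a sketch, request and limit settings might appear in an EventStreams custom resource along the following lines. The field path follows the Strimzi-style schema used by the samples, but the values here are illustrative only; use the figures from the samples for your release.

```yaml
# Sketch of "request" and "limit" settings for the Kafka broker containers.
# Field path assumed from the Strimzi-style schema; values are placeholders.
spec:
  strimziOverrides:
    kafka:
      resources:
        requests:
          cpu: "2"        # minimum CPU the broker pods need to be scheduled
          memory: 4Gi     # minimum memory the broker pods need to be scheduled
        limits:
          cpu: "4"        # brokers exceeding this CPU allocation are throttled
          memory: 8Gi     # brokers exceeding this memory allocation are terminated
```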

Ensure you have sufficient CPU capacity and physical memory in your environment to service these requirements. Your Event Streams instance can be dynamically updated later through the configuration options provided in the custom resource.

Operator requirements

The Event Streams operator has the following minimum resource requirements. Ensure you always have sufficient CPU capacity and physical memory in your environment to service them.

CPU request (cores)   CPU limit (cores)   Memory request (Gi)   Memory limit (Gi)
0.2                   1.0                 1.0                   1.0

Cluster-scoped permissions required

The Event Streams operator requires the following cluster-scoped permissions:

  • Permission to list nodes in the cluster: When the Event Streams operator is deploying a Kafka cluster that spans multiple availability zones, it needs to label the pods with zone information. The permission to list nodes in the cluster is required to retrieve the information for these labels.
  • Permission to manage admission webhooks: The Event Streams operator uses admission webhooks to provide immediate validation and feedback about the creation and modification of Event Streams instances. The permission to manage webhooks is required for the operator to register these webhooks.
  • Permission to manage ConsoleYAMLSamples: ConsoleYAMLSamples are used to provide samples for Event Streams resources in the OpenShift Container Platform web console. The permission to manage ConsoleYAMLSamples is required for the operator to register these samples.
  • Permission to view ConfigMaps: Event Streams uses authentication services from IBM Cloud Pak foundational services. The status of these services is maintained in ConfigMaps, so the permission to view the contents of the ConfigMaps allows Event Streams to monitor the availability of the foundational services dependencies.
  • Permission to list specific CustomResourceDefinitions: This allows Event Streams to identify whether other optional dependencies have been installed into the cluster.
  • Permission to list ClusterRoles and ClusterRoleBindings: The Event Streams operator uses ClusterRoles created by the Operator Lifecycle Manager (OLM) as parents for supporting resources that the Event Streams operator creates. This is needed so that the supporting resources are correctly cleaned up when Event Streams is uninstalled. The permission to list ClusterRoles is required to allow the operator to identify the appropriate cluster role to use for this purpose.
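
A sketch of what a ClusterRole granting a subset of these permissions might look like is shown below. The rule shapes follow standard Kubernetes RBAC; the role name and the exact rule set are illustrative, not the operator's actual role definition, which is installed for you by the Operator Lifecycle Manager.

```yaml
# Illustrative ClusterRole fragment covering some of the permissions listed
# above. This is a sketch of the rule shapes involved, not the real role.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-eventstreams-operator-role   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]                          # read zone information for pod labels
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["validatingwebhookconfigurations"]
    verbs: ["get", "create", "update", "delete"]   # manage admission webhooks
  - apiGroups: ["console.openshift.io"]
    resources: ["consoleyamlsamples"]
    verbs: ["get", "create", "update", "delete"]   # register web console samples
```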

Adding Event Streams geo-replication to a deployment

The Event Streams geo-replicator allows messages sent to a topic on one Event Streams cluster to be automatically replicated to another Event Streams cluster. This capability ensures messages are available on a separate system to form part of a disaster recovery plan.

To use this feature, ensure you have the following additional resources available for each geo-replicator node:

CPU request (cores)   CPU limit (cores)   Memory request (Gi)   Memory limit (Gi)   VPCs (see licensing)
1.0                   2.0                 2.0                   2.0                 1.0

For instructions about installing geo-replication, see configuring.
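
As an illustrative sketch only, a geo-replicator is typically defined through its own custom resource alongside the Event Streams instance it replicates from. The kind, API version, labels, and field names below are assumptions; the configuring documentation has the authoritative schema.

```yaml
# Sketch of a geo-replicator custom resource. All names, labels, and field
# values are assumptions for illustration; verify against your release.
apiVersion: eventstreams.ibm.com/v1beta1
kind: EventStreamsGeoReplicator
metadata:
  name: my-geo-replicator                       # hypothetical name
  namespace: event-streams                      # hypothetical namespace
  labels:
    eventstreams.ibm.com/cluster: development   # assumed label linking to the instance
spec:
  version: 11.1.6
  replicas: 1    # one geo-replicator node, sized per the resource table above
```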

Red Hat OpenShift Security Context Constraints

Event Streams requires a Security Context Constraint (SCC) to be bound to the target namespace prior to installation.

By default, Event Streams uses the default restricted SCC that comes with the OpenShift Container Platform.

If you use a custom SCC (for example, one that is more restrictive), or have an operator that updates the default SCC, the changes might interfere with the functioning of your Event Streams deployment. To resolve any issues, apply the SCC provided by Event Streams as described in troubleshooting.

Network requirements

Event Streams is supported for use with IPv4 networks only.

Data storage requirements

If you want to set up persistent storage, Event Streams requires block storage configured to use the XFS or ext4 file system. The use of file storage (for example, NFS) is not recommended.

For example, you can use one of the following systems:

  • Red Hat OpenShift Data Foundation (previously OpenShift Container Storage) version 4.2 or later (block storage only)
  • IBM Cloud Block storage
  • IBM Storage Suite for IBM Cloud Paks: block storage from IBM Spectrum Virtualize, FlashSystem, or DS8K
  • Portworx Storage version 2.5.5 or later
  • Kubernetes local volumes
  • Amazon Elastic Block Store (EBS)
  • Rook Ceph
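
For example, persistent block storage might be requested in the EventStreams custom resource along the following lines. The field path follows the Strimzi-style storage schema used by the samples; the storage class name and size are placeholders for a StorageClass backed by one of the block storage providers listed above.

```yaml
# Sketch: requesting persistent block storage for the Kafka brokers.
# "my-block-storage-class" is a placeholder StorageClass name.
spec:
  strimziOverrides:
    kafka:
      storage:
        type: persistent-claim
        size: 100Gi
        class: my-block-storage-class
```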

Event Streams UI

The Event Streams user interface (UI) is supported on the following web browsers:

  • Google Chrome version 65 or later
  • Mozilla Firefox version 59 or later
  • Safari version 11.1 or later

Event Streams CLI

The Event Streams command-line interface (CLI) is supported on the following systems:

  • Windows 10 or later
  • Linux® Ubuntu 16.04 or later
  • macOS 10.13 (High Sierra) or later

See the post-installation tasks for information about installing the CLI.

Kafka clients

The Apache Kafka Java client included with Event Streams is supported for use with the following Java versions:

  • IBM Java 8 or later
  • Oracle Java 8 or later

You can also use other Kafka version 2.0 or later clients when connecting to Event Streams. If you encounter client-side issues, IBM can assist you in resolving those issues (see our support policy).

Event Streams is designed for use with clients based on the librdkafka implementation of the Apache Kafka protocol.