Consider the following when planning your installation.
Event Streams Community Edition
The Event Streams Community Edition is a free version intended for trial and demonstration purposes. It can be installed and used without charge.
You can install the Community Edition from the catalog included with IBM Cloud Private.
Event Streams
Event Streams is the paid-for version intended for enterprise use, and includes full IBM support and additional features such as geo-replication.
You can install Event Streams by downloading the image from IBM Passport Advantage, and making it available in the IBM Cloud Private catalog.
Note: If you do not already have IBM Cloud Private, Event Streams includes entitlement to IBM Cloud Private Foundation, which you can also download from IBM Passport Advantage and install as a prerequisite. IBM Cloud Private Foundation can only be used to deploy Event Streams; no other service can be deployed without upgrading IBM Cloud Private.
Persistent storage
Persistence is not enabled by default, so no persistent volumes are required. Enable persistence if you want messages in topics and configuration settings to be retained in the event of a restart; you should always enable persistence for production use.
If you plan to have persistent volumes, consider the disk space required for storage.
Also, as both Kafka and ZooKeeper rely on fast write access to disks, ensure you use separate dedicated disks for storing Kafka and ZooKeeper data. For more information, see the disks and filesystems guidance in the Kafka documentation, and the deployment guidance in the ZooKeeper documentation.
If persistence is enabled, each Kafka broker and each ZooKeeper server requires one persistent volume. The number of Kafka brokers and ZooKeeper servers depends on your setup; for default requirements, see the resource requirements table. You either need to create a persistent volume for each Kafka broker and ZooKeeper server, or specify a storage class that supports dynamic provisioning. Kafka and ZooKeeper can use different storage classes to control how their persistent volumes are allocated.
See the IBM Cloud Private documentation for information about creating persistent volumes and creating a storage class that supports dynamic provisioning. For both, you must have the IBM Cloud Private Cluster administrator role.
Important: When creating persistent volumes to use with Event Streams, ensure you set Access mode to ReadWriteOnce.
More information about persistent volumes and the system administration steps required before installing Event Streams can be found in the Kubernetes documentation.
If these persistent volumes are to be created manually, this must be done by the system administrator before installing Event Streams. The administrator adds them to a central pool before the Helm chart is installed, and the installation then claims the required number of persistent volumes from this pool. For manual creation, dynamic provisioning must be disabled when configuring your installation. It is up to the administrator to provide appropriate storage to back these persistent volumes.
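For reference, the following is a minimal sketch of a manually created persistent volume, assuming a hostPath volume on a local disk; the name kafka-pv-0, the path, and the capacity are hypothetical placeholders, and your storage type will likely differ:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv-0                      # hypothetical name; one volume per Kafka broker
spec:
  capacity:
    storage: 10Gi                       # size this according to your capacity planning
  accessModes:
    - ReadWriteOnce                     # access mode required by Event Streams
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/kafka-0                 # hypothetical path; use a dedicated disk for Kafka data
EOF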
If these persistent volumes are to be created automatically at the time of installation, the system administrator must enable support for this prior to installing Event Streams. For automatic creation, enable dynamic provisioning when configuring your installation, and provide the storage class names to define the persistent volumes that get allocated to the deployment.
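If you plan to use dynamic provisioning, you can list the storage classes available in your cluster before configuring the installation, for example:
kubectl get storageclasses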
Important: If membership of a specific group is required to access the file system used for persistent volumes, ensure you specify in the File system group ID field the GID of the group that owns the file system.
ConfigMap for Kafka static configuration
You can optionally create a ConfigMap to specify Kafka configuration settings for your Event Streams installation. Use a ConfigMap to override default Kafka configuration settings when installing Event Streams.
You can also use a ConfigMap to modify read-only Kafka broker settings for an existing Event Streams installation. Read-only parameters are defined by Kafka as settings that require a broker restart. Find out more about the Kafka configuration options and how to modify them for an existing installation.
To create a ConfigMap:
- Log in to your cluster as an administrator by using the IBM Cloud Private CLI:
cloudctl login -a https://<Cluster Master Host>:<Cluster Master API Port>
The master host and port for your cluster are set during the installation of IBM Cloud Private.
Note: To create a ConfigMap, you must have the Operator, Administrator, or Cluster administrator role in IBM Cloud Private.
- To create a ConfigMap from an existing Kafka server.properties file, use the following command, where <namespace_name> is the namespace where you install Event Streams (a sample properties file is sketched after these steps):
kubectl -n <namespace_name> create configmap <configmap_name> --from-env-file=<full_path/server.properties>
- To create a blank ConfigMap for future configuration updates, use the following command:
kubectl -n <namespace_name> create configmap <configmap_name>
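As an illustration of the server.properties approach above, the file contains one key=value pair per line using standard Kafka broker setting names; the values below are examples only, not recommendations:
num.io.threads=8
log.retention.hours=168
message.max.bytes=1000012
After creating the ConfigMap, you can confirm the keys it holds with:
kubectl -n <namespace_name> describe configmap <configmap_name>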
Geo-replication
You can deploy multiple instances of Event Streams and use the included geo-replication feature to synchronize data between your clusters. Geo-replication helps maintain service availability.
Find out more about geo-replication.
Prepare your destination cluster by setting the number of geo-replication worker nodes during installation.
Note: Geo-replication is only available in the paid-for version of Event Streams (not available in the Community Edition).
Connecting clients
By default, Kafka client applications connect directly to the IBM Cloud Private master node; no additional configuration is required. If you want clients to connect through a different route, specify the target endpoint host name or IP address when configuring your installation.
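For example, assuming a plain (non-TLS) listener and a placeholder bootstrap address of <Cluster Master Host>:<bootstrap_port>, a quick connectivity check with the standard Kafka console tools might look like the following; a production deployment typically also requires security settings such as TLS and credentials:
kafka-console-producer.sh --broker-list <Cluster Master Host>:<bootstrap_port> --topic test
kafka-console-consumer.sh --bootstrap-server <Cluster Master Host>:<bootstrap_port> --topic test --from-beginning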
Sizing considerations
Consider the capacity requirements of your deployment before installing Event Streams. See the information about scaling for guidance. You can modify the capacity settings for existing installations as well.
Logging
IBM Cloud Private uses the Elastic Stack (Elasticsearch, Logstash, and Kibana) for managing logs. Event Streams logs are written to stdout and are picked up by the default Elastic Stack setup.
Consider setting up the IBM Cloud Private logging for your environment to help resolve problems with your deployment and aid general troubleshooting. See the IBM Cloud Private documentation about logging for information about the built-in Elastic Stack.
As part of setting up the IBM Cloud Private logging for Event Streams, ensure you consider the following:
- Capacity planning guidance: set up your system with sufficient resources for the capture, storage, and management of logs.
- Log retention: The logs captured using the Elastic Stack persist during restarts. However, logs older than a day are deleted at midnight by default to prevent log data from filling up available storage space. Consider changing the log data retention in line with your capacity planning. Longer retention of logs provides access to older data that might help troubleshoot problems.
You can use log data to investigate any problems affecting your system health.
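Because Event Streams logs are written to stdout, you can also inspect the recent output of an individual pod directly, independently of the Elastic Stack, for example:
kubectl -n <namespace_name> logs <pod_name> --since=1h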
Monitoring Kafka clusters
Event Streams uses the IBM Cloud Private monitoring service to provide you with information about the health of your Event Streams Kafka clusters. You can view data for the last 1 hour, 1 day, 1 week, or 1 month in the metrics charts.
Important: By default, the metrics data used to provide monitoring information is only stored for a day. Modify the time period for metric retention to be able to view monitoring data for longer time periods, such as 1 week or 1 month.
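The IBM Cloud Private monitoring service is based on Prometheus, where retention is typically controlled by the storage.tsdb.retention setting. As a sketch, assuming the Prometheus deployment is named monitoring-prometheus in the kube-system namespace (names can vary by release), you could check the current value with:
kubectl -n kube-system get deployment monitoring-prometheus -o yaml | grep retention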
For more information about keeping an eye on the health of your Kafka cluster, see the monitoring Kafka topic.
Licensing
You require a license to use Event Streams. Licensing is based on a Virtual Processor Core (VPC) metric.
An Event Streams deployment consists of a number of different types of containers, as described in the components of the Helm chart. To use Event Streams, you must have a license for all of the virtual cores that are available to all Kafka and geo-replicator containers deployed. All other container types are prerequisite components that are supported as part of Event Streams, and do not require additional licenses.
The number of virtual cores available to each Kafka and geo-replicator container can be specified during installation or modified later.
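To see the virtual cores currently available to a particular Kafka or geo-replicator container, you can inspect the CPU limits on its pod; the pod name below is a placeholder. For the aggregate chargeable figure, use the metering report described next.
kubectl -n <namespace_name> describe pod <kafka_pod_name> | grep -A 2 Limits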
To check the number of cores, use the IBM Cloud Private metering report as follows:
- Log in to your IBM Cloud Private cluster management console from a supported web browser by using the URL https://<Cluster Master Host>:<Cluster Master API Port>. The master host and port for your cluster are set during the installation of IBM Cloud Private. For more information, see the IBM Cloud Private documentation.
- From the navigation menu, click Platform > Metering.
- Select your namespace, and select Event Streams (Chargeable).
- Click Containers.
- Go to the Containers section on the right, and ensure you select the Usage tab.
- Select Capped Processors from the first drop-down list, and select 1 Month from the second drop-down list.
- Click Download Report, and save the CSV file to a location of your choice.
- Open the downloaded report file.
- Look for the month in Period, for example, 2018/9, then in the rows underneath look for Event Streams (Chargeable), and check the CCores/max Cores column. The value is the maximum aggregate number of cores provided to all Kafka and geo-replicator containers. You are charged based on this number.
For example, an excerpt from a downloaded report might show that for the period 2018/9 the chargeable Event Streams containers had a total of 4 cores available.
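If you prefer to check the figure from the command line rather than in a spreadsheet, a simple filter over the downloaded CSV can help; the exact column layout of the report may vary, so treat this as a sketch:
grep "Event Streams (Chargeable)" <downloaded_report>.csv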