The following sections provide instructions about installing Event Processing on the Red Hat OpenShift Container Platform. The instructions are based on using the OpenShift Container Platform web console and the `oc` command-line utility.
Before you begin
- Ensure you have set up your environment according to the prerequisites, including setting up your OpenShift Container Platform and installing a supported version of a certificate manager.
- Ensure you have planned for your installation, such as preparing for persistent storage, considering security options, and considering adding resilience through multiple availability zones.
- Obtain the connection details for your OpenShift Container Platform cluster from your administrator.
- To secure the communication between Flink pods, configure TLS for Flink.
- If you have IBM Cloud Pak for Integration, you can install Event Processing as an add-on.
Create a project (namespace)
Create a namespace into which the Event Processing instance will be installed by creating a project. When you create a project, a namespace with the same name is also created.
Ensure you use a namespace that is dedicated to a single instance of Event Processing. This is required because Event Processing uses network security policies to restrict network connections between its internal components. A single namespace per instance also allows for finer control of user accesses.
Important: Do not use any of the default or system namespaces to install an instance of Event Processing (some examples of these are: `default`, `kube-system`, `kube-public`, and `openshift-operators`).
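The reserved-namespace rule can be expressed as a quick shell guard to run before creating the project. This is only an illustrative sketch; the namespace name used here is a placeholder.

```shell
# Hypothetical namespace name; replace with your intended project name.
ns="event-processing-prod"

# Reject the default and system namespaces listed above.
case "$ns" in
  default|kube-system|kube-public|openshift-operators)
    echo "reserved: choose a dedicated namespace instead" ;;
  *)
    echo "ok: $ns" ;;   # prints: ok: event-processing-prod
esac
```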
Creating a project by using the web console
- Log in to the OpenShift Container Platform web console using your login credentials.
- Expand the Home dropdown and select Projects to open the Projects panel.
- Click Create Project.
- Enter a new project name in the Name field, and optionally, a display name in the Display Name field, and a description in the Description field.
- Click Create.
Creating a project by using the CLI
- Log in to your Red Hat OpenShift Container Platform as a cluster administrator by using the `oc` CLI (`oc login`).
- Run the following command to create a new project:

  ```shell
  oc new-project <project_name> --description="<description>" --display-name="<display_name>"
  ```

  The `description` and `display-name` command arguments are optional settings that you can use to specify a description and a custom name for your project.
- When you create a project, your namespace automatically switches to your new namespace. Ensure you are using the project that you created by selecting it as follows:

  ```shell
  oc project <new-project-name>
  ```

  The following message is displayed if successful:

  ```
  Now using project "<new-project-name>" on server "https://<OpenShift-host>:6443".
  ```
Create an image pull secret
Before installing an instance, create an image pull secret called `ibm-entitlement-key` in the namespace where you want to create an instance of Flink or Event Processing. The secret enables container images to be pulled from the registry.
- Obtain an entitlement key from the IBM Container software library.
- Create the secret in the namespace that will be used to deploy an instance of Event Processing as follows. Name the secret `ibm-entitlement-key`, use `cp` as the username, your entitlement key as the password, and `cp.icr.io` as the Docker server:

  ```shell
  oc create secret docker-registry ibm-entitlement-key --docker-username=cp --docker-password="<your-entitlement-key>" --docker-server="cp.icr.io" -n <target-namespace>
  ```

Note: If you do not create the required secret, pods will fail to start with `ImagePullBackOff` errors. In this case, ensure the secret is created and allow the pod to restart.
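If pods are stuck in `ImagePullBackOff`, one quick way to list them is to filter the STATUS column of the `oc get pods` output. The pod names and the sample output piped in below are hypothetical; against a real cluster you would pipe `oc get pods -n <target-namespace>` into the same `awk` filter instead of `printf`.

```shell
# Sample 'oc get pods' output (hypothetical pod names), filtered with awk to
# print only the names of pods whose STATUS column reads ImagePullBackOff.
printf 'NAME READY STATUS\nep-instance-ibm-ep-sts-0 0/1 ImagePullBackOff\nflink-taskmanager-1 1/1 Running\n' |
  awk '$3 == "ImagePullBackOff" { print $1 }'
# prints: ep-instance-ibm-ep-sts-0
```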
Choose the operator installation mode
Before installing an operator, decide whether you want the operator to:
- Manage instances in any namespace.

  To use this option, select All namespaces on the cluster (default) later. The operator will be deployed into the system namespace `openshift-operators`, and will be able to manage instances in any namespace.
- Only manage instances in a single namespace.

  To use this option, select A specific namespace on the cluster later. The operator will be deployed into the specified namespace, and will not be able to manage instances in any other namespace.
Important: Choose only one mode when installing the operator. Mixing installation modes is not supported due to possible conflicts. If an operator is installed to manage all namespaces and a single namespace at the same time, it can result in conflicts and attempts to control the same `CustomResourceDefinition` resources.
Decide version control and catalog source
Before you can install the required IBM operators, make them available for installation by adding the catalog sources to your cluster. Selecting how the catalog source is added will determine the versions you receive.
Consider how you want to control your deployments, whether you want to install specific versions, and how you want to receive updates.
-
Latest versions: You can install the latest versions of all operators from the IBM Operator Catalog as described in adding latest versions. This means that every deployment will always have the latest versions made available, and you cannot specify which version is installed. In addition, upgrades to latest versions are automatic and provided when they become available. This path is more suitable for development or proof of concept deployments.
-
Specific versions: You can control the version of the operator and instances that are installed by downloading specific Container Application Software for Enterprises (CASE) files as described in adding specific versions. This means you can specify the version you deploy, and only receive updates when you take action manually to do so. This is often required in production environments where the deployment of any version might require it to go through a process of validation and verification before it can be pushed to production use.
Adding latest versions
Important: Use this method of installation only if you want your deployments to always have the latest version and if you want upgrades to always be automatic.
Before you can install the latest operators and use them to create instances of Flink and Event Processing, make the IBM Operator Catalog available in your cluster.
If you have other IBM products that are installed in your cluster, then you might already have the IBM Operator Catalog available. If it is configured for automatic updates as described in the following section, it already contains the required operators, and you can skip the deployment of the IBM Operator Catalog.
If you are installing the IBM Operator for Apache Flink or the Event Processing operator as the first IBM operator in your cluster, to make the operators available in the OpenShift OperatorHub catalog, create the following YAML file and apply it as follows.
To add the IBM Operator Catalog:
- Create a file for the IBM Operator Catalog source with the following content, and save as `ibm_catalogsource.yaml`:

  ```yaml
  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    name: ibm-operator-catalog
    namespace: openshift-marketplace
  spec:
    displayName: "IBM Operator Catalog"
    publisher: IBM
    sourceType: grpc
    image: icr.io/cpopen/ibm-operator-catalog
    updateStrategy:
      registryPoll:
        interval: 45m
  ```

  Automatic updates of your IBM Operator Catalog can be disabled by removing the polling attribute, `spec.updateStrategy.registryPoll`. To disable automatic updates, remove the following parameters in the IBM Operator Catalog source YAML under the `spec` field:

  ```yaml
  updateStrategy:
    registryPoll:
      interval: 45m
  ```
Important: Other factors such as the `Subscription` might enable the automatic updates of your deployments. For tight version control of your operators or to install a fixed version, add specific versions of the CASE bundle, and then install the IBM Operator for Apache Flink and the Event Processing operator by using the CLI.
- Log in to your Red Hat OpenShift Container Platform as a cluster administrator by using the `oc` CLI (`oc login`).
- Apply the source by using the following command:

  ```shell
  oc apply -f ibm_catalogsource.yaml
  ```
Alternatively, you can add the catalog source through the OpenShift web console by using the Import YAML option:
- Select the plus icon on the upper right.
- Paste the IBM Operator Catalog source YAML in the YAML editor. You can also drag-and-drop the YAML files into the editor.
- Select Create.
This adds the catalog source for both the IBM Operator for Apache Flink and Event Processing to the OperatorHub catalog, making these operators available to install.
Adding specific versions
Important: Use this method if you want to install specific versions and do not want to automatically receive upgrades or have the latest versions made available immediately.
Before you can install the required operator versions and use them to create instances of Flink and Event Processing, make their catalog source available in your cluster as described in the following sections.
Note: This procedure must be performed by using the CLI.
- Before you begin, ensure that you have the following set up for your environment:
  - The OpenShift Container Platform CLI (`oc`) installed.
  - The IBM Catalog Management Plug-in for IBM Cloud Paks (`ibm-pak`) installed. After installing the plug-in, you can run `oc ibm-pak` commands against the cluster. Run the following command to confirm that `ibm-pak` is installed:

    ```shell
    oc ibm-pak --help
    ```
- Run the following command to download, validate, and extract the CASE:
  - For IBM Operator for Apache Flink:

    ```shell
    oc ibm-pak get ibm-eventautomation-flink --version <case-version>
    ```

    Where `<case-version>` is the version of the CASE you want to install. For example:

    ```shell
    oc ibm-pak get ibm-eventautomation-flink --version 1.2.2
    ```
  - For Event Processing:

    ```shell
    oc ibm-pak get ibm-eventprocessing --version <case-version>
    ```

    Where `<case-version>` is the version of the CASE you want to install. For example:

    ```shell
    oc ibm-pak get ibm-eventprocessing --version 1.2.2
    ```
- Generate mirror manifests by running the following command:
  - For IBM Operator for Apache Flink:

    ```shell
    oc ibm-pak generate mirror-manifests ibm-eventautomation-flink icr.io
    ```
  - For Event Processing:

    ```shell
    oc ibm-pak generate mirror-manifests ibm-eventprocessing icr.io
    ```

  Note: To filter for a specific image group, add the parameter `--filter <image_group>` to the previous command.

  The previous command generates the following files based on the target internal registry provided:
  - catalog-sources.yaml
  - catalog-sources-linux-`<arch>`.yaml (if there are architecture-specific catalog sources)
  - image-content-source-policy.yaml
  - images-mapping.txt
- Apply the catalog sources for the operator to the cluster by running the following command:
  - For IBM Operator for Apache Flink:

    ```shell
    oc apply -f ~/.ibm-pak/data/mirror/ibm-eventautomation-flink/<case-version>/catalog-sources.yaml
    ```

    Where `<case-version>` is the version of the CASE you want to install. For example:

    ```shell
    oc apply -f ~/.ibm-pak/data/mirror/ibm-eventautomation-flink/1.2.2/catalog-sources.yaml
    ```
  - For Event Processing:

    ```shell
    oc apply -f ~/.ibm-pak/data/mirror/ibm-eventprocessing/<case-version>/catalog-sources.yaml
    ```

    Where `<case-version>` is the version of the CASE you want to install. For example:

    ```shell
    oc apply -f ~/.ibm-pak/data/mirror/ibm-eventprocessing/1.2.2/catalog-sources.yaml
    ```

This adds the catalog sources for the IBM Operator for Apache Flink and Event Processing, making the operators available to install.
Install the operators
Event Processing consists of two operators that must be installed in the Red Hat OpenShift Container Platform:
- IBM Operator for Apache Flink
- Event Processing
Important: To install the operators by using the OpenShift web console, you must add the operators to the OperatorHub catalog. OperatorHub updates your operators automatically when a later version is available. This might not be suitable for some production environments. For production environments that require manual updates and version control, add specific versions, and then install the IBM Operator for Apache Flink and the Event Processing operator by using the CLI.
Installing the IBM Operator for Apache Flink
Ensure you have considered the IBM Operator for Apache Flink requirements, including resource requirements and, if installing in any namespace, the required cluster-scoped permissions.
Important:
- IBM Operator for Apache Flink must not be installed in a cluster where the Apache Flink operator is also installed. Rationale: IBM Operator for Apache Flink leverages the Apache Flink `CustomResourceDefinition` (CRD) resources. These resources cannot be managed by more than one operator (for more information, see the Operator Framework documentation).
- Before installing IBM Operator for Apache Flink on a cluster where the Apache Flink operator is already installed, to avoid possible conflicts due to different versions, fully uninstall the Apache Flink operator, including the deletion of the Apache Flink CRDs, as described in the Apache Flink operator documentation.
- Only one version of IBM Operator for Apache Flink should be installed in a cluster. Installing multiple versions is not supported, due to the possible conflicts between versions of the `CustomResourceDefinition` resources.
- Before you install the IBM Operator for Apache Flink, ensure that you have created the truststores and keystores that are required to secure communication with Flink deployments.
Installing the IBM Operator for Apache Flink by using the web console
To install the operator by using the OpenShift Container Platform web console, do the following:
- Log in to the OpenShift Container Platform web console using your login credentials.
- Expand the Operators dropdown and select OperatorHub to open the OperatorHub dashboard.
- Select the project that you want to deploy the Event Processing instance in.
- In the All Items search box, enter `IBM Operator for Apache Flink` to locate the operator title.
- Click the IBM Operator for Apache Flink tile to open the install side panel.
- Click the Install button to open the Create Operator Subscription dashboard.
- Select the chosen installation mode that suits your requirements. If the installation mode is A specific namespace on the cluster, select the target namespace that you created previously.
- Click Install to begin the installation.
The installation can take a few minutes to complete.
Installing the IBM Operator for Apache Flink by using the command line
To install the operator by using the OpenShift Container Platform command line, complete the following steps:
- Change to the namespace (project) where you want to install the operator. For command-line installations, this sets the chosen installation mode for the operator:
  - Change to the system namespace `openshift-operators` if you are installing the operator to be able to manage instances in all namespaces.
  - Change to the custom namespace if you are installing the operator for use in a specific namespace only.

  ```shell
  oc project <target-namespace>
  ```
- Check whether there is an existing `OperatorGroup` in your target namespace:

  ```shell
  oc get OperatorGroup
  ```

  If there is an existing `OperatorGroup`, continue to the next step to create a `Subscription`.

  If there is no `OperatorGroup`, create one as follows:

  a. Create a YAML file with the following content, replacing `<target-namespace>` with your namespace:

  ```yaml
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: ibm-eventautomation-operatorgroup
    namespace: <target-namespace>
  spec:
    targetNamespaces:
      - <target-namespace>
  ```

  b. Save the file as `operator-group.yaml`.

  c. Run the following command:

  ```shell
  oc apply -f operator-group.yaml
  ```
- Create a `Subscription` for the IBM Operator for Apache Flink as follows:

  a. Create a YAML file similar to the following example:

  ```yaml
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: ibm-eventautomation-flink
    namespace: <target-namespace>
  spec:
    channel: <current_channel>
    name: ibm-eventautomation-flink
    source: <catalog-source-name>
    sourceNamespace: openshift-marketplace
  ```

  Where:
  - `<target-namespace>` is the namespace where you want to install the IBM Operator for Apache Flink (`openshift-operators` if you are installing in all namespaces, or a custom name if you are installing in a specific namespace).
  - `<current_channel>` is the operator channel for the release you want to install (see the support matrix).
  - `<catalog-source-name>` is the name of the catalog source that was created for this operator. This is `ibm-eventautomation-flink` when installing a specific version by using a CASE bundle, or `ibm-operator-catalog` if the source is the IBM Operator Catalog.

  b. Save the file as `subscription.yaml`.

  c. Run the following command:

  ```shell
  oc apply -f subscription.yaml
  ```
Checking the operator status
- To see the installed operator and check its status by using the web console, complete the following steps:
  - Log in to the OpenShift Container Platform web console using your login credentials.
  - Expand the Operators dropdown and select Installed Operators to open the Installed Operators page.
  - Expand the Project dropdown and select the project the instance is installed in. Click the operator called IBM Operator for Apache Flink.
  - Scroll down to the ClusterServiceVersion details section of the page.
  - Check the Status field. After the operator is successfully installed, this will change to `Succeeded`.

  In addition to the status, information about key events that occur can be viewed under the Conditions section of the same page. After a successful installation, a condition with the following message is displayed: `install strategy completed with no errors`.
- To check the status of the installed operator by using the command line:

  ```shell
  oc get csv
  ```

  The command returns a list of installed operators. The installation is successful if the value in the `PHASE` column for the IBM Operator for Apache Flink is `Succeeded`.
Note: If the operator is installed into a specific namespace, then it will only appear under the associated project. If the operator is installed for all namespaces, then it will appear under any selected project. If the operator is installed for all namespaces and you select all projects from the Project dropdown, the operator will be shown multiple times in the resulting list, once for each project.
When the IBM Operator for Apache Flink is installed, the following additional operators will appear in the installed operator list:
- Operand Deployment Lifecycle Manager.
- IBM Common Service Operator.
Installing the Event Processing operator
Ensure you have considered the Event Processing operator requirements, including resource requirements and the required cluster-scoped permissions.
Important: You can only install one version of the Event Processing operator on a cluster. Installing multiple versions on a single cluster is not supported due to possible compatibility issues as they share the same Custom Resource Definitions (CRDs), making them unsuitable for coexistence.
Installing the Event Processing operator by using the web console
To install the operator by using the OpenShift Container Platform web console, do the following:
- Log in to the OpenShift Container Platform web console using your login credentials.
- Expand the Operators dropdown and select OperatorHub to open the OperatorHub dashboard.
- Select the project that you want to deploy the Event Processing instance in.
- In the All Items search box, enter `Event Processing` to locate the operator title.
- Click the Event Processing tile to open the install side panel.
- Click the Install button to open the Create Operator Subscription dashboard.
- Select the chosen installation mode that suits your requirements. If the installation mode is A specific namespace on the cluster, select the target namespace that you created previously.
- Click Install to begin the installation.
The installation can take a few minutes to complete.
Installing the Event Processing operator by using the command line
To install the operator by using the OpenShift Container Platform command line, complete the following steps:
- Change to the namespace (project) where you want to install the operator. For command-line installations, this sets the chosen installation mode for the operator:
  - Change to the system namespace `openshift-operators` if you are installing the operator to be able to manage instances in all namespaces.
  - Change to the custom namespace if you are installing the operator for use in a specific namespace only.

  ```shell
  oc project <target-namespace>
  ```
- Check whether there is an existing `OperatorGroup` in your target namespace:

  ```shell
  oc get OperatorGroup
  ```

  If there is an existing `OperatorGroup`, continue to the next step to create a `Subscription`.

  If there is no `OperatorGroup`, create one as follows:

  a. Create a YAML file with the following content, replacing `<target-namespace>` with your namespace:

  ```yaml
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: ibm-eventautomation-operatorgroup
    namespace: <target-namespace>
  spec:
    targetNamespaces:
      - <target-namespace>
  ```

  b. Save the file as `operator-group.yaml`.

  c. Run the following command:

  ```shell
  oc apply -f operator-group.yaml
  ```
- Create a `Subscription` for the Event Processing operator as follows:

  a. Create a YAML file similar to the following example:

  ```yaml
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: ibm-eventprocessing
    namespace: <target-namespace>
  spec:
    channel: <current_channel>
    name: ibm-eventprocessing
    source: <catalog-source-name>
    sourceNamespace: openshift-marketplace
  ```

  Where:
  - `<target-namespace>` is the namespace where you want to install Event Processing (`openshift-operators` if you are installing in all namespaces, or a custom name if you are installing in a specific namespace).
  - `<current_channel>` is the operator channel for the release you want to install (see the support matrix).
  - `<catalog-source-name>` is the name of the catalog source that was created for this operator. This is `ibm-eventprocessing` when installing a specific version by using a CASE bundle, or `ibm-operator-catalog` if the source is the IBM Operator Catalog.

  b. Save the file as `subscription.yaml`.

  c. Run the following command:

  ```shell
  oc apply -f subscription.yaml
  ```
Checking the operator status
- To see the installed operator and check its status by using the web console, complete the following steps:
  - Log in to the OpenShift Container Platform web console using your login credentials.
  - Expand the Operators dropdown and select Installed Operators to open the Installed Operators page.
  - Expand the Project dropdown and select the project the instance is installed in. Click the operator called IBM Event Processing managing the project.
  - Scroll down to the ClusterServiceVersion details section of the page.
  - Check the Status field. After the operator is successfully installed, this will change to `Succeeded`.

  In addition to the status, information about key events that occur can be viewed under the Conditions section of the same page. After a successful installation, a condition with the following message is displayed: `install strategy completed with no errors`.
- To check the status of the installed operator by using the command line:

  ```shell
  oc get csv
  ```

  The command returns a list of installed operators. The installation is successful if the value in the `PHASE` column for the Event Processing operator is `Succeeded`.
Note: If the operator is installed into a specific namespace, then it will only appear under the associated project. If the operator is installed for all namespaces, then it will appear under any selected project. If the operator is installed for all namespaces, and you select all projects from the Project dropdown, the operator will be shown multiple times in the resulting list, once for each project.
When the Event Processing operator is installed, the following additional operators will appear in the installed operator list:
- Operand Deployment Lifecycle Manager.
- IBM Common Service Operator.
Install a Flink instance
Instances of Flink can be created after the IBM Operator for Apache Flink is installed. If the operator was installed into a specific namespace, then it can only be used to manage instances of Flink in that namespace.
If the operator was installed for all namespaces, then it can be used to manage instances of Flink in any namespace, including those created after the operator was deployed.
A Flink instance is installed by deploying the `FlinkDeployment` custom resource to a namespace managed by an instance of IBM Operator for Apache Flink.
Installing a Flink instance by using the web console
To install a Flink instance through the OpenShift Container Platform web console, do the following:
- Log in to the OpenShift Container Platform web console using your login credentials.
- Expand the Operators dropdown and select Installed Operators to open the Installed Operators page.
- Expand the Project dropdown and select the project the instance is installed in. Click the operator called IBM Operator for Apache Flink managing the project.

  Note: If the operator is not shown, it is either not installed or not available for the selected namespace.
- In the Operator Details dashboard, click the Flink Deployment tab.
- Click the Create FlinkDeployment button to open the Create FlinkDeployment panel. You can use this panel to define a `FlinkDeployment` custom resource.
From here you can install by using the YAML view or the form view. For advanced configurations or to install one of the samples, see installing by using the YAML view.
Installing a Flink instance by using the YAML view
Alternatively, you can configure the `FlinkDeployment` custom resource by editing YAML documents. To do this, select YAML view.
A number of sample configurations are provided on which you can base your deployment. These range from quick start deployments for non-production development to large scale clusters ready to handle a production workload. Alternatively, a pre-configured YAML file containing the custom resource can be dragged and dropped onto this screen to apply the configuration.
To view the samples, complete the following steps:
- Click the Samples tab to view the available sample configurations.
- Click the Try it link under any of the samples to open the configuration in the Create FlinkDeployment panel.
More information about these samples is available in the planning section. You can base your deployment on the sample that most closely reflects your requirements and apply customizations on top as required. You can also directly edit the custom resource YAML by clicking on the editor.
When modifying the sample configuration, the updated document can be exported from the Create FlinkDeployment panel by clicking the Download button and re-imported by dragging the resulting file back into the window.
Note: If you experiment with IBM Operator for Apache Flink and want a minimal CPU and memory footprint, the Quick Start sample is the smallest and simplest example. For the smallest production setup, use the Minimal Production sample configuration.
Important: All Flink samples except Quick Start use a `PersistentVolumeClaim` (PVC), which must be deployed manually as described in planning.
- In all Flink samples, just as in any `FlinkDeployment` custom resource, accept the license agreement (`spec.flinkConfiguration.license.accept: 'true'`), and set the required licensing configuration parameters for your deployment:

  ```yaml
  spec:
    flinkConfiguration:
      license.use: <license-use-value>
      license.license: L-KCVZ-JL5CRM
      license.accept: 'true'
  ```

  Where `<license-use-value>` must be either `EventAutomationProduction` or `EventAutomationNonProduction`, depending on your use case.
- To secure your communication between Flink pods, add the following snippet to the `spec.flinkConfiguration` section:

  ```yaml
  spec:
    flinkConfiguration:
      security.ssl.enabled: 'true'
      security.ssl.truststore: /opt/flink/tls-cert/truststore.jks
      security.ssl.truststore-password: <jks-password>
      security.ssl.keystore: /opt/flink/tls-cert/keystore.jks
      security.ssl.keystore-password: <jks-password>
      security.ssl.key-password: <jks-password>
      kubernetes.secrets: '<jks-secret>:/opt/flink/tls-cert'
  ```
To deploy a Flink instance, use the following steps:
- Complete any changes to the sample configuration in the Create FlinkDeployment panel.
- Click Create to begin the installation process.
- Wait for the installation to complete.
Installing a Flink instance by using the form view
Note: For advanced configurations, such as configuring parameters under `spec.flinkConfiguration`, see installing by using the YAML view.
To configure a `FlinkDeployment` custom resource in the Form view, do the following:
- Enter a name for the instance in the Name field.
- You can optionally configure fields such as Job Manager, Task Manager, or Job to suit your requirements.

  Note: Do not fill in the Flink Version and Image fields, as they are automatically filled by IBM Operator for Apache Flink.
- Switch to the YAML view, accept the license agreement (`spec.flinkConfiguration.license.accept: 'true'`), and set the required licensing configuration parameters for your deployment. For example:

  ```yaml
  spec:
    flinkConfiguration:
      license.use: <license-use-value>
      license.license: L-KCVZ-JL5CRM
      license.accept: 'true'
  ```

  Where `<license-use-value>` must be either `EventAutomationProduction` or `EventAutomationNonProduction`, depending on your deployment.

  Note: License configuration parameters for your Flink instance can only be set by using the YAML view.
- To secure your communication between Flink pods, switch to the YAML view, and add the following snippet to the `spec.flinkConfiguration` section:

  ```yaml
  spec:
    flinkConfiguration:
      security.ssl.enabled: 'true'
      security.ssl.truststore: /opt/flink/tls-cert/truststore.jks
      security.ssl.truststore-password: <jks-password>
      security.ssl.keystore: /opt/flink/tls-cert/keystore.jks
      security.ssl.keystore-password: <jks-password>
      security.ssl.key-password: <jks-password>
      kubernetes.secrets: '<jks-secret>:/opt/flink/tls-cert'
  ```
- Scroll down and click the Create button to deploy the Flink instance.
- Wait for the installation to complete.
Installing a Flink instance by using the CLI
To install an instance of Flink from the command line, you must first prepare a `FlinkDeployment` custom resource configuration in a YAML file.
A number of sample configuration files are available in GitHub, where you can select the GitHub tag for your Flink version, and then go to `/cr-examples/flinkdeployment` to access the samples. These range from quick start deployments for non-production development to large-scale clusters ready to handle a production workload.
Important: All Flink samples except Quick Start use a `PersistentVolumeClaim` (PVC), which must be deployed manually as described in planning.
To deploy a Flink instance, run the following commands:
- Prepare a `FlinkDeployment` custom resource in a YAML file by using the information provided in the Apache Flink documentation.

  Note: Do not include the fields `spec.image` and `spec.flinkVersion`, as they are automatically included by IBM Operator for Apache Flink.

  - Accept the license agreement (`spec.flinkConfiguration.license.accept: 'true'`), and set the required licensing configuration parameters for your deployment:

    ```yaml
    spec:
      flinkConfiguration:
        license.use: <license-use-value>
        license.license: L-KCVZ-JL5CRM
        license.accept: 'true'
    ```

    Where `<license-use-value>` must be either `EventAutomationProduction` or `EventAutomationNonProduction`, depending on your deployment.
  - To secure your communication between Flink pods, add the following snippet to the `spec.flinkConfiguration` section:

    ```yaml
    spec:
      flinkConfiguration:
        security.ssl.enabled: 'true'
        security.ssl.truststore: /opt/flink/tls-cert/truststore.jks
        security.ssl.truststore-password: <jks-password>
        security.ssl.keystore: /opt/flink/tls-cert/keystore.jks
        security.ssl.keystore-password: <jks-password>
        security.ssl.key-password: <jks-password>
        kubernetes.secrets: '<jks-secret>:/opt/flink/tls-cert'
    ```

    Where `<jks-secret>` is the secret containing the keystores and truststores for your deployment, and `<jks-password>` is the password for those stores.
-
-
Set the project where your
FlinkDeployment
custom resource will be deployed in:oc project <project-name>
-
Apply the configured
FlinkDeployment
custom resource:oc apply -f <custom-resource-file-path>
For example:
oc apply -f flinkdeployment_demo.yaml
-
Wait for the installation to complete.
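The `<jks-secret>` referenced in the TLS step can be any Kubernetes secret that carries the keystore and truststore files; because the secret is mounted at `/opt/flink/tls-cert`, the data keys must match the file names used in the `security.ssl` paths. A minimal sketch, with placeholder names and contents:

```yaml
# Hypothetical secret providing the JKS stores mounted at /opt/flink/tls-cert.
# The secret name and the base64 values are placeholders: generate the JKS
# files with your own certificate tooling, then encode them with base64.
apiVersion: v1
kind: Secret
metadata:
  name: <jks-secret>
  namespace: <project-name>
type: Opaque
data:
  keystore.jks: <base64-encoded-keystore>
  truststore.jks: <base64-encoded-truststore>
```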
Install an Event Processing instance
Instances of Event Processing can be created after the Event Processing operator is installed. If the operator was installed into a specific namespace, then it can only be used to manage instances of Event Processing in that namespace. If the operator was installed for all namespaces, then it can be used to manage instances of Event Processing in any namespace, including those created after the operator was deployed.
When installing an instance of Event Processing, ensure you are using a namespace that the operator is managing.
Retrieving the Flink REST endpoint
To retrieve the REST endpoint URL, do the following:
- Expand Networking in the navigation on the left, and select Services.
- Select your service to open Service details.
- Your endpoint URL is the hostname and the port separated by a colon (:). For example, if your hostname is `my-flink-rest.namespace.svc.cluster.local` and the port is `8081`, your REST endpoint URL is `my-flink-rest.namespace.svc.cluster.local:8081`.
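If you prefer the command line to the console, the same details can be read with `oc`. This is a sketch only: the service name is a placeholder, and the jsonpath assumes the REST port is the first port listed on the service.

```shell
# Hypothetical service name; substitute the name of your Flink REST service.
# Prints <service>.<namespace>.svc.cluster.local:<port>.
oc get service <flink-rest-service> -n <project-name> \
  -o jsonpath='{.metadata.name}.{.metadata.namespace}.svc.cluster.local:{.spec.ports[0].port}'
```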
Installing an Event Processing instance by using the web console
To install an Event Processing instance through the OpenShift Container Platform web console, do the following:
- Log in to the OpenShift Container Platform web console using your login credentials.
- Expand the Operators dropdown and select Installed Operators to open the Installed Operators page.
- Expand the Project dropdown and select the project that the instance will be installed in. Then click the IBM Event Processing operator that is managing the project.

  Note: If the operator is not shown, it is either not installed or not available for the selected namespace.
- In the Operator Details dashboard, click the Event Processing tab.
- Click the Create EventProcessing button to open the Create EventProcessing panel. You can use this panel to define an `EventProcessing` custom resource.
From here you can install by using the YAML view or the form view. For advanced configurations or to install one of the samples, see installing by using the YAML view.
Installing an Event Processing instance by using the YAML view
You can configure the `EventProcessing` custom resource by editing YAML documents. To do this, select YAML view.
A number of sample configurations are provided on which you can base your deployment. These range from smaller deployments for non-production development or general experimentation to large scale clusters ready to handle a production workload. Alternatively, a pre-configured YAML file containing the custom resource sample can be dragged and dropped onto this screen to apply the configuration.
To view the samples, complete the following steps:
- Click the Samples tab to view the available sample configurations.
- Click the Try it link under any of the samples to open the configuration in the Create EventProcessing panel.
More information about these samples is available in the planning section. You can base your deployment on the sample that most closely reflects your requirements and apply customizations as required.
Note: If experimenting with Event Processing for the first time, the Quick Start sample is the smallest and simplest example that can be used to create an experimental deployment. For a production setup, use the Production sample configuration.
When modifying the sample configuration, the updated document can be exported from the Create EventProcessing panel by clicking the Download button and re-imported by dragging the resulting file back into the window. You can also directly edit the custom resource YAML by clicking on the editor.
When modifying the sample configuration, ensure that the following fields are updated:
- The Flink REST endpoint in the `spec.flink.endpoint` field.
- To secure the communication between Event Processing and Flink deployments, identify the secret that contains the same truststore as your Flink deployment and the secret containing the password for this truststore. Then, add these secrets to the `spec.flink.tls` section. For example:

  ```yaml
  spec:
    flink:
      tls:
        secretKeyRef:
          key: <key-containing-password-value>
          name: <flink-jks-password-secret>
        secretName: <flink-jks-secret>
  ```

- The `spec.license.accept` field in the custom resource YAML is set to `true`, and the correct values are selected for the `spec.license.license` and `spec.license.use` fields before deploying the Event Processing instance. See the licensing section for more details about selecting the correct values.
To deploy an Event Processing instance, use the following steps:
- Complete any changes to the sample configuration in the Create EventProcessing panel.
- Click Create to begin the installation process.
- Wait for the installation to complete.
- You can now verify your installation and consider other post-installation tasks.
Installing an Event Processing instance by using the form view
Alternatively, you can configure an `EventProcessing` custom resource by using the interactive form. You can load samples into the form and then edit them as required.
To view a sample in the form view, complete the following steps:
- Select YAML view in the Configure via section at the top of the form.
- Click the Samples tab to view the available sample configurations.
- Click the Try it link under any of the samples.
- Select Form view in the Configure via section to switch back to the form view with the data from the sample populated.
- Edit as required.
Note: If experimenting with Event Processing for the first time, the Quick Start sample is the smallest and simplest example that can be used to create an experimental deployment. For a production setup, use the Production sample configuration.
To configure an `EventProcessing` custom resource, complete the following steps:
- Enter a name for the instance in the Name field.
- Under License Acceptance, select the accept checkbox.
- Ensure that the correct values for License and Use are entered.
- Under Flink, enter the Flink REST endpoint in the flink > endpoint text box.
- To secure your communication between Event Processing and Flink deployments, enter the TLS configuration in flink > tls.
- You can optionally configure other components such as storage, and TLS to suit your requirements.
- Scroll down and click the Create button to deploy the Event Processing instance.
- Wait for the installation to complete.
- You can now verify your installation and consider other post-installation tasks.
Installing an Event Processing instance by using the CLI
To install an instance of Event Processing from the command line, you must first prepare an `EventProcessing` custom resource configuration in a YAML file.
A number of sample configuration files are available in GitHub, where you can select the GitHub tag for your Event Processing version, and then go to /cr-examples/eventprocessing
to access the samples. These sample configurations range from smaller deployments for non-production development or general experimentation to large scale clusters ready to handle a production workload.
More information about these samples is available in the planning section. You can base your deployment on the sample that most closely reflects your requirements and apply customizations as required.
Note: If experimenting with Event Processing for the first time, the Quick Start sample is the smallest and simplest example that can be used to create an experimental deployment. For a production setup, use the Production sample configuration.
When modifying the sample configuration, ensure that the following fields are updated:
- The Flink REST endpoint in the `spec.flink.endpoint` field.
- To secure the communication between Event Processing and Flink deployments, identify the secret that contains the same truststore as your Flink deployment and the secret containing the password for this truststore. Then, add these secrets to the `spec.flink.tls` section. For example:

  ```yaml
  spec:
    flink:
      tls:
        secretKeyRef:
          key: <key-containing-password-value>
          name: <flink-jks-password-secret>
        secretName: <flink-jks-secret>
  ```

- The `spec.license.accept` field in the custom resource YAML is set to `true`, and the correct values are selected for the `spec.license.license` and `spec.license.use` fields before deploying the Event Processing instance.
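Putting these fields together, a minimal `EventProcessing` custom resource might look like the following sketch. The apiVersion, instance name, license ID, and placeholder values are assumptions for illustration; base your real file on one of the GitHub samples for your Event Processing version.

```yaml
# Sketch only: the apiVersion and license ID shown here are assumptions.
# Verify both against the sample files for your version before applying.
apiVersion: events.ibm.com/v1beta1
kind: EventProcessing
metadata:
  name: my-event-processing
  namespace: <project-name>
spec:
  license:
    accept: true
    license: <license-id>
    use: <license-use-value>
  flink:
    endpoint: <flink-rest-endpoint>
    tls:
      secretKeyRef:
        key: <key-containing-password-value>
        name: <flink-jks-password-secret>
      secretName: <flink-jks-secret>
```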
To deploy an Event Processing instance, run the following commands:

- Set the project where your `EventProcessing` custom resource will be deployed:

  ```shell
  oc project <project-name>
  ```

- Apply the configured `EventProcessing` custom resource:

  ```shell
  oc apply -f <custom-resource-file-path>
  ```

  For example:

  ```shell
  oc apply -f production.yaml
  ```
- Wait for the installation to complete.
- You can now verify your installation and consider other post-installation tasks.
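One hedged way to watch for completion from the command line is to query the custom resource status. The jsonpath below is an assumption; the exact status fields can differ between versions, so compare it against the output of a plain `oc get eventprocessing` first.

```shell
# Hypothetical status check; the .status.phase field is an assumption and
# may differ in your version of the operator.
oc get eventprocessing <instance-name> -n <project-name> \
  -o jsonpath='{.status.phase}'
```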