Setting environment variables
You can configure the Event Manager or the Event Gateway by setting environment variables. On operator-managed and Kubernetes Deployment Event Gateways, you specify the environment variables in a template override (env) that specifies one or more name-value pairs. On Docker gateways, add the environment variable to your Docker run command, for example: -e <variable name>=<value>.
Important: Remember to back up your gateway configuration after you make updates.
The format for Event Manager instances is:
spec:
  manager:
    template:
      pod:
        spec:
          containers:
            - name: manager
              env:
                - name: <name>
                  value: <value>
The format for operator-managed Event Gateway instances is:
spec:
  template:
    pod:
      spec:
        containers:
          - name: egw
            env:
              - name: <name>
                value: <value>
Where:
- <name> is the name of the environment variable that you want to configure.
- <value> is the value to set for that environment variable.
For Kubernetes Deployment Event Gateway instances, the path in the Kubernetes Deployment is spec.template.spec.containers.
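For illustration, the following sketch shows where an environment variable sits in such a Deployment; the container name and the placeholder variable are illustrative and not taken from a specific gateway Deployment:
apiVersion: apps/v1
kind: Deployment
# ...
spec:
  template:
    spec:
      containers:
        - name: egw   # container name is illustrative; use the gateway container name in your Deployment
          env:
            - name: <name>
              value: <value>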
For example, to enable trace logging in the Event Manager:
spec:
  manager:
    template:
      pod:
        spec:
          containers:
            - name: manager
              env:
                - name: TRACE_SPEC
                  value: "<package>:<trace level>"
Enabling persistent storage
To persist the data that is entered into an Event Manager instance, configure persistent storage in your EventEndpointManagement configuration.
To enable persistent storage for EventEndpointManagement, set spec.manager.storage.type to persistent-claim, and then configure the storage in one of the following ways:
- Dynamic provisioning
- Providing a persistent volume
- Providing a persistent volume and persistent volume claim
Ensure that you have sufficient disk space for persistent storage.
Note: spec.manager.storage.type can also be set to ephemeral, although no persistence is provisioned with this configuration. This is not recommended for production use because it results in data loss.
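For completeness, the following sketch shows how ephemeral storage would be selected; as noted above, this configuration provides no persistence and is not recommended for production:
apiVersion: events.ibm.com/v1beta1
kind: EventEndpointManagement
# ...
spec:
  manager:
    storage:
      type: ephemeral
# ...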
Dynamic provisioning
If a dynamic storage provisioner is present on the system, you can use it to dynamically provision the persistence for Event Endpoint Management. To configure this, set spec.manager.storage.storageClassName to the name of the storage class that is provided by the provisioner.
apiVersion: events.ibm.com/v1beta1
kind: EventEndpointManagement
# ...
spec:
  license:
    # ...
  manager:
    storage:
      type: persistent-claim
      storageClassName: csi-cephfs
# ...
- Optionally, specify the storage size in storage.size (if not specified, the default value of "100Mi" is used). Ensure that the quantity suffix, such as Mi or Gi, is included.
- Optionally, specify the root storage path where data is stored in storage.root (for example, "/opt/storage").
- Optionally, specify the retention setting that is applied to the storage if the instance is deleted in storage.deleteClaim (for example, "true"), as shown in the sketch after this list.
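The following sketch combines these optional settings with the dynamic provisioning example; the size shown is illustrative:
apiVersion: events.ibm.com/v1beta1
kind: EventEndpointManagement
# ...
spec:
  license:
    # ...
  manager:
    storage:
      type: persistent-claim
      storageClassName: csi-cephfs
      size: "500Mi"        # illustrative size; include a quantity suffix such as Mi or Gi
      root: "/opt/storage" # root path where data is stored
      deleteClaim: true    # retention setting applied if the instance is deleted
# ...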
Providing persistent volumes
Before you install Event Endpoint Management, you can create a persistent volume for it to use as storage.
To use a persistent volume that you created earlier, set spec.manager.storage.selectors to match the labels on the persistent volume, and set spec.manager.storage.storageClassName to match the storageClassName on the persistent volume.
The following example creates a persistent volume claim to bind to a persistent volume with the label precreated-persistence: my-pv and storageClassName: manual.
Multiple labels can be added as selectors, and the persistent volume must have all labels present to match.
apiVersion: events.ibm.com/v1beta1
kind: EventEndpointManagement
# ...
spec:
  license:
    # ...
  manager:
    storage:
      type: persistent-claim
      selectors:
        precreated-persistence: my-pv
      storageClassName: manual
# ...
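For reference, a pre-created persistent volume that the claim in this example could bind to might look like the following sketch; the capacity, access mode, and hostPath volume source are illustrative assumptions and should be replaced with storage appropriate for your cluster:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
  labels:
    precreated-persistence: my-pv   # label matched by the selectors in the example above
spec:
  storageClassName: manual          # matches the storageClassName in the example above
  capacity:
    storage: 1Gi                    # illustrative capacity
  accessModes:
    - ReadWriteOnce                 # illustrative access mode
  hostPath:
    path: /data/eem                 # illustrative volume source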
- Optionally, specify the storage size in storage.size (if not specified, the default value of "100Mi" is used). Ensure that the quantity suffix, such as Mi or Gi, is included.
- Optionally, specify the root storage path where data is stored in storage.root (for example, "/opt/storage").
- Optionally, specify the retention setting that is applied to the storage if the instance is deleted in storage.deleteClaim (for example, "true").
Providing persistent volume and persistent volume claim
A persistent volume and persistent volume claim can be pre-created for Event Endpoint Management to use as storage.
To use this method, set spec.manager.storage.existingClaimName to match the name of the pre-created persistent volume claim.
apiVersion: events.ibm.com/v1beta1
kind: EventEndpointManagement
# ...
spec:
  license:
    # ...
  manager:
    storage:
      type: persistent-claim
      existingClaimName: my-existing-pvc
# ...
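For reference, the following is a minimal sketch of a pre-created persistent volume claim that could then be referenced as my-existing-pvc; the namespace, storage class, access mode, and size are illustrative assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-existing-pvc        # name referenced by existingClaimName above
  namespace: my-eem-namespace  # illustrative; create the claim in the namespace of the instance
spec:
  storageClassName: manual     # illustrative storage class
  accessModes:
    - ReadWriteOnce            # illustrative access mode
  resources:
    requests:
      storage: 1Gi             # illustrative size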
Deploying network policies for operator-managed Event Gateways
By default, the operator deploys an instance-specific network policy when an instance of EventEndpointManagement or EventGateway is created.
The deployment of these network policies can be turned off by setting spec.deployNetworkPolicies to false.
The following code snippet is an example of a configuration that turns off the deployment of the network policy:
apiVersion: events.ibm.com/v1beta1
kind: EventEndpointManagement
# ...
spec:
  license:
    # ...
  deployNetworkPolicies: false
# ...
---
apiVersion: events.ibm.com/v1beta1
kind: EventGateway
# ...
spec:
  license:
    # ...
  deployNetworkPolicies: false
Configuring ingress
If you are running on the Red Hat OpenShift Container Platform, routes are automatically configured to provide external access.
Optional: You can set a host for each exposed route on your Event Manager and operator-managed Event Gateway instances by setting values under spec.manager.endpoints[] in your EventEndpointManagement custom resource, and under spec.endpoints[] in your EventGateway custom resource.
If you are running on other Kubernetes platforms, the Event Endpoint Management operator creates ingress resources to provide external access. No default hostnames are assigned to the ingress resource, and you must set hostnames for each exposed endpoint that is defined for the Event Manager and Event Gateway instances.
For the Event Manager instance, the spec.manager.endpoints[] section of your EventEndpointManagement custom resource must contain entries for the following service endpoints:
- The Event Endpoint Management UI (service name: ui)
- The Event Gateway (service name: gateway)
- The Event Endpoint Management Admin API (service name: admin)
- The Event Endpoint Management server for deploying gateways and exposing the Admin API (service name: server)
Note:
- The server service endpoint is required to deploy an Event Gateway by using the Event Endpoint Management UI.
- The server service endpoint also exposes the Event Endpoint Management Admin API on the path /admin, and can be used for making API requests to Event Endpoint Management programmatically. The Admin API URL is displayed on the Profile page.
- The value that is supplied in endpoints[server].host must start with eem. (for example, eem.my-eem-server.mycluster.com).
For each service endpoint, set the following values:
- name is the name of the service: ui, gateway, admin, or server, as applicable.
- host is a DNS-resolvable hostname for accessing the named service.
For example:
apiVersion: events.ibm.com/v1beta1
kind: EventEndpointManagement
# ...
spec:
  manager:
    endpoints:
      - name: ui
        host: my-eem-ui.mycluster.com
      - name: gateway
        host: my-eem-gateway.mycluster.com
      - name: admin
        host: my-eem-admin.mycluster.com
      - name: server
        host: eem.my-eem-server.mycluster.com
For the operator-managed Event Gateway instance, set the gateway endpoint host in the spec.endpoints[] section of your EventGateway custom resource, as shown in the following code snippet:
apiVersion: events.ibm.com/v1beta1
kind: EventGateway
# ...
spec:
  license:
    # ...
  endpoints:
    - name: gateway
      host: my-gateway.mycompany.com
# ...
Ingress default settings
If you are not running on the Red Hat OpenShift Container Platform, the following ingress defaults are set unless overridden:
- class: The ingress class name is set by default to nginx. Set the class field on endpoints to use a different ingress class (see the sketch after this list).
- annotations: The following annotations are set by default on generated ingress endpoints:
  ingress.kubernetes.io/ssl-passthrough: 'true'
  nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
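For example, an endpoint that uses a different ingress class might be configured as in the following sketch; the class name my-ingress-class is illustrative:
apiVersion: events.ibm.com/v1beta1
kind: EventGateway
# ...
spec:
  license:
    # ...
  endpoints:
    - name: gateway
      host: my-gateway.mycompany.com
      class: my-ingress-class   # illustrative ingress class name
# ...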
If you specify a spec.manager.tls.ui.secretName on an EventEndpointManagement instance, the following reencrypt annotations are set on the ui ingress. Other ingresses are configured for pass-through.
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/configuration-snippet: proxy_ssl_name "<HOSTNAME>";
nginx.ingress.kubernetes.io/proxy-ssl-protocols: TLSv1.3
nginx.ingress.kubernetes.io/proxy-ssl-secret: <NAMESPACE>/<SECRETNAME>
nginx.ingress.kubernetes.io/proxy-ssl-verify: 'on'
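For reference, the spec.manager.tls.ui.secretName field described above would be set as in the following sketch; the secret name is illustrative:
apiVersion: events.ibm.com/v1beta1
kind: EventEndpointManagement
# ...
spec:
  manager:
    tls:
      ui:
        secretName: my-eem-ui-cert   # illustrative secret name
# ...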
Ingress annotations can be overridden by specifying an alternative set of annotations on an endpoint. The following code snippet is an example of overriding the annotations that are set on an operator-managed EventGateway gateway endpoint ingress.
apiVersion: events.ibm.com/v1beta1
kind: EventGateway
# ...
spec:
  license:
    # ...
  endpoints:
    - name: gateway
      host: my-gateway.mycompany.com
      annotations:
        some.annotation.foo: "true"
        some.other.annotation: value
# ...
Configuring external access to the operator-managed and Kubernetes Deployment Event Gateway
A Kafka client implementation might require access to at least one route or ingress for each broker that the client is expected to connect to. To present a route or an ingress for each broker, you can manually configure the number of routes that are associated with an operator-managed Event Gateway in the EventGateway custom resource or Kubernetes Deployment.
For example, you can set the number of routes in the spec.maxNumKafkaBrokers field of your EventGateway custom resource, as shown in the following code snippet:
apiVersion: events.ibm.com/v1beta1
kind: EventGateway
# ...
spec:
  license:
    # ...
  maxNumKafkaBrokers: 3
# ...
If no spec.maxNumKafkaBrokers value is provided, the default (20) is used. The spec.maxNumKafkaBrokers value must be greater than or equal to the total number of brokers that are managed by this Event Gateway.
Configuring gateway security on the Event Gateways
You can configure various settings that help protect the Event Gateway from uncontrolled resource consumption, such as excessive memory usage or connection exhaustion. Enable these features to help ensure that the gateway remains available and responsive.
For operator-managed gateways, the following table lists the parameters that are available in the security section of the EventGateway custom resource. All parameters are optional.
Parameter | Description | Default |
---|---|---|
spec.security.connection.closeDelayMs | The minimum delay in milliseconds that is applied when a connection is closed. This helps protect against connection spam. | 8000 |
spec.security.connection.closeJitterMs | An additional random delay (jitter) in milliseconds that is applied when a connection is closed. This helps protect against attacks by making the close delay less predictable. | 4000 |
spec.security.connection.perSubLimit | The maximum number of TCP connections that are allowed for each subscription. | -1 (no limit) |
spec.security.authentication.maxRetries | The maximum number of failed authentication attempts after which further attempts are blocked. | -1 (no limit) |
spec.security.authentication.retryBackoffMs | The backoff time in milliseconds between consecutive failed authentication attempts. | 0 |
spec.security.authentication.lockoutPeriod | The duration in seconds for which an account is locked after an unsuccessful authentication attempt. Set -1 for a permanent lockout. | 0 |
spec.security.request.maxSizeBytes | The maximum size that is allowed for the request payload, in bytes. | -1 (no limit) |
The default values for these parameters are shown in the following sample. A value of -1 represents no limit.
apiVersion: events.ibm.com/v1beta1
kind: EventGateway
# ...
spec:
  license:
    # ...
  security:
    connection:
      closeDelayMs: 8000
      closeJitterMs: 4000
      perSubLimit: -1
    authentication:
      maxRetries: -1
      retryBackoffMs: 0
      lockoutPeriod: 0
    request:
      maxSizeBytes: -1
# ...
For the Docker gateway, the equivalent environment variable names are:
CONNECTION_CLOSE_DELAY_MS
CONNECTION_CLOSE_JITTER_MS
MAX_CONNECTIONS_PER_SUBSCRIPTION
AUTHN_MAX_RETRIES
AUTHN_BACKOFF_DELAY_INCREMENT_MILLIS
AUTHN_LOCKOUT_PERIOD_SECONDS
KAFKA_MAX_MESSAGE_LENGTH
Add these properties as arguments to your Docker run command, for example: docker run ... -e MAX_CONNECTIONS_PER_SUBSCRIPTION=10 ...