What’s new

IBM’s conditions of support for Apache Flink in IBM Event Automation have been expanded. For more information, see the support policy statement.

Find out what is new in Event Processing version 1.4.x.

Release 1.4.5

Processor node: deduplicate

Event Processing release 1.4.5 introduces the deduplicate node for removing duplicate events from your Event Processing flow. Based on one or more properties, the node identifies whether events on an ordered stream are unique within a set time interval, and filters out repeated events. For more information, see deduplicate node and the related tutorial.
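Conceptually, the node keys each event by the chosen properties and drops any event whose key was already seen within the interval. The following is a plain-Python sketch of that idea, not the node’s actual Flink implementation; the `order_id` property name is hypothetical.

```python
from datetime import datetime, timedelta

def deduplicate(events, keys, interval):
    """Yield only events whose key values have not been seen within
    the given interval; events must arrive in timestamp order."""
    last_seen = {}  # key tuple -> timestamp of the last emitted event
    for event in events:
        key = tuple(event[k] for k in keys)
        previous = last_seen.get(key)
        if previous is None or event["timestamp"] - previous > interval:
            last_seen[key] = event["timestamp"]
            yield event

events = [
    {"order_id": 1, "timestamp": datetime(2024, 1, 1, 12, 0, 0)},
    {"order_id": 1, "timestamp": datetime(2024, 1, 1, 12, 0, 30)},  # duplicate: within 1 minute
    {"order_id": 2, "timestamp": datetime(2024, 1, 1, 12, 1, 0)},   # different key: kept
    {"order_id": 1, "timestamp": datetime(2024, 1, 1, 12, 5, 0)},   # outside the interval: kept
]
unique = list(deduplicate(events, keys=["order_id"], interval=timedelta(minutes=1)))
# → three events remain: 12:00:00, 12:01:00, and 12:05:00
```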

Support for multiple event destination nodes

In Event Processing 1.4.5 and later, flows can terminate with multiple nodes, and results can be sent to several destination nodes (event destination or SQL destination). You can connect multiple destination nodes to any node that is not itself a destination node.

You can view Flink watermarks for any node in your flow

In Event Processing 1.4.5 and later, while running your flow, you can view Flink watermarks (displayed as date and time) for any node in the flow. To view watermarks, click the Customize view icon, and select Show Flink watermark. For more information, see running a flow.

Filter node: support for multiple filter expressions with the Assistant

In Event Processing 1.4.5 and later, you can create a filter expression with multiple conditions in the filter node by using the Assistant. You can create complex expressions with the Assistant and join them with AND and OR.
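A filter expression with multiple conditions joined by AND and OR is equivalent to a boolean expression over each event’s properties. The following sketch illustrates that equivalence in plain Python; the property names (`region`, `total`) and threshold are hypothetical, and this is not output produced by the Assistant itself.

```python
def keep(event):
    # (region is "EMEA" OR region is "APAC") AND total is greater than 500
    return (event["region"] == "EMEA" or event["region"] == "APAC") and event["total"] > 500

events = [
    {"region": "EMEA", "total": 750},
    {"region": "NA", "total": 900},
    {"region": "APAC", "total": 120},
]
filtered = [e for e in events if keep(e)]
# → only the EMEA event satisfies both conditions
```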

The IBM Operator for Apache Flink version 1.4.5 update includes Apache Flink version 1.20.2.

Documentation: Highlighting differences between versions

Any difference in features or behavior introduced by Event Processing 1.4.5 compared to 1.4.4 or earlier is highlighted in this documentation by using the following graphic: Event Processing 1.4.5 icon

Security and bug fixes

Event Processing release 1.4.5 and IBM Operator for Apache Flink version 1.4.5 contain security and bug fixes.

Release 1.4.4

Filter node: You can now manage output properties

The filter node now includes an Output properties pane to manage the output properties generated by this node. You can view, edit, or remove these properties as required.

Documentation: Highlighting differences between versions

Any difference in features or behavior introduced by Event Processing 1.4.4 compared to 1.4.3 or earlier is highlighted in this documentation by using the following graphic: Event Processing 1.4.4 icon

Security and bug fixes

Event Processing release 1.4.4 and IBM Operator for Apache Flink version 1.4.4 contain security and bug fixes.

Release 1.4.3

Security and bug fixes

Event Processing release 1.4.3 and IBM Operator for Apache Flink version 1.4.3 contain security and bug fixes.

Release 1.4.2

In Event Processing 1.4.2 and later, when running flows containing event destination nodes in the Event Processing authoring UI:

  • Only one Flink job is deployed to collect the output events displayed in the UI. In earlier releases, a second job was also deployed.
  • For flows containing database, watsonx.ai, or API nodes, the number of calls to the database, watsonx.ai, or the API server is halved. This optimization also prevents discrepancies where the output events displayed in the UI could differ from those written to the output Kafka topic when successive calls to the database, watsonx.ai, or the API server produce different results.

Note: When upgrading from Event Processing 1.4.1 or earlier, any flows that are running in the Event Processing authoring UI are automatically stopped. You can run those flows again after the upgrade of both Event Processing and IBM Operator for Apache Flink.

Enhancements for better insights of a running flow

In Event Processing 1.4.2 and later, when you run a flow, you can view the output events of any particular node and the number of output events for all nodes. You can also filter output events by searching for specific text to find matching events for any node.

Temporal join: Support for multiple join conditions in the primary key

In Event Processing 1.4.2 and later, you can add multiple join conditions in the primary key for the temporal join node.

Collection of usage metrics

To improve product features and performance, Event Processing 1.4.2 and later collects data about Event Processing instances by default. This is in addition to the data collected about Flink instances in 1.4.0 and later.

You can disable data collection at any time.

Support for Red Hat OpenShift Container Platform 4.19

Event Processing version 1.4.2 introduces support for Red Hat OpenShift Container Platform 4.19.

Documentation: Highlighting differences between versions

Any difference in features or behavior introduced by Event Processing 1.4.2 compared to 1.4.1 or earlier is highlighted in this documentation by using the following graphic: Event Processing 1.4.2 icon

Security and bug fixes

Event Processing release 1.4.2 and IBM Operator for Apache Flink version 1.4.2 contain security and bug fixes.

Release 1.4.1

Join: window join

In Event Processing 1.4.1 and later, you can use the window join node to merge two input event streams based on a join condition that matches events within the same time window.
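The window join semantics can be sketched in plain Python: two events are paired only when their join keys match and their timestamps fall into the same fixed time window. This is a conceptual illustration under assumed stream and property names (`orders`, `shipments`, `order_id`), not the node’s actual Flink implementation.

```python
from datetime import datetime, timedelta

def window_join(left, right, key, window):
    """Pair events from two streams whose join keys match and whose
    timestamps fall into the same fixed (tumbling) time window."""
    epoch = datetime(1970, 1, 1)
    def window_start(ts):
        return epoch + ((ts - epoch) // window) * window
    joined = []
    for l in left:
        for r in right:
            if l[key] == r[key] and window_start(l["timestamp"]) == window_start(r["timestamp"]):
                joined.append({**l, **r})  # right-side fields win on a name clash
    return joined

orders = [{"order_id": 1, "amount": 20, "timestamp": datetime(2024, 1, 1, 12, 0, 10)}]
shipments = [
    {"order_id": 1, "status": "packed", "timestamp": datetime(2024, 1, 1, 12, 0, 50)},
    {"order_id": 1, "status": "late", "timestamp": datetime(2024, 1, 1, 12, 2, 0)},
]
joined_events = window_join(orders, shipments, key="order_id", window=timedelta(minutes=1))
# → one pair: only the 12:00:50 shipment falls in the same one-minute window as the order
```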

Join: temporal join

In Event Processing 1.4.1 and later, you can use the temporal join node to merge a main event source with the most recent supplementary event source based on a join condition and timestamp.
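Conceptually, a temporal join enriches each main event with the supplementary record that was current at the main event’s timestamp. The sketch below illustrates this in plain Python with hypothetical streams (`orders`, `rates`) and property names; it is not the node’s actual Flink implementation.

```python
import bisect
from datetime import datetime

def temporal_join(main, supplementary, key):
    """For each main event, attach the most recent supplementary record
    (by timestamp) with a matching key, as of the main event's timestamp."""
    versions = {}  # key value -> time-ordered list of supplementary records
    for s in sorted(supplementary, key=lambda s: s["timestamp"]):
        versions.setdefault(s[key], []).append(s)
    results = []
    for m in main:
        candidates = versions.get(m[key], [])
        times = [c["timestamp"] for c in candidates]
        # index of the last version at or before the main event's timestamp
        i = bisect.bisect_right(times, m["timestamp"]) - 1
        if i >= 0:
            results.append({**candidates[i], **m})  # main-event fields win on a name clash
    return results

rates = [
    {"currency": "EUR", "rate": 1.05, "timestamp": datetime(2024, 1, 1, 9, 0)},
    {"currency": "EUR", "rate": 1.10, "timestamp": datetime(2024, 1, 1, 11, 0)},
]
orders = [{"currency": "EUR", "amount": 100, "timestamp": datetime(2024, 1, 1, 10, 0)}]
enriched = temporal_join(orders, rates, key="currency")
# → the order picks up rate 1.05, the most recent rate as of 10:00
```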

Documentation: Highlighting differences between versions

Any difference in features or behavior introduced by Event Processing 1.4.1 compared to 1.4.0 or earlier is highlighted in this documentation by using the following graphic: Event Processing 1.4.1 icon

Security and bug fixes

Event Processing release 1.4.1 and IBM Operator for Apache Flink version 1.4.1 contain security and bug fixes.

Release 1.4.0

Collection of usage metrics

To improve product features and performance, Event Processing 1.4.0 and later collects data about the usage of deployments by default. Data is collected about all Flink application and session job instances.

You can disable data collection at any time.

New tutorial: nudge customers with abandoned cart by using the watsonx.ai node

A new tutorial is available that shows how you can use the watsonx.ai node to check for abandoned shopping carts and attempt to persuade customers to complete their purchase by highlighting the product with the most positive review.

Security and bug fixes

Event Processing release 1.4.0 and IBM Operator for Apache Flink version 1.4.0 contain security and bug fixes.