Introduction

Event Processing is a scalable, low-code event stream processing platform that you can use to transform and act on events in real time, turning events into insights.

In the low-code editor, you can author flows that bring events (messages) from Apache Kafka into your flow and apply the processing actions you want to take on those events.

Flows run as Apache Flink jobs. Apache Flink is a framework and distributed processing engine for stateful computations over event streams.

A flow is represented as a graph of event sources, processors (actions), and event destinations. You can use the processing results to obtain and share insights about your business data, or to build automations.
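To make the graph idea concrete, the following Python sketch models a flow as a pipeline of an event source, two processors, and an event destination. It is purely illustrative: the function names and event shapes are hypothetical and are not the product's API or the Flink runtime.

```python
# Illustrative model (not the product's API): a flow as a graph of
# an event source, processors (actions), and an event destination.

def source(events):
    """Event source: yields incoming events (stands in for a Kafka topic)."""
    yield from events

def filter_processor(events, predicate):
    """Processor (action): keeps only events that match a condition."""
    return (e for e in events if predicate(e))

def transform_processor(events, fn):
    """Processor (action): transforms each event."""
    return (fn(e) for e in events)

def destination(events):
    """Event destination: collects the results (stands in for an output topic)."""
    return list(events)

# Wire the nodes into a flow: source -> filter -> transform -> destination.
orders = [{"id": 1, "total": 30}, {"id": 2, "total": 120}, {"id": 3, "total": 250}]
flow_result = destination(
    transform_processor(
        filter_processor(source(orders), lambda e: e["total"] > 100),
        lambda e: {"id": e["id"], "large_order": True},
    )
)
print(flow_result)  # [{'id': 2, 'large_order': True}, {'id': 3, 'large_order': True}]
```

In the real product, each node in this chain corresponds to a node you place on the canvas, and the assembled flow is executed by Flink rather than in-process Python.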

The following diagram shows how the Event Processing capability fits into the overall IBM Event Automation architecture.

Event Processing architecture

Features

Event Processing features include:

  • A user interface (UI) designed to provide a low-code experience, including:
    • A free-form layout canvas to create flows, with drag-and-drop functionality to add and join nodes.
    • The option to test your event flow while constructing it.
    • The option to export flows to be deployed in other environments.
  • The IBM Operator for Apache Flink, which provides:
    • The runtime for the low-code editor.
    • The option to deploy flows exported from the low-code editor.
    • The option to deploy custom Flink workloads.