You can back up and restore your Event Processing flows and Flink instances as follows.
Event Processing flows
You can export your existing Event Processing flows to save them, making them available to import later, as described in exporting flows.
Flink instances
A Flink savepoint is a consistent image of a Flink job’s execution state. Backing up your Flink instances involves backing up savepoints.
Prerequisites
This procedure assumes that you have the following deployed:
- An instance of Flink deployed by the IBM Operator for Apache Flink, configured with persistent storage through a PersistentVolumeClaim (PVC).
- Flink jobs as Application deployments.
The `FlinkDeployment` custom resource that configures your Flink instance must define the following parameters, each pointing to a different directory on the persistent storage:

- `spec.flinkConfiguration.state.checkpoints.dir`
- `spec.flinkConfiguration.state.savepoints.dir`
- `spec.flinkConfiguration.high-availability.storageDir` (if high availability is required)
Note: These directories are automatically created by Flink if they do not exist.
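As an illustration, the relevant entries in the `FlinkDeployment` custom resource might look like the following minimal sketch; the directory paths are placeholders and should point to the storage backed by your PVC:

```yaml
spec:
  flinkConfiguration:
    # Each entry points to a different directory on the persistent storage (paths are placeholders).
    state.checkpoints.dir: "file:///opt/flink/volume/flink-cp"
    state.savepoints.dir: "file:///opt/flink/volume/flink-sp"
    # Only required if high availability is configured.
    high-availability.storageDir: "file:///opt/flink/volume/flink-ha"
```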
Backing up
The backup process captures the latest state of a running flow and its job specification, allowing you to re-create the job from the saved state when required. To back up your Flink instance, update each of your deployed instances by editing their respective `FlinkDeployment` custom resources as follows:
- Ensure that the `status` section of the `FlinkDeployment` custom resource indicates that a JobManager is ready and the Flink job is running (a command-line check is sketched after these steps):

  ```yaml
  status:
    jobManagerDeploymentStatus: READY
    jobStatus:
      state: RUNNING
  ```
- Edit the `FlinkDeployment` custom resource and make the following changes (a non-interactive alternative is sketched after these steps):

  a. Set the value of `spec.job.upgradeMode` to `savepoint`.

  b. Set the value of `spec.job.state` to `running`.

  c. Set the value of `spec.job.savepointTriggerNonce` to an integer that has never been used before for that option.

  For example:

  ```yaml
  job:
    jarURI: local:///opt/flink/usrlib/sql-runner.jar
    args: ["/opt/flink/usrlib/sql-scripts/statements.sql"]
    savepointTriggerNonce: <integer value>
    state: running
    upgradeMode: savepoint
  ```
- Save a copy of the `FlinkDeployment` custom resource (for example, by exporting it to a file, as sketched after these steps).
- Keep the `FlinkDeployment` custom resource and the PVC containing the savepoint to make them available later for restoring your deployment.
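The readiness check in the first step can also be run from the command line. A minimal sketch, assuming `kubectl` access to the cluster; the deployment name and namespace are placeholders:

```shell
# Print the JobManager status and the job state of the FlinkDeployment (names are placeholders).
kubectl get flinkdeployment my-flink-deployment -n my-namespace \
  -o jsonpath='{.status.jobManagerDeploymentStatus} {.status.jobStatus.state}'
```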
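If you prefer not to edit the custom resource interactively, the same job settings can be applied with a merge patch. A sketch, assuming the same placeholder names; the nonce value is also a placeholder and must not have been used before:

```shell
# Set upgradeMode, state, and a new savepointTriggerNonce in a single patch (names and nonce are placeholders).
kubectl patch flinkdeployment my-flink-deployment -n my-namespace --type merge \
  -p '{"spec":{"job":{"upgradeMode":"savepoint","state":"running","savepointTriggerNonce":101}}}'
```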
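One way to save a copy of the custom resource is to export it to a file. A sketch, with placeholder resource and file names:

```shell
# Export the FlinkDeployment custom resource to a file for safekeeping (names are placeholders).
kubectl get flinkdeployment my-flink-deployment -n my-namespace -o yaml > my-flink-deployment-backup.yaml
```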
Restoring
To restore a previously backed-up Flink instance, ensure that the PVC bound to a PV containing the savepoint is available (a quick check is sketched below), then update your `FlinkDeployment` custom resource as follows.
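You can confirm that the PVC is present and bound before restoring. A sketch, with a placeholder PVC name and namespace:

```shell
# Verify that the PVC holding the savepoint data is bound (names are placeholders).
kubectl get pvc my-flink-pvc -n my-namespace
```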
- Edit the `FlinkDeployment` custom resource that you saved when backing up your instance:

  a. Set the value of `spec.job.upgradeMode` to `savepoint`.

  b. Set the value of `spec.job.state` to `running` to resume the Flink job.

  c. Ensure that the same directory is set for the parameters `spec.job.initialSavepointPath` and `spec.flinkConfiguration["state.savepoints.dir"]` (a way to look up the savepoint location is sketched after these steps).

  For example:

  ```yaml
  job:
    jarURI: local:///opt/flink/usrlib/sql-runner.jar
    args: ["/opt/flink/usrlib/sql-scripts/statements.sql"]
    state: running
    upgradeMode: savepoint
    initialSavepointPath: <savepoint directory>
    allowNonRestoredState: true
  ```
- Apply the modified `FlinkDeployment` custom resource (an example `kubectl apply` command is sketched after these steps).
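To find the value for `initialSavepointPath`, you can look up where the last savepoint was written. A sketch, assuming the operator records the last savepoint location in the `FlinkDeployment` status under `status.jobStatus.savepointInfo` (this field path may vary by operator version and is an assumption here); the resource name and namespace are placeholders:

```shell
# Read the last savepoint location from the status (field path is an assumption; names are placeholders).
kubectl get flinkdeployment my-flink-deployment -n my-namespace \
  -o jsonpath='{.status.jobStatus.savepointInfo.lastSavepoint.location}'
```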
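Applying the modified custom resource can be done with `kubectl apply`. A sketch, with a placeholder file name and namespace:

```shell
# Re-create the Flink deployment from the saved, modified custom resource (names are placeholders).
kubectl apply -f my-flink-deployment-backup.yaml -n my-namespace
```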