
Migrating from v2018 to v10

This guide provides steps to migrate an existing (Helm-based) v2018 IBM DataPower deployment to a v10 IBM DataPower deployment using the IBM DataPower Operator.

Prerequisites

This migration guide assumes you have all v10 resources ready to deploy. Specifically, it assumes the following prerequisites:

  • A v2018 Helm deployment of IBM DataPower
  • An IBM DataPower Operator deployed watching the relevant namespace
  • A functionally equivalent v10 DataPowerService Custom Resource (a minimal example is sketched after this list)
  • All relevant ConfigMaps and Secrets have been created
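
For reference, a minimal DataPowerService resource might look like the sketch below. This is illustrative only: the resource name v10-datapower, the Secret admin-credentials, and the ConfigMap v10-datapower-config are placeholder names, and the apiVersion, version, and license values should be verified against the operator documentation for your release.

# datapowerservice.yaml -- illustrative sketch; adjust names and values to your environment
apiVersion: datapower.ibm.com/v1beta3
kind: DataPowerService
metadata:
  name: v10-datapower
spec:
  replicas: 1
  version: 10.0.0.0
  license:
    accept: true
    use: nonproduction
  username: admin
  passwordSecret: admin-credentials        # Secret holding the admin password
  domains:
    - name: default
      dpApp:
        config:
          - v10-datapower-config           # ConfigMap with your DataPower configuration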

Overview

  1. Update v2018 Deployment with “migration labels”
  2. Create DataPowerService Custom Resource in namespace with single replica and “migration labels”
    • If resources are constrained, scale the v2018 deployment down by 1 replica
  3. Edit Service(s) to change Label Selectors to match “migration labels”
  4. Scale down v2018 and scale up v10, one replica at a time
  5. Uninstall Helm deployment
  6. Edit Service(s) to change Label Selectors from “migration labels” to v10 labels
  7. Remove “migration labels” from v10 StatefulSet

Step 1: Update v2018 with Migration Labels

To perform a side-by-side migration from Helm managed v2018 to Operator managed v10, you must bridge the gap with “migration labels.” These labels will be used to connect the v2018 and v10 Pods to the same API traffic via the relevant Services while scaling operations are performed. To add the label, you can edit the Deployment or StatefulSet directly:

kubectl edit Deployment v2018-ibm-datapower-dev

Where v2018-ibm-datapower-dev is the name of your v2018 Deployment.

The labels section of your Deployment YAML should look similar to:

spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: v2018
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: v2018-ibm-datapower-dev
        helm.sh/chart: ibm-datapower-dev-3.0.5
        release: v2018

You will now add a label, v2018.v10.datapower.migration: <release-name>, where <release-name> is the name of this release. This new label will be the intermediary selector for the Services handling your traffic. The purpose of <release-name> is to ensure uniqueness so that your Service doesn’t send traffic to another Deployment with an ongoing migration. Using the above example labels, your new label set should look like:

spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: v2018
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: v2018-ibm-datapower-dev
        helm.sh/chart: ibm-datapower-dev-3.0.5
        release: v2018
        v2018.v10.datapower.migration: v2018

Now save and exit your editor, which triggers an update of the v2018 Deployment. This causes a rolling update (the default strategy); wait until all Pods are back up before proceeding to the next step.
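
If you prefer to watch the rollout rather than polling Pods by hand, the standard rollout status command (using the example Deployment name from above) blocks until the update has finished:

kubectl rollout status deployment/v2018-ibm-datapower-dev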

Step 2: Deploy v10

Now you need to deploy v10. A prerequisite for this guide is a DataPowerService Custom Resource YAML that configures the same APIs as your v2018 deployment; the operator creates the v10 StatefulSet from it. Before installing this custom resource, you need to add the “migration labels” to it. To do so, add the following to your custom resource YAML:

spec:
  labels:
    v2018.v10.datapower.migration: <v2018-release-name>

With the migration label in place, install the DataPowerService resource.

kubectl apply -f datapowerservice.yaml

Wait a minute or two for the Pods to come up. Once they are up, you can inspect the labels:

kubectl describe pod <v10-pod>

The labels on the v10 pod should resemble:

Labels: app.kubernetes.io/component=datapower
        app.kubernetes.io/instance=test-v10-datapower
        app.kubernetes.io/managed-by=datapower-operator
        app.kubernetes.io/name=datapower
        app.kubernetes.io/version=10.0.0.0
        controller-revision-hash=v10-datapower-9fdb8f499
        v2018.v10.datapower.migration=v2018

Note that you now have a common label on both the v2018 and v10 Pods. This allows you to edit the Service objects so they send traffic to both sets of Pods.
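
One quick way to confirm the shared label is to list Pods by that selector; with the example release name v2018, both the Helm-managed and operator-managed Pods should appear:

kubectl get pods -l v2018.v10.datapower.migration=v2018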

Step 3: Edit Services to Select on Migration Labels

With the labels in place, it’s time to edit the relevant Service(s) to select on the “migration label” instead of the previous selectors.

kubectl edit Service <v2018-service>

In your editor, find the selector section, which should look similar to:

spec:
  selector:
    app.kubernetes.io/name: v2018-ibm-datapower-dev

This selector will need to change to the migration label:

spec:
  selector:
    v2018.v10.datapower.migration: v2018

Save and exit the editor. The Service will now be pointing at both the v2018 and v10 Pods.
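
If you prefer a non-interactive change, a kubectl patch that replaces the whole selector (assuming the example Service name and label value above) is a possible alternative to kubectl edit:

kubectl patch service <v2018-service> --type json \
  -p '[{"op":"replace","path":"/spec/selector","value":{"v2018.v10.datapower.migration":"v2018"}}]'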

Step 4: Scale down v2018; Scale up v10

With the traffic handling Service pointing at both v10 and v2018 pods, you can now safely scale down v2018 while scaling up v10.

To scale down v2018, edit the Deployment/StatefulSet and decrease spec.replicas by 1.

kubectl edit Deployment v2018-ibm-datapower-dev

To scale up v10, edit the DataPowerService resource and increase the spec.replicas by 1.

kubectl edit datapowerservice v10-datapower

Continue to scale down v2018 and scale up v10 until there are no more v2018 Pods. Between each iteration, wait for the new v10 pod to become Ready.
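
As a sketch, one iteration can also be done non-interactively; this assumes the example resource names used above and that v2018 currently has 3 replicas while v10 has 1:

# Drop v2018 from 3 to 2 replicas, then raise v10 from 1 to 2.
kubectl scale deployment v2018-ibm-datapower-dev --replicas=2
kubectl patch datapowerservice v10-datapower --type merge -p '{"spec":{"replicas":2}}'

# Watch until the new v10 Pod reports Ready before the next iteration.
kubectl get pods -l app.kubernetes.io/instance=test-v10-datapower -w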

Step 5: Delete Helm installation (Optional)

If the ability to roll back isn’t required, you can delete the Helm installation at this stage.

helm delete --purge v2018

Step 6: Edit Service to Select only v10

With the v2018 deployment fully scaled down, you can change the Service selectors to be v10-specific.

kubectl edit Service <v2018-service>

Your updated selector labels should look something like:

spec:
  selector:
    app.kubernetes.io/name: datapower
    app.kubernetes.io/instance: test-v10-datapower
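
To verify that traffic now flows only to v10, you can check the Service’s endpoints; only the v10 Pod IPs should be listed:

kubectl get endpoints <v2018-service>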

Step 7: Remove Migration Labels from v10 (Optional)

If desired, clean up the v10 StatefulSet by removing the custom migration label from the Custom Resource.

kubectl edit datapowerservice v10-datapower
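
In the editor, delete the v2018.v10.datapower.migration entry from spec.labels (and the labels block entirely if it is now empty). Alternatively, a non-interactive sketch using a JSON patch to remove just that entry, assuming the same resource and label names used throughout this guide:

kubectl patch datapowerservice v10-datapower --type json \
  -p '[{"op":"remove","path":"/spec/labels/v2018.v10.datapower.migration"}]'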