Step 2: Set Variables (group_vars)#

Overview#

  • In a text editor of your choice, open the template of the environment variables file. Make a copy of it called all.yaml and place it in the same directory as its template.
  • all.yaml is your master variables file, and you will likely reference it many times throughout the process. The default inventory can be found at inventories/default.
  • The variables marked with an X are required to be filled in. Many values are pre-filled or optional. Optional values are commented out; to use them, remove the # and fill them in.
  • This is the most important step in the process. Take the time to make sure everything here is correct.
  • Note on YAML syntax: only the lowest value in each hierarchy needs to be filled in. For example, at the top of the variables file, env and z don't need to be filled in, but cpc_name does. There are X's where input is required to help you with this.
  • Scroll the table to the right to see examples for each variable.
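To make the YAML-syntax note concrete, a dotted variable name like env.z.lpar1.hostname corresponds to nested YAML keys, and only the leaf values get filled in. A minimal sketch (the exact key layout comes from your template; values here are illustrative):

```yaml
env:               # no value here; it is only a parent key
  z:               # no value here either
    lpar1:
      hostname: kvm-host-01   # leaf value - this is what you fill in
      ip: 192.168.10.1        # leaf value - this is what you fill in
```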

1 - Controller#

| Variable Name | Description | Example |
|---|---|---|
| env.controller.sudo_pass | The password for the machine running Ansible (localhost). This will be used for only two things: to ensure you've installed the prerequisite packages if you're on Linux, and to add the login URL to your /etc/hosts file. | Pas$w0rd! |

2 - LPAR(s)#

| Variable Name | Description | Example |
|---|---|---|
| env.z.high_availability | Is this cluster spread across three LPARs? If yes, mark True. If not (just in one LPAR), mark False. | True |
| env.z.lpar1.create | To have Ansible create an LPAR and install RHEL on it for the KVM host, mark True. If using a pre-existing LPAR with RHEL already installed, mark False. | True |
| env.z.lpar1.hostname | The hostname of the KVM host. | kvm-host-01 |
| env.z.lpar1.ip | The IPv4 address of the KVM host. | 192.168.10.1 |
| env.z.lpar1.user | Username for the Linux admin on KVM host 1. Recommended to run as a non-root user with sudo access. | admin |
| env.z.lpar1.pass | The password for the user that will be created or exists on the KVM host. | ch4ngeMe! |
| env.z.lpar2.create | To create a second LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. | True |
| env.z.lpar2.hostname | (Optional) The hostname of the second KVM host. | kvm-host-02 |
| env.z.lpar2.ip | (Optional) The IPv4 address of the second KVM host. | 192.168.10.2 |
| env.z.lpar2.user | Username for the Linux admin on KVM host 2. Recommended to run as a non-root user with sudo access. | admin |
| env.z.lpar2.pass | (Optional) The password for the admin user on the second KVM host. | ch4ngeMe! |
| env.z.lpar3.create | To create a third LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. | True |
| env.z.lpar3.hostname | (Optional) The hostname of the third KVM host. | kvm-host-03 |
| env.z.lpar3.ip | (Optional) The IPv4 address of the third KVM host. | 192.168.10.3 |
| env.z.lpar3.user | Username for the Linux admin on KVM host 3. Recommended to run as a non-root user with sudo access. | admin |
| env.z.lpar3.pass | (Optional) The password for the admin user on the third KVM host. | ch4ngeMe! |
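Assuming the nested structure implied by the dotted variable names (check your template for the exact layout), the LPAR block of all.yaml might be sketched like this, with the optional second and third LPARs left commented out for a single-LPAR cluster (example values only):

```yaml
env:
  z:
    high_availability: False   # single-LPAR cluster in this sketch
    lpar1:
      create: True
      hostname: kvm-host-01
      ip: 192.168.10.1
      user: admin
      pass: ch4ngeMe!
    # lpar2 and lpar3 stay commented out unless high_availability is True
    # lpar2:
    #   create: True
    #   hostname: kvm-host-02
```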

3 - FTP Server#

| Variable Name | Description | Example |
|---|---|---|
| env.ftp.ip | IPv4 address for the FTP server that will be used to pass config files and the ISO to the KVM host LPAR(s) and bastion VM during their first boot. | 192.168.10.201 |
| env.ftp.user | Username to connect to the FTP server. Must have sudo and SSH access. | ftp-user |
| env.ftp.pass | Password to connect to the FTP server as the above user. | FTPpa$s! |
| env.ftp.iso_mount_dir | Directory path relative to the FTP root where the RHEL ISO is mounted. If the FTP root is /var/ftp/pub and the ISO is mounted at /var/ftp/pub/RHEL/8.5, then this variable would be RHEL/8.5. No slash before or after. | RHEL/8.5 |
| env.ftp.cfgs_dir | Directory path relative to the FTP root where configuration files can be stored. If the FTP root is /var/ftp/pub and you would like to store the configs at /var/ftp/pub/ocpz-config, then this variable would be ocpz-config. No slash before or after. | ocpz-config |
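A sketch of the FTP block in all.yaml, assuming the nested layout implied by the dotted names (example values only; note the relative paths carry no leading or trailing slash):

```yaml
env:
  ftp:
    ip: 192.168.10.201
    user: ftp-user
    pass: FTPpa$s!
    iso_mount_dir: RHEL/8.5   # relative to FTP root, no slash before or after
    cfgs_dir: ocpz-config     # relative to FTP root, no slash before or after
```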

4 - Red Hat Info#

| Variable Name | Description | Example |
|---|---|---|
| env.redhat.username | Red Hat username with a valid license or free trial of Red Hat OpenShift Container Platform (RHOCP), which comes with the necessary licenses for Red Hat Enterprise Linux (RHEL) and Red Hat CoreOS (RHCOS). | redhat.user |
| env.redhat.password | Password for the above Red Hat user's account. Used to auto-attach necessary subscriptions to the KVM host and bastion VM, and to pull live images for OpenShift. | rEdHatPa$s! |
| env.redhat.pull_secret | Pull secret for OpenShift, which comes from Red Hat's Hybrid Cloud Console. Make sure to enclose it in 'single quotes'. | '{"auths":{"cloud.openshift.com":{"auth":"b3Blb ... 4yQQ==","email":"redhat.user@gmail.com"}}}' |
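A sketch of this block in all.yaml (illustrative placeholder values, not real credentials; the nested layout is assumed from the dotted names). The single quotes around the pull secret matter: they make YAML treat the whole JSON blob, with its braces and double quotes, as one string:

```yaml
env:
  redhat:
    username: redhat.user
    password: rEdHatPa$s!
    # Single quotes keep the JSON pull secret intact as one YAML string
    pull_secret: '{"auths":{"cloud.openshift.com":{"auth":"...","email":"redhat.user@gmail.com"}}}'
```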

5 - Bastion#

| Variable Name | Description | Example |
|---|---|---|
| env.bastion.create | True or False. Would you like to create a bastion KVM guest to host essential infrastructure services like DNS, load balancer, firewall, etc.? You can de-select certain services with the env.bastion.options variables below. | True |
| env.bastion.vm_name | Name of the bastion VM. Arbitrary value. | bastion |
| env.bastion.resources.disk_size | How much of the storage pool would you like to allocate to the bastion (in gigabytes)? Recommended 30 or more. | 30 |
| env.bastion.resources.ram | How much memory would you like to allocate to the bastion (in megabytes)? Recommended 4096 or more. | 4096 |
| env.bastion.resources.swap | How much swap storage would you like to allocate to the bastion (in megabytes)? Recommended 4096 or more. | 4096 |
| env.bastion.resources.vcpu | How many virtual CPUs would you like to allocate to the bastion? Recommended 4 or more. | 4 |
| env.bastion.networking.ip | IPv4 address for the bastion. | 192.168.10.3 |
| env.bastion.networking.hostname | Hostname of the bastion. Will be combined with env.bastion.networking.base_domain to create a Fully Qualified Domain Name (FQDN). | ocpz-bastion |
| env.bastion.networking.subnetmask | Subnet mask of the bastion. | 255.255.255.0 |
| env.bastion.networking.gateway | IPv4 address of the bastion's gateway server. | 192.168.10.0 |
| env.bastion.networking.nameserver1 | IPv4 address of the server that resolves the bastion's hostname. | 192.168.10.200 |
| env.bastion.networking.nameserver2 | (Optional) A second IPv4 address that resolves the bastion's hostname. | 192.168.10.201 |
| env.bastion.networking.interface | Name of the networking interface on the bastion from Linux's perspective. Most likely enc1. | enc1 |
| env.bastion.networking.base_domain | Base domain that, when combined with the hostname, creates a fully qualified domain name (FQDN) for the bastion. | ihost.com |
| env.bastion.access.user | What would you like the admin's username to be on the bastion? If root, make the pass and root_pass vars the same. | admin |
| env.bastion.access.pass | The password for the bastion's admin user. If using root, make the pass and root_pass vars the same. | cH4ngeM3! |
| env.bastion.access.root_pass | The root password for the bastion. If using root, make the pass and root_pass vars the same. | R0OtPa$s! |
| env.bastion.options.dns | Would you like the bastion to host the DNS information for the cluster? True or False. If False, resolution must come from elsewhere in your environment. Make sure to add IP addresses for the KVM hosts, bastion, bootstrap, control, and compute nodes, AND api, api-int, and *.apps, as described here in section "User-provisioned DNS Requirements", Table 5. If True, this will be done for you in the dns and check_dns roles. | True |
| env.bastion.options.loadbalancer.on_bastion | Would you like the bastion to host the load balancer (HAProxy) for the cluster? True or False (boolean). If False, this service must be provided elsewhere in your environment, and the public and private IPs of the load balancer must be provided in the following two variables. | True |
| env.bastion.options.loadbalancer.public_ip | (Only required if env.bastion.options.loadbalancer.on_bastion is False.) The public IPv4 address of your environment's load balancer. api, apps, and *.apps must use this. | 192.168.10.50 |
| env.bastion.options.loadbalancer.private_ip | (Only required if env.bastion.options.loadbalancer.on_bastion is False.) The private IPv4 address of your environment's load balancer. api-int must use this. | 10.24.17.12 |
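Putting the bastion variables together, a sketch of the block in all.yaml (nested layout assumed from the dotted names; example values only):

```yaml
env:
  bastion:
    create: True
    vm_name: bastion
    resources:
      disk_size: 30
      ram: 4096
      swap: 4096
      vcpu: 4
    networking:
      ip: 192.168.10.3
      hostname: ocpz-bastion
      subnetmask: 255.255.255.0
      gateway: 192.168.10.0
      nameserver1: 192.168.10.200
      interface: enc1
      base_domain: ihost.com
    access:
      user: admin
      pass: cH4ngeM3!
      root_pass: R0OtPa$s!
    options:
      dns: True
      loadbalancer:
        on_bastion: True
        # public_ip and private_ip are only required when on_bastion is False
```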

6 - Cluster Networking#

| Variable Name | Description | Example |
|---|---|---|
| env.cluster.networking.metadata_name | Name to describe the cluster as a whole; can be anything if DNS will be hosted on the bastion. If DNS is not on the bastion, it must match your DNS configuration. Will be combined with the base_domain and hostnames to create Fully Qualified Domain Names (FQDN). | ocpz |
| env.cluster.networking.base_domain | The site name, i.e. where the cluster is being hosted. This will be combined with the metadata_name and hostnames to create FQDNs. | ihost.com |
| env.cluster.networking.nameserver1 | IPv4 address from which the cluster gets its hostname resolution. If env.bastion.options.dns is True, this should be the IP address of the bastion. | 192.168.10.200 |
| env.cluster.networking.nameserver2 | (Optional) A second IPv4 address from which the cluster gets its hostname resolution. If env.bastion.options.dns is True, this should be left commented out. | 192.168.10.201 |
| env.cluster.networking.forwarder | What IPv4 address will be used to make external DNS calls? Can use 1.1.1.1 or 8.8.8.8 as defaults. | 8.8.8.8 |
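A sketch of the cluster-networking block (layout assumed from the dotted names; example values only). The optional second nameserver stays commented out when DNS is hosted on the bastion:

```yaml
env:
  cluster:
    networking:
      metadata_name: ocpz
      base_domain: ihost.com
      nameserver1: 192.168.10.200
      # nameserver2: 192.168.10.201   # leave commented out if bastion hosts DNS
      forwarder: 8.8.8.8
```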

7 - Bootstrap Node#

| Variable Name | Description | Example |
|---|---|---|
| env.cluster.nodes.bootstrap.disk_size | How much disk space do you want to allocate to the bootstrap node (in gigabytes)? The bootstrap node is temporary and will be brought down automatically when its job completes. 120 or more recommended. | 120 |
| env.cluster.nodes.bootstrap.ram | How much memory would you like to allocate to the temporary bootstrap node (in megabytes)? Recommended 16384 or more. | 16384 |
| env.cluster.nodes.bootstrap.vcpu | How many virtual CPUs would you like to allocate to the temporary bootstrap node? Recommended 4 or more. | 4 |
| env.cluster.nodes.bootstrap.vm_name | Name of the temporary bootstrap node VM. Arbitrary value. | bootstrap |
| env.cluster.nodes.bootstrap.ip | IPv4 address of the temporary bootstrap node. | 192.168.10.4 |
| env.cluster.nodes.bootstrap.hostname | Hostname of the temporary bootstrap node. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match the DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN). | bootstrap-ocpz |
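A sketch of the bootstrap block (layout assumed from the dotted names; example values only):

```yaml
env:
  cluster:
    nodes:
      bootstrap:
        disk_size: 120
        ram: 16384
        vcpu: 4
        vm_name: bootstrap
        ip: 192.168.10.4
        hostname: bootstrap-ocpz
```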

8 - Control Nodes#

| Variable Name | Description | Example |
|---|---|---|
| env.cluster.nodes.control.disk_size | How much disk space do you want to allocate to each control node (in gigabytes)? 120 or more recommended. | 120 |
| env.cluster.nodes.control.ram | How much memory would you like to allocate to each control node (in megabytes)? Recommended 16384 or more. | 16384 |
| env.cluster.nodes.control.vcpu | How many virtual CPUs would you like to allocate to each control node? Recommended 4 or more. | 4 |
| env.cluster.nodes.control.vm_name | Names of the control node VMs. Arbitrary values. Usually no more or less than 3 are used. Must match the total number of IP addresses and hostnames for control nodes. Use provided list format. | control-1<br>control-2<br>control-3 |
| env.cluster.nodes.control.ip | IPv4 addresses of the control nodes. Use provided list formatting. | 192.168.10.5<br>192.168.10.6<br>192.168.10.7 |
| env.cluster.nodes.control.hostname | Hostnames for control nodes. Must match the total number of IP addresses for control nodes (usually 3). If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match the DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN). | control-01<br>control-02<br>control-03 |
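The "provided list format" above is a standard YAML block sequence. A sketch of the control-node block, assuming the nested layout implied by the dotted names (example values only; the three lists must all have the same length):

```yaml
env:
  cluster:
    nodes:
      control:
        disk_size: 120
        ram: 16384
        vcpu: 4
        vm_name:          # one entry per control node
          - control-1
          - control-2
          - control-3
        ip:               # must match the number of vm_name entries
          - 192.168.10.5
          - 192.168.10.6
          - 192.168.10.7
        hostname:         # must match the number of ip entries
          - control-01
          - control-02
          - control-03
```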

9 - Compute Nodes#

| Variable Name | Description | Example |
|---|---|---|
| env.cluster.nodes.compute.disk_size | How much disk space do you want to allocate to each compute node (in gigabytes)? 120 or more recommended. | 120 |
| env.cluster.nodes.compute.ram | How much memory would you like to allocate to each compute node (in megabytes)? Recommended 16384 or more. | 16384 |
| env.cluster.nodes.compute.vcpu | How many virtual CPUs would you like to allocate to each compute node? Recommended 2 or more. | 2 |
| env.cluster.nodes.compute.vm_name | Names of the compute node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. Must match the total number of IP addresses and hostnames for compute nodes. Use provided list format. | compute-1<br>compute-2 |
| env.cluster.nodes.compute.ip | IPv4 addresses of the compute nodes. Must match the total number of VM names and hostnames for compute nodes. Use provided list formatting. | 192.168.10.8<br>192.168.10.9 |
| env.cluster.nodes.compute.hostname | Hostnames for compute nodes. Must match the total number of IP addresses and VM names for compute nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match the DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN). | compute-01<br>compute-02 |
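As with the control nodes, the compute lists use YAML block sequences and can be extended past two entries as long as the three lists stay the same length. A sketch (nested layout assumed; example values only):

```yaml
env:
  cluster:
    nodes:
      compute:
        disk_size: 120
        ram: 16384
        vcpu: 2
        vm_name:
          - compute-1
          - compute-2
        ip:
          - 192.168.10.8
          - 192.168.10.9
        hostname:
          - compute-01
          - compute-02
```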

10 - Infra Nodes#

| Variable Name | Description | Example |
|---|---|---|
| env.cluster.nodes.infra.disk_size | (Optional) Set up compute nodes that are made for infrastructure workloads (ingress, monitoring, logging)? How much disk space do you want to allocate to each infra node (in gigabytes)? 120 or more recommended. | 120 |
| env.cluster.nodes.infra.ram | (Optional) How much memory would you like to allocate to each infra node (in megabytes)? Recommended 16384 or more. | 16384 |
| env.cluster.nodes.infra.vcpu | (Optional) How many virtual CPUs would you like to allocate to each infra node? Recommended 2 or more. | 2 |
| env.cluster.nodes.infra.vm_name | (Optional) Names of additional infra node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. Must match the total number of IP addresses and hostnames for infra nodes. Use provided list format. | infra-1<br>infra-2 |
| env.cluster.nodes.infra.ip | (Optional) IPv4 addresses of the infra nodes. This list can be expanded to any number of nodes, minimum 2. Use provided list formatting. | 192.168.10.8<br>192.168.10.9 |
| env.cluster.nodes.infra.hostname | (Optional) Hostnames for infra nodes. Must match the total number of IP addresses for infra nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match the DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN). | infra-01<br>infra-02 |
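Since infra nodes are optional, in the template they would stay commented out until needed. A sketch of what the uncommented block might look like (nested layout assumed from the dotted names; example values only):

```yaml
# Uncomment to add dedicated infrastructure nodes (minimum 2):
# env:
#   cluster:
#     nodes:
#       infra:
#         disk_size: 120
#         ram: 16384
#         vcpu: 2
#         vm_name:
#           - infra-1
#           - infra-2
#         ip:
#           - 192.168.10.8
#           - 192.168.10.9
#         hostname:
#           - infra-01
#           - infra-02
```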

11 - (Optional) Packages#

| Variable Name | Description | Example |
|---|---|---|
| env.pkgs.galaxy | A list of Ansible Galaxy collections that will be installed during the setup playbook. The collections listed are required. Feel free to add more as needed; just make sure to follow the same list format. | community.general |
| env.pkgs.controller | A list of packages that will be installed on the machine running Ansible during the setup playbook. Feel free to add more as needed; just make sure to follow the same list format. | openssh |
| env.pkgs.kvm | A list of packages that will be installed on the KVM host during the setup_kvm_host playbook. Feel free to add more as needed; just make sure to follow the same list format. | qemu-kvm |
| env.pkgs.bastion | A list of packages that will be installed on the bastion during the setup_bastion playbook. Feel free to add more as needed; just make sure to follow the same list format. | haproxy |
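Each of these variables is a YAML list, so extra packages are added as extra list entries. A sketch (nested layout assumed; the single entries shown come from the examples above):

```yaml
env:
  pkgs:
    galaxy:
      - community.general
    controller:
      - openssh
    kvm:
      - qemu-kvm
    bastion:
      - haproxy
      # add more packages here, one per line, same format
```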
12 - (Optional) OpenShift and CoreOS Download Links#

| Variable Name | Description | Example |
|---|---|---|
| env.openshift.client | Link to the mirror for the OpenShift client from Red Hat. Feel free to change to a different version, but make sure it is for s390x architecture. | https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/stable/openshift-client-linux.tar.gz |
| env.openshift.installer | Link to the mirror for the OpenShift installer from Red Hat. Feel free to change to a different version, but make sure it is for s390x architecture. | https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/stable/openshift-install-linux.tar.gz |
| env.coreos.kernel | Link to the mirror of the CoreOS kernel to be used for the bootstrap, control, and compute nodes. Feel free to change to a different version, but make sure it is for s390x architecture. | https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.9/latest/rhcos-4.9.0-s390x-live-kernel-s390x |
| env.coreos.initramfs | Link to the mirror of the CoreOS initramfs to be used for the bootstrap, control, and compute nodes. Feel free to change to a different version, but make sure it is for s390x architecture. | https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.9/latest/rhcos-4.9.0-s390x-live-initramfs.s390x.img |
| env.coreos.rootfs | Link to the mirror of the CoreOS rootfs to be used for the bootstrap, control, and compute nodes. Feel free to change to a different version, but make sure it is for s390x architecture. | https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.9/latest/rhcos-4.9.0-s390x-live-rootfs.s390x.img |
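A sketch of these link variables in all.yaml, using the mirror URLs from the examples above (nested layout assumed from the dotted names; swap versions as needed, keeping s390x builds):

```yaml
env:
  openshift:
    client: https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/stable/openshift-client-linux.tar.gz
    installer: https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/stable/openshift-install-linux.tar.gz
  coreos:
    kernel: https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.9/latest/rhcos-4.9.0-s390x-live-kernel-s390x
    initramfs: https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.9/latest/rhcos-4.9.0-s390x-live-initramfs.s390x.img
    rootfs: https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.9/latest/rhcos-4.9.0-s390x-live-rootfs.s390x.img
```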

13 - (Optional) OCP Install Config#

| Variable Name | Description | Example |
|---|---|---|
| env.install_config.api_version | Kubernetes API version for the cluster. These install_config variables will be passed to the OCP install-config file. This file is templated in the get_ocp role during the setup_bastion playbook. To make more fine-tuned adjustments to the install-config, you can find it at roles/get_ocp/templates/install-config.yaml.j2. | v1 |
| env.install_config.compute.architecture | Computing architecture for the compute nodes. Must be s390x for clusters on IBM zSystems. | s390x |
| env.install_config.compute.hyperthreading | Enable or disable hyperthreading on compute nodes. Recommended enabled. | Enabled |
| env.install_config.control.architecture | Computing architecture for the control nodes. Must be s390x for clusters on IBM zSystems. | s390x |
| env.install_config.control.hyperthreading | Enable or disable hyperthreading on control nodes. Recommended enabled. | Enabled |
| env.install_config.cluster_network.cidr | IPv4 block for internal cluster networking, in Classless Inter-Domain Routing (CIDR) notation. Recommended to keep as is. | 10.128.0.0/14 |
| env.install_config.cluster_network.host_prefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | 23 |
| env.install_config.cluster_network.type | The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN (recommended) or OVNKubernetes. | OpenShiftSDN |
| env.install_config.service_network | The IP address block for services, as an array with an IP address block in CIDR format. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | 172.30.0.0/16 |
| env.install_config.fips | True or False (boolean) for whether or not to use the United States' Federal Information Processing Standards (FIPS). Not yet certified on IBM zSystems. Enclose in 'single quotes'. | 'false' |
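A sketch of the install-config block using the defaults above (nested layout assumed from the dotted names). Note the quoting on fips, which the table calls for:

```yaml
env:
  install_config:
    api_version: v1
    compute:
      architecture: s390x
      hyperthreading: Enabled
    control:
      architecture: s390x
      hyperthreading: Enabled
    cluster_network:
      cidr: 10.128.0.0/14
      host_prefix: 23       # /23 per node -> 510 pod IPs each
      type: OpenShiftSDN
    service_network: 172.30.0.0/16
    fips: 'false'           # single quotes, per the table above
```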

14 - (Optional) Proxy#

| Variable Name | Description | Example |
|---|---|---|
| proxy_env.http_proxy | (Optional) A proxy URL to use for creating HTTP connections outside the cluster. Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: http://username:pswd@ip:port | http://ocp-admin:Pa$sw0rd@9.72.10.1:80 |
| proxy_env.https_proxy | (Optional) A proxy URL to use for creating HTTPS connections outside the cluster. Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: https://username:pswd@ip:port | https://ocp-admin:Pa$sw0rd@9.72.10.1:80 |
| proxy_env.no_proxy | (Optional) A comma-separated list (no spaces) of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. When using a proxy, all necessary IPs and domains for your cluster will be added automatically. See roles/get_ocp/templates/install-config.yaml.j2 for more details on the template. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com but not y.com. Use * to bypass the proxy for all listed destinations. | example.com,192.168.10.1 |
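Since the proxy settings are optional, in the template they would stay commented out unless your environment requires them. A sketch with the example values from the table (illustrative credentials only):

```yaml
# Uncomment only if the cluster must reach the internet through a proxy:
# proxy_env:
#   http_proxy: http://ocp-admin:Pa$sw0rd@9.72.10.1:80
#   https_proxy: https://ocp-admin:Pa$sw0rd@9.72.10.1:80
#   no_proxy: example.com,192.168.10.1   # comma-separated, no spaces
```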

15 - (Optional) Misc#

| Variable Name | Description | Example |
|---|---|---|
| env.language | What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here, in the "Locale" column of Table 2.1. | en_US.UTF-8 |
| env.timezone | Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here. | America/New_York |
| env.ansible_key_name | (Optional) Name of the SSH key that Ansible will use to connect to hosts. | ansible-ocpz |
| env.ocp_key_name | Comment to describe the SSH key used for OCP. Arbitrary value. | OCPZ-01 key |
| env.bridge_name | (Optional) Name of the macvtap bridge that will be created on the KVM host. | macvtap-net |
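A sketch of these miscellaneous settings in all.yaml (layout assumed from the dotted names; example values only):

```yaml
env:
  language: en_US.UTF-8        # RHEL locale, UTF-8 language code
  timezone: America/New_York
  ansible_key_name: ansible-ocpz
  ocp_key_name: OCPZ-01 key    # free-form comment describing the OCP SSH key
  bridge_name: macvtap-net
```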