This document is intended to provide an overview of major release changes, updates, and gotchas.
Due to the expanding capabilities of the redfish shim scripts and the lack of support in python2 for printing without a trailing linefeed (chomp), I’m being forced to dump python2. This means RHEL 7 / CentOS 7 will need python3 installed in order to continue using xtoph_deploy.
Switching the default CNI for deployed Openshift >= 4.9 to OVNKubernetes. Versions of Openshift < 4.9 will continue to use OpenshiftSDN.
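For context, the CNI plugin is selected in the standard Openshift install-config.yaml via the networkType field; a minimal networking stanza looks like the sketch below (this only illustrates the stock install-config format, not how xtoph_deploy generates it):

```yaml
# Standard Openshift install-config.yaml networking stanza.
# networkType selects the CNI: OVNKubernetes (the new default here
# for >= 4.9) or OpenshiftSDN (< 4.9).
networking:
  networkType: OVNKubernetes
```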
BMC API support is improving. This is implemented with a set of what I’m calling 'shim' scripts. The playbooks interface with these shim scripts for specific functions (ie: power-on, power-off, power-status) and the shims then interface with the BMC to perform the actual function. Some shims use Redfish, some RACADM, others IPMI. There are shims for Dell now; HP shims are being developed.
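As a minimal sketch of the idea (the shim path, name, and arguments below are hypothetical illustrations, not the actual xtoph_deploy interface), a playbook task might delegate to a vendor shim like this:

```yaml
# Hypothetical example: a playbook task hands the power-status function
# off to a vendor-specific shim script, which talks to the BMC.
- name: Query node power state via BMC shim
  shell: "/usr/local/bin/shim-dell-redfish power-status {{ ipmi_fqdn }}"
  register: bmc_power_state
  changed_when: false
```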
Work was primarily done with the sample-configs/ovirt-mixed/* configs. Have a look at master-config.yml to see a sample of how to pass static parms to the pending RHCOS node.
Also, fixed up the bonded network setup to work on the custom built live ISOs.
Added config option h_rhcosBOND, which passes config options to set up bonded NICs. From my testing, the MAC is inherited from the 2nd interface specified. Keep that in mind if you are using dhcp. Here is an example of a worker node configured with bond:dhcp:
```yaml
node7:
  ansible_host:     "{{ inventory_hostname }}.{{ workshop_extras.network_fqdn }}"
  h_pubIP:          "192.168.1.167"
  h_pubMAC:         "00:25:90:f0:80:e0"
  h_hwPROF:         "supermicro_x9drd_if"
  h_plPROF:         "baremetal"
  h_rsPROF:         "ocp_worker"
  h_ksPROF:         "pxe_nowait"
  h_pwrOFF:         "False"
  ipmi_fqdn:        "esiso1_bmc.lab.linuxsoup.com"
  h_rhcosDEV:       "sda"
  h_rhcosNIC:       "bond0:dhcp"
  h_rhcosBOND:      "bond0:enp5s0f1,enp5s0f0:mode=active-backup"
  h_rhcosUSBdelay:  "0"
```
Recent updates to xtoph_deploy allow the specification of multiple libvirt/kvm platform nodes. Thus, you can deploy openshift across multiple hypervisors without needing to implement ovirt or redhat-virtualization (rhv). This only works with bridged networking; if you want NAT-based networking, you are locked into an all-in-one (single host) deployment.
It is now required that the platform type be specified as part of the node configuration. In the context of OCP4-Workshop, this is done in master-config.yml with h_plPROF, which should be set to 'ovirt', 'libvirt' or 'baremetal', the 3 currently supported platforms.
This also allows for the specification of multiple (different) platforms in one deployment. Meaning, if you have 2 RHV clusters for example, you can deploy one node to one cluster and a different node to another cluster, as sketched below. Testing is still to follow, but it should work.
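A minimal sketch of mixing platforms, assuming invented node names (the h_plPROF values are the supported ones; everything else is illustrative):

```yaml
# Hypothetical master-config.yml fragment: two nodes, each targeting a
# different platform. Node names and the pairing are invented examples.
node1:
  h_plPROF: "ovirt"     # this node is deployed to an RHV cluster
node2:
  h_plPROF: "libvirt"   # this node lands on a standalone kvm hypervisor
```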
There’s some work to do in order to support multiple libvirt platforms, but that is the next step.
All of the custom configs and default configs were stripped of their top-level variable names. When the variables are loaded, the import_vars task now sets the top-level name.
Also, the configs were stripped of the repetitive nested node/variable names, which existed only because I had not figured out how to properly prune and graft a sub-element of one hash onto a sub-element of another hash. Now the variable files are simple, and they can be safely copied from the role/vars directory into configs and modified. A before/after sketch follows.
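To make that concrete, here is a hedged before/after sketch of a vars file (the key names below are invented for illustration and are not the actual xtoph_deploy schema):

```yaml
# BEFORE (hypothetical): every vars file repeated its own top-level key
# and nested names:
#
#   hw_profiles:
#     supermicro_x9drd_if:
#       bmc_type: "ipmi"
#
# AFTER (hypothetical): the file carries only the profile body, and the
# import_vars task grafts it under the proper top-level name at load time:
supermicro_x9drd_if:
  bmc_type: "ipmi"
```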