Config-sync over the management interface – while not a best practice – can be a handy thing to have, whether because of a shortage of interfaces, limited switch capacity or some other reason.
This cropped up for me in a refresh / migration project where interfaces were changed out for fibre and didn’t come up immediately. Syncing over management allowed us to sync the machines and configure failover before attending to the interface issue.
Best practices for f5 High Availability configuration can be found here.
The management interface will not show up by default under the config-sync tab:
This has to be enabled using the following commands:
tmsh modify sys db configsync.allowmanagement value enable
tmsh save sys config
Refresh the page and we can now choose the management interface for config-sync:
The f5 master key is used to encrypt and decrypt everything secure on your f5 appliance, including certificate keys, passphrases and UCS configuration files; it is therefore an absolutely vital piece of information in certain situations.
If you have a synchronized cluster then this is not so much of an issue: when you add a new device to the cluster – be that a new member or an RMA replacement for a failed appliance – then the master key will be synchronized as well.
Where this can really impact your ability to get back up and running quickly is when you have a standalone appliance or are transferring config from one machine to another:
Standalone appliance fails, monitoring systems go crazy and red lights start blinking everywhere – you call it in and you have a new machine on site in 4 hours (if you bought the right support contract!)
You plug it in and give it a management IP and access the GUI
Feeling smug because you’ve taken regular backups and stored them offline, you upload your latest UCS archive to restore the configuration
Config load fails as it is unable to decrypt the SSL key passphrases, LDAP profile passphrases, cookie encryption passphrases etc. etc.
You may see the following in the logs: Decryption of the field (field_name) for object (object_name) failed
You can either trawl through the config, edit out the passphrases and re-key most of it from scratch…
…or change the master key to the one from the failed appliance, upload your UCS archive and engage ultra-smug mode.
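If you do end up taking the hard road of stripping the config by hand, the hunt can at least be scripted. A minimal sketch, assuming the bigip.conf extracted from your UCS archive is in the working directory (file names are examples):

```shell
# List every line carrying an encrypted secret, with line numbers,
# then write a copy with those lines removed, ready for manual re-keying.
grep -n 'passphrase' bigip.conf
sed '/passphrase/d' bigip.conf > bigip.conf.stripped
```

You will still need to re-enter each secret by hand afterwards, which is exactly why the master key route below is preferable.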
Changing the master key is very easily done using the “f5 Master Key Utility” (f5mku), and backing the key up should form part of your backup process for all your f5 appliances:
Backup the Master Key Using f5mku
Use the “-K” switch to display the master key and then copy the resulting key securely to an offline vault:
[root@f5] ~ # f5mku -K
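Rather than storing the key in plain text, you can encrypt it on its way off the box. A minimal sketch using openssl (assumes OpenSSL 1.1.1+ for the -pbkdf2 option; VAULT_PASS and the output file name are assumptions, not part of f5mku itself):

```shell
export VAULT_PASS='use-a-strong-vault-passphrase'   # assumed vault secret
# Capture and encrypt the master key in one step on the appliance:
f5mku -K | openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass env:VAULT_PASS -out masterkey.enc
# Decrypt it later, e.g. before restoring it to a replacement appliance:
openssl enc -d -aes-256-cbc -pbkdf2 -pass env:VAULT_PASS -in masterkey.enc
```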
Restore the Master Key Using f5mku
Use the “-r” switch to restore the key to your new appliance, passing the base64-encoded key you backed up earlier:
[root@f5] ~ # f5mku -r <key_from_backup>
Job done! Now you can upload the UCS config archive without needing to worry about decryption failures.
Note: f5mku Options
[root@f5] ~ # f5mku --h
f5mku: invalid option -- -
Usage: f5mku [d:?fhHr:t:uUvYZ]
Commands: (one of these must be specified)
-d bits generate a base64 encoded RSA key and output to stdout
-f fetch unit key
-Z dump debug information
-r key rekey with the specified master key (b64 encoded)
-? -h this help
-t # Timeout value in seconds (1-500)
-u Unit test posture (no HAL)
-U Test unit key functionality.
-H Force I/O to HAL storage
-v set verbose mode
-Y Answer Yes to any queries
Log messages (including debug) go to authpriv and local6 facilities.
ID 352856: “If an SCF is migrated between BIG-IP VE running on non-similar hypervisor software, a validation error may prevent configuration loading. Loading the configuration … BIGpipe interface creation error: 01070318:3: “The requested media for interface 1.1 is invalid.” When this condition is encountered on BIG-IP Virtual Edition, configuration may be fixed for import by removing the entire line that contains “media fixed” statements for each interface.”
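If your config does contain those statements, removing them can be scripted rather than hand-edited. A sketch, assuming GNU-style sed and the standard config path:

```shell
# Remove every line containing "media fixed" before importing the config,
# keeping a .bak copy of the original file.
sed -i.bak '/media fixed/d' /config/bigip_base.conf
```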
If, however, like me, you cannot find “media fixed” anywhere in your bigip_base.conf file, then it is most likely an issue with the vmxnet3 network adapters that are deployed by default.
My management adapter, also vmxnet3, came up fine, but interfaces 1.1, 1.2 and 1.3 remained uninitialised and any attempt to edit them just threw the error above.
My solution was to change the adapter types in the .vmx file for the virtual machine:
1. Shut down the machine
2. SSH / console into your ESXi host and change directory to /vmfs/volumes/<datastore_name>/<vm_directory>
3. Use the “vi” command to edit the <your_vm>.vmx file and change the “vmxnet3” entries to “e1000.” Note: you can generally leave the first interface (management) as vmxnet3.
4. Save the file and start up your machine – you should now be able to initialise and edit your interfaces under “Network” -> “Interfaces”
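The edit in step 3 can also be done with a single sed command instead of vi. A sketch, assuming the management NIC is ethernet0 and GNU-style sed; the datastore and file names are placeholders:

```shell
# Example paths - substitute your own datastore and VM directory names.
cd /vmfs/volumes/datastore1/bigip-ve
# Switch every NIC except ethernet0 (management) from vmxnet3 to e1000,
# keeping a .bak copy of the original .vmx file.
sed -i.bak '/^ethernet0\./!s/vmxnet3/e1000/' bigip-ve.vmx
```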
Job done, let me know if this works / doesn’t work for you!