Custom Ubuntu OVAs
You may need to make specific configuration changes to the imported OVA template before using it to create or update EKS-A clusters.
Set up SSH Access for Imported OVA
An SSH user and key need to be configured to allow SSH login to the VM template.
Clone template to VM
Create an environment variable to hold the name of the modified VM/template
export VM=<vm-name>
Clone the imported OVA template to create a VM
govc vm.clone -on=false -vm=<full-path-to-imported-template> -folder=<full-path-to-folder-that-will-contain-the-VM> -ds=<datastore> $VM
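For example, with placeholder inventory paths and a placeholder datastore name (adjust these to your environment), the command might look like:
govc vm.clone -on=false -vm=/dc01/vm/Templates/ubuntu-2004-kube-v1-27 \
-folder=/dc01/vm/eksa-templates -ds=datastore01 $VM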
Configure VM with cloud-init and the VMX GuestInfo datasource
Create a metadata.yaml file
instance-id: cloud-vm
local-hostname: cloud-vm
network:
  version: 2
  ethernets:
    nics:
      match:
        name: ens*
      dhcp4: yes
Create a userdata.yaml file
#cloud-config
users:
  - default
  - name: <username>
    primary_group: <username>
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, wheel
    ssh_import_id: None
    lock_passwd: true
    ssh_authorized_keys:
    - <user's ssh public key>
Export environment variables containing the cloud-init metadata and userdata
export METADATA=$(gzip -c9 <metadata.yaml | { base64 -w0 2>/dev/null || base64; }) \
USERDATA=$(gzip -c9 <userdata.yaml | { base64 -w0 2>/dev/null || base64; })
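To sanity-check the encoded values, you can decode them back (a quick verification, assuming base64 and gunzip are available on your workstation):
echo "$METADATA" | base64 --decode | gunzip   # should print metadata.yaml
echo "$USERDATA" | base64 --decode | gunzip   # should print userdata.yaml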
Assign metadata and userdata to VM’s guestinfo
govc vm.change -vm "${VM}" \
-e guestinfo.metadata="${METADATA}" \
-e guestinfo.metadata.encoding="gzip+base64" \
-e guestinfo.userdata="${USERDATA}" \
-e guestinfo.userdata.encoding="gzip+base64"
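Optionally, verify that the guestinfo keys were set by listing the VM's ExtraConfig entries (the -e flag of govc vm.info shows them):
govc vm.info -e "$VM" | grep guestinfo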
Power the VM on
govc vm.power -on "$VM"
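Once the VM has booted and obtained a lease, you can look up its IP address with govc, for example:
govc vm.ip "$VM"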
Customize the VM
Once the VM is powered on and has fetched an IP address, SSH into the VM using the private key corresponding to the public key specified in userdata.yaml
ssh -i <private-key-file> <username>@<VM-IP>
At this point, you can make the desired configuration changes on the VM. The following sections describe some of the things you may want to do:
Add a Certificate Authority
Copy your CA certificate under /usr/local/share/ca-certificates and run sudo update-ca-certificates, which will place the certificate under the /etc/ssl/certs directory.
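For example, assuming your CA certificate file is named my-ca.crt (a placeholder name; update-ca-certificates expects a .crt extension):
sudo cp my-ca.crt /usr/local/share/ca-certificates/my-ca.crt
sudo update-ca-certificates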
Add Authentication Credentials for a Private Registry
If /etc/containerd/config.toml is not present initially, the default configuration can be generated by running the containerd config default > /etc/containerd/config.toml command. To configure credentials for a specific registry, create or modify /etc/containerd/config.toml as follows:
# explicitly use v2 config format
version = 2
# The registry host has to be a domain name or IP. Port number is also
# needed if the default HTTPS or HTTP port is not used.
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry1-host:port".auth]
username = ""
password = ""
auth = ""
identitytoken = ""
# The registry host has to be a domain name or IP. Port number is also
# needed if the default HTTPS or HTTP port is not used.
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry2-host:port".auth]
username = ""
password = ""
auth = ""
identitytoken = ""
Restart the containerd service with the sudo systemctl restart containerd command.
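As a quick check, you can confirm containerd restarted cleanly and, assuming crictl is installed on the VM and you have access to an image in the registry, try a pull that exercises the configured credentials:
sudo systemctl status containerd --no-pager
sudo crictl pull registry1-host:port/<image>:<tag>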
Convert VM to a Template
After you have customized the VM, you need to convert it to a template.
Clean up the machine and power off the VM
This step is needed because of a known issue in Ubuntu that results in cloned VMs getting the same DHCP IP address.
sudo su
echo -n > /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id
cloud-init clean -l --machine-id
Delete the hostname from the /etc/hostname file
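One way to do this, still in the root shell from the previous step, is to truncate the file:
echo -n > /etc/hostname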
Delete the networking config file
rm -rf /etc/netplan/50-cloud-init.yaml
Edit the cloud-init config to set preserve_hostname to false
vi /etc/cloud/cloud.cfg
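If you prefer a non-interactive edit, a sed one-liner such as the following achieves the same result (assuming the file already contains a preserve_hostname line):
sed -i 's/^preserve_hostname:.*/preserve_hostname: false/' /etc/cloud/cloud.cfg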
Power the VM down
govc vm.power -off "$VM"
Take a snapshot of the VM
It is recommended to take a snapshot of the VM as it reduces the provisioning time for the machines and makes cluster creation faster.
If you do snapshot the VM, you will not be able to customize the disk size of your cluster VMs. If you prefer not to take a snapshot, skip this step.
govc snapshot.create -vm "$VM" root
Convert VM to template
govc vm.markastemplate "$VM"
Tag the template appropriately as described here
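As a minimal sketch, assuming the tag and its category already exist in vCenter (the names and path below are placeholders):
govc tags.attach -c <tag-category> <tag-name> <full-path-to-template>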
Use this customized template to create/upgrade EKS Anywhere clusters