Part 3: Setting up Virtual Machines
While Proxmox offers lightweight LXC containers, I chose VMs for this setup. Unlike containers, which share the host's kernel, each VM runs its own kernel, giving stronger isolation and a richer feature set.
The operating system for these VMs is Ubuntu Server. I went with it because I'm familiar with the platform and there's extensive community support. Although I initially installed a more recent version, it's generally recommended to use the Long-Term Support (LTS) version for production environments due to its focus on stability and extended support window.
Setting Up a VM in Proxmox
Setting up a VM in Proxmox is straightforward. For optimal performance and features, here are my recommended settings:
Memory: Enable the 'Ballooning Device' option. This allows dynamic memory allocation - the VM can request more RAM from the host when needed and release it when idle, improving overall resource utilisation.
Processor: Set the 'Type' to 'host'. This passes the host CPU's features directly to the VM, which can significantly boost performance by enabling all the instruction sets of the host CPU.
Machine Type: Select 'Q35'. This is the recommended machine type for modern systems as it provides support for more recent technologies out of the box, including PCI Express, NVMe emulation, and USB 3.0.
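If you prefer the command line, the same options can be applied with Proxmox's qm tool on the host. A sketch, assuming a VM with ID 100 (adjust the ID and memory sizes to your setup):

```shell
# Run on the Proxmox host as root; VM ID 100 is an example.
qm set 100 --machine q35                 # Q35 machine type
qm set 100 --cpu host                    # pass host CPU features to the VM
qm set 100 --memory 4096 --balloon 2048  # 4 GiB max, balloon down to 2 GiB
```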
QEMU Guest Agent
The QEMU Guest Agent is an important service that should be installed on all VMs running in Proxmox. It acts as a bridge between the Proxmox host and the guest operating system.
Key benefits include:
Graceful Shutdown and Reboot
Allows Proxmox to properly shut down or reboot the VM from the web interface instead of forcing the power off, which can corrupt data.
Live Snapshots
Crucial for creating consistent, live snapshots by quiescing the guest's file system during the backup process.
Accurate Information
Enables the Proxmox host to retrieve detailed information from the VM, such as its IP addresses, which then appears in the Proxmox summary panel.
Improved Resource Management
Facilitates features like memory ballooning.
Installation
First, ensure the 'Guest Agent' option is enabled in the VM's 'Options' tab within the Proxmox web interface. Then, connect to your Ubuntu Server VM and run the following commands.
Update your package lists:
sudo apt update
Install the agent:
sudo apt install qemu-guest-agent -y
Start and enable the service to ensure it runs on boot:
sudo systemctl start qemu-guest-agent
sudo systemctl enable qemu-guest-agent
After installation, a reboot is recommended. You should then see the VM's IP addresses on the 'Summary' page in the Proxmox UI.
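To confirm the agent is working, check the service inside the VM and query it from the Proxmox host. The VM ID 100 below is an example:

```shell
# Inside the guest: the service should be active and running
systemctl status qemu-guest-agent

# On the Proxmox host: the agent should respond to a ping,
# and you can pull the guest's IP addresses directly
qm agent 100 ping
qm agent 100 network-get-interfaces
```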
Setting Up Email Notifications
To configure the VM to send email notifications for services like unattended-upgrades, you'll need to set up an email relay. The process is identical to configuring the Proxmox host itself.
Please refer to the instructions in Part 1.
Installing Tailscale
Tailscale provides a secure and straightforward way to connect to your VMs from anywhere.
For installation instructions, see Part 1.
Unattended-Upgrades
The unattended-upgrades package is useful for maintaining system security by automatically installing the latest security updates. Configuring it to send email notifications ensures you're always aware of changes made to your system.
The primary configuration file is located at /etc/apt/apt.conf.d/50unattended-upgrades.
Open the configuration file using a text editor such as nano:
sudo nano /etc/apt/apt.conf.d/50unattended-upgrades
Enable and configure email notifications. Find the following line, uncomment it by removing the //, and replace the placeholder with your email address:
- //Unattended-Upgrade::Mail "root";
+ Unattended-Upgrade::Mail "[email protected]";
Specify when to receive emails. The Unattended-Upgrade::MailOnlyOnError option controls the frequency of notifications. To receive an email every time an upgrade occurs, ensure this line is either commented out or set to false:
// Unattended-Upgrade::MailOnlyOnError "true";
or
Unattended-Upgrade::MailOnlyOnError "false";
Save your changes and exit the editor (in nano, press Ctrl+X, then Y, and Enter).
To test your configuration, perform a dry run. This simulates the upgrade process and should trigger an email if everything is configured correctly and updates are pending:
sudo unattended-upgrades --debug --dry-run
Check the command output for any errors and monitor your inbox (including the spam folder) for the notification.
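Note that unattended-upgrades only runs automatically if periodic updates are enabled. On Ubuntu Server this is usually on by default, but you can confirm it in /etc/apt/apt.conf.d/20auto-upgrades, which should contain:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```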
Installing Docker and Docker Compose
Docker is a platform for developing, shipping, and running applications in containers. Docker Compose is a tool for defining and running multi-container Docker applications.
These steps follow the official Docker repository method, which is the recommended approach.
Set Up Docker's apt Repository
Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
Add the repository to apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
Install the Docker Packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
(Optional but Recommended) Add Your User to the Docker Group
This allows you to run Docker commands without sudo. You'll need to log out and back in for this change to take effect:
sudo usermod -aG docker $USER
Verify the Installation
Run the "hello-world" container:
docker run hello-world
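You can also confirm the versions of the installed components:

```shell
docker --version        # Docker Engine version
docker compose version  # Compose plugin version
docker buildx version   # Buildx plugin version
```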
Watchtower
Watchtower is a container that monitors your running Docker containers and automatically updates them to the latest image available. This simplifies maintenance and ensures your applications are always running the most recent, and often more secure, versions.
Watchtower runs as a Docker container itself. The simplest way to deploy it is with the following command:
docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower
This starts Watchtower and gives it access to the Docker socket, allowing it to manage other containers on the host. By default, it checks for new images every 24 hours.
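Watchtower's behaviour can be tuned with command-line arguments. As a sketch, the run below adds the --cleanup and --interval options (both are standard Watchtower flags) to remove old images after updating and to poll more frequently:

```shell
# Same as above, but clean up superseded images after each update
# and check for new images every hour (interval is in seconds)
docker run -d \
  --name watchtower \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --cleanup --interval 3600
```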
Portainer
Portainer is a powerful, lightweight management UI that allows you to easily manage your Docker environments. It provides a detailed overview of your containers, images, volumes, and networks, and allows you to deploy applications quickly through its web interface.
First, create a volume for Portainer to store its data:
docker volume create portainer_data
Now, run the Portainer Server container:
docker run -d -p 8000:8000 -p 9443:9443 --name portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
This starts Portainer Community Edition, exposes its UI on port 9443 (HTTPS), and ensures it restarts automatically.
Once running, access the Portainer UI by navigating to https://<your-vm-ip>:9443 in your web browser. You'll be prompted to create an administrator account on your first visit.
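If you prefer to manage Portainer declaratively, the docker run command above translates directly into a Compose file. Save this as docker-compose.yml and start it with docker compose up -d:

```yaml
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: always
    ports:
      - "8000:8000"
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data:
```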
Fail2ban
Fail2ban is an intrusion prevention software framework that protects servers from brute-force attacks. It monitors log files (e.g., /var/log/auth.log) for suspicious activity, such as repeated failed login attempts, and temporarily bans the offending IP addresses using firewall rules.
Installation and Configuration
Install the Fail2ban package:
sudo apt update
sudo apt install fail2ban -y
The default configuration is stored in /etc/fail2ban/jail.conf. You shouldn't edit this file directly, as package upgrades can overwrite it. Instead, create a local configuration file for your customisations; settings in jail.local override the defaults:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
Open your new local configuration file to customise the settings. For example, you can enable the SSH protection jail:
sudo nano /etc/fail2ban/jail.local
Inside jail.local, find the [sshd] section and ensure it's enabled:
[sshd]
enabled = true
port = ssh
logpath = %(sshd_log)s
backend = %(sshd_backend)s
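While you're in jail.local, the ban behaviour itself can be tuned in the [DEFAULT] section. These are standard Fail2ban settings; the values below are illustrative:

```
[DEFAULT]
# How long an offending IP stays banned
bantime = 1h
# Window in which failed attempts are counted
findtime = 10m
# Failures within findtime before a ban is applied
maxretry = 5
```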
Restart the Fail2ban service to apply the changes:
sudo systemctl restart fail2ban
You can check the status of your jails and banned IPs with:
sudo fail2ban-client status sshd
Testing VM Migration
VM migration is the process of moving a running virtual machine from one Proxmox host to another. It's a key feature for performing hardware maintenance without service interruption.
Currently, with local storage on each node, the migration process involves copying the entire VM disk image over the network. This works, but it can be very slow, especially over a 1GbE connection, and for offline migrations the VM is unavailable for the entire copy.
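Migration can be started from the web UI or from the command line on the source node. A sketch, where the VM ID (100) and target node name (pve2) are examples:

```shell
# Live-migrate VM 100 to node pve2; with local storage,
# --with-local-disks copies the disk over the network while the VM keeps running
qm migrate 100 pve2 --online --with-local-disks
```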
Future Improvements
The long-term goal for this project is to implement a High Availability (HA) storage solution, such as Ceph. Ceph is a distributed storage platform that provides a unified storage pool across all nodes in the cluster. When a VM's disk is stored on Ceph, the migration process becomes nearly instantaneous. This is because the disk image is already accessible to all nodes—only the VM's running state (the contents of its RAM) needs to be transferred over the network.
Upgrading the network infrastructure from 1GbE to 10GbE or faster is also planned. This will not only speed up local storage migrations but is also a prerequisite for achieving good performance with distributed storage systems like Ceph.