A friendly introduction to virtualising Virtualmin on Ubuntu with Incus

Below are notes from setting up four incus instances of Ubuntu 24.04 on a physical server on a LAN that was already running Ubuntu 24.04 desktop.

Two are container instances (not Docker) and two are virtual instances.

Of the two container instances, one uses a NAT setup and the other a LAN-accessible macvlan setup.

The two virtual instances follow the same pattern: one NAT setup and one LAN-accessible macvlan setup.

A NAT setup instance can only be accessed in a regular manner from localhost.

A LAN-accessible macvlan setup can be accessed from the LAN but not from localhost. This is not a bug; it is designed into the Linux kernel.

So can an incus instance be accessed both from localhost and from the LAN? Not without a level of unfriendly-looking expertise that goes beyond the intended scope of a friendly introduction.

To keep with the friendly theme, setting up the incus web UI is included, but that is it, because it is not necessary. When accessing the web UI, follow the instructions it presents to either create a new certificate or use an existing one.
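
For reference, trusting a browser-generated certificate from the command line looks roughly like the sketch below; the file name incus-ui.crt is only an example, use whatever file the web UI has you create or download.

# on the server, trust the certificate created through the web UI (example file name)
incus config trust add-certificate ~/Downloads/incus-ui.crt

# confirm the certificate is now trusted
incus config trust list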

Webmin and Virtualmin should, of themselves, be OK in a pure Linux container; at least Virtualmin installs. If there are problems, they are likely due to utilities these programs use, which can probably be replaced with utilities that do not need to reach from user space into the kernel (something that cannot be done from a container).
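
As a rough sketch (not part of the original notes), installing Virtualmin inside one of the containers created further below looks something like this; check the Virtualmin site for the current install script URL and options before relying on it.

# enter the LAN accessible container (created later in these notes)
incus exec mc1 -- bash

# inside the container: fetch and run the Virtualmin install script
apt update && apt -y install wget
wget -O virtualmin-install.sh https://software.virtualmin.com/gpl/scripts/virtualmin-install.sh
sh virtualmin-install.sh
# Virtualmin should then answer on https://mc1:10000 from the LAN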

Most rented VPSes do not allow nested virtualization, so a container is likely necessary if you want to try this on a VPS.
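
A quick way to check whether virtual instances are possible at all on a given host or VPS (a sketch, not from the original notes) is to look for KVM support:

# virtual instances need /dev/kvm on the host; containers do not
test -e /dev/kvm && echo "KVM available: --vm instances should work" || echo "no KVM: stick to containers"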

John Heenan

References:

Installing incus:

sudo su -

mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc

sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc

EOF'


apt update
apt -y install incus
apt install incus-ui-canonical
apt -y install qemu-system # for qemu virtual instances managed by incus

echo "INCUS_UI=/opt/incus/ui" >> /etc/default/incus  # or /etc/default/environment

Configuring incus:


# make a default profile with defaults
# for home LAN, only change from defaults was to make server available over the network (yes)
incus admin init
#Would you like to use clustering? (yes/no) [default=no]:
#Do you want to configure a new storage pool? (yes/no) [default=yes]:
#Name of the new storage pool [default=default]:
#Name of the storage backend to use (dir, lvm, lvmcluster) [default=dir]:
#Would you like to create a new local network bridge? (yes/no) [default=yes]:
#What should the new bridge be called? [default=incusbr0]:
#What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
#What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
#Would you like the server to be available over the network? (yes/no) [default=no]: yes
#Address to bind to (not including port) [default=all]:
#Port to bind to [default=8443]:
#Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
#Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:

# systemctl restart incus # if below does not work
# browse to https://hostname:8443
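
If the browser cannot reach the web UI, it can help to confirm the server really is listening on the network; these checks are illustrative rather than part of the original notes.

# confirm incus is bound to port 8443
incus config get core.https_address
ss -tlnp | grep 8443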

Installing two different types of Ubuntu instances, using NAT, from the same image:


incus launch images:ubuntu/24.04 c1        # container
incus launch images:ubuntu/24.04 v1 --vm   # virtual

incus list -cns4t
#+------+---------+------------------------+-----------------+
#| NAME |  STATE  |          IPV4          |      TYPE       |
#+------+---------+------------------------+-----------------+
#| c1   | RUNNING | 10.79.158.18 (eth0)    | CONTAINER       |
#+------+---------+------------------------+-----------------+
#| v1   | RUNNING | 10.79.158.219 (enp5s0) | VIRTUAL-MACHINE |
#+------+---------+------------------------+-----------------+
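
If only a single service inside a NAT instance needs to be reachable from the LAN (say Virtualmin on port 10000), an incus proxy device is one common workaround, shown here as a sketch; the instance name, device name and ports are examples only.

# forward port 10000 on the host to port 10000 inside the NAT container c1
incus config device add c1 virtualmin-port proxy listen=tcp:0.0.0.0:10000 connect=tcp:127.0.0.1:10000
# LAN clients can then browse to https://<host LAN IP>:10000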

Creating a macvlan network and adding two more instances accessible from the LAN:


# add an incus macvlan network to allow access from LAN
ip address show
# choose an interface on LAN, such as enp3s0
incus network create macvlan --type=macvlan parent=enp3s0

incus launch images:ubuntu/24.04 mc1 -n macvlan      # container
incus launch images:ubuntu/24.04 mv1 -n macvlan --vm # virtual

incus list -cns4t
#+------+---------+------------------------+-----------------+
#| NAME |  STATE  |          IPV4          |      TYPE       |
#+------+---------+------------------------+-----------------+
#| c1   | RUNNING | 10.79.158.18 (eth0)    | CONTAINER       |
#+------+---------+------------------------+-----------------+
#| mc1  | RUNNING | 192.168.20.36 (eth0)   | CONTAINER       |
#+------+---------+------------------------+-----------------+
#| mv1  | RUNNING | 192.168.20.37 (enp5s0) | VIRTUAL-MACHINE |
#+------+---------+------------------------+-----------------+
#| v1   | RUNNING | 10.79.158.219 (enp5s0) | VIRTUAL-MACHINE |
#+------+---------+------------------------+-----------------+
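
To confirm the macvlan instances really picked up addresses from the LAN's DHCP server, checks along these lines can be used; they are illustrative rather than part of the original notes.

# show the macvlan network definition, including its parent interface
incus network show macvlan

# check the LAN address inside the macvlan container
incus exec mc1 -- ip -4 address show eth0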

Ping results:


sh -c 'cat <<EOF >> /etc/hosts
10.79.158.18 c1
192.168.20.36 mc1
192.168.20.37 mv1
10.79.158.219 v1
EOF'


#pinging results from localhost, as expected (not a bug)
ping c1  # ok
ping mc1 # fails
ping mv1 # fails
ping v1  # ok

#pinging from another pc on LAN, as expected (not a bug)
ping c1  # fails
ping mc1 # ok
ping mv1 # ok 
ping v1  # fails
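
If Virtualmin has been installed in mc1 (see the earlier sketch), the same pattern applies to reaching its web interface; from another PC on the LAN (with the same /etc/hosts entries), something like:

# -k because Virtualmin starts with a self-signed certificate
curl -k https://mc1:10000/
# from localhost the same request to mc1 fails, just like the ping above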


Accessing instances and force deleting instances from localhost:


# access from localhost; note the space between -- and bash
incus exec c1  -- bash
incus exec mc1 -- bash
incus exec mv1 -- bash
incus exec v1  -- bash

# forced deletion of instances from localhost
#incus delete c1  --force
#incus delete mc1 --force
#incus delete mv1 --force
#incus delete v1  --force

I will have a look over this as TrueNAS has moved away from KVM to Incus in their new versions.

Yes, I think they are using Incus LXC containers instead of KVM-style virtualising.

Below is a link to a solution I posted for the above setup that allows both containers and virtual instances to be accessible from both localhost and the LAN at the same time, not one or the other.

Two new instances were added. The solution meant the macvlan setup needed to be edited.

I am using a virtualised pfSense VM with a quad NIC on PCI passthrough, so I am hoping that will work when I use Virtualmin as a proxy. The limitation is the speed of the Ethernet rather than direct access.