Hi all,
I want to run Virtualmin inside a dedicated LXD system container rather than directly on the host. The host stays minimal (LXD only); the container gets Virtualmin with the full stack (Apache, Postfix/Dovecot, BIND), with its own dedicated public IPv4 via macvlan or a bridged interface (no NAT).
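For concreteness, the NIC attachment in my tests looks roughly like this (container and interface names are placeholders):

```
# Placeholders: container "vm1", host uplink "eno1", host bridge "br0".
# Option A: macvlan — the container gets its own MAC directly on the uplink.
lxc config device add vm1 eth0 nic nictype=macvlan parent=eno1

# Option B: bridged — attach to an existing host bridge on the public subnet.
lxc config device add vm1 eth0 nic nictype=bridged parent=br0
```

Either way the public IP gets configured statically inside the container, not handed out by LXD's NAT'd lxdbr0.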
I’ve tested this in my LAN-only homelab and the basics work: the installer runs cleanly, services start correctly, and Virtualmin manages them as expected. What I can’t verify at home is anything that depends on a real public IP, a working PTR record, or internet-facing mail.
- Nothing in my homelab tests suggested a fundamental blocker, but I’d like to know if there are gotchas that only surface in production.
- In my testing I disabled Virtualmin's firewall module and handled filtering at the host level instead (rough sketch after this list), since iptables/nftables don't work reliably in an unprivileged container. Is that the accepted pattern, or is there a better approach?
- From inside the container the interface appears as a standard eth0, so I'd expect Virtualmin and Postfix to detect the address correctly. Are there any settings (myhostname, inet_interfaces, HELO construction) that should be pinned explicitly rather than left on auto when the IP is a real public one? A sketch of what I mean follows the list.
- /etc/resolv.conf conflicts: in the homelab this wasn't an issue because I wasn't running BIND, but with it enabled Virtualmin will want nameserver 127.0.0.1 while LXD also writes to resolv.conf on container boot. The options I can see are pinning the file after setup, disabling LXD's DNS injection at the container level, or a systemd-resolved drop-in (also sketched below)… I'm not sure which is cleanest in practice.
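On the firewall point, this is the rough shape of the host-side ruleset from my testing (203.0.113.10 is a placeholder for the container's public IP, and this is a sketch, not a complete policy):

```
# Host-side nftables sketch. Caveat: the forward hook only sees this traffic
# on the bridged setup with br_netfilter enabled
# (net.bridge.bridge-nf-call-iptables=1); with macvlan, traffic to the
# container bypasses the host's netfilter hooks entirely.
table inet container_filter {
    chain forward {
        type filter hook forward priority 0; policy accept;
        ip daddr 203.0.113.10 ct state established,related accept
        # only the services Virtualmin actually exposes (10000 = Webmin UI)
        ip daddr 203.0.113.10 tcp dport { 22, 25, 53, 80, 143, 443, 465, 587, 993, 995, 10000 } accept
        ip daddr 203.0.113.10 udp dport 53 accept
        ip daddr 203.0.113.10 counter drop
    }
}
```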
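On the Postfix side, this is what I mean by pinning rather than leaving things on auto (hostname and IP are placeholders):

```
# Placeholders: mail.example.com / 203.0.113.10. Pin identity explicitly
# instead of letting Postfix derive it from gethostname() and the interface list.
postconf -e 'myhostname = mail.example.com'              # should match the PTR record
postconf -e 'inet_interfaces = 127.0.0.1, 203.0.113.10'  # rather than listening on "all"
postconf -e 'smtp_helo_name = $myhostname'               # explicit HELO/EHLO name
postfix reload
```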
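And for the resolv.conf question, the drop-in variant I'm considering would look something like this (assuming the container image runs systemd-resolved):

```
# Point systemd-resolved at the local BIND and free port 53 for it,
# rather than fighting LXD over /etc/resolv.conf or chattr +i'ing the file.
mkdir -p /etc/systemd/resolved.conf.d
cat > /etc/systemd/resolved.conf.d/local-bind.conf <<'EOF'
[Resolve]
DNS=127.0.0.1
DNSStubListener=no
EOF
systemctl restart systemd-resolved
# /run/systemd/resolve/resolv.conf lists the configured DNS (127.0.0.1 here)
# rather than the 127.0.0.53 stub, so point resolv.conf at it:
ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
```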
Appreciate any corrections or experience from anyone who has run this in production. The decision to use LXD rather than bare metal is deliberate and not up for debate; I'm looking for practical guidance on making it work, not a discussion of whether I should.