Virtualmin inside an LXD container

Hi all,

I want to run Virtualmin inside a dedicated LXD system container rather than directly on the host. The host stays minimal (LXD only); the container gets Virtualmin with the full stack (Apache, Postfix/Dovecot, BIND), with its own dedicated public IPv4 via macvlan or a bridged interface (no NAT).
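For concreteness, here is a minimal sketch of the container setup I have in mind. The container name, host NIC, and image are placeholders; the bridged variant assumes a host bridge `br0` already exists:

```shell
# Launch a system container for Virtualmin ("vmin" is a placeholder name).
lxc launch ubuntu:22.04 vmin

# Attach a macvlan NIC so the container sits directly on the host's
# network segment with its own public IP (parent "eth0" is assumed).
lxc config device add vmin eth0 nic nictype=macvlan parent=eth0

# Alternative: bridged, if the host already has a bridge "br0":
# lxc config device add vmin eth0 nic nictype=bridged parent=br0
```

The static public IP would then be configured inside the container (netplan or equivalent) rather than via NAT on the host.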

I’ve tested this in my LAN-only homelab and the basics work: the installer runs cleanly, services start correctly, and Virtualmin manages them as expected. What I can’t verify at home is anything that depends on a real public IP, a working PTR record, or internet-facing mail.

  1. Nothing in my homelab tests suggested a fundamental blocker, but I’d like to know if there are gotchas that only surface in production.
  2. In my testing I disabled Virtualmin’s firewall module and handled filtering at the host level instead, since iptables/nftables don’t work reliably in unprivileged containers. Is that the accepted pattern, or is there a better approach?
  3. From inside the container the interface appears as a standard eth0, so I’d expect Virtualmin and Postfix to detect the address correctly. Are there any settings (myhostname, inet_interfaces, HELO construction) that should be pinned explicitly rather than left on auto when the IP is a real public one?
  4. /etc/resolv.conf conflicts: In the homelab this wasn’t an issue because I wasn’t running BIND, but with it enabled Virtualmin will want nameserver 127.0.0.1 while LXD also writes to resolv.conf on container boot. The options I can see are pinning the file after setup, disabling LXD’s DNS injection at the container level, or a systemd-resolved drop-in… I’m not sure which is cleanest in practice.
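To make items 3 and 4 concrete, this is roughly what I mean by pinning things explicitly. Hostname and IP are placeholders; `smtp_helo_name` already defaults to `$myhostname` in Postfix, so setting it is belt-and-braces:

```shell
# Item 3: pin Postfix's identity instead of trusting auto-detection.
# "mail.example.com" and 203.0.113.10 are placeholder values.
postconf -e 'myhostname = mail.example.com'
postconf -e 'inet_interfaces = 127.0.0.1, 203.0.113.10'
postconf -e 'smtp_helo_name = $myhostname'
systemctl reload postfix

# Item 4: point resolv.conf at the local BIND and stop it being rewritten.
# If /etc/resolv.conf is a symlink (systemd-resolved), remove it first;
# chattr +i is the blunt but common way to pin the file afterwards.
rm -f /etc/resolv.conf
printf 'nameserver 127.0.0.1\n' > /etc/resolv.conf
chattr +i /etc/resolv.conf
```

I realise `chattr +i` is the least elegant of the three options I listed, which is partly why I’m asking what people do in practice.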

Appreciate any corrections or experience from anyone who has run this in production. The decision to use LXD rather than bare-metal is deliberate and not up for debate. I’m looking for practical guidance on making it work, not a discussion of whether I should.



You don’t even need a firewall in most cases. The only useful part of a firewall in most Virtualmin systems is the automated brute force protection rules created by fail2ban.
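A minimal jail.local along those lines might look like the following. The jail names are the standard ones shipped with fail2ban; note that fail2ban’s default ban action still needs a working iptables/nftables backend, so in an unprivileged container you may need a different `banaction` or to run it on the host:

```shell
# /etc/fail2ban/jail.local -- minimal sketch enabling the stock jails
# for SSH and the mail stack; ports and log paths are distro defaults.
cat > /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled = true

[postfix]
enabled = true

[dovecot]
enabled = true
EOF
systemctl restart fail2ban
```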

If it detects correctly, don’t worry about it. If it doesn’t, fix it in config.

No.


Seems I have a new project upcoming this weekend… Thanks, @Joe, as always…


Oh, and I should probably mention that KVM is probably a better choice for virtualizing for hosting. There are almost certainly no memory/resource benefits to using containers at this point, as the Linux kernel has crazy page-sharing stuff (KSM) for KVM that makes memory usage across several identical or largely similar VMs (same kernel, etc.) much smaller than you’d think. There’s probably no reason to prefer containers, and LXD certainly gets fewer development resources than KVM.


I run my Virtualmin on KVM (via TrueNAS) and I have no issues.
