Open WebUI on my virtual host

SYSTEM INFORMATION
OS type and version: Ubuntu
Webmin version: 2.402

I want to install Open WebUI on my virtual host using my web domain. Are there any tutorials specifically for Webmin or Virtualmin, or is there a way of implementing it as a web app?

Virtualmin, by its very nature, is an online platform. But to your question, I doubt there are any tutorials of the sort you are looking for. If a search didn’t find them, I doubt you’ll find one here.

Thanks. Would anyone like to collaborate on creating one? I’ve used Ollama running on two GPUs with Docker.

Here are the steps I used (I was just testing the knowledge base, lol). Have a look or try it for yourself; it may attract more people to this option.

Step 1: Install Docker and Docker Compose

First, connect to your server via SSH and install Docker and its dependencies.

  1. Update your server:

sh

sudo apt-get update
sudo apt-get upgrade -y

  2. Install Docker:

sh

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

  3. Add your user to the docker group to run Docker commands without sudo. Replace your_user with your actual username.

sh

sudo usermod -aG docker your_user
newgrp docker

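One quick way to confirm the group change took effect is to run a throwaway container without sudo (hello-world is Docker’s standard smoke-test image):

sh

docker run --rm hello-world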

  4. Install Docker Compose:

sh

sudo apt-get install docker-compose-plugin

  5. Verify installations:

sh

docker --version
docker compose version

Step 2: Create a Docker project for Open WebUI and Ollama

Create a dedicated folder for your project and use a docker-compose.yml file to manage the containers.

  1. Create the project directory:

sh

cd /home/mysite/home
mkdir ai.mysite.co.uk
cd ai.mysite.co.uk

  2. Create a docker-compose.yml file:

sh

nano docker-compose.yml

Paste the following configuration:

yml

version: '3.8'

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - ./data:/app/backend/data
      # also persist downloaded models (the bundled Ollama stores them in /root/.ollama)
      - ./ollama:/root/.ollama
    restart: always
    environment:
      - OLLAMA_BASE_URL=http://localhost:11434

  • ghcr.io/open-webui/open-webui:ollama bundles Ollama and Open WebUI into a single, self-contained Docker image for a simplified setup.
  • ports: "3000:8080" maps the container’s internal port 8080 to port 3000 on the host. You will use this port later to set up the Virtualmin proxy.
  3. Start the containers:

sh

docker compose up -d

This command downloads the Docker images and starts the containers in the background.
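
Before wiring up the proxy, it is worth confirming the container is up and answering on port 3000. A minimal check from the server itself (any HTTP response from curl is fine here):

sh

docker ps --filter name=open-webui
curl -I http://localhost:3000

One more note, since the post above mentions running Ollama on two GPUs: the compose file as written runs CPU-only. If the server has NVIDIA GPUs and the NVIDIA Container Toolkit installed, a deploy block like this sketch, added under the open-webui service, passes them through:

yml

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]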

Step 3: Configure the Nginx proxy in Virtualmin

Next, you will use Virtualmin to set up a reverse proxy that directs traffic from ai.mysite.co.uk to the Open WebUI Docker container on port 3000.

  1. Log in to Virtualmin and navigate to your ai.mysite.co.uk virtual server.
  2. Go to Services > Configure Website (or Services > Configure SSL Website if you’re using HTTPS).
  3. Select the Proxy Paths tab.
  4. Add a new proxy path:
  • External path: /
  • Internal URL: http://localhost:3000/
  5. Save the changes. Virtualmin will automatically update the Nginx configuration for ai.mysite.co.uk.
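
For reference, the proxy directives Virtualmin writes into the domain’s server block should look roughly like the sketch below (not the exact generated output). Open WebUI’s chat interface also relies on WebSockets, so if responses stall behind the proxy, the Upgrade headers shown here are the usual fix:

nginx

location / {
    proxy_pass http://localhost:3000/;
    # WebSocket support for Open WebUI’s streaming chat responses
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}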

Step 4: Secure with SSL (optional but recommended)

Virtualmin can automatically handle Let’s Encrypt certificates.

  1. Navigate to your ai.mysite.co.uk virtual server.
  2. Go to Server Configuration > SSL Certificate.
  3. Click the Let’s Encrypt tab.
  4. Request a new certificate. Virtualmin will handle the validation and installation for you.
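
Once the certificate is issued, you can confirm what is actually being served; this prints the certificate’s issuer and validity dates (it assumes the domain already resolves to this server):

sh

echo | openssl s_client -connect ai.mysite.co.uk:443 -servername ai.mysite.co.uk 2>/dev/null | openssl x509 -noout -issuer -dates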

Step 5: Access and configure Open WebUI

Your Open WebUI instance is now running behind the ai.mysite.co.uk domain.

  1. Navigate to https://ai.mysite.co.uk in your browser.
  2. Upon first access, create an administrator account as prompted.
  3. Because you used the bundled Docker image, Ollama is already running inside the container and connected to Open WebUI.
  4. To install your first model, go to Settings > Ollama in the Open WebUI interface, where you can search for and pull models from the Ollama library, or pull one from the command line inside the container. For example:

sh

# Ollama runs inside the container, so pull models through docker exec
docker exec -it open-webui ollama pull llama3
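
To confirm the pull worked, list the models installed inside the container (llama3 above is only an example tag; any model from the Ollama library works):

sh

docker exec open-webui ollama list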

This looks a lot like LLM-generated output, which is prohibited here.

Unless/until you have done all these steps yourself and confirmed it works, filtering it through your own experience, I’d rather you not post things like this here.

It looks mostly right from what I know about using Docker, and it’s true that for serving a docker app via a Virtualmin-managed domain, you’d proxy to it.

But please confirm it actually works, explain where you got the steps, and tell us whether it worked for you, with any corrections.

We prohibit AI-generated output because it is often wrong in subtle ways that are hard to spot because it is so confident and its grammar so good. It provides answer-shaped responses, but they are not always answers. Unless and until you prove it is an answer with your own experience of following the steps, it is suspect.
