Thanks! Maybe someone wants to collaborate on creating one. I've used Ollama running on two GPUs with Docker.
Here are the steps I used while testing the knowledge base, lol. Have a look or try it for yourself; it may attract more people to this option.
Step 1: Install Docker and Docker Compose
First, connect to your server via SSH and install Docker and its dependencies.
- Update your server:
```sh
sudo apt-get update
sudo apt-get upgrade -y
```
- Install Docker:
```sh
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
- Add your user to the `docker` group to run Docker commands without `sudo`. Replace `your_user` with your actual username.
```sh
sudo usermod -aG docker your_user
newgrp docker
```
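A quick sanity check that the group change took effect (`hello-world` is Docker's standard test image):

```sh
docker run hello-world
```

If this prints the hello message without `sudo`, the group setup is working.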
- Install Docker Compose:
```sh
sudo apt-get install docker-compose-plugin
```
- Verify installations:
```sh
docker --version
docker compose version
```
Step 2: Create a Docker project for Open WebUI and Ollama
Create a dedicated folder for your project and use a `docker-compose.yml` file to manage the containers.
- Create the project directory:
```sh
cd /home/mysite/home
mkdir ai.mysite.co.uk
cd ai.mysite.co.uk
```
- Create a `docker-compose.yml` file:
```sh
nano docker-compose.yml
```
Paste the following configuration:
```yml
version: '3.8'

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - ./data:/app/backend/data
    restart: always
    environment:
      - OLLAMA_BASE_URL=http://localhost:11434
```
- The `ghcr.io/open-webui/open-webui:ollama` image bundles Ollama and Open WebUI into a single, self-contained Docker image for a simplified setup.
- `ports: "3000:8080"` maps the container's internal web port to your host's port 3000. You will use this port later to set up the Virtualmin proxy.
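Since I ran this on two GPUs: to give the container access to NVIDIA cards you also need the NVIDIA Container Toolkit installed on the host, plus a device reservation in the compose file. A minimal sketch of that stanza (adjust `count`, or use `device_ids` for specific cards):

```yml
services:
  open-webui:
    # ...same settings as above, plus:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all           # or device_ids: ["0", "1"] for two specific GPUs
              capabilities: [gpu]
```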
- Start the containers:
```sh
docker compose up -d
```
This command downloads the Docker images and starts the containers in the background.
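You can confirm the container is up and watch its startup logs (container name as in the compose file above):

```sh
docker compose ps            # open-webui should be listed as running
docker logs -f open-webui    # follow the logs; Ctrl+C to stop following
```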
Step 3: Configure the Nginx proxy in Virtualmin
Next, you will use Virtualmin to set up a reverse proxy that directs traffic from `ai.mysite.co.uk` to the Open WebUI Docker container on port 3000.
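Before configuring the proxy, it's worth checking that the backend answers locally, for example:

```sh
curl -I http://localhost:3000
```

Any HTTP response here means Open WebUI is reachable on port 3000, and the proxy only has to forward to it.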
- Log in to Virtualmin and navigate to your `ai.mysite.co.uk` virtual server.
- Go to Services > Configure Website (or Services > Configure SSL Website if you’re using HTTPS).
- Select the Proxy Paths tab.
- Add a new proxy path:
  - External path: `/`
  - Internal URL: `http://localhost:3000/`
- Save the changes. Virtualmin will automatically update the Nginx configuration for `ai.mysite.co.uk`.
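If you want to double-check the configuration Virtualmin generated, Nginx can validate it:

```sh
sudo nginx -t
```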
Step 4: Secure with SSL (optional but recommended)
Virtualmin can automatically handle Let’s Encrypt certificates.
- Navigate to your `ai.mysite.co.uk` virtual server.
- Go to Server Configuration > SSL Certificate.
- Click the Let’s Encrypt tab.
- Request a new certificate. Virtualmin will handle the validation and installation for you.
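Once the certificate is issued, a quick way to confirm HTTPS works end to end:

```sh
curl -sI https://ai.mysite.co.uk | head -n 1
```

This should print a success status line if both the certificate and the proxy are in place.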
Step 5: Access and configure Open WebUI
Your Open WebUI instance is now running behind the `ai.mysite.co.uk` domain.
- Navigate to `https://ai.mysite.co.uk` in your browser.
- Upon first access, create an administrator account as prompted.
- Because you used the bundled Docker image, Ollama is already running inside the container and connected to Open WebUI.
- To install your first model, go to Settings > Ollama in the Open WebUI interface, where you can search for and pull models from the Ollama library directly. Since Ollama runs inside the container, the command-line equivalent goes through `docker exec`. For example:

```sh
docker exec -it open-webui ollama pull llama3
```
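To confirm the model downloaded, listing the models from the same container should show it:

```sh
docker exec -it open-webui ollama list
```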