Private AI agent with VeilNet
Prerequisites
- Docker and Docker Compose installed
- VeilNet registration token
- Access to VeilNet Guardian service
- Sufficient disk space for AI models (Ollama models can be several GB)
Overview
This guide shows you how to deploy a private AI agent stack using:
- Ollama: Local AI model server for running large language models
- Open WebUI: User-friendly web interface for interacting with Ollama
- n8n: Workflow automation platform for building AI-powered automations
- VeilNet: Secure overlay network for accessing your AI services remotely
With VeilNet, you can securely access your private AI agent from anywhere without exposing services to the public internet.
Step 1: Create Docker Compose Configuration
Create a docker-compose.yml file with the following configuration:
services:
  veilnet-conflux:
    container_name: veilnet-conflux
    restart: unless-stopped
    env_file:
      - .env
    image: veilnet/conflux:beta
    pull_policy: always
    cap_add:
      - NET_ADMIN # needed to create and manage the VeilNet TUN interface
    devices:
      - /dev/net/tun
    network_mode: host

  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    volumes:
      - ollama:/root/.ollama
    # Share the Conflux network namespace so Ollama is reachable over VeilNet
    network_mode: "container:veilnet-conflux"
    depends_on:
      - veilnet-conflux

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped
    volumes:
      - open-webui:/app/backend/data
    network_mode: "container:veilnet-conflux"
    depends_on:
      - veilnet-conflux
      - ollama
    environment:
      # localhost works because all services share one network namespace
      - OLLAMA_BASE_URL=http://localhost:11434
      # Open WebUI listens on 8080 by default; with a shared namespace there
      # is no port mapping, so pin it to 3000 to match the URLs in this guide
      - PORT=3000

  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    volumes:
      - n8n:/home/node/.n8n
    network_mode: "container:veilnet-conflux"
    depends_on:
      - veilnet-conflux
    environment:
      - N8N_SECURE_COOKIE=false # allow login over plain HTTP inside the overlay

volumes:
  ollama:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./ollama-data
  open-webui:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./open-webui-data
  n8n:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./n8n-data
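Before deploying, you can verify that the file parses cleanly:
docker-compose config --quiet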
Step 2: Create Environment File
Create a .env file in the same directory as your docker-compose.yml with the following variables:
VEILNET_REGISTRATION_TOKEN=<YOUR_REGISTRATION_TOKEN>
VEILNET_GUARDIAN=<YOUR_GUARDIAN_URL>
VEILNET_PORTAL=true
VEILNET_CONFLUX_TAG=<YOUR_CONFLUX_TAG>
VEILNET_CONFLUX_CIDR=<VEILNET_CIDR>
Replace the placeholders:
- <YOUR_REGISTRATION_TOKEN>: Your VeilNet registration token (obtained from the VeilNet portal)
- <YOUR_GUARDIAN_URL>: The URL of your VeilNet Guardian service (e.g., https://guardian.veilnet.app)
- <YOUR_CONFLUX_TAG>: A tag to identify this Conflux instance (e.g., ai-agent-server)
- <VEILNET_CIDR>: Any IP address in CIDR notation (e.g., 10.128.0.5/16) that belongs to the realm subnet (e.g., 10.128.0.0/16)
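For reference, a completed .env might look like the following, using the example values above (all values are illustrative placeholders, not real credentials):
# Illustrative example only - substitute your own values
VEILNET_REGISTRATION_TOKEN=example-registration-token
VEILNET_GUARDIAN=https://guardian.veilnet.app
VEILNET_PORTAL=true
VEILNET_CONFLUX_TAG=ai-agent-server
VEILNET_CONFLUX_CIDR=10.128.0.5/16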
Step 3: Create Data Directories
Create the directories for persistent data storage:
mkdir -p ollama-data open-webui-data n8n-data
These directories will store:
- ollama-data: Downloaded AI models and Ollama configuration
- open-webui-data: Open WebUI user data and conversations
- n8n-data: n8n workflows and credentials
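Note that the n8n image runs as an unprivileged node user (UID 1000). If n8n later fails to start with permission errors on its bind-mounted directory, giving that user ownership usually resolves it:
sudo chown -R 1000:1000 n8n-data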
Step 4: Deploy the Stack
Start all services:
docker-compose up -d
This will:
- Pull all required images
- Start Ollama, Open WebUI, n8n, and VeilNet Conflux
- Create persistent volumes for data storage
- Automatically restart containers if they stop
Step 5: Verify Deployment
Check that all containers are running:
docker-compose ps
View the VeilNet Conflux logs to verify it's connecting:
docker logs veilnet-conflux -f
You should see logs indicating successful registration and connection to the VeilNet network.
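For a quick functional check, the Ollama API should already respond on the host; this endpoint lists installed models and returns an empty list on a fresh install:
curl http://localhost:11434/api/tags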
Step 6: Download AI Models
Once Ollama is running, download the AI models you want to use:
docker-compose exec ollama ollama pull llama2
Using the Compose service name (ollama) avoids having to look up the generated container name.
Or download other models like:
- llama2: Meta's Llama 2 model
- mistral: Mistral AI model
- codellama: Code-focused Llama model
- phi: Microsoft's Phi model
You can also download models through the Open WebUI interface.
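To confirm what has been downloaded and how much space each model takes:
docker-compose exec ollama ollama list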
Step 7: Access Your Services
Local Access
Once the services are running, you can access them locally:
- Open WebUI: http://localhost:3000
- n8n: http://localhost:5678
- Ollama API: http://localhost:11434
Remote Access via VeilNet
With VeilNet configured, you can access these services remotely from anywhere in the world using the host's VeilNet IP address, as long as your device is also connected to the same VeilNet realm.
- Find your host's VeilNet IP address:
ip addr show veilnet
Or check the VeilNet portal to see your assigned IP address.
- Access services using the host's VeilNet IP from any device connected to VeilNet:
  - Open WebUI: http://<veilnet-ip>:3000
  - n8n: http://<veilnet-ip>:5678
  - Ollama API: http://<veilnet-ip>:11434
For example, if your host has VeilNet IP 10.128.0.5, you can access Open WebUI from anywhere using http://10.128.0.5:3000, as long as your device is connected to VeilNet.
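From another VeilNet-connected device, a quick way to verify reachability before opening the web UIs is to query Ollama's version endpoint over the overlay (using the example IP above):
curl http://10.128.0.5:11434/api/version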
Step 8: Configure Services
Open WebUI
Local Access:
- Open http://localhost:3000
- Create your first user account
- Start chatting with your AI models
- Configure model settings and preferences in the settings menu
Remote Access via VeilNet:
- Open http://<veilnet-ip>:3000 (replace <veilnet-ip> with your host's VeilNet IP, e.g., http://10.128.0.5:3000)
- Create your first user account
- Access from anywhere as long as your device is connected to VeilNet
n8n
Local Access:
- Open http://localhost:5678
- Complete the initial setup wizard
- Create workflows that integrate with Ollama
- Use the Ollama node to trigger AI model inference in your automations; point it at http://localhost:11434, since n8n shares its network namespace with Ollama through veilnet-conflux
Remote Access via VeilNet:
- Open http://<veilnet-ip>:5678 (replace <veilnet-ip> with your host's VeilNet IP, e.g., http://10.128.0.5:5678)
- Complete the initial setup wizard
- Access from anywhere as long as your device is connected to VeilNet
Ollama API
Local Access:
You can interact with Ollama directly via its REST API:
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
Remote Access via VeilNet:
Access the Ollama API remotely using the VeilNet IP:
curl http://<veilnet-ip>:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
Replace <veilnet-ip> with your host's VeilNet IP (e.g., http://10.128.0.5:11434). This works from any device connected to VeilNet.
Updating Services
To update to newer versions:
docker-compose pull
docker-compose up -d
This will pull the latest images and restart the containers with updated versions.
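Superseded image layers accumulate over time; you can optionally reclaim the space afterwards:
docker image prune -f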
Stopping and Removing
To stop all services:
docker-compose down
To remove containers and volumes (this will delete all data):
docker-compose down -v
Warning: Removing volumes will delete all downloaded models, conversations, and workflows. Because the volumes in this guide are bind mounts, docker-compose down -v removes the volume definitions, but the files remain in ./ollama-data, ./open-webui-data, and ./n8n-data; delete those directories to fully reclaim the space. Either way, back up important data first.
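Since all state lives in the three bind-mounted directories created in Step 3, one simple backup approach is to stop the stack and archive them:
docker-compose down
tar czf ai-agent-backup-$(date +%F).tar.gz ollama-data open-webui-data n8n-data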
FAQ
How much disk space do I need?
AI models can be large (several GB each). Plan for at least 20-50 GB of free space, depending on how many models you want to download. The ollama-data directory will grow as you download more models.
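To check how much space the models are currently using (assuming the bind-mount layout from Step 1):
du -sh ollama-data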
Can I access these services from my phone?
Yes! Once your phone is connected to the same VeilNet realm, you can access Open WebUI and n8n from anywhere using the host's VeilNet IP address. For example, if your server has VeilNet IP 10.128.0.5, open http://10.128.0.5:3000 on your phone to reach Open WebUI. Since all containers share the network namespace with veilnet-conflux, they can also use the VeilNet TUN device for optimal network performance.
How do I share access with team members?
Add team members to the same VeilNet realm through the VeilNet portal. Once they're connected, they can access the services using the host's VeilNet IP address from anywhere in the world; they don't need to be on the same local network.
Can I run this on multiple servers?
Yes! You can deploy this stack on multiple servers, each with VeilNet Conflux configured. Each server will have its own AI models and services, and you can access all of them through VeilNet using their respective VeilNet IP addresses from anywhere, as long as your device is connected to VeilNet.
Why use NET_ADMIN capability instead of privileged mode?
The NET_ADMIN capability provides only the necessary permissions for VeilNet to create and manage network interfaces, without granting full privileged access. This is more secure while still allowing VeilNet to function properly.
