# Auto-Boot Ollama Host
A Docker-based service that automatically starts and shuts down an Ollama host based on log patterns from the Paperless AI container.
## Overview
This project monitors the Paperless AI container's logs and, when specific error patterns are detected, automatically starts a remote Ollama service on a Windows host with a dedicated graphics card. The Windows host is powered on via Wake-on-LAN, the Ollama service is started, and once the task completes the service is stopped and the host is shut down again.
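The monitoring loop at the heart of this flow can be sketched as follows. This is an illustrative Python sketch only: the project itself is written in Lua, the container name `paperless-ai` is an assumption, and the two pattern strings are placeholders whose real values are configured in `scripts/config.lua`.

```python
import re
import subprocess

# Placeholder patterns -- the real values are configured in scripts/config.lua.
ERROR_PATTERN = r"connection refused"
FINISH_PATTERN = r"processing complete"

def classify(line):
    """Map a single log line to zero or one lifecycle events."""
    if re.search(ERROR_PATTERN, line):
        yield "start"   # wake the Windows host and start Ollama
    elif re.search(FINISH_PATTERN, line):
        yield "stop"    # stop Ollama and shut the host down

def watch_logs(container="paperless-ai"):
    """Follow the container's logs and yield events as patterns appear."""
    proc = subprocess.Popen(
        ["docker", "logs", "-f", container],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
    )
    for line in proc.stdout:
        yield from classify(line)
```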
## Features
- **Wake-on-LAN**: Powers on the Windows host with the dedicated graphics card via WOL
- **Automatic Ollama Start**: Starts the Ollama service on the Windows host
- **Desktop Session Detection**: Prevents interruptions during active user sessions
- **Automatic Shutdown**: Stops the service and shuts down the host after completion
- **Energy Efficiency**: The host runs only when needed and is shut down automatically
- **Modular Architecture**: Clean separation of functionality across Lua modules
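One plausible way to implement the desktop session detection is to run Windows' `quser` command over SSH and look for sessions in the `Active` state. The sketch below parses `quser`'s default tabular output; the actual mechanism used by `scripts/session_check.lua` may differ.

```python
def has_active_session(quser_output: str) -> bool:
    """Return True if any user session is in the 'Active' state.

    Parses the tabular output of Windows' `quser` command, e.g.:

        USERNAME  SESSIONNAME  ID  STATE   IDLE TIME  LOGON TIME
        alice     console      1   Active  none       01.01.2025 09:00
    """
    for line in quser_output.splitlines()[1:]:  # skip the header row
        if "Active" in line.split():
            return True
    return False
```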
## Environment Variables
The following environment variables must be set in Komodo:
### SSH Configuration

```env
SSH_USER=                                 # Username for SSH connection
SSH_PUBLIC_KEY="[[SSH_PUBLIC_KEY_RTX]]"   # Public SSH key
SSH_PRIVATE_KEY="[[SSH_PRIVATE_KEY_RTX]]" # Private SSH key
```
**Syntax note:** The `[[VARIABLE_NAME]]` syntax references secrets defined under Settings → Variables in Komodo; they are replaced with the actual key values at runtime.
### Wake-on-LAN

```env
WOL_MAC=   # MAC address of the target host for WOL
```
### Additional Configuration

Additional environment variables can be found in `scripts/config.lua`.
## Usage
### Docker Compose

```shell
docker-compose up -d
```
### Direct with Docker

```shell
docker build -t auto-boot-ollama-host .
docker run -d --name auto-boot-ollama-host auto-boot-ollama-host
```
## Project Structure
```
.
├── README.md                      # This file
├── Dockerfile                     # Docker image definition
├── compose.yaml                   # Docker Compose configuration
├── .dockerignore                  # Docker ignore rules
└── scripts/                       # Lua scripts
    ├── README.md                  # Detailed script documentation
    ├── auto-boot-ollama-host.lua  # Main script
    ├── config.lua                 # Configuration management
    ├── utils.lua                  # Utility functions
    ├── network.lua                # Network functions
    ├── ssh.lua                    # SSH operations
    ├── ollama_manager.lua         # Ollama service management
    └── session_check.lua          # Windows desktop session detection
```
## How It Works
1. **Log Monitoring**: The script continuously monitors the logs of the Paperless AI container.
2. **Pattern Detection**: When `ERROR_PATTERN` is detected, the startup sequence for the Windows host with the dedicated graphics card begins.
3. **Session Check**: Before the host is started, the script checks whether a user is logged into the desktop.
4. **Wake-on-LAN**: A WOL packet is sent to the Windows host to power it on.
5. **SSH Connection**: After booting, an SSH connection to the Windows host is established.
6. **Service Start**: The Ollama service is started on the Windows host via SSH.
7. **Task Execution**: The Windows host executes Ollama tasks using the dedicated graphics card.
8. **Finish Pattern**: When `FINISH_PATTERN` is detected, the Ollama service is stopped.
9. **Shutdown**: The Windows host is automatically shut down to save energy.
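The Wake-on-LAN step amounts to broadcasting a "magic packet": six `0xFF` bytes followed by the target MAC address repeated sixteen times. A minimal Python sketch of what `scripts/network.lua` presumably does with `WOL_MAC` (broadcast address and port are conventional defaults, not values documented by this project):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WOL magic packet: 6 bytes of 0xFF plus the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    """Broadcast the magic packet on the local network (UDP discard port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```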
## Prerequisites
- Docker and Docker Compose
- Windows host with dedicated graphics card (for Ollama computations)
- SSH access to the Windows host
- Wake-on-LAN support on the Windows host
- NSSM (Non-Sucking Service Manager) on the Windows host for service management
- Ollama installation on the Windows host
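Because Ollama runs as an NSSM-managed Windows service, starting and stopping it reduces to NSSM's `start`/`stop` verbs issued over SSH. A hedged sketch of how the remote command might be assembled; the service name `Ollama` and the SSH target are illustrative placeholders, the real values come from the environment variables described above.

```python
def nssm_command(action: str, service: str = "Ollama",
                 target: str = "user@windows-host") -> list[str]:
    """Build the SSH command line that drives the NSSM-managed service.

    The service name and SSH target are illustrative placeholders.
    """
    if action not in {"start", "stop", "restart", "status"}:
        raise ValueError(f"unsupported nssm action: {action!r}")
    return ["ssh", target, "nssm", action, service]
```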
## Configuration
Detailed configuration options can be found in `scripts/README.md`.