LocAI - Open WebUI Deployment

A containerized deployment of Open WebUI with Ollama, featuring a complete stack for running local AI models.

Features

  • Open WebUI: Web interface for interacting with AI models
  • Ollama: Backend for running language models locally
  • PostgreSQL: Database for persistent storage
  • Redis: In-memory data structure store for WebSocket support
  • Caddy: Web server for HTTPS and reverse proxy
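
These pieces are wired together in docker-compose.yml. As a rough sketch of the layout (service names, image tags, and mounts below are illustrative; the repository's docker-compose.yml is authoritative):

    services:
      ollama:
        image: ollama/ollama            # model backend
        volumes:
          - ./ollama:/root/.ollama
      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        environment:
          - OLLAMA_BASE_URL=http://ollama:11434
        depends_on: [ollama, postgres, redis]
      postgres:
        image: postgres:16              # persistent storage
        volumes:
          - ./postgres/data:/var/lib/postgresql/data
      redis:
        image: redis:7                  # WebSocket support
      caddy:
        image: caddy:2                  # HTTPS + reverse proxy
        ports: ["80:80", "443:443"]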

Quick Start

  1. Clone the repository

    git clone https://github.com/yourusername/open-webui.git
    cd open-webui
    
  2. Set up environment variables

    cp .env.example .env
    # Edit .env with your settings
    # Generate a secret key with: openssl rand -hex 32
    
  3. Start the services

    docker compose up -d
    
  4. Access the UI

    • Open your browser and navigate to https://localhost
    • For custom domain setup, update the DOMAIN variable in your .env file
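
The .env from step 2 might look like the sketch below. Only DOMAIN, WEBUI_DOCKER_TAG, and OLLAMA_MODELS_PATH are referenced elsewhere in this README; WEBUI_SECRET_KEY is Open WebUI's standard variable name for the secret key and is assumed here:

    # .env — example values, adjust to your environment
    DOMAIN=localhost                  # used by the Caddyfile
    WEBUI_SECRET_KEY=changeme         # generate with: openssl rand -hex 32
    WEBUI_DOCKER_TAG=main             # set to 'cuda' for NVIDIA GPUs
    OLLAMA_MODELS_PATH=~/.ollama/models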

Configuration

  • GPU Support: Set WEBUI_DOCKER_TAG=cuda in your .env file to enable NVIDIA acceleration (see the compose-level GPU reservation sketched after this list)
  • Models: Place models in ~/.ollama/models or configure OLLAMA_MODELS_PATH in .env
  • Persistence: Data is stored in subdirectories:
    • ./open-webui: Open WebUI data
    • ./postgres/data: PostgreSQL database files
    • ./ollama: Ollama configuration and models
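
For NVIDIA acceleration, the GPU must also be passed through to the container at the compose level. The standard Docker Compose device reservation looks like this (a sketch to apply to the GPU-using service in docker-compose.yml; it requires nvidia-container-toolkit on the host, see Troubleshooting):

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]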

Development

To customize the deployment:

  1. Modify the docker-compose.yml to add or adjust services
  2. Update the Caddyfile to change reverse proxy settings
  3. Contribute upstream to the Open WebUI project on GitHub: https://github.com/open-webui/open-webui
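
For orientation when editing the Caddyfile (step 2), a reverse-proxy stanza typically has this shape. This is a sketch, not the repository's actual Caddyfile; it assumes Open WebUI listens on its default port 8080 behind the service name open-webui:

    {$DOMAIN} {
        reverse_proxy open-webui:8080
    }

Caddy substitutes {$DOMAIN} from the environment, which is why the DOMAIN variable in .env controls the served hostname.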

Troubleshooting

  • Container Startup Issues: Check logs with docker compose logs -f
  • Model Download Failures: Ensure proper network connectivity and sufficient disk space
  • GPU Access Problems: Verify that nvidia-container-toolkit is properly installed and configured
  • Connection Issues: Make sure all required ports are open and not blocked by firewalls
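
A few commands that help narrow down the cases above (standard Docker and Ollama tooling; the service names open-webui and ollama are assumed to match docker-compose.yml):

    # Confirm all services are running
    docker compose ps

    # Follow logs for a single service
    docker compose logs -f open-webui

    # Pull a model manually to surface download errors directly
    docker compose exec ollama ollama pull llama3

    # Verify the GPU is reachable from inside a container (image tag may vary)
    docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi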

Security Notes

  • This deployment includes self-signed certificates for local usage
  • For production use, set up proper certificates or use a trusted certificate provider
  • The default setup is intended for local network usage; additional security measures are needed for public-facing deployments
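
In Caddyfile terms, the difference comes down to roughly this (a sketch; the upstream name and port are assumptions, as above). With a real public domain and ports 80/443 reachable, Caddy obtains trusted certificates automatically and the tls line can simply be dropped:

    # Local usage: Caddy's internal CA issues a self-signed certificate
    {$DOMAIN} {
        tls internal
        reverse_proxy open-webui:8080
    }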

License

This deployment configuration is provided under the MIT License. Open WebUI and other components have their own respective licenses.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.