Server Configuration
Overview
Jubiloop uses Ansible to automate server configuration and application deployment for all environments. The deployment process is primarily automated through GitHub Actions CI/CD, with manual execution available for testing and debugging purposes only.
Architecture
Development/QA Environment (Shared Server)
┌─────────────────────────────────────────────────────┐
│ DigitalOcean Droplet (Ubuntu 22.04) - $6/month │
│ │
│ ┌─────────────────────────────────────────────┐ │
│ │ Caddy (Reverse Proxy) │ │
│ │ ├── dev-api.jubiloop.ca → server-dev:3333 │ │
│ │ └── qa-api.jubiloop.ca → server-qa:3333 │ │
│ └─────────────────────────────────────────────┘ │
│ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ Dev Stack │ │ QA Stack │ │
│ │ - server-dev │ │ - server-qa │ │
│ │ - postgres │ │ - postgres │ │
│ │ - redis-dev │ │ - redis-qa │ │
│ └──────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────────┘

Production Environment (Dedicated Server)
┌─────────────────────────────────────────────────────┐
│ DigitalOcean Droplet (Ubuntu 22.04) - $6/month │
│ │
│ ┌─────────────────────────────────────────────┐ │
│ │ Caddy (Reverse Proxy) │ │
│ │ └── api.jubiloop.ca → server:3333 │ │
│ └─────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────┐ │
│ │ Production Stack │ │
│ │ - server (AdonisJS API) │ │
│ │ - redis (Cache/Sessions) │ │
│ └─────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
│
│ Database Connection
▼
┌─────────────────────────────────────────────────────┐
│ Neon Database │
│ (Managed PostgreSQL - External) │
└─────────────────────────────────────────────────────┘

Deployment Methods
Primary: GitHub Actions (Production Use)
All dev, QA, and production deployments are automated via GitHub Actions with:
- Dynamic Configuration: Environment variables from env.deploy.yml
- Secrets Management: Secure credentials in GitHub Secrets
- State Integration: Droplet IPs from Terraform state in Cloudflare R2
- Audit Trail: Complete deployment history and approvals
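As a sketch of how these pieces could fit together, a deployment job might look roughly like the following. This is illustrative only: the workflow name, trigger, step names, and the CR_PAT secret name are assumptions, not the repository's actual workflow.

```yaml
# Hypothetical workflow excerpt; names and secrets are illustrative.
name: deploy-dev-qa
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Generate the inventory from Terraform state (see Inventory Management below)
      - name: Generate Ansible inventory
        run: node scripts/extract-config.js ansible-inventory --env dev-qa > inventory.ini
      - name: Run Ansible playbook
        run: >
          ansible-playbook -i inventory.ini infra/deploy/ansible/playbook.yml
          -e env=dev-qa -e cr_pat="${{ secrets.CR_PAT }}"
```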
Secondary: Manual Execution (Testing Only)
Manual execution is only for:
- Testing playbook changes on dedicated test infrastructure
- Debugging deployment issues in isolated environments
- Emergency interventions (requires approval and must be followed by GitHub Actions deployment)
Ansible Playbook
Location
infra/deploy/ansible/
├── ansible.cfg # Ansible configuration
├── playbook.yml # Main playbook
├── requirements.yml # Collections to install
├── group_vars/ # Variable files
└── roles/            # Vendored roles and custom deployment

Execution Flow
The playbook executes these tasks in order:
1. Pre-tasks
Environment Validation:
```yaml
- name: Validate environment variable
  ansible.builtin.assert:
    that:
      - env in valid_environments
    fail_msg: "Invalid environment '{{ env }}'. Must be one of: {{ valid_environments | join(', ') }}"
```

APT Cache Update:
- Updates package cache if older than 1 hour
- Only runs on Debian-based systems
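This cache refresh maps naturally onto the ansible.builtin.apt module; a minimal sketch (the task name is an assumption):

```yaml
- name: Update APT cache if older than one hour
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: 3600  # seconds; skipped if the cache is fresher than this
  when: ansible_facts['os_family'] == 'Debian'
```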
2. Deploy User Setup
Role: lnovara.deploy-user
Creates and configures the deployment user:
```yaml
user_name: deploy
user_groups: ['docker', 'sudo']
user_shell: /bin/bash
public_keys:
  - "{{ ansible_ssh_private_key_file | regex_replace('\\.?$', '.pub') }}"
enable_passwordless_sudo: true
```

3. System Configuration
Swap Space (Role: geerlingguy.swap):
```yaml
swap_file_path: /swapfile
swap_file_size_mb: 1024  # 1GB for small droplets
swap_swappiness: 10      # Low swappiness for server workloads
swap_file_state: present
```

Security Hardening (Role: geerlingguy.security), with configuration from group_vars/all/security.yml:
- SSH: No password auth, no root login
- Fail2ban: 60-minute ban, max 3 attempts
- Auto-updates: Enabled without auto-reboot
4. Docker Installation
Role: geerlingguy.docker
Docker configuration:
```yaml
docker_edition: 'ce'
docker_install_compose_plugin: true
docker_users:
  - '{{ ansible_user }}'
docker_daemon_options:
  log-driver: 'json-file'
  log-opts:
    max-size: '10m'
    max-file: '3'
  storage-driver: 'overlay2'
```

5. Application Deployment
Role: deploy_app
Deployment directory: /opt/jubiloop/{compose_dir_name}
Tasks performed:
- Create the deployment directory (owner: ansible_user, mode: 0755)
- Copy docker-compose.yml, the Caddyfile, and environment files (mode: 0600 for security) from the repository
- Log in to GitHub Container Registry
- Deploy with Docker Compose v2:

```yaml
pull: always      # Always get latest images
recreate: always  # Clean container state
state: present    # Ensure running
```

- Run health checks using Docker's native health status
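Inside the deploy_app role, these options correspond to the community.docker.docker_compose_v2 module; a minimal task sketch (the task name is an assumption) could be:

```yaml
- name: Deploy application stack with Docker Compose v2
  community.docker.docker_compose_v2:
    project_src: '{{ compose_project_dir }}'
    pull: always      # always pull images before starting
    recreate: always  # recreate containers for a clean state
    state: present    # ensure services are up
```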
Roles Used
All third-party roles are vendored for reliability:
- lnovara.deploy-user: Creates deploy user
- geerlingguy.swap: Configures 1GB swap space
- geerlingguy.security: SSH hardening and fail2ban
- geerlingguy.docker: Docker CE installation
- deploy_app: Custom deployment role
Variables and Configuration
Playbook Variables
```yaml
# Valid environments
valid_environments: ['dev', 'qa', 'dev-qa', 'production']

# Container registry
registry_url: ghcr.io
registry_username: jubiloop

# Deployment paths
compose_dir_name: "{% if env in ['dev', 'qa', 'dev-qa'] %}dev-qa-docker-compose{% else %}prod-docker-compose{% endif %}"
compose_project_dir: '/opt/jubiloop/{{ compose_dir_name }}'

# Health checks
health_check_url: 'http://localhost:3333/health'
health_check_timeout: 30
```

Group Variables (group_vars/all/)
main.yml:
```yaml
# Health check configuration
deploy_app_health_check_retries: 30
deploy_app_health_check_delay: 10

# Environment-specific service lists
deploy_app_dev_qa_services:
  - caddy
  - server-dev
  - server-qa
  - postgres-dev
  - postgres-qa
  - redis-dev
  - redis-qa
deploy_app_prod_services:
  - caddy
  - server
  - redis

# Environment files to copy
deploy_app_dev_qa_env_files:
  - .env.caddy
  - .env.server.dev
  - .env.server.qa
  - .env.postgres.dev
  - .env.postgres.qa
  - .env.redis.dev
  - .env.redis.qa
deploy_app_prod_env_files:
  - .env.server.prod
  - .env.redis.prod
```

security.yml:
```yaml
# SSH hardening
security_ssh_password_authentication: 'no'
security_ssh_permit_root_login: 'no'
security_ssh_usedns: 'no'
security_ssh_permit_empty_password: 'no'
security_ssh_challenge_response_auth: 'no'
security_ssh_gss_api_authentication: 'no'
security_ssh_x11_forwarding: 'no'

# Fail2ban configuration
security_fail2ban_enabled: true
security_fail2ban_custom_jail_local: |
  [DEFAULT]
  bantime = 3600
  findtime = 600
  maxretry = 3
  backend = systemd

# Automatic updates
security_autoupdate_enabled: true
security_autoupdate_reboot: false
security_autoupdate_reboot_time: '03:00'
```

Inventory Management
Dynamic Inventory (from Terraform)
Generated by GitHub Actions:
```bash
node scripts/extract-config.js ansible-inventory --env dev-qa
```

Output format:
```ini
[dev-qa]
dev-qa-server ansible_host=167.172.9.197 ansible_user=root

[production]
production-server ansible_host=167.172.9.198 ansible_user=root
```

Static Inventory (for testing)
```ini
[dev-qa]
167.172.9.197 ansible_user=root

[production]
167.172.9.198 ansible_user=root
```

Security Configuration
SSH Hardening
- Password authentication disabled
- Root login disabled (after first run)
- Key-based authentication only
- Deploy user with sudo access
Fail2ban Configuration
- 60-minute ban for failed SSH attempts
- Maximum 3 retry attempts
- Monitors systemd journal
Firewall Rules
Managed by DigitalOcean cloud firewall (via Terraform):
- SSH (22): Open to all
- HTTP (80): Cloudflare IPs only
- HTTPS (443): Cloudflare IPs only
File Permissions
- Environment files: Mode 0600 (owner read/write only)
- Docker Compose files: Standard permissions
- Application directories: Owned by deploy user
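A minimal sketch of how the deploy_app role could enforce the 0600 mode on environment files, assuming the stock ansible.builtin.copy module and the deploy_app_prod_env_files variable shown earlier:

```yaml
- name: Copy environment files with owner-only permissions
  ansible.builtin.copy:
    src: '{{ item }}'
    dest: '{{ compose_project_dir }}/{{ item }}'
    owner: '{{ ansible_user }}'
    mode: '0600'  # owner read/write only; secrets stay unreadable to others
  loop: '{{ deploy_app_prod_env_files }}'
```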
Environment Configuration
File Locations on Server
Development/QA:
- Base directory: /opt/jubiloop/dev-qa-docker-compose/
- Environment files: .env.* with secure permissions
- Docker Compose: docker-compose.yml
- Caddy config: Caddyfile
Production:
- Base directory: /opt/jubiloop/prod-docker-compose/
- Environment files: .env.* with secure permissions
- Uses managed PostgreSQL (no local database)
Environment Variables
All environment files are managed through 1Password with copies in GitHub Secrets:
- .env.caddy.* - Reverse proxy configuration
- .env.server.* - AdonisJS application settings
- .env.postgres.* - Database credentials (dev/qa only)
- .env.redis.* - Redis passwords
Service Management
Health Checks
The deployment uses Docker Compose native health checks:
```bash
docker compose ps --format json | jq -r '.[] | select(.Health? and .Health.Status != "healthy") | .Name'
```

Health check configuration:
- Retries up to 30 times (deploy_app_health_check_retries)
- 10-second delay between retries (deploy_app_health_check_delay)
- Deployment fails if services remain unhealthy
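To see what the jq filter above selects, here it is run against a hand-written sample. Note that the JSON shape mirrors what the filter expects; the service names are illustrative:

```shell
# Sample status list; "server-qa" has not passed its health check yet.
echo '[{"Name":"server-dev","Health":{"Status":"healthy"}},
       {"Name":"server-qa","Health":{"Status":"starting"}}]' |
  jq -r '.[] | select(.Health? and .Health.Status != "healthy") | .Name'
# Prints: server-qa
```

Only containers whose health status is anything other than "healthy" are printed, so an empty result means the stack passed.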
Container Management
View Status:
```bash
ssh deploy@<droplet-ip> 'cd /opt/jubiloop/<compose-dir> && docker compose ps'
```

View Logs:
```bash
ssh deploy@<droplet-ip> 'cd /opt/jubiloop/<compose-dir> && docker compose logs -f'
```

Restart Services:
```bash
ssh deploy@<droplet-ip> 'cd /opt/jubiloop/<compose-dir> && docker compose restart'
```

Troubleshooting
Common Issues
SSH Connection Problems:
- Verify SSH key permissions (600 for private key)
- Ensure droplet IP is in known_hosts
- Check if deploy user exists
Container Registry Authentication:
- Verify the GitHub PAT has the read:packages scope
- Check token expiration
- Ensure the token is properly exported
Service Health Failures:
- Check container logs for specific errors
- Verify environment files exist and have correct permissions
- Ensure database migrations have run
- Check if ports are already in use
Module/Collection Errors:
- Install required collections: ansible-galaxy collection install -r requirements.yml
- Ensure the Python Docker SDK is installed: pip install "docker>=5.0.0"
Debug Commands
Test Connectivity:
```bash
ansible -i "<ip>," all -u deploy --private-key ~/.ssh/jubiloop_deploy_key -m ping
```

Gather Facts:
```bash
ansible -i "<ip>," all -u deploy --private-key ~/.ssh/jubiloop_deploy_key -m setup
```

Run with Verbose Output:
```bash
ansible-playbook -i "<ip>," playbook.yml -u deploy --private-key ~/.ssh/jubiloop_deploy_key -e env=dev-qa -e cr_pat="$CR_PAT" -vvv
```

Maintenance
Regular Updates
- Security patches are applied automatically via unattended-upgrades
- Container images are updated through CI/CD deployments
- Manual updates can be triggered by re-running the playbook
Backup Considerations
- Application data is in managed databases (automated backups)
- Environment files should be backed up in 1Password
- No persistent data stored in containers
Monitoring
- Health endpoints monitored by deployment process
- Container logs available via Docker Compose
- System logs in systemd journal
Best Practices
- Always use GitHub Actions for production deployments
- Test changes in development environment first
- Keep secrets in 1Password and GitHub Secrets only
- Document any manual interventions in deployment journal
- Monitor deployment success through GitHub Actions
- Verify health checks after deployment
- Review container logs for any issues
Related Documentation
- CI/CD Pipeline - GitHub Actions workflows
- Infrastructure - Terraform and cloud resources
- Environments - Environment-specific configurations
- Secrets Management - Credential handling