Server Configuration

Overview

Jubiloop uses Ansible to automate server configuration and application deployment for all environments. The deployment process is primarily automated through GitHub Actions CI/CD, with manual execution available for testing and debugging purposes only.

Architecture

Development/QA Environment (Shared Server)

┌─────────────────────────────────────────────────────┐
│  DigitalOcean Droplet (Ubuntu 22.04) - $6/month    │
│                                                     │
│  ┌─────────────────────────────────────────────┐   │
│  │  Caddy (Reverse Proxy)                      │   │
│  │  ├── dev-api.jubiloop.ca → server-dev:3333 │   │
│  │  └── qa-api.jubiloop.ca  → server-qa:3333  │   │
│  └─────────────────────────────────────────────┘   │
│                                                     │
│  ┌────────────────┐  ┌────────────────┐             │
│  │ Dev Stack      │  │ QA Stack       │             │
│  │ - server-dev   │  │ - server-qa    │             │
│  │ - postgres-dev │  │ - postgres-qa  │             │
│  │ - redis-dev    │  │ - redis-qa     │             │
│  └────────────────┘  └────────────────┘             │
└─────────────────────────────────────────────────────┘

Production Environment (Dedicated Server)

┌─────────────────────────────────────────────────────┐
│  DigitalOcean Droplet (Ubuntu 22.04) - $6/month    │
│                                                     │
│  ┌─────────────────────────────────────────────┐   │
│  │  Caddy (Reverse Proxy)                      │   │
│  │  └── api.jubiloop.ca → server:3333         │   │
│  └─────────────────────────────────────────────┘   │
│                                                     │
│  ┌─────────────────────────────────────────────┐   │
│  │  Production Stack                            │   │
│  │  - server (AdonisJS API)                     │   │
│  │  - redis (Cache/Sessions)                    │   │
│  └─────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────┘

                          │
                          │ Database Connection
                          ▼

┌─────────────────────────────────────────────────────┐
│              Neon Database                          │
│         (Managed PostgreSQL - External)             │
└─────────────────────────────────────────────────────┘

Deployment Methods

Primary: GitHub Actions (Production Use)

All development, QA, and production deployments are automated via GitHub Actions (see the workflow sketch below) with:

  • Dynamic Configuration: Environment variables from env.deploy.yml
  • Secrets Management: Secure credentials in GitHub Secrets
  • State Integration: Droplet IPs from Terraform state in Cloudflare R2
  • Audit Trail: Complete deployment history and approvals
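
As a rough illustration, the deploy job in such a workflow might look like the sketch below. The job layout, secret name (CR_PAT), and inventory path are assumptions; the extract-config.js invocation is the one documented under Inventory Management.

yaml
# Hypothetical excerpt of a deploy job; job, secret, and path
# names are illustrative, not the project's actual workflow.
deploy:
  runs-on: ubuntu-latest
  environment: dev-qa
  steps:
    - uses: actions/checkout@v4
    - name: Generate inventory from Terraform state
      run: node scripts/extract-config.js ansible-inventory --env dev-qa > inventory.ini
    - name: Run the Ansible playbook
      working-directory: infra/deploy/ansible
      run: >-
        ansible-playbook -i ../../../inventory.ini playbook.yml
        -e env=dev-qa -e cr_pat="${{ secrets.CR_PAT }}"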

Secondary: Manual Execution (Testing Only)

Manual execution is only for:

  1. Testing playbook changes on dedicated test infrastructure
  2. Debugging deployment issues in isolated environments
  3. Emergency interventions (requires approval and must be followed by GitHub Actions deployment)

Ansible Playbook

Location

infra/deploy/ansible/
├── ansible.cfg              # Ansible configuration
├── playbook.yml            # Main playbook
├── requirements.yml        # Collections to install
├── group_vars/             # Variable files
└── roles/                  # Vendored roles and custom deployment
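
Because the roles are vendored (see Roles Used below), requirements.yml mainly needs to pull collections. A minimal sketch, assuming the community.docker collection that the deployment tasks rely on; the version pin is illustrative:

yaml
# Sketch of requirements.yml; only community.docker is assumed here.
collections:
  - name: community.docker
    version: '>=3.4.0'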

Execution Flow

The playbook executes these tasks in order:

1. Pre-tasks

Environment Validation:

yaml
- name: Validate environment variable
  ansible.builtin.assert:
    that:
      - env in valid_environments
    fail_msg: "Invalid environment '{{ env }}'. Must be one of: {{ valid_environments | join(', ') }}"

APT Cache Update:

  • Updates package cache if older than 1 hour
  • Only runs on Debian-based systems
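
A sketch of this pre-task in Ansible; the task name is illustrative, and the one-hour window comes from the bullets above:

yaml
- name: Update APT cache when older than one hour
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: 3600 # seconds; skip the update if the cache is fresher
  when: ansible_os_family == 'Debian'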

2. Deploy User Setup

Role: lnovara.deploy-user

Creates and configures the deployment user:

yaml
user_name: deploy
user_groups: ['docker', 'sudo']
user_shell: /bin/bash
public_keys:
  - "{{ ansible_ssh_private_key_file | regex_replace('\\.?$', '.pub') }}"
enable_passwordless_sudo: true

3. System Configuration

Swap Space (Role: geerlingguy.swap):

yaml
swap_file_path: /swapfile
swap_file_size_mb: 1024 # 1GB for small droplets
swap_swappiness: 10 # Low swappiness for server workloads
swap_file_state: present

Security Hardening (Role: geerlingguy.security), configured via group_vars/all/security.yml:

  • SSH: No password auth, no root login
  • Fail2ban: 60-minute ban, max 3 attempts
  • Auto-updates: Enabled without auto-reboot

4. Docker Installation

Role: geerlingguy.docker

Docker configuration:

yaml
docker_edition: 'ce'
docker_install_compose_plugin: true
docker_users:
  - '{{ ansible_user }}'
docker_daemon_options:
  log-driver: 'json-file'
  log-opts:
    max-size: '10m'
    max-file: '3'
  storage-driver: 'overlay2'

5. Application Deployment

Role: deploy_app

Deployment directory: /opt/jubiloop/{compose_dir_name}

Tasks performed:

  1. Create deployment directory (owner: ansible_user, mode: 0755)
  2. Copy from repository:
    • docker-compose.yml
    • Caddyfile
    • Environment files (mode: 0600 for security)
  3. Login to GitHub Container Registry
  4. Deploy with Docker Compose v2 (see the module sketch after this list):
    yaml
    pull: always # Always get latest images
    recreate: always # Clean container state
    state: present # Ensure running
  5. Health checks using Docker's native status
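
Steps 3 and 4 map onto the community.docker collection roughly as follows. This is a sketch: the task names and exact variable wiring are assumptions, while the registry variables and pull/recreate/state settings come from this page.

yaml
- name: Log in to GitHub Container Registry
  community.docker.docker_login:
    registry_url: '{{ registry_url }}'
    username: '{{ registry_username }}'
    password: '{{ cr_pat }}'

- name: Deploy the stack with Docker Compose v2
  community.docker.docker_compose_v2:
    project_src: '{{ compose_project_dir }}'
    pull: always # always get latest images
    recreate: always # clean container state
    state: present # ensure services are running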

Roles Used

All third-party roles are vendored for reliability:

  1. lnovara.deploy-user: Creates deploy user
  2. geerlingguy.swap: Configures 1GB swap space
  3. geerlingguy.security: SSH hardening and fail2ban
  4. geerlingguy.docker: Docker CE installation
  5. deploy_app: Custom deployment role
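
In playbook.yml these are applied in the same order; a minimal sketch of the wiring, with tags and conditionals omitted:

yaml
roles:
  - role: lnovara.deploy-user
  - role: geerlingguy.swap
  - role: geerlingguy.security
  - role: geerlingguy.docker
  - role: deploy_app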

Variables and Configuration

Playbook Variables

yaml
# Valid environments
valid_environments: ['dev', 'qa', 'dev-qa', 'production']

# Container registry
registry_url: ghcr.io
registry_username: jubiloop

# Deployment paths
compose_dir_name: "{% if env in ['dev', 'qa', 'dev-qa'] %}dev-qa-docker-compose{% else %}prod-docker-compose{% endif %}"
compose_project_dir: '/opt/jubiloop/{{ compose_dir_name }}'

# Health checks
health_check_url: 'http://localhost:3333/health'
health_check_timeout: 30

Group Variables (group_vars/all/)

main.yml:

yaml
# Health check configuration
deploy_app_health_check_retries: 30
deploy_app_health_check_delay: 10

# Environment-specific service lists
deploy_app_dev_qa_services:
  - caddy
  - server-dev
  - server-qa
  - postgres-dev
  - postgres-qa
  - redis-dev
  - redis-qa

deploy_app_prod_services:
  - caddy
  - server
  - redis

# Environment files to copy
deploy_app_dev_qa_env_files:
  - .env.caddy
  - .env.server.dev
  - .env.server.qa
  - .env.postgres.dev
  - .env.postgres.qa
  - .env.redis.dev
  - .env.redis.qa

deploy_app_prod_env_files:
  - .env.server.prod
  - .env.redis.prod

security.yml:

yaml
# SSH hardening
security_ssh_password_authentication: 'no'
security_ssh_permit_root_login: 'no'
security_ssh_usedns: 'no'
security_ssh_permit_empty_password: 'no'
security_ssh_challenge_response_auth: 'no'
security_ssh_gss_api_authentication: 'no'
security_ssh_x11_forwarding: 'no'

# Fail2ban configuration
security_fail2ban_enabled: true
security_fail2ban_custom_jail_local: |
  [DEFAULT]
  bantime = 3600
  findtime = 600
  maxretry = 3
  backend = systemd

# Automatic updates
security_autoupdate_enabled: true
security_autoupdate_reboot: false
security_autoupdate_reboot_time: '03:00'

Inventory Management

Dynamic Inventory (from Terraform)

Generated by GitHub Actions:

bash
node scripts/extract-config.js ansible-inventory --env dev-qa

Output format:

ini
[dev-qa]
dev-qa-server ansible_host=167.172.9.197 ansible_user=root

[production]
production-server ansible_host=167.172.9.198 ansible_user=root

Static Inventory (for testing)

ini
[dev-qa]
167.172.9.197 ansible_user=root

[production]
167.172.9.198 ansible_user=root

Security Configuration

SSH Hardening

  • Password authentication disabled
  • Root login disabled (after first run)
  • Key-based authentication only
  • Deploy user with sudo access

Fail2ban Configuration

  • 60-minute ban for failed SSH attempts
  • Maximum 3 retry attempts
  • Monitors systemd journal

Firewall Rules

Managed by DigitalOcean cloud firewall (via Terraform):

  • SSH (22): Open to all
  • HTTP (80): Cloudflare IPs only
  • HTTPS (443): Cloudflare IPs only

File Permissions

  • Environment files: Mode 0600 (owner read/write only)
  • Docker Compose files: Standard permissions
  • Application directories: Owned by deploy user
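
As an illustration, the environment-file copy inside the deploy_app role plausibly looks like this; only the 0600 mode and the file-list variable come from this page, the rest is assumed:

yaml
- name: Copy environment files with owner-only permissions
  ansible.builtin.copy:
    src: '{{ item }}'
    dest: '{{ compose_project_dir }}/{{ item }}'
    owner: '{{ ansible_user }}'
    mode: '0600' # owner read/write only
  loop: '{{ deploy_app_prod_env_files }}'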

Environment Configuration

File Locations on Server

Development/QA:

  • Base directory: /opt/jubiloop/dev-qa-docker-compose/
  • Environment files: .env.* with secure permissions
  • Docker Compose: docker-compose.yml
  • Caddy config: Caddyfile

Production:

  • Base directory: /opt/jubiloop/prod-docker-compose/
  • Environment files: .env.* with secure permissions
  • Uses managed PostgreSQL (no local database)

Environment Variables

All environment files are managed through 1Password with copies in GitHub Secrets:

  • .env.caddy.* - Reverse proxy configuration
  • .env.server.* - AdonisJS application settings
  • .env.postgres.* - Database credentials (dev/qa only)
  • .env.redis.* - Redis passwords

Service Management

Health Checks

The deployment relies on Docker Compose's native health status. The command below lists every service whose container is not reporting healthy (recent Compose releases emit one JSON object per line, which is what the jq filter assumes):

bash
docker compose ps --format json | jq -r 'select(.Health != null and .Health != "" and .Health != "healthy") | .Name'

Health check configuration:

  • Retry up to 30 times (deploy_app_health_check_retries)
  • 10-second delay between retries (deploy_app_health_check_delay)
  • Deployment fails if any service remains unhealthy
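
One way to express that retry loop in Ansible, reusing the command above (a sketch, not necessarily the role's exact task):

yaml
- name: Wait for all containers to report healthy
  ansible.builtin.shell: >-
    docker compose ps --format json
    | jq -r 'select(.Health != null and .Health != "" and .Health != "healthy") | .Name'
  args:
    chdir: '{{ compose_project_dir }}'
  register: unhealthy
  retries: '{{ deploy_app_health_check_retries }}' # 30 attempts
  delay: '{{ deploy_app_health_check_delay }}' # 10 seconds apart
  until: unhealthy.stdout == '' # empty output means nothing is unhealthy
  changed_when: false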

Container Management

View Status:

bash
ssh deploy@<droplet-ip> 'cd /opt/jubiloop/<compose-dir> && docker compose ps'

View Logs:

bash
ssh deploy@<droplet-ip> 'cd /opt/jubiloop/<compose-dir> && docker compose logs -f'

Restart Services:

bash
ssh deploy@<droplet-ip> 'cd /opt/jubiloop/<compose-dir> && docker compose restart'

Troubleshooting

Common Issues

SSH Connection Problems:

  • Verify SSH key permissions (600 for private key)
  • Ensure droplet IP is in known_hosts
  • Check if deploy user exists

Container Registry Authentication:

  • Verify GitHub PAT has read:packages scope
  • Check token expiration
  • Ensure token is properly exported

Service Health Failures:

  • Check container logs for specific errors
  • Verify environment files exist and have correct permissions
  • Ensure database migrations have run
  • Check if ports are already in use

Module/Collection Errors:

  • Install required collections: ansible-galaxy collection install -r requirements.yml
  • Ensure the Python Docker SDK is installed: pip install 'docker>=5.0.0' (quoted so the shell does not treat >= as a redirect)

Debug Commands

Test Connectivity:

bash
ansible -i "<ip>," all -u deploy --private-key ~/.ssh/jubiloop_deploy_key -m ping

Gather Facts:

bash
ansible -i "<ip>," all -u deploy --private-key ~/.ssh/jubiloop_deploy_key -m setup

Run with Verbose Output:

bash
ansible-playbook -i "<ip>," playbook.yml -u deploy --private-key ~/.ssh/jubiloop_deploy_key -e env=dev-qa -e cr_pat="$CR_PAT" -vvv

Maintenance

Regular Updates

  • Security patches are applied automatically via unattended-upgrades
  • Container images are updated through CI/CD deployments
  • Manual updates can be triggered by re-running the playbook

Backup Considerations

  • Application data is in managed databases (automated backups)
  • Environment files should be backed up in 1Password
  • No persistent data stored in containers

Monitoring

  • Health endpoints monitored by deployment process
  • Container logs available via Docker Compose
  • System logs in systemd journal

Best Practices

  1. Always use GitHub Actions for production deployments
  2. Test changes in development environment first
  3. Keep secrets in 1Password and GitHub Secrets only
  4. Document any manual interventions in deployment journal
  5. Monitor deployment success through GitHub Actions
  6. Verify health checks after deployment
  7. Review container logs for any issues

Built with ❤️ by the Jubiloop team