
Multi-Host Deployment

Deploy your application across multiple servers using Docker Swarm with managers and workers.

Swarm Architecture

Dockflow uses Docker Swarm for orchestration:

  • Managers: Receive deployments and orchestrate the cluster
  • Workers: Run containers distributed by managers
  • Multi-manager: For high availability (tolerates manager failures)

Basic Configuration

```yaml
# .deployment/servers.yml
servers:
  manager:
    host: "192.168.1.10"
    role: manager
    tags: [production]
  worker1:
    host: "192.168.1.11"
    role: worker
    tags: [production]
  worker2:
    host: "192.168.1.12"
    role: worker
    tags: [production]

defaults:
  user: dockflow
  port: 22
```

Workers join the Swarm cluster automatically. You deploy only to the manager; Swarm then distributes workloads across all nodes.

Multi-Manager for High Availability

For production environments, configure multiple managers for failover:

```yaml
# .deployment/servers.yml
servers:
  manager1:
    host: "192.168.1.10"
    role: manager
    tags: [production]
  manager2:
    host: "192.168.1.11"
    role: manager
    tags: [production]
  manager3:
    host: "192.168.1.12"
    role: manager
    tags: [production]
  worker1:
    host: "192.168.1.20"
    role: worker
    tags: [production]
  worker2:
    host: "192.168.1.21"
    role: worker
    tags: [production]
```

For Raft consensus, use an odd number of managers:

  • 3 managers = tolerates 1 failure
  • 5 managers = tolerates 2 failures
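These figures follow from Raft's majority rule: a cluster of N managers keeps quorum as long as floor(N/2) + 1 of them are reachable, so it tolerates floor((N - 1)/2) failures. A quick sketch of the arithmetic:

```python
def raft_tolerance(managers: int) -> int:
    """Number of manager failures a Swarm cluster survives.

    Raft needs a majority (floor(N/2) + 1) of managers reachable
    to keep quorum, so N managers tolerate floor((N - 1) / 2) losses.
    """
    return (managers - 1) // 2

for n in (1, 3, 4, 5):
    print(f"{n} managers -> tolerates {raft_tolerance(n)} failure(s)")
```

Note that an even count buys nothing: 4 managers still tolerate only 1 failure while adding one more node that must agree, which is why odd counts are recommended.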

Automatic Failover

When deploying with multiple managers, Dockflow automatically:

  1. Checks each manager’s status via SSH
  2. Finds the Swarm leader (or any reachable manager)
  3. Falls back to the next available manager if one is down
```bash
# Dockflow finds the active leader automatically
dockflow deploy production

# Disable failover (use first manager only)
dockflow deploy production --no-failover
```
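Conceptually, the selection logic in steps 1-3 behaves like the sketch below. This is illustrative only: the function and the status strings are hypothetical, and Dockflow's real implementation probes each host over SSH rather than reading a dictionary.

```python
def pick_manager(statuses):
    # Hypothetical sketch of manager failover: prefer the Swarm
    # leader, otherwise fall back to the first reachable manager.
    fallback = None
    for name, status in statuses.items():
        if status == "leader":
            return name            # best case: the active Raft leader
        if status == "reachable" and fallback is None:
            fallback = name        # remember the first reachable manager
    return fallback                # None means no manager answered

statuses = {"manager1": "unreachable", "manager2": "leader", "manager3": "reachable"}
print(pick_manager(statuses))  # -> manager2
```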

Example output with failover:

```
⠋ Checking 3 managers for active leader...
  Checking manager1 (192.168.1.10)... ✗ unreachable
  Checking manager2 (192.168.1.11)... ✓ LEADER
⚠ Using manager2 (leader). Unreachable: manager1 (unreachable)
```

How Deployment Works

  1. Deploy targets the manager - Dockflow connects only to the active manager
  2. Swarm distributes workloads - containers are scheduled across all nodes
  3. Images are pushed to workers - via a registry or direct transfer
```bash
dockflow deploy production
# ✓ Deployment completed! Swarm cluster: 3 managers + 2 worker(s)
```

CI Secrets for Multi-Host

Each server needs its own connection credentials:

```bash
# Connection strings for each server (managers + workers)
PRODUCTION_MANAGER1_CONNECTION=<base64 JSON>
PRODUCTION_MANAGER2_CONNECTION=<base64 JSON>
PRODUCTION_MANAGER3_CONNECTION=<base64 JSON>
PRODUCTION_WORKER1_CONNECTION=<base64 JSON>
PRODUCTION_WORKER2_CONNECTION=<base64 JSON>

# Or, if the host is defined in servers.yml, just the SSH key:
PRODUCTION_MANAGER1_SSH_PRIVATE_KEY=<key>
```

CI secret names upper-case the environment and server name and join the parts with underscores: a server named worker1 uses PRODUCTION_WORKER1_CONNECTION.
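That naming rule can be expressed as a one-liner (the helper below is hypothetical, not part of Dockflow, and shows only the upper-case-and-join convention visible in the examples above):

```python
def secret_name(environment, server, suffix="CONNECTION"):
    # Hypothetical helper: upper-case the environment and server
    # name and join the parts with underscores.
    return f"{environment}_{server}_{suffix}".upper()

print(secret_name("production", "worker1"))
# -> PRODUCTION_WORKER1_CONNECTION
print(secret_name("production", "manager1", "SSH_PRIVATE_KEY"))
# -> PRODUCTION_MANAGER1_SSH_PRIVATE_KEY
```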

Server-Specific Environment Variables

Override variables for specific servers:

```yaml
servers:
  manager1:
    role: manager
    tags: [production]
    env:
      NODE_ID: "manager-1"
      PROMETHEUS_ENABLED: "true"
  worker1:
    role: worker
    tags: [production]
    env:
      NODE_ID: "worker-1"
      GPU_ENABLED: "true"
```
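Conceptually, a server's env block is layered over whatever shared variables apply, with the server-specific values winning. The sketch below illustrates that merge; the exact precedence Dockflow applies is an assumption here, inferred from the word "override" above:

```python
def resolve_env(shared, server_env):
    # Hypothetical sketch of per-server overrides: start from the
    # shared variables, then let server-specific values win.
    merged = dict(shared)
    merged.update(server_env)
    return merged

shared = {"LOG_LEVEL": "info", "GPU_ENABLED": "false"}
worker1 = {"NODE_ID": "worker-1", "GPU_ENABLED": "true"}
print(resolve_env(shared, worker1))
```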