Deployment

The easiest way to deploy your pckg app is with Docker Compose. The example below shows how to run multiple stacks with:

  • HAProxy + Let's Encrypt as the entrypoint

  • MySQL, Redis and RabbitMQ as system services

  • Apache + PHP as the web/app server

Docker Compose (Swarm)

docker
|- build
|-|- Dockerfile
|- compose
|-|- docker-compose.entrypoint.prod.yml
|-|- docker-compose.entrypoint.yml
|-|- docker-compose.system.dev.yml
|-|- docker-compose.system.prod.yml
|-|- docker-compose.system.yml
|-|- docker-compose.www.dev.yml
|-|- docker-compose.www.prod.yml
|-|- docker-compose.www.yml
|- config
|-|- apache.conf
|-|- haproxy.cfg
|-|- ssl-list.txt
|- env
|-|- .env.database.docker
|-|- .env.rabbit.docker
|-|- .env.redis.docker
|-|- .env.web.docker
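
Each env file holds the credentials for one service and is passed to its container via env_file. A minimal sketch of env/.env.database.docker, assuming the standard variables of the official mysql image (values are placeholders):

# env/.env.database.docker
MYSQL_ROOT_PASSWORD=change-me
MYSQL_DATABASE=project
MYSQL_USER=project
MYSQL_PASSWORD=change-me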

Entrypoint

The entrypoint is only needed in the production environment.

version: "3.8"

services:
  balancer:
    image: haproxy:2.5-alpine

    networks:
      - cluster

    dns:
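      # 127.0.0.11 is Docker's embedded DNS resolver, so HAProxy can resolve Swarm service names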
      - "127.0.0.11"

    deploy:
      mode: global
      restart_policy:
        condition: on-failure

  certbot:
    image: certbot/certbot
    networks:
      - cluster
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot --webroot-path /var/www/certbot; sleep 12h & wait $${!}; done;'"
    deploy:
      restart_policy:
        condition: on-failure

networks:
  cluster:
    driver: overlay
    name: cluster
    external: true
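
The balancer loads config/haproxy.cfg from the file tree above. That file is not shown here; a minimal sketch, assuming plain HTTP on port 80 and that the app stack below is deployed as local-project--www with a service named web:

resolvers docker
    nameserver dns 127.0.0.11:53

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http-in
    bind *:80
    default_backend www

backend www
    # local-project--www_web is the Swarm DNS name of the web service (stack name + "_" + service name)
    server web local-project--www_web:80 check resolvers docker init-addr none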

System

version: "3.8"

services:
  database:
    image: mysql:5
    env_file: ../env/.env.database.docker
    networks:
      - project-backend
    command:
      [
        "mysqld",
        "--character-set-server=utf8mb4",
        "--collation-server=utf8mb4_general_ci",
        "--skip-character-set-client-handshake",
      ]
    deploy:
      restart_policy:
        condition: on-failure

  cache-server:
    image: redis:5-alpine
    env_file: ../env/.env.redis.docker
    command: ["sh", "-c", "redis-server --requirepass $${REDIS_PASS}"]
    networks:
      - project-backend
    deploy:
      restart_policy:
        condition: on-failure

  queue-server:
    image: rabbitmq:3.8-management-alpine
    env_file: ../env/.env.rabbit.docker
    networks:
      - project-backend
    deploy:
      restart_policy:
        condition: on-failure

networks:
  default: {}
  project-backend:
    driver: overlay
    name: project-backend
    external: true
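
The cache-server command above reads REDIS_PASS from env/.env.redis.docker; a minimal sketch (the value is a placeholder):

# env/.env.redis.docker
REDIS_PASS=change-me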

App

version: "3.8"

services:
  web:
    image: "registry.digitalocean.com/team/project:${PCKG_BUILD_ID}"
    env_file: ../env/.env.web.docker
    depends_on:
      - cache-server
      - database
    networks:
      - project-frontend
      - project-backend
      - cluster

networks:
  default: {}
  project-frontend: {}
  project-backend:
    driver: overlay
    name: project-backend
    external: true
  cluster:
    driver: overlay
    name: cluster
    external: true
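
On the shared project-backend network the app reaches the system services by their service names. A sketch of env/.env.web.docker; the variable names are assumptions and depend on how your app reads its configuration:

# env/.env.web.docker (variable names are assumptions)
DB_HOST=database
DB_PORT=3306
REDIS_HOST=cache-server
REDIS_PORT=6379
RABBITMQ_HOST=queue-server
RABBITMQ_PORT=5672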

Deployment
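
The cluster and project-backend overlay networks are declared as external, so create them once on a Swarm manager before the first deploy:

$ docker network create --driver overlay cluster
$ docker network create --driver overlay project-backend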

Deploy entrypoint stack.

$ env CONFIG_HAPROXY=/www/client/project.local/htdocs/docker/haproxy.cfg \
 env CONFIG_HAPROXY_KEY=local-project--entrypoint--haproxy \
 env VOLUME_HAPROXY=/local/local-project--entrypoint--haproxy \
 env VOLUME_HAPROXY_KEY=local-project--entrypoint--haproxy \
 env VOLUME_CERTBOT=/shared/local-project--entrypoint--certbot \
 env VOLUME_CERTBOT_KEY=local-project--entrypoint--certbot \
 env VOLUME_CERTBOT_WWW=/shared/local-project--entrypoint--certbot_www \
 env VOLUME_CERTBOT_WWW_KEY=local-project--entrypoint--certbot_www \
 docker stack deploy local-project--entrypoint \
 -c docker-compose.entrypoint.yml \
 -c docker-compose.entrypoint.prod.yml \
 --with-registry-auth --prune --resolve-image always

Deploy system services.

$ env VOLUME_DATABASE=/local/local-project--system--database \
 env VOLUME_DATABASE_KEY=local-project--system--database \
 env VOLUME_CACHE=/local/local-project--system--cache \
 env VOLUME_CACHE_KEY=local-project--system--cache \
 env VOLUME_QUEUE=/local/local-project--system--queue \
 env VOLUME_QUEUE_KEY=local-project--system--queue \
 docker stack deploy local-project--system \
 -c docker-compose.system.yml \
 -c docker-compose.system.prod.yml \
 --with-registry-auth --prune --resolve-image always

Deploy app.

$ env CONFIG_WWW=/www/client/project.local/htdocs/.env \
 env CONFIG_WWW_KEY=local-project--www \
 env CONFIG_APACHE=/www/client/project.local/htdocs/storage/apache.conf \
 env CONFIG_APACHE_KEY=local-project--apache \
 env VOLUME_STORAGE=/shared/local-project--storage \
 env VOLUME_STORAGE_KEY=local-project--storage \
 env VOLUME_CERTBOT_WWW=/shared/local-project--entrypoint--certbot_www \
 env VOLUME_CERTBOT_WWW_KEY=local-project--entrypoint--certbot_www \
 docker stack deploy local-project--www \
 -c docker-compose.www.yml \
 -c docker-compose.www.prod.yml \
 --with-registry-auth --prune --resolve-image always
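
Once a stack is deployed, you can check that its services converged and follow their logs:

$ docker stack services local-project--www
$ docker service logs -f local-project--www_web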

Production

You need to package your code into an image first.

$ docker build \
 -f docker/build/Dockerfile \
 -t registry.digitalocean.com/team/project:$GITHUB_SHA \
 .

The referenced docker/build/Dockerfile:

FROM schtr4jh/pckg:apache-fpm-80-alpine

COPY . /var/www/html

ENTRYPOINT ["sh", "/docker-entrypoint-apache-fpm-80.sh"]
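
Push the image afterwards so the Swarm nodes can pull it during docker stack deploy (assuming you are already logged in to the registry):

$ docker push registry.digitalocean.com/team/project:$GITHUB_SHA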

Development

Run a temporary Docker container with Apache + PHP-FPM + MySQL + Redis + Composer + Yarn + Webpack and more.

$ docker run \
 -v /path/to/project/:/var/www/html \
 -e PCKG_EMAIL="your@email.net" \
 -e PCKG_NAME="John Doe" \
 -p 8089:443 \
 --restart=always \
 -it schtr4jh/pckg:latest /bin/bash
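
Container port 443 is published on host port 8089, so the project should then be reachable at https://localhost:8089.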

CLI (with Docker Compose - Swarm)

If you have the Pckg CLI installed, you can use it to deploy your app automatically to the local/dev environment, or to a remote environment by providing SSH keys.

$ pckg-cli config:memorize --name project.local --reverse local-project --host localhost
$ # pckg-cli deploy --image haproxy:letsencrypt
$ pckg-cli deploy --image mysql:8.0
$ pckg-cli deploy --image redis
$ pckg-cli deploy --image rabbitmq
$ pckg-cli deploy --image apache2
$ pckg-cli deploy --image php:8.0

VPS

Shared hosting
