Deployment

The easiest way to deploy your Pckg app is with Docker Compose in Docker Swarm mode. The example below shows how to run multiple stacks with:

  • HAProxy + Let's Encrypt as the entrypoint

  • MySQL, Redis and RabbitMQ as system services

  • Apache + PHP as the web/app server
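
All of the stacks below assume Docker is running in Swarm mode on the target host. If the host is not part of a swarm yet, a minimal single-node setup, for example:

$ docker swarm init
$ docker node ls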

Docker Compose (Swarm)

docker
|- build
|-|- Dockerfile
|- compose
|-|- docker-compose.entrypoint.prod.yml
|-|- docker-compose.entrypoint.yml
|-|- docker-compose.system.dev.yml
|-|- docker-compose.system.prod.yml
|-|- docker-compose.system.yml
|-|- docker-compose.www.dev.yml
|-|- docker-compose.www.prod.yml
|-|- docker-compose.www.yml
|- config
|-|- apache.conf
|-|- haproxy.cfg
|-|- ssl-list.txt
|- env
|-|- .env.database.docker
|-|- .env.rabbit.docker
|-|- .env.redis.docker
|-|- .env.web.docker

Entrypoint

The entrypoint stack is only needed in the production environment.

version: "3.8"

services:
  balancer:
    image: haproxy:2.5-alpine

    networks:
      - cluster

    dns:
      - "127.0.0.11"

    deploy:
      mode: global
      restart_policy:
        condition: on-failure

  certbot:
    image: certbot/certbot
    networks:
      - cluster
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot --webroot-path /var/www/certbot; sleep 12h & wait $${!}; done;'"
    deploy:
      restart_policy:
        condition: on-failure

networks:
  cluster:
    driver: overlay
    name: cluster
    external: true
version: "3.8"
services:
  balancer:
    environment:
      CONFIG_HAPROXY: ${CONFIG_HAPROXY?:err}
      CONFIG_HAPROXY_KEY: ${CONFIG_HAPROXY_KEY?:err}
      VOLUME_HAPROXY: ${VOLUME_HAPROXY?:err}
      VOLUME_HAPROXY_KEY: ${VOLUME_HAPROXY_KEY?:err}
      VOLUME_CERTBOT: ${VOLUME_CERTBOT?:err}
      VOLUME_CERTBOT_KEY: ${VOLUME_CERTBOT_KEY?:err}

    # + deploy: mode: global (for balancer)
    # + deploy: endpoint_mode: dnsrr (for backends)
    ports:
      - target: 80
        published: 80
        mode: host
        protocol: tcp
      - target: 443
        published: 443
        mode: host
        protocol: tcp

    volumes:
      - "${CONFIG_HAPROXY}:/usr/local/etc/haproxy/haproxy.cfg"

      - "${VOLUME_HAPROXY}:/run/haproxy/"
      - "${VOLUME_CERTBOT}/live/yourdomain.com/fullchain.pem:/etc/haproxy/server.crt"
      - "${VOLUME_CERTBOT}/live/yourdomain.com/privkey.pem:/etc/haproxy/server.key"
      - "${VOLUME_CERTBOT}/live/yourdomain.com/merged.pem:/etc/haproxy/server.merged.crt"

    deploy:
      mode: global

  # check for Let's Encrypt renewal every 12 hours
  certbot:
    environment:
      VOLUME_CERTBOT: ${VOLUME_CERTBOT?:err}
      VOLUME_CERTBOT_KEY: ${VOLUME_CERTBOT_KEY?:err}
      VOLUME_CERTBOT_WWW: ${VOLUME_CERTBOT_WWW?:err}
      VOLUME_CERTBOT_WWW_KEY: ${VOLUME_CERTBOT_WWW_KEY?:err}
    volumes:
      - "${VOLUME_CERTBOT}:/etc/letsencrypt"

      - "${VOLUME_CERTBOT_WWW}:/var/www/certbot"

System

version: "3.8"

services:
  database:
    image: mysql:5
    env_file: ../env/.env.database.docker
    networks:
      - project-backend
    command:
      [
        "mysqld",
        "--character-set-server=utf8mb4",
        "--collation-server=utf8mb4_general_ci",
        "--skip-character-set-client-handshake",
      ]
    deploy:
      restart_policy:
        condition: on-failure

  cache-server:
    image: redis:5-alpine
    env_file: ../env/.env.redis.docker
    command: ["sh", "-c", "redis-server --requirepass $${REDIS_PASS}"]
    networks:
      - project-backend
    deploy:
      restart_policy:
        condition: on-failure

  queue-server:
    image: rabbitmq:3.8-management-alpine
    env_file: ../env/.env.rabbit.docker
    networks:
      - project-backend
    deploy:
      restart_policy:
        condition: on-failure

networks:
  default: {}
  project-backend:
    driver: overlay
    name: project-backend
    external: true
version: "3.8"

services:
  database:
    environment:
      VOLUME_DATABASE: ${VOLUME_DATABASE?:err}
      VOLUME_DATABASE_KEY: ${VOLUME_DATABASE_KEY?:err}
    # can Impero merge and populate volumes and their constraints?
    volumes:
      # this volume is required; Impero automatically adds the variable
      - "${VOLUME_DATABASE}:/var/lib/mysql"
      # this volume adds the Impero config to the database to allow password-less connections
      # - "${VOLUME_DATABASE_IMPERO}:/etc/mysql/conf.d/impero.cnf"

  cache-server:
    environment:
      VOLUME_CACHE: ${VOLUME_CACHE?:err}
      VOLUME_CACHE_KEY: ${VOLUME_CACHE_KEY?:err}
    volumes:
      - "${VOLUME_CACHE}:/data"

  queue-server:
    environment:
      VOLUME_QUEUE: ${VOLUME_QUEUE?:err}
      VOLUME_QUEUE_KEY: ${VOLUME_QUEUE_KEY?:err}
    volumes:
      - "${VOLUME_QUEUE}:/var/lib/rabbitmq"
version: "3.8"

services:
  database:
    volumes:
      - /local/local-project_database:/var/lib/mysql
# .env.database.docker
MYSQL_ROOT_PASSWORD=yourpassword
MYSQL_DATABASE=yourdatabase
# .env.rabbit.docker
RABBITMQ_DEFAULT_USER=rabbituser
RABBITMQ_DEFAULT_PASS=rabbitpass
# .env.redis.docker
REDIS_PASS=redispass
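
Once the system stack is deployed (see Deployment below), the other stacks reach these services via Swarm DNS under stack-name_service-name, e.g. local-project--system_database. A quick sanity check, assuming the stack name used later on this page:

$ docker stack services local-project--system
$ docker service logs --tail 20 local-project--system_database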

App

version: "3.8"

services:
  web:
    image: "registry.digitalocean.com/team/project:${PCKG_BUILD_ID}"
    env_file: ../env/.env.web.docker
    depends_on:
      - cache-server
      - database
    networks:
      - project-frontend
      - project-backend
      - cluster

networks:
  default: {}
  project-frontend: {}
  project-backend:
    driver: overlay
    name: project-backend
    external: true
  cluster:
    driver: overlay
    name: cluster
    external: true
version: "3.8"

services:
  web:
    environment:
      CONFIG_APACHE: ${CONFIG_APACHE?:err}
      CONFIG_APACHE_KEY: ${CONFIG_APACHE_KEY?:err}
      CONFIG_WWW: ${CONFIG_WWW?:err}
      CONFIG_WWW_KEY: ${CONFIG_WWW_KEY?:err}
      VOLUME_STORAGE: ${VOLUME_STORAGE?:err}
      VOLUME_STORAGE_KEY: ${VOLUME_STORAGE_KEY?:err}
      VOLUME_CERTBOT_WWW: ${VOLUME_CERTBOT_WWW?:err}
      VOLUME_CERTBOT_WWW_KEY: ${VOLUME_CERTBOT_WWW_KEY?:err}
    volumes:
      # configure apache
      - "${CONFIG_APACHE}:/etc/apache2/sites-enabled/000-default.conf"

      # configure dotenv()
      - "${CONFIG_WWW}:/var/www/html/.env"

      # mount persistent storage
      - "${VOLUME_STORAGE}:/var/www/html/storage"

      # support for certbot is included in our apache build
      - "${VOLUME_CERTBOT_WWW}:/var/www/certbot"

    deploy:
      endpoint_mode: dnsrr
      replicas: 3
      update_config:
        parallelism: 1
        delay: 5s
        order: start-first
      restart_policy:
        condition: on-failure
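
With endpoint_mode: dnsrr and order: start-first, new replicas are started before old ones are stopped, one task at a time. The usual way to roll out a new build is to re-run the stack deploy below with a new PCKG_BUILD_ID; for ad-hoc changes you can also update or scale the running service directly (stack/service names as used in the Deployment section):

$ docker service scale local-project--www_web=5
$ docker service update --image "registry.digitalocean.com/team/project:${PCKG_BUILD_ID}" local-project--www_web
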
version: "3.8"

services:
  web:
    image: "schtr4jh/pckg:apache-fpm-80-alpine"
    ports:
      - "36080:80"
      - "36443:443"
    networks:
      global:
        aliases:
          - project.local
      project-system: {}
    volumes:
      - "../../:/var/www/html"

  database:
    networks:
      project-system:
        aliases:
          - database

networks:
  global:
    external: true
    driver: overlay
    name: global
  project-system:
    external: true
    driver: overlay
    name: project-system
# .env.web.docker
REDIS_PASS=redispass
REDIS_HOST=project-local_cache-server

Deployment
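
The production stacks attach to external overlay networks (cluster and project-backend in the compose files above), which have to exist before the first deploy. A minimal sketch for creating them:

$ docker network create --driver overlay cluster
$ docker network create --driver overlay project-backend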

Deploy the entrypoint stack.

$ env CONFIG_HAPROXY=/www/client/project.local/htdocs/docker/haproxy.cfg \
 env CONFIG_HAPROXY_KEY=local-project--entrypoint--haproxy \
 env VOLUME_HAPROXY=/local/local-project--entrypoint--haproxy \
 env VOLUME_HAPROXY_KEY=local-project--entrypoint--haproxy \
 env VOLUME_CERTBOT=/shared/local-project--entrypoint--certbot \
 env VOLUME_CERTBOT_KEY=local-project--entrypoint--certbot \
 env VOLUME_CERTBOT_WWW=/shared/local-project--entrypoint--certbot_www \
 env VOLUME_CERTBOT_WWW_KEY=local-project--entrypoint--certbot_www \
 docker stack deploy local-project--entrypoint \
 -c docker-compose.entrypoint.yml \
 -c docker-compose.entrypoint.prod.yml \
 --with-registry-auth --prune --resolve-image always
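
HAProxy expects the certificate paths mounted above to exist, and it needs the certificate and private key merged into a single file (server.merged.crt). One way to bootstrap the first certificate before the stack is up, a sketch reusing the certbot volume path and placeholder domain from above:

$ docker run --rm -p 80:80 \
 -v /shared/local-project--entrypoint--certbot:/etc/letsencrypt \
 certbot/certbot certonly --standalone -d yourdomain.com \
 --email your@email.net --agree-tos --no-eff-email
$ cat /shared/local-project--entrypoint--certbot/live/yourdomain.com/fullchain.pem \
 /shared/local-project--entrypoint--certbot/live/yourdomain.com/privkey.pem \
 > /shared/local-project--entrypoint--certbot/live/yourdomain.com/merged.pem

Note that certbot renewals do not regenerate the merged file, so the cat step has to be repeated after each renewal (e.g. via a certbot deploy hook).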

Deploy the system services.

$ env VOLUME_DATABASE=/local/local-project--system--database \
 env VOLUME_DATABASE_KEY=local-project--system--database \
 env VOLUME_CACHE=/local/local-project--system--cache \
 env VOLUME_CACHE_KEY=local-project--system--cache \
 env VOLUME_QUEUE=/local/local-project--system--queue \
 env VOLUME_QUEUE_KEY=local-project--system--queue \
 docker stack deploy local-project--system \
 -c docker-compose.system.yml \
 -c docker-compose.system.prod.yml \
 --with-registry-auth --prune --resolve-image always

Deploy the app stack.

$ env CONFIG_WWW=/www/client/project.local/htdocs/.env \
 env CONFIG_WWW_KEY=local-project--www \
 env CONFIG_APACHE=/www/client/project.local/htdocs/storage/apache.conf \
 env CONFIG_APACHE_KEY=local-project--apache \
 env VOLUME_STORAGE=/shared/local-project--storage \
 env VOLUME_STORAGE_KEY=local-project--storage \
 env VOLUME_CERTBOT_WWW=/shared/local-project--entrypoint--certbot_www \
 env VOLUME_CERTBOT_WWW_KEY=local-project--entrypoint--certbot_www \
 docker stack deploy local-project--www \
 -c docker-compose.www.yml \
 -c docker-compose.www.prod.yml \
 --with-registry-auth --prune --resolve-image always

Production

You first need to pack your code into a Docker image.

$ docker build \
 -f docker/build/Dockerfile \
 -t registry.digitalocean.com/team/project:$GITHUB_SHA \
 .
# docker/build/Dockerfile
FROM schtr4jh/pckg:apache-fpm-80-alpine

COPY . /var/www/html

ENTRYPOINT ["sh", "/docker-entrypoint-apache-fpm-80.sh"]

Development

Run a temporary Docker container with Apache + PHP-FPM + MySQL + Redis + Composer + Yarn + Webpack and much more.

$ docker run \
 -v /path/to/project/:/var/www/html \
 -i -t \
 -p 8089:443 \
 -e PCKG_EMAIL="your@email.net" \
 -e PCKG_NAME="John Doe" \
 --restart=always \
 schtr4jh/pckg:latest /bin/bash

CLI (with Docker Compose - Swarm)

If you have the Pckg CLI installed, you can use it to automatically deploy your app to the local/dev environment, or to a remote environment by providing SSH keys.

$ pckg-cli config:memorize --name project.local --reverse local-project --host localhost
$ # pckg-cli deploy --image haproxy:letsencrypt
$ pckg-cli deploy --image mysql:8.0
$ pckg-cli deploy --image redis
$ pckg-cli deploy --image rabbitmq
$ pckg-cli deploy --image apache2
$ pckg-cli deploy --image php:8.0

VPS

Shared hosting
