Ghost is an excellent blogging platform and Docker is an excellent way to deploy it. But Docker is not conducive to installing npm plugins, such as the S3 storage adapter for Ghost. I'll show you how you can set up S3 with Ghost in Docker as quickly as you would normally deploy Ghost.

First, a word on the benefits of using S3 with Ghost. Coupled with a MySQL database, it lets you run a completely stateless container: no volume mounts or binds needed. Secondly, S3 lets you serve images through a content delivery network (CDN), improving page load times on image-heavy blogs. Finally, it makes redeploying your Ghost blog incredibly painless: export your site data from the admin panel, create a new Docker container, plug in your database and S3 environment variables, and re-import your content from the admin panel. I was able to move my blog between servers in under five minutes.

That last part works because any images you upload in your posts are automatically sent to your S3 provider and served from there. In this example I'll be using DigitalOcean Spaces. If you want to follow along, you can use my referral link to get $100 of credit.

Let's begin by creating a DigitalOcean Space. Log in to your DigitalOcean account, hit the big green 'create' button in the top right, and select Spaces. Pick a datacenter region, enable CDN (add a custom domain if you want), and choose a name for your Space. Finally, hit 'create a space' at the bottom.

[Screenshot: creating a Space is really easy]

Next up you'll need an access key. Navigate to API in the sidebar on the left-hand side, then generate a new key. Give it a name and copy the ID and secret. The secret is only shown once, so make sure you store it somewhere safe.
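Since Spaces is S3-compatible, you can sanity-check the new key with standard S3 tooling before wiring it into Ghost. A rough sketch (the key values and region below are placeholders; the aws command is left commented so it only runs once you've installed the AWS CLI and filled in real values):

```shell
# Placeholders -- substitute your own key ID, secret, and region.
export AWS_ACCESS_KEY_ID="JGL2ST..."
export AWS_SECRET_ACCESS_KEY="2EfBUO..."
REGION="nyc3"

# Spaces endpoints all follow this pattern:
ENDPOINT="https://${REGION}.digitaloceanspaces.com"
echo "$ENDPOINT"

# With the AWS CLI installed, this should list your Spaces:
# aws s3 ls --endpoint-url "$ENDPOINT"
```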

[Screenshot: key creation. Save that secret!]

Key made, it's time to fire up Docker. Install Docker and docker-compose on your server (maybe a DigitalOcean droplet with some of that $100 credit) and use the docker-compose file below. It uses an image I knocked up that simply takes the official Ghost Docker image and installs ghost-storage-adapter-s3, so it's ready to rock and roll by default. Installing the adapter yourself into a standard Ghost Docker container is doable, but you'd have to reinstall it every time the container updates.
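If you'd rather build such an image yourself (or just see roughly what it involves), the gist is only a few lines. This is a hedged sketch, not the exact Dockerfile behind the image below; paths lean on the official Ghost image's GHOST_INSTALL environment variable and its convention of seeding the content volume from content.orig:

```dockerfile
FROM ghost:latest

# Install the S3 storage adapter alongside Ghost itself
WORKDIR ${GHOST_INSTALL}/current
RUN npm install ghost-storage-adapter-s3

# Ghost loads storage adapters from content/adapters/storage/<name>;
# the official image copies content.orig into the content volume on first run
RUN mkdir -p ${GHOST_INSTALL}/content.orig/adapters/storage && \
    cp -r node_modules/ghost-storage-adapter-s3 \
          ${GHOST_INSTALL}/content.orig/adapters/storage/s3

WORKDIR ${GHOST_INSTALL}
```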

In any case, here's the docker-compose.yml:

version: '3.1'

services:

  ghost:
    image: wilderingrogue/ghost-with-s3:latest
    restart: always
    ports:
      - 8080:2368
    environment:
      database__client: mysql
      database__connection__host: db
      database__connection__user: root
      database__connection__password: example
      database__connection__database: ghost
      storage__active: "s3"
      AWS_ACCESS_KEY_ID: "your spaces key ID (JGL2ST...)"
      AWS_SECRET_ACCESS_KEY: "your spaces key secret (2EfBUO...)"
      AWS_DEFAULT_REGION: "spaces region (NYC3/SGP1/SFO2/FRA1)"
      GHOST_STORAGE_ADAPTER_S3_PATH_BUCKET: "my-bucket-name"
      GHOST_STORAGE_ADAPTER_S3_ASSET_HOST: "https://my-bucket-name.nyc3.cdn.digitaloceanspaces.com/"
      GHOST_STORAGE_ADAPTER_S3_PATH_PREFIX: "myfolder"
      GHOST_STORAGE_ADAPTER_S3_ENDPOINT: "region.digitaloceanspaces.com"
      GHOST_STORAGE_ADAPTER_S3_FORCE_PATH_STYLE: "true"
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example

Just swap out my settings for yours! Be sure to replace the ENDPOINT with the URL for the region you created your Space in, such as nyc3.digitaloceanspaces.com or fra1.digitaloceanspaces.com. I'd also recommend exploring DigitalOcean's managed Database service, rather than running your database in a Docker container, for additional security and data integrity.

[Screenshot: images in this post, served via my CDN]

And that should be it! If you've swapped in the right settings, then on creation of your container (with docker-compose up -d) you should be able to access Ghost on port 8080. Create a test post, upload an image, and double-check that the image URL matches your Spaces URL. If you want to stop uploading media to Spaces (or any other S3 provider), set the storage__active environment variable to "" and S3 will no longer be used. You'll still be able to access all the images in posts created while the S3 adapter was on, as the image links in those posts point straight to the S3 URL.
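If you want to script that double-check, comparing hosts is enough: an uploaded image's URL should live on the same host as the GHOST_STORAGE_ADAPTER_S3_ASSET_HOST you configured. The URLs here are hypothetical examples of what those values might look like:

```shell
# Hypothetical values -- substitute your configured asset host and a real
# image URL copied from a test post.
ASSET_HOST="https://my-bucket-name.nyc3.cdn.digitaloceanspaces.com/"
IMAGE_URL="https://my-bucket-name.nyc3.cdn.digitaloceanspaces.com/myfolder/2020/01/test.png"

# Pull the host part (third /-separated field) out of a URL.
host_of() { echo "$1" | awk -F/ '{print $3}'; }

if [ "$(host_of "$IMAGE_URL")" = "$(host_of "$ASSET_HOST")" ]; then
  echo "image served from Spaces CDN"
fi
```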

Hope you found this a tad useful! Happy blogging!