Automatic Hugo Deployment with GitHub Actions

Introduction #

As part of some work last year, I migrated this website to be built with Hugo - a static site generator. Hugo allows me to write each page and post on the site as Markdown, which is then built into static HTML automatically.

I never (or haven’t yet?!) written about my reasoning for moving away from WordPress, but the gist of that story is that I found WordPress to be a security nightmare. Until fairly recently, I hosted and managed a few WordPress instances for various organisations, but was spending an inordinate amount of time dealing with near-constant attacks against the platform. Since the majority of my use cases are fairly static websites, the move to Hugo was a pretty obvious one.

Writing (Hugo & GitHub) #

I’m not going to replicate the Hugo Quick Start guide here - it’s a pretty straightforward process.
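
For reference, the quick start boils down to something like the sketch below, using the Ananke theme from the official guide (the site name and theme here are just placeholders):

# Create a new Hugo site and initialise a git repository
hugo new site mysite
cd mysite
git init

# Add a theme as a git submodule (Ananke is the theme used in the official quick start)
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke
echo "theme = 'ananke'" >> hugo.toml

# Create a first post and preview the site locally, including drafts
hugo new content posts/my-first-post.md
hugo server -D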

I used a couple of different migration tools to help move my existing WordPress sites across, followed by some manual editing of the structure.

In order to allow collaborative working (and to avoid hoarding important files locally!), I chose to keep my Hugo files on GitHub. This has the added advantage of giving access to GitHub Actions, which we can then use to automatically build and deploy our Hugo site.

Storing (MinIO) #

I looked at a number of ways to store the output from the Hugo build. Initially I used a Docker container based on NGINX, which copied the generated HTML into the requisite folder before an image was published to the GitHub Container Registry. However, it became rather laborious to manually update the container image running on my Docker Swarm every time I wanted to update a site.

Eventually, I made the move across to S3 storage. The main reason for this is interoperability - I can choose to deploy to Amazon S3, GitHub Pages, Cloudflare R2, or any of the other options available.

Because this entire exercise was meant as a learning opportunity, I chose instead to host my own S3-compatible storage - MinIO. I found the easiest way to do this was on a brand new domain name: using mybucket.s3.yourdomain.com means you need a TLS certificate covering a second-level wildcard (*.s3.yourdomain.com), which doesn’t come free on Cloudflare.

My Docker Compose setup (utilising Traefik) ended up looking like this:

version: '3.7'

services:
  minio:
    image: minio/minio:latest
    command: server --console-address ":9001" --address ":80" /data
    networks:
      - traefik
    volumes:
      - data:/data
    environment:
      - MINIO_ROOT_USER=minio
      - MINIO_ROOT_PASSWORD=[YOUR PASSWORD HERE]
      - MINIO_BROWSER_REDIRECT_URL=https://console.yours3domain.com
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.swarm.network=traefik"

        - "traefik.http.routers.minio.rule=Host(`console.yours3domain.com`)"
        - "traefik.http.routers.minio.entrypoints=websecure"
        - "traefik.http.routers.minio.tls=true"
        - "traefik.http.routers.minio.service=minio"
        - "traefik.http.routers.minio.priority=1000"

        - "traefik.http.services.minio.loadbalancer.server.port=9001"

        - "traefik.http.routers.s3.rule=Host(`yours3domain.com`) || HostRegexp(`{subhost:[a-zA-Z0-9-]+}.yours3domain.com`)"
        - "traefik.http.routers.s3.entrypoints=websecure"
        - "traefik.http.routers.s3.tls=true"
        - "traefik.http.routers.s3.service=s3"

        - "traefik.http.services.s3.loadbalancer.server.port=80"

        - "traefik.constraint=proxy-public"

networks:
  traefik:
    external: true

volumes:
  data:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=[NFS SERVER],nolock,soft,rw"
      device: ":[NFS SHARE]/minio"

As you can see, I’m currently hosting a single MinIO instance, with its storage pointed at an NFS share. While this works, it is by no means a good idea - MinIO recommends locally-attached drives rather than network storage. I’m still experimenting with S3 providers, but MinIO is an easy way to get up and running with minimal effort.
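
For completeness, deploying the stack to a Swarm and checking it has come up looks roughly like this (the stack and file names are placeholders):

# Deploy the stack (assumes the Compose file above is saved as minio.yml)
docker stack deploy -c minio.yml minio

# Check the service has started
docker service ps minio_minio

# MinIO exposes a liveness endpoint - a 200 here means the S3 API is reachable via Traefik
curl -I https://yours3domain.com/minio/health/live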

Once the service is up and running, you’ll need to access the console (console.yours3domain.com) with the username and password you specified, and create a bucket for your site. The standard permissions are fine, but you’ll need to create two Access Keys - one for GitHub Actions and another for NGINX. It’s a good idea to keep these separate and limit each key’s permissions appropriately.
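
If you’d rather avoid the web console, the same setup can be sketched out with MinIO’s mc client - note that the alias and bucket names below are placeholders, and the svcacct subcommand for creating access keys is only available in reasonably recent mc releases:

# Register the deployment with the mc client, using the root credentials from the Compose file
mc alias set myminio https://yours3domain.com minio [YOUR PASSWORD HERE]

# Create the bucket that will hold the built site
mc mb myminio/mysite

# Create an access key (service account) - repeat for GitHub Actions and NGINX,
# attaching a restricted policy to each rather than reusing the root credentials
mc admin user svcacct add myminio minio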

Building (GitHub Actions) #

Next up, we need a way of getting our Hugo site built and automatically deployed to the newly running S3 service. Luckily, GitHub provides an easy way to run a set of commands whenever a new commit is pushed - GitHub Actions.

I created a file in my repository at .github/workflows/publish.yaml with the following:

name: Publish to S3

on:
  push:
    branches: ['main']

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    env:
      HUGO_VERSION: 0.145.0
    steps:
      - name: Install Hugo CLI
        run: |
          wget -O ${{ runner.temp }}/hugo.deb https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.deb \
          && sudo dpkg -i ${{ runner.temp }}/hugo.deb          

      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive

      - name: Build with Hugo
        env:
          HUGO_CACHEDIR: ${{ runner.temp }}/hugo_cache
          HUGO_ENVIRONMENT: production
        run: |
          hugo build \
            --minify

      - name: Minio Deploy
        uses: informaticaucm/minio-deploy-action@v2023-09-28T17-48-30Z-1
        with:
          endpoint: ${{ secrets.MINIO_ENDPOINT }}
          access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
          secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          remove: true
          bucket: ${{ secrets.MINIO_BUCKET }}
          source_dir: 'public'

You’ll also need to define four secrets within your repository settings, either through the web UI or via the GitHub CLI as shown below the list:

  • MINIO_ENDPOINT: The address for your MinIO instance (yours3domain.com in the example above)
  • AWS_ACCESS_KEY_ID: The Access Key for MinIO with read/write access
  • AWS_SECRET_ACCESS_KEY: The Secret Key for the above
  • MINIO_BUCKET: The name of the MinIO bucket to write to
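
If you have the GitHub CLI installed, these can be set from a terminal instead of clicking through the settings pages (the values below are placeholders):

# Set the repository secrets used by the workflow (run from inside the repository)
gh secret set MINIO_ENDPOINT --body "yours3domain.com"
gh secret set MINIO_BUCKET --body "mysite"
gh secret set AWS_ACCESS_KEY_ID --body "[YOUR ACCESS KEY]"
gh secret set AWS_SECRET_ACCESS_KEY --body "[YOUR SECRET KEY]"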

If everything is set up correctly, this action should run on your next push and automatically transfer the built site to MinIO.
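
You can follow the run from the repository’s Actions tab, or from the terminal with the GitHub CLI:

# Push a change, then follow the resulting workflow run
git push origin main
gh run watch

# Or list recent runs of the publish workflow
gh run list --workflow publish.yaml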

You don’t have to use the informaticaucm/minio-deploy-action action - there are plenty of other pre-published actions available for other S3 providers, and Google is your friend here!
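
As one example of a more generic approach, the AWS CLI (which comes preinstalled on GitHub’s hosted Ubuntu runners) can sync to any S3-compatible endpoint. Here’s a rough sketch of what a replacement deploy step’s run block might look like, assuming the same secrets are exposed to the step as MINIO_ENDPOINT and MINIO_BUCKET environment variables:

# Mirror the built site into the bucket, deleting anything that no longer exists locally.
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are read from the environment.
export AWS_DEFAULT_REGION=us-east-1
aws s3 sync public/ "s3://${MINIO_BUCKET}" \
  --delete \
  --endpoint-url "https://${MINIO_ENDPOINT}"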

Serving (NGINX S3 Gateway) #

Once we have the files successfully transferred to our S3 host, we need a way of making them available to the public. As I’ve written about before, I use Traefik within my network, with a Cloudflare Tunnel for public access.

I found the NGINX S3 Gateway to be the most convenient way of making S3 content available.

version: '3.7'

services:
  site:
    image: ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest
    environment:
      - AWS_ACCESS_KEY_ID=[YOUR ACCESS KEY]
      - AWS_SECRET_ACCESS_KEY=[YOUR SECRET KEY]
      - AWS_SIGS_VERSION=2
      - DEBUG=false
      - S3_BUCKET_NAME=[YOUR BUCKET NAME]
      - S3_REGION=us-east-1
      - S3_SERVER_PORT=443
      - S3_SERVER=[YOUR S3 SERVER DOMAIN]
      - S3_SERVER_PROTO=https
      - S3_STYLE=path
      - PROXY_CACHE_VALID_OK=0
      - PROXY_CACHE_VALID_NOTFOUND=0
      - PROXY_CACHE_VALID_FORBIDDEN=0
      - PROVIDE_INDEX_PAGE=true
      - APPEND_SLASH_FOR_POSSIBLE_DIRECTORY=true
      - ALLOW_DIRECTORY_LIST=false
      - CORS_ENABLED=true
    networks:
      - traefik
    deploy:
      mode: replicated
      replicas: 3
      update_config:
        delay: 30s
        order: start-first
        monitor: 15s
      labels:
        - traefik.enable=true

        - traefik.http.routers.website.rule=Host(`yourwebsitedomain.com`)
        - traefik.http.routers.website.entrypoints=websecure
        - traefik.http.routers.website.tls=true

        - traefik.http.services.website.loadbalancer.server.port=80

        - traefik.swarm.network=traefik

        - traefik.constraint=proxy-public

networks:
  traefik:
    external: true

All being well, Traefik will then pick up these instances and make them available!
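
Deployment follows the same pattern as before, and it’s worth checking that a request makes it through the whole chain:

# Deploy the gateway stack (assumes the Compose file above is saved as website.yml)
docker stack deploy -c website.yml website

# Confirm the replicas are running
docker service ps website_site

# Request the homepage through Traefik - expect a 200 served from the bucket
curl -I https://yourwebsitedomain.com/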

Note that I’ve disabled the cache on the gateway so the freshest content is always served. As described in the next section, Cloudflare handles my caching - I don’t need a second tier. Depending on your traffic, you may find it useful to set this to a low timeout rather than disabling it entirely.

Caching (Cloudflare) #

This step is entirely optional - but because all my inbound traffic is routed through Cloudflare, I found that cached versions of my site would still be served for a short period after publishing. Rather than wait for the Cloudflare cache to expire, I chose to add this job to the end of my publish.yaml workflow on GitHub:

  purge:
    runs-on: ubuntu-latest
    needs: build-and-deploy
    steps:
      - name: Purge Cloudflare cache
        uses: jakejarvis/cloudflare-purge-action@master
        env:
          CLOUDFLARE_ZONE: ${{ secrets.CLOUDFLARE_ZONE }}
          CLOUDFLARE_TOKEN: ${{ secrets.CLOUDFLARE_TOKEN }}

The documentation in the action’s repository explains how to find the two additional secrets required.
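
For reference, the same purge can be performed manually against the Cloudflare API - handy for confirming the zone ID and token work before wiring them into the workflow (the token needs the Cache Purge permission):

# Purge the entire Cloudflare cache for the zone
curl -X POST "https://api.cloudflare.com/client/v4/zones/${CLOUDFLARE_ZONE}/purge_cache" \
  -H "Authorization: Bearer ${CLOUDFLARE_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'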

Conclusion #

We now have a Hugo blog that is automatically built and pushed to our server whenever content changes - neat!

There are many different avenues to explore to take this project further. As outlined above, a lot of different S3 providers exist, so you may wish to serve your content from elsewhere. However, I hope you’ve found this write-up useful for inspiration on how to simplify your Hugo deployments!

