+++
author = "Peter Kurfer"
title = "Build & deploy a Hugo site with Gitea/Forgejo actions"
description = "How to host a Hugo site with Cloudflare pages and deploy it automatically with Forgejo actions"
date = "2024-04-30"
tags = ["hugo", "cloudflare", "CI/CD", "actions"]
+++

I admit it: I like self-hosting. I like the idea of being able to control every aspect of my infrastructure, so it was only logical to also self-host my blog. This article describes my odyssey and why I ended up letting Cloudflare do the hosting.

In the beginning - there was a repository. As we all know, the repository is the truth. When the time came to deploy the blog, I already had a Kubernetes (K8s) cluster at hand, so the obvious choice was to containerize the web page and host it there. I wrote a simple Dockerfile with a multi-stage build, just like this:

```dockerfile
# build stage: render the site with Hugo
FROM docker.io/golang:1-alpine as builder

WORKDIR /tmp

RUN apk add -U --no-cache hugo git

WORKDIR /src

COPY . /src/

RUN hugo --minify --environment production --config config.toml

# runtime stage: serve the generated files with Caddy
FROM caddy as runtime

COPY --from=builder /src/public /usr/share/caddy
```

Then I prepared my deployment manifests and set up a CI pipeline (back then with DroneCI) to deploy everything.
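For the curious: the manifests were nothing fancy. A minimal sketch of what they boiled down to - a Deployment and a Service, with placeholder names and registry, not my actual manifests - looks like this:

```yaml
# Sketch only - names and registry are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: registry.example.com/blog:latest # the image built above
          ports:
            - containerPort: 80 # caddy serves on :80 by default
---
apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 80
```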

So far so good. The only complication was that I now had two 'truths': one was the repository, the second one was the container registry - not to mention that I also had to 💸 for the storage of both. Of course, various container registries have cleanup options, but being a software engineer, why use something existing when you can build the 11th solution to the same problem, right?

Yes...actually, no!

In the beginning I just accepted the fact and moved on. Every now and then, when the pile of images got too costly, I manually deleted a few until I reached a reasonable count - say...five. In the end there was no reason to keep any old version at all, but you know, I was lazy. At some point I had a similar problem at work with our SPAs and I couldn't help but wonder: is this really the best way? Not only am I duplicating the content every time, the web server also needs patching, every now and then a breaking change in its configuration system happens, and so on and so forth. Then I came across the possibility to serve an S3 bucket (or similar) directly from a K8s ingress. That sounded awesome! No need to build a container image, no need to waste compute resources - simply copy to an S3 bucket and be done with it!
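To give you an idea of what I mean - a rough sketch, assuming ingress-nginx and an S3-compatible endpoint; all hostnames are placeholders and this is not what I ended up running:

```yaml
# Sketch: route an Ingress to an S3-compatible bucket endpoint
# via an ExternalName Service.
apiVersion: v1
kind: Service
metadata:
  name: blog-bucket
spec:
  type: ExternalName
  externalName: blog-bucket.s3.example.com
  ports:
    - port: 443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # make sure the upstream sees the bucket host, not the blog host
    nginx.ingress.kubernetes.io/upstream-vhost: "blog-bucket.s3.example.com"
spec:
  ingressClassName: nginx
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog-bucket
                port:
                  number: 443
```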

So I came back to my blog and tried to migrate to this approach. I wasted a few hours of my spare time, only to realize that - apparently - Cloudflare R2, some CLI, or something else was ignoring the content type of my files, leaving me with application/octet-stream, which is absolutely useless for web pages. It might be different if I used MinIO or AWS S3, but I didn't want to waste even more resources (and 💵) on hosting a MinIO instance in my cluster. Also, I am already using Hetzner Cloud and didn't feel like spreading my costs across multiple cloud providers, so I started looking for alternative solutions.

I then stumbled upon Cloudflare Pages. After a 'quick' prototype I was happy and decided to migrate - actually not so quick: I spent a few evenings migrating my whole DNS setup to external-dns and experimenting with Cloudflare DNS for DoS protection, but that's a topic for another day.

The only other problem: I had also gotten rid of DroneCI in favor of Forgejo Actions. I know, if I used GitHub, there would be a perfect integration from Cloudflare to build my Hugo page and deploy it, but we don't want to make things too easy, right?

But using Forgejo Actions also seemed pretty straightforward:

```yaml
name: Deploy pages
on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
      - name: Build
        run: hugo --minify --environment production
      - name: Deploy
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CF_PAGES_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          command: pages deploy public --project-name=blog
```

Well, not so fast, kiddo!

The first thing I noticed: when using Hugo modules, you need to fetch those modules before you can build anything. Alright:

```yaml
      # ...
      - name: Build
        run: |
          hugo mod get
          hugo --minify --environment production
      # ...
```

Then, obviously, I realized that to fetch those modules you need a Go SDK. There you go (pun intended):

```yaml
      # ...
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: "1.22.x"
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
      # ...
```

And now we're getting - finally - to the point where things got really annoying... I'm using the github.com/LordMathis/hugo-theme-nightfall theme. Although it is a very minimalistic theme, it requires dart-sass. Even though this also seems straightforward - well, there's only documentation for GitHub Actions - with Forgejo Actions it isn't. The key difference between GitHub Actions and Forgejo Actions is that Forgejo Actions run in containers. The officially recommended way to install dart-sass in GitHub Actions is via snap, but snap doesn't really work in containers, so I had to find another way. When doing some research, you might come across the official dart-sass repository, which mentions another installation method:

```sh
npm install -g sass
```

but:

> The --embedded command-line flag is not available when you install Dart Sass as an npm package.

(see here)

Unfortunately, Hugo requires the --embedded flag, so that's not an option either. Eventually I came up with this abomination:

```yaml
      - name: Install sass
        run: |
          export SASS_VERSION=$(curl https://api.github.com/repos/sass/dart-sass/releases | jq -r '. | first | .tag_name | capture("(?<version>[[:digit:]]+\\.[[:digit:]]+\\.[[:digit:]]+)") | .version')
          curl -L "https://github.com/sass/dart-sass/releases/download/${SASS_VERSION}/dart-sass-${SASS_VERSION}-linux-arm64.tar.gz" | tar xvz -C /opt/
          ln -s /opt/dart-sass/sass /usr/local/bin/
```

Don't get confused by the huge capture in the jq expression: I use this snippet whenever I need the version of a package in a filename, and this way I don't have to think about whether there's a v prefix or not - looking at you, 'goreleaser' 👀.

That downloads the latest release of dart-sass and makes it available in the $PATH. So far I'm not considering the CPU architecture because, whenever possible, I'm running my CI jobs on ARM machines anyway, but if I find the time, I might try to implement a custom action similar to peaceiris/actions-hugo@v3, but with dart-sass support.
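If I ever do, a composite action could be the starting point - an untested sketch, just to illustrate the idea (the inputs are made up):

```yaml
# action.yml - hypothetical composite action, not a published one
name: Setup dart-sass
description: Download a dart-sass release and put it on the PATH
inputs:
  sass-version:
    description: dart-sass version to install (without a "v" prefix)
    required: true
  arch:
    description: architecture suffix of the release archive
    required: false
    default: linux-arm64
runs:
  using: composite
  steps:
    - name: Install sass
      shell: bash
      run: |
        curl -L "https://github.com/sass/dart-sass/releases/download/${{ inputs.sass-version }}/dart-sass-${{ inputs.sass-version }}-${{ inputs.arch }}.tar.gz" | tar xz -C /opt/
        ln -s /opt/dart-sass/sass /usr/local/bin/
```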

You can imagine how happy I was when I realized that the cloudflare/wrangler-action@v3 step 'just worked'™.
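For reference, the complete workflow - all the snippets from above put together - ended up looking roughly like this:

```yaml
name: Deploy pages
on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: "1.22.x"
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
      - name: Install sass
        run: |
          export SASS_VERSION=$(curl https://api.github.com/repos/sass/dart-sass/releases | jq -r '. | first | .tag_name | capture("(?<version>[[:digit:]]+\\.[[:digit:]]+\\.[[:digit:]]+)") | .version')
          curl -L "https://github.com/sass/dart-sass/releases/download/${SASS_VERSION}/dart-sass-${SASS_VERSION}-linux-arm64.tar.gz" | tar xvz -C /opt/
          ln -s /opt/dart-sass/sass /usr/local/bin/
      - name: Build
        run: |
          hugo mod get
          hugo --minify --environment production
      - name: Deploy
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CF_PAGES_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          command: pages deploy public --project-name=blog
```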