Implementing a Continuous Delivery Pipeline for my Discord Bot with GitHub Actions, podman, and systemd

I’ve been having a lot of fun lately refining a weekend project I started a few months ago. I basically threw this bot over the wall back in early April. About a month ago, I started getting serious about learning the Go programming language, so I thought I’d just revisit my Discord bot with a more “learned” eye and find ways to polish it up a bit.

Popple is a Discord bot that I made for myself and my friends, and it has been my playground for practicing everything I was learning in a project with an extremely small blast radius. Actually, the blast radius is both small and sympathetic, since most of my friends in that server are software developers too; so it was easy to laugh about whatever bugs had made it into the running version of the bot.

In any case, I’ve been pushing commits to Popple consistently each week (actually, much to my pleasant surprise, a few developers have contributed to closing some of my “good first issues” on the repo too!). This rate of change started to become cumbersome, especially since I didn’t have any automation in place early on.

My deployments were all manual, and looked like this:

  • Push changes to the branch I wanted them on
  • SSH into my VPS
  • Run go install github.com/connorkuehl/popple@latest
  • Get annoyed that go install seems to be caching the @latest resolution, and re-run the command, pasting in the latest commit SHA instead of the latest keyword…

Yuck!

Lucky for me, I was learning about Ansible, so I thought it’d be fun to write a playbook. The playbook would fetch the latest changes for the Git ref I passed in as a variable, it would build & install the binary to my PATH, and restart a systemd unit so that the new binary could “go live.”
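That playbook isn’t reproduced in this post, but its shape was roughly the following. A minimal sketch, assuming a vps host group, a build path, and a popple unit name — none of which come from the original playbook:

```yaml
# Hypothetical reconstruction of the deploy playbook described above.
- hosts: vps
  vars:
    popple_ref: master        # the Git ref passed in as a variable
  tasks:
    - name: Fetch the requested ref
      ansible.builtin.git:
        repo: https://github.com/connorkuehl/popple
        dest: /opt/popple/src
        version: "{{ popple_ref }}"

    - name: Build and install the binary onto the PATH
      ansible.builtin.command:
        cmd: go build -o /usr/local/bin/popple .
        chdir: /opt/popple/src

    - name: Restart the unit so the new binary goes live
      ansible.builtin.systemd:
        name: popple
        state: restarted
```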

This playbook was an enormous improvement over the manual steps, but it still left a lot to be desired:

  • I didn’t want to keep the Go toolchain or git installed on my server
  • I didn’t want the server to have to build the code from source

Containers to the rescue

Long story short, I wrote a Dockerfile, I made an account on hub.docker.com, and I built and pushed images of the tip of my master branch as well as the commits I had already tagged as releases.

My Dockerfile looks like this:

FROM golang:1.15-alpine AS build
RUN apk add build-base      # for gcc
RUN mkdir /popple
ADD . /popple
WORKDIR /popple
RUN go build .

FROM alpine:latest
WORKDIR /root/
COPY --from=build /popple/popple .

# docker run --rm -v path/to/db:/root/popple.sqlite \
#                 -v path/to/token:/root/bot.token \
#                 image_name
ENTRYPOINT ["/root/popple", "-db", "popple.sqlite", "-token", "bot.token"]

This Dockerfile takes advantage of what are known as “multi-stage builds.” The first image can balloon up with all of your build dependencies, but can hand off the final binary to a much slimmer image for actual use.

You can see my first image is called “build” and the second image has a COPY --from=build ... statement which copies the binary over. Go compiling everything down to a single binary is what buys us this convenient one-line copy (and since both stages are Alpine-based, the C library matches for the gcc-compiled bits). The binary is all we need.

Then, on my VPS, I could simply:

$ podman run --name popple_bot -d ... docker.io/conkue/popple

It was great! Still rather manual, but this removed the need to keep a Go toolchain and git installed on my VPS. That said, we haven’t exactly removed me from the equation: I still have to build the new image, push it to Docker Hub, SSH in, pull the latest image, and bounce the container.

Actually, this all sounds more involved than the process was before… but not for long! (And yes, I probably could have just updated the Ansible playbook, but I knew we could automate even that step, and that’s the direction I was headed… can’t stop now!)

GitHub Actions wasn’t far behind

Lucky for me, automatically pushing a container image from GitHub Actions is already a solved problem.

I found a tutorial on Docker’s website for configuring a GitHub Action that will build and push a container image from the Dockerfile in your git repo.

My action looks exactly like this (at least, at the time of this writing):

name: Deploy to Docker Hub

on:
  push:
    branches: [ master ]

jobs:

  build:
    runs-on: ubuntu-latest
    steps:
    - name: Set up caching
      uses: actions/cache@v2
      with:
        path: /tmp/.buildx-cache
        key: ${{ runner.os }}-buildx-${{ github.sha }}
        restore-keys: |
          ${{ runner.os }}-buildx-

    - name: Check out source code
      uses: actions/checkout@v2

    - name: Login to Docker Hub
      uses: docker/login-action@v1
      with:
        username: ${{ secrets.DOCKER_HUB_USERNAME }}
        password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}

    - name: Set up Docker BuildX
      id: buildx
      uses: docker/setup-buildx-action@v1

    - name: Ship it
      id: docker_build
      uses: docker/build-push-action@v2
      with:
        context: ./
        file: ./Dockerfile
        push: true
        tags: ${{ secrets.DOCKER_HUB_USERNAME }}/popple:latest
        platforms: linux/amd64,linux/arm64
        cache-from: type=local,src=/tmp/.buildx-cache
        cache-to: type=local,dest=/tmp/.buildx-cache

Okay, so far this is fantastic, but I’ve only removed the need to build and push the container images from my local machine. Still, that’s a step in the right direction.

podman-auto-update would like to join the party

Actually, before I even knew about podman-auto-update(1), I had generated a systemd unit for my Podman container, but I am going to reorder the events so that I look like a capable and highly observant blog author.

Turns out, podman will check for a new version of your container image for you. All you have to do is ask. The podman version shipped in the Enterprise Linux distribution that runs on my VPS doesn’t seem new enough to support the registry value for the io.containers.autoupdate label, which the documentation appears to favor; I would have used that if it were available. It does support the image value, though, so:

$ podman run --label="io.containers.autoupdate=image" -d --name popple_bot \
  ...snipped \
  docker.io/conkue/popple:latest

podman-auto-update(1) is aware that many people have systemd start and stop their containers, and it behaves nicely in that arrangement, as you’ll see in the next step where we generate a systemd unit for this bot.

But first, let’s enable and start the systemd timer that podman ships, so that auto-updates are indeed automatic:

$ systemctl enable podman-auto-update.timer
$ systemctl start podman-auto-update.timer

The timer will automatically check for updates once per day at midnight, but this can be tweaked according to the podman-auto-update(1) man page. I’ll probably leave it at the default setting to see how it feels, especially since I’ve pretty much added everything I want to add to the bot.

You can also just SSH in and run podman auto-update if you want to trigger this early.
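Since the schedule lives in an ordinary systemd timer unit, changing the cadence is a standard drop-in override. A sketch, assuming the stock podman-auto-update.timer unit; the every-six-hours schedule is just an example, not a recommendation:

```ini
# /etc/systemd/system/podman-auto-update.timer.d/override.conf
[Timer]
# Clear the default daily schedule, then check every six hours instead
OnCalendar=
OnCalendar=*-*-* 00/6:00:00
```

After dropping that file in, run systemctl daemon-reload and restart the timer for the new schedule to take effect.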

Giving a daemon the reins

I like setting up systemd units so that when my VPS is restarted everything comes back up nicely. podman makes this easy too:

$ podman generate systemd --new --name popple_bot > /etc/systemd/system/popple.service
$ systemctl daemon-reload
$ systemctl enable popple
$ systemctl start popple

Conclusion

Let’s take a look at my to-do list:

  • I don’t want to manually SSH in to the VPS to deploy → podman-auto-update will automatically pull the latest bot container image
  • I don’t want to manually build and push container images → GitHub Actions will automatically push a new image whenever I push to master
  • I don’t want my VPS to have git or the Go toolchain installed → it just needs podman!
  • I want my bot to automatically come back up when the server restarts → podman generated a systemd unit for me!

Sweet! All in a morning’s work. Now I can just kick back, push some commits, and watch the changes roll out automatically.
