
{{< admonition type="warning" >}} Currently, use only Postgres 14 on the DigitalOcean App Platform for development databases. {{< /admonition >}}

While following the book {{< backlink "zero2prod" "Zero2Prod" >}} you will learn how to deploy a {{< backlink "rust" "Rust" >}} application to DigitalOcean through a Continuous Deployment pipeline. This is hardly anything new for me (I even teach a course in DevOps), but to not stray from the path of the book I followed its instructions.

The spec for DigitalOcean looks like this (abbreviated for your reading pleasure):

name: zero2prod
region: fra
services:
    - name: zero2prod
      dockerfile_path: Dockerfile
      source_dir: .
      github:
        branch: main
        deploy_on_push: true
        repo: credmp/zero2prod
      health_check:
        http_path: /health_check
      http_port: 8000
      instance_count: 1
      instance_size_slug: basic-xxs
      routes:
      - path: /
databases:
  - name: newsletter
    engine: PG
    db_name: newsletter
    db_user: newsletter
    num_nodes: 1
    size: db-s-dev-database
    version: "16"

The book actually says to use version 12, but that version is no longer available. The latest supported version is 16, so that is what I chose. There is one small hiccup here: since Postgres 15 (released in 2022) there has been a breaking change in how databases are created. Notably, a best practice that followed a 2018 CVE (CVE-2018-1058) has been made the default: users no longer get creation rights on the public schema, and as an administrator you have to grant those rights to your users explicitly.

Although this has been best practice since 2018, it is the Postgres 15 change that actually confronts users with it. To my surprise, DigitalOcean seemed to be unaware of the change until now.
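On a regular Postgres 15+ installation this is a one-line fix for an administrator. A minimal sketch, reusing the role name from the spec above (it assumes you can connect to the newsletter database as a superuser):

-- Run as a superuser while connected to the newsletter database:
-- give the application user its creation rights on the public schema back.
GRANT CREATE ON SCHEMA public TO newsletter;
-- Alternatively, hand the whole schema to the application user.
ALTER SCHEMA public OWNER TO newsletter;

The catch: an App Platform development database only hands you the newsletter credentials, so there is no superuser for you to run this as. Hence the support ticket below.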

Provisioning the development database on the application platform with the spec above creates a user (newsletter) with the following rights:

Role name        | Attributes
-----------------+------------------------------------------------------------
_doadmin_managed | Cannot login
_doadmin_monitor |
_dodb            | Superuser, Replication
doadmin          | Create role, Create DB, Replication, Bypass RLS
doadmin_group    | Cannot login
newsletter       |
postgres         | Superuser, Create role, Create DB, Replication, Bypass RLS

You read that correctly: none. At the moment you can still create a Postgres 14 database with DigitalOcean, which does grant rights to the user, and then upgrade it to the latest version while keeping those rights. But that is a workaround.

After determining the cause of the error I decided to mail DigitalOcean support about the issue. Timeline:

  • December 30th: the answer is that I am using a development database and that if I only upgraded to a managed cluster I would have full access to the database. I politely responded, explaining the problem again.
  • December 30th: a quick response from the same agent, saying that based on the information provided I am trying to do things with the doadmin user; again, not reading the actual question (or not understanding the problem). I answer once more with a full log of the creation of the database and the rights given to the users.
  • December 31st: another agent responds, telling me that my spec will create a database and that I can connect using the data from the control panel. This is exactly the information I already sent; the agent does not actually look at the problem (no rights). I once again explain the issue.
  • December 31st: another agent answers the ticket, asking how I create the database. I once again answer with the spec (which is in the ticket twice by now) and the steps I use (doctl from the command line).
  • December 31st: another agent responds with some general information about creating databases, again not actually reading or understanding the issue.
  • January 1st: a standard follow-up email asking if I am happy with the service. I respond that the problem is not solved and that, given the interaction so far, I fear it will not be.
  • January 2nd: another agent responds that they are discussing the issue internally.
  • January 2nd: a senior agent called Nate appears in the thread, actually asking questions that explore the issue. I promptly respond.
  • January 2nd: Nate acknowledges the issue and DigitalOcean starts working on a fix for their database provisioning. He provides the workaround of first creating a version 13 or 14 database and then upgrading it.
  • January 9th: still working on it.
  • January 15th: still working on it.
  • January 21st: another update stating that the provisioning process is quite complex and they are still working on a solution.

Getting something this trivial through the support channel is quite painful. I do realize I do not have paid support, and because of that I am willing to wait it out, but the first five interactions did nothing but destroy my confidence in the DigitalOcean support system. Luckily Nate picked up the ticket.

When a solution eventually comes around I will update this post.

#development #database #programming

{{< admonition type="note" >}} Originally posted on 2024-09-30 (Monday). It was updated in January of 2025. {{< /admonition >}}

I ❤️ to build software. Sadly, I do not have a lot of time next to my daily work to spend on my side projects, so I have to be disciplined about where I invest it. I wish I could spend endless amounts of time exploring new technologies, but I simply do not have that time. In writing, this kind of discipline is sometimes referred to as killing your darlings.

Sir Arthur Quiller-Couch wrote in his 1916 book On the Art of Writing: “If you here require a practical rule of me, I will present you with this: ‘Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings.’”

Luckily for me, I just finished my latest round of education, so I now do have time to spend on building some of the ideas that have been floating around in my head for the last three years. And I did start writing things: some in {{< backlink "rust" "Rust" >}}, some in Go, and others in Clojure.

Like many programmers I love to explore new languages; I think you always learn something new from them. Clojure, for example, really taught me functional programming when all I knew was imperative languages. In the end, after a summer of not working on my studies, I have 0 projects completed, but I do have 4 versions of them.

So I decided to step back and evaluate. I decided to kill my darlings of different programming languages and focus solely on Clojure again. Development in Clojure conforms to Rule 6 for me. While working out a problem I love the interactive way of building. I actually like the parentheses, I know... weirdo me 🤗.

Update 2025: during the holiday season I got the book Zero 2 Prod, a book about making a Rust project production-worthy, experience I already have in Java and Clojure. This sparked Rule 6 for me again for the {{< backlink "rust" "Rust" >}} language. The experience of following the book has been quite smooth, but the real proof is, of course, creating something yourself. I know, I am like a {{< sidenote "puppy" >}}I love puppies!{{< /sidenote >}} puppy chasing his tail... Let's see where this goes.

From reading the book I already see lots of possible improvements for my Hed tool.

You might even remember that I used to do a live-streaming series in Clojure. I still don't have a lot of time to continue that one, but who knows... I might drop some videos later again.

Since the summer I have been somewhat involved in Biff, a rapid-prototyping web framework in Clojure. It provides a set of sensible defaults to just get started, and it allows you to easily change any of its parts. I have been building my latest project on top of it, and with a bit of luck it might even make it to production.

#clojure #development #rust #emacs

I recently came across Traefik, a reverse proxy built specifically for services in the cloud. I was searching for a convenient (up-to-date) way to expose my project through a reverse proxy within docker-compose. I used to use nginx for this, but that requires a config generator and a Let's Encrypt companion (so three containers). Traefik requires only a single container and lets you put labels on your Docker containers to apply routing rules to them.

The configuration below creates a Traefik instance, sets it up to listen on ports 80 and 443 for web traffic and on 8080 for its dashboard (protect that port in your firewall). It also sets up Let's Encrypt certificates and automatic redirection from port 80 to 443.

version: '3'

services:
  reverse-proxy:
    image: traefik:v3.1
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/traefik.yml:ro # Traefik config file
      - traefik-certs:/certs # Docker volume to store the acme file for the certificates

  app:
    image: your/image
    ports:
      - 8081:8080
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app-http.rule=Host(`example.com`) || Host(`www.example.com`)"
      - "traefik.http.routers.app-http.entrypoints=web"
      - "traefik.http.routers.app-http.middlewares=redirect-to-https"
      - "traefik.http.routers.app-https.rule=Host(`example.com`) || Host(`www.example.com`)"
      - "traefik.http.routers.app-https.entrypoints=websecure"
      - "traefik.http.routers.app-https.tls=true"
      - "traefik.http.routers.app-https.tls.certresolver=letencrypt"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.permanent=true"
volumes:
  traefik-certs:
    name: traefik-certs

The traefik.yml config file mentioned above is reproduced below:

api:
  dashboard: true # Optional, can be disabled
  insecure: true # Optional, can be disabled
  debug: false # Optional, can be enabled for troubleshooting
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
serversTransport:
  insecureSkipVerify: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
    network: proxy # Optional; Only use the "proxy" Docker network, even if containers are on multiple networks.
certificatesResolvers:
  letsencrypt:
    acme:
      email: contact@example.com
      storage: /certs/acme.json
      #caServer: https://acme-v02.api.letsencrypt.org/directory # prod (default)
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory # staging
      httpChallenge:
        entryPoint: web
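With both files in place, docker compose up -d brings up the proxy and the app; Traefik picks the routing rules up from the container labels and requests certificates on first use. Note that the config above points at the Let's Encrypt staging CA, whose certificates browsers will not trust; switch the caServer line back to the production URL (the default) once everything works.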

#development

This is my first article in a series called Rock Solid Software, in which I explore different dimensions of software that does not simply break. You can write good software in any programming language, although some are more suited to a disciplined practice than others; Clojure definitely sits at the relaxed end of that spectrum.

Today I am exploring the use of Selmer templates in Clojure. If you have explored Biff at all you will know that all the UI logic works by sending Hiccup through a handler, which is turned into HTML by rum (specifically the wrap-render-rum middleware). If an endpoint returns a vector as its result, it is converted to HTML.

;; You provide this...
[:h1 "test"]
;; => [:h1 "test"]

;; It will then be converted to HTML
(rum/render-static-markup
  [:h1 "test"])
;; => "<h1>test</h1>"

This is absolutely great for rapid prototyping; however, it becomes quite tedious when you want to test it. The idea of testing a function is to provide it with inputs and validate that the outputs match the expectation. Verifying that HTML, or a vector of Hiccup for that matter, matches an expectation is quite difficult.

To increase testability I added Selmer to my project. It separates presentation from data by rendering templates from a map of data. Selmer is based on Django templates, which means it has a rich set of features, such as extending base templates, defining blocks, and control structures such as if and for loops. A very simple template looks like this:

{% extends "_layout.html" %}

{% block content %}
<article>
Hello <b>{{name}}</b> from simple template
</article>
{% endblock %}

As the template extends _layout.html, let's take a look at that as well. I have stripped it down to the bare minimum here; in practice you might expect scripts, CSS, nav bars, and many other things in the base template. The important thing is that the layout declares a block named content, and our snippet above fills a block with that same name, so the article above will be placed inside the main element below.

<!doctype html>
<html class="no-js" lang="">
  <head>
    <title>{{title}}</title>
  </head>
  <body>
    <main id="main">
      {% block content %}
      {% endblock %}
    </main>
  </body>
</html>
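With these two files in place the pair can already be rendered straight from the REPL. A minimal sketch, assuming the first template is saved as simple.html and both files live on the classpath (for example under resources/):

(require '[selmer.parser :as selmer])

;; Renders simple.html, which extends _layout.html; the map fills
;; the {{name}} and {{title}} template variables.
(selmer/render-file "simple.html" {:name "World" :title "Demo"})
;; => "<!doctype html>...<article>\nHello <b>World</b> from simple template\n</article>..."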

All that is left is to provide a middleware that handles the Selmer return type from an endpoint: in this case, a map with a :template and a :content key. If both keys are present in the response, the given template is rendered using the content map.

(defn wrap-render-selmer [handler]
  (fn [ctx]
    (let [response (handler ctx)]
      ;; Only render when the handler returned a map carrying
      ;; both the :template and the :content key.
      (if (and (map? response) (every? response [:template :content]))
        (let [res (selmer/render-file (:template response) (:content response))]
          {:status 200
           :headers {"content-type" "text/html"}
           :body res})
        ;; Anything else passes through untouched.
        response))))

My new authentication function has become quite simple: serve a login page using the auth/login.html template. Of course the page needs a whole bunch of attributes to render fully, but another wrapper adds all the metadata the application already knows, such as CSS and script files, application settings, and even theme information. All the endpoint has to take into account is its own required information.

(defn login [{:keys [params] :as ctx}]
  (let [err (:error params)]
    (ui/page ctx {:template "auth/login.html"
                  :content (merge {} (when err {:errors [err]}))})))

This is all great, but it has nothing to do with testability yet, right? Well, a map is more easily tested than an unstructured vector. In other languages, such as Rust, you can get compile-time validation of templates, which is great! Sadly Selmer does not have that; however, we can simply render a template file and check whether any values are missing.

The snippet below replaces each missing value with a recognizable placeholder. Given a template and an endpoint function, we can then easily check that all required entries are provided in the map. The test renders the template, supplies a CSRF token (which is not available during testing), and verifies that the output does not contain any missing values.

(defn missing-value-fn [tag _context-map]
  (str "<Missing value: " (or (:tag-value tag) (:tag-name tag)) ">"))

(selmer.util/set-missing-value-formatter! missing-value-fn)

;; Ensure all the page's required fields are present.
(deftest selmer-validation
  (let [s (sut/page {} {:template "_layout.html" :content {}})
        ;; CSRF is not set during testing...
        res (selmer.parser/render-file (:template s) (assoc (:content s) :csrf "csrf"))]
    (is (not (str/blank? res)))
    (is (not (str/includes? res "<Missing value:")))))

Another step into building rock solid software.

#clojure #development

This is a longer-form article. It is relevant as of February 18th, 2023. If the circumstances of my environment change I will try to update this article to reflect the situation. You can find the full source code of my dotfiles on GitHub.

I like consistency and simplicity. I do not like using many different tools to do different things; I would rather spend my time learning to use a few tools very well than follow the hype around the latest tool trend for something we have been doing forever.

I carry this philosophy into pretty much everything in life. I have been using the same laptop bag for ages, I have a small mechanical keyboard, and I run the same version of my OS on all my devices: one for on the go, the other for at home. They look the same and act the same, courtesy of a Linux distribution called NixOS.

Below you will find two screenshots, one from my laptop and the other from my desktop. The only difference is the size of the screen.

{{< figure src="/ox-hugo/desktop.png" caption="Figure 1: My Linux desktop on my laptop" >}}

{{< figure src="/ox-hugo/desktop-large.png" caption="Figure 2: My Linux desktop on my desktop" >}}

NixOS {#nixos}

I use the NixOS distribution of Linux. NixOS is a wonderful operating system that works by declaring what you want your environment to be and then applying that declaration to the current version of the environment. That sounds difficult, but let me explain.

Suppose you have just installed a Linux distribution and you want to install the wonderful Emacs editor. In most distributions you go to the package manager, search for Emacs, and click install; a few seconds later Emacs is installed. With NixOS you instead edit a file that describes your environment, adding a line saying that Emacs is part of it. Once the file is saved, you ask NixOS to build a new version of your environment, and to do so it installs Emacs for you.
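As a minimal sketch, that line lives in /etc/nixos/configuration.nix and the rebuild is a single command:

# /etc/nixos/configuration.nix (fragment)
{ config, pkgs, ... }:
{
  # Declare Emacs as part of the system environment; running
  # sudo nixos-rebuild switch builds and activates the new generation.
  environment.systemPackages = with pkgs; [ emacs ];
}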

I said it will create a new version of your environment. That means there is an old version as well, right? Yes! NixOS has a concept of generations: every change happens in its own version of the environment. So if a change goes wrong, you simply revert to the previous generation.

This sounds like a great deal of work, and it is. It is not for the new Linux user, that is for sure. But if you spend some time learning NixOS, I am sure you will be grateful for it. Just the other day I tried to switch to Wayland; my configuration went horribly wrong and I was left with an unusable system. I rebooted the machine, selected the previous generation, and I was back where I started before the change. It is that useful!

As I share my configuration across multiple machines, I split it into machine-specific parts for my desktop and my laptop, plus a shared part for the things that should run on both.

The shared configuration contains all the juice: it sets up the graphical user interface, creates users, and assigns them to groups. This means that when you run just this configuration you end up in a very barren i3 tiling window manager. More on that later.

Most of my applications are installed courtesy of something called home-manager, a user-space tool that allows for easy changes to the environment. As none of these changes can actually wreck the system, I keep them outside of the default NixOS configuration.

My home-manager configuration takes care of installing all the user-space tools that I use. It also sets up my shell and configures the Emacs daemon.

You might wonder: do I edit a configuration file every time I need a tool? No! When I just need a one-off tool I use something called nix-shell. In the screenshots above you will notice that I run neofetch. This program is not part of my normal system, as I only use it for screenshots like the ones above. Within a terminal I run it as follows: nix-shell -p neofetch --run neofetch. This temporarily installs neofetch and runs it; afterwards it can be cleaned up. I do the same for most such tools, for example unzip: I only install them when I need them. This keeps the set of installed software very clean.

You might also notice that there are no programming language toolchains in my configuration. That is correct. When I have a programming project I use something called direnv; see the direnv webpage for some background.

Whenever I start a new programming project I run the following command in the project root: nix --extra-experimental-features "nix-command flakes" flake new -t github:nix-community/nix-direnv . This creates a flake.nix file in which I declare what the project needs as dependencies. As the rest of my environment is extremely clean, I have to specify precisely what is needed. Take the listing below; it is part of a programming project in which I use Rust, Golang, Python, and Java. Whenever I move into this project, all the tools are installed. This also means the project works exactly the same on every single system where I use this setup.

{
  description = "A basic flake with a shell";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  inputs.flake-utils.url = "github:numtide/flake-utils";

  outputs = { self, nixpkgs, flake-utils }:
    flake-utils.lib.eachDefaultSystem (system: let
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.default = pkgs.mkShell {
        packages = with pkgs; [
          pkg-config
          openssl.dev
          cargo
          rustc
          rustfmt
          clippy
          rust-analyzer
          aoc-cli
          go
          gopls
          gotools
          govulncheck
          jdk
          jdt-language-server
          python311
        ];
        # Environment variable specifying the plugin directory of
        # the language server 'jdtls'.
        JDTLS_PATH = "${pkgs.jdt-language-server}/share/java";
      };
    });
}
Code Snippet 1: A nix-direnv declaration for a polyglot programming project
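One detail worth spelling out: the template also drops a small .envrc file next to the flake, containing the single line use flake. That line is what tells direnv to load the flake's dev shell, and you approve it once by running direnv allow in the project root.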

This might seem like a hassle, and it is true that it is more work than just installing Golang on Ubuntu and “just having it”. But once you use multiple systems or work together in groups you will start appreciating it, trust me.

i3 {#i3}

As I like simplicity I tend not to use elaborate desktop environments such as Gnome or KDE. I try them out every once in a while, but I always go back to i3. Back in the day I ran Enlightenment, but I have now been using the i3 window manager for quite some years. My configuration is quite mature and I generally only change it when I want to add a new tool to my daily use, or when tools get good updates, such as polybar. The configuration is part of my dotfiles.

When I boot my system all I have is a top bar that contains the following information:

  • 💻 Active workspaces (each has its own icon and use)
  • 💾 Current fill state of my disks
  • 🛡️ VPN status
  • 🔊 Sound and its volume percentage
  • 🛜 Wifi state (laptop only)
  • 🔋 Battery state (laptop only)
  • ⏰ Time
  • 📥 Tray icons (flameshot, bluetooth and nextcloud)

That is it. After all those years working with computers, that is all I really need. If I could, I would write a toggle for the bar as well, so it only shows up when needed. The very appealing thing about i3 is its tiling: I never have windows that overlap. Everything is neatly ordered in workspaces, and within workspaces in columns or rows. As I create dedicated workspaces, everything has a specific place:

  1. Terminal (alacritty with tmux)
  2. Emacs
  3. Virtual Machines
  4. Firefox
  5. Chrome

From workspace 6 onward I consider them “throw-away” workspaces; whatever lands there is used only briefly. The exception is workspace 10 (or 0), which holds Spotify.

To launch applications I use something called Rofi: a window switcher, application launcher, and menu replacement. It is very easy to customize and you can make it exactly what you want. My configuration is available on GitHub.

{{< figure src="/ox-hugo/rofi.png" caption="Figure 3: Rofi launching applications in i3" >}}

You can configure your environment exactly as you want. Take a look at r/unixporn for some more extreme versions of customized desktops.

#emacs #development #writing

Let me tell you how it was to ship a product out to half a million people back in 1999. But before I do that, let me tell you why. Today I talked to one of my students and he mentioned that he was very nervous about a change he was making. He was afraid it would break things and that he would spend the afternoon working through his CI/CD pipeline to resolve issues.

Well, back in 1999 I worked on a project. Together with some friends I was building cool software in Borland Delphi, and life was good. One of the things we had built was a nifty dialler application for Windows. It dialled into your ISP and made the entire process, and all the dealings with modems and telephone lines, so much simpler. Why would we make such a thing? It was actually a commission for one of the earlier internet providers in The Netherlands. It was well received and we made our first big bucks. It was awesome.

After finishing the project I received a call. There was a secret project in the parent company and they needed the software as well. The project turned out to be the creation of a free internet provider, Freeler. Free meant you only paid for your telephone line, not for the service itself. It was a cool and radical idea, and the parent company gave the project group one month to put everything into place and market it. In modern terms: one sprint.

Needless to say, it was a pressure cooker. In hindsight I did not really understand many of the things going on; I was just focused on modifying my dialler application to do the job that was asked. The idea was to have a CD-ROM ready just before launch time, which would then be placed at gas stations and other high-traffic areas. It had to work flawlessly. The thing with CD-ROMs is that you can't send a patch if something is wrong.

As I was just out of my teen years it was all quite hectic, and I had never released software on this scale. So I made my changes to the application, but how do you make sure it is correct? It worked on my computer, but how do you test something like this? Well, first you need to make your CD-ROM. So we built an image and sent it to the pressing company, and the next day we received a box with a few hundred test discs. So, time to call all our family members? Fun fact: thanks to the people of the Internet Archive you can still download the CD-ROM image.

{{< figure src="/ox-hugo/freeler.png" alt="the Freeler CD-ROM" width="300" >}}

After some calls and research, my technical partner on the project found a laboratory that actually specialized in testing CD-ROMs. It was one of the coolest things ever: they take your CD-ROM and feed it to a robotized setup. The laboratory had hundreds of machines from various manufacturers running various versions of the Windows operating system. It was pure magic to behold.

We spent several days at the laboratory getting results. Some machines did not auto-start the software; others ran into issues setting up the connection. It was an effort, but at the end of the day I had fixed all the issues and a master CD was made: the template from which all the copies are created.

So, now we have tested software and a distribution medium that will work for the target audience. We are finished, right? Well, no. As people use the CD-ROMs they will have questions. Some people will never have dialled into the internet before; some people might not even have a modem (no, that is not a joke). So, to ensure their questions were answered, a call center was needed. I don't remember how big the call center was, but I do remember it was in the center of Groningen.

Given the time crunch (the deadline was only days away), the operators needed to be trained in working with the dialler application, so I was sent to Groningen to work with the call center. Imagine the sight: you just created an application, went to a laboratory to test it, had a bunch (a million-ish) of CD-ROMs pressed, and then you wait for people to call with issues. The first time the phone rings your heart drops. “Did it not work?”, “Did I miss something?”... it is not like you can go around to people's houses to fix the issues, and patching is not possible, since they use the application to get onto the internet in the first place.

Luckily for me the software worked quite nicely, and Freeler grew to 350,000 members. But to release this simple piece of software I spent weeks working through many painstaking processes.

So why do I tell this story? Well... having the luxury of CI/CD, instant feedback, and the ability to patch things within the same minute/hour/day should be treated as the greatest good in the world. Be fearless: merge your changes, fix your issues, deploy without anxiety... you will never have to watch a robot feed a CD-ROM to a computer to find out if your code works.

#development #writing