new blog post

lourenco
2025-09-11 15:25:03 +02:00
parent 766c2bfc1a
commit e2b806fe5f

content/blog/bloginfra.md Normal file

@@ -0,0 +1,40 @@
---
title: "The infrastructure, part 1"
date: 2025-09-12T08:00:00Z
draft: false
---
One of the things that I've always tried to implement on all the services I run is self-sufficiency. Now, this isn't completely achievable, in the sense that I still depend on services and software that other people provide, but it's still an ideal to strive for. In my particular case, all the services I run are _self-hosted_ with no external dependencies, either on a Raspberry Pi I host at home or on a small VPS that I rent from a provider.
Today I want to highlight a small part of this contraption I've assembled: what it takes to host a blog/website like this one, why I chose these particular pieces of the puzzle, and how they all work together.
There are some principles I like to follow when I choose something to run. The main one: different pieces need to play nicely with each other - i.e., things need to approximate the [UNIX philosophy](https://en.wikipedia.org/wiki/Unix_philosophy) as much as possible. This serves as a guarantee that I'm not dependent on any particular piece of the puzzle, and that I could swap it out if I find something that fits my uses better - and this does happen, as needs change and we figure out how to address them better.
So, let's start from the top! The first rule of exposing *anything* on the internet is to not do it directly. For years now, I've used [**nginx**](https://nginx.org/en/) as the frontend proxy for every single service I run on my machines that needs to be reachable from the internet. I know there are other solutions that could also fit the bill here (such as Caddy), but I still haven't really found a reason to change.
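For everything that isn't just static files, nginx simply terminates TLS and forwards requests to whatever is listening on a local port. A minimal sketch of such a reverse-proxy block (the hostname, port, and certificate paths are made up for illustration, not taken from my actual config):
```
server {
    listen 443 ssl;
    server_name service.example.org;                  # hypothetical hostname

    ssl_certificate     /etc/ssl/certs/example.pem;   # placeholder cert paths
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_pass http://127.0.0.1:8080;             # whatever port the service listens on
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```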
Another piece of the puzzle here, and the one I really like, is [**hugo**](https://gohugo.io/). This is a static site generator written in Go, with a lot of really good documentation, that has proved to be reliable and extensible. But, for me, the really good thing is that, with **hugo**, I'm literally just serving static files, with a very small attack surface - especially when compared to more complex dynamic stacks such as Django or WordPress. This makes hosting it quite a breeze: you generate the files and point your webserver at the directory, like so:
```
server {
    listen 443 ssl;
    http2 on;
    server_name excipio.tech;

    # Certificate paths are placeholders; point these at your own certs.
    ssl_certificate     /etc/ssl/certs/excipio.tech.pem;
    ssl_certificate_key /etc/ssl/private/excipio.tech.key;

    root  /var/lib/hugo/excipio/public/;  # Hugo site main dir
    index index.html;                     # Hugo generates HTML

    location / {
        try_files $uri $uri/ =404;
    }
}
```
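Generating those files is a single command; by default **hugo** writes the site into `public/` under the source directory, which is exactly what the `root` directive above points at:
```
# build the site; output lands in /var/lib/hugo/excipio/public/
cd /var/lib/hugo/excipio && hugo
```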
Another neat feature of **hugo** is that it treats **git** as a first-class tool, which lets you manage your website in a pretty streamlined and lightweight way. I won't extol the virtues of **git** here, as that's a bit outside the scope I had in mind for this post. But using it for a blog does pose a problem.
So, imagine this: I'm at my computer, and I clone the git repository that contains this blog. I can now write a new post, and with `hugo serve` I can even see right away what it looks like. This is super cool, and a godsend if you don't want to depend on complicated web frontends and frameworks. But how do I actually get this new post onto the live site? Just doing `git push remote master` isn't enough, as there's nothing to trigger a rebuild of the **hugo** output.
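Concretely, the local loop looks something like this (the repository URL and the post's file name are placeholders, not the real ones):
```
# clone the repository that holds the site (placeholder URL)
git clone https://git.example.org/lourenco/blog.git && cd blog

# write a new post and preview it locally, drafts included
hugo new posts/my-new-post.md
hugo server -D        # live preview at http://localhost:1313

# push the source - by itself this does not rebuild the live site
git add content/posts/my-new-post.md
git commit -m "Add new post"
git push origin master
```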
So, here's the last piece of the puzzle. If you run your own git forge - and I do; I can recommend [gitea](https://about.gitea.com/) for this - you have everything you need: Gitea has its own [runner](https://docs.gitea.com/usage/actions/act-runner) that lets it run actions on a system. This isn't too complicated to set up, and the workflow for this particular setup is remarkably simple; you can check it [here](https://git.assilvestrar.club/lourenco/excipio/src/branch/main/.gitea/workflows/blog-deploy.yaml). In essence, every time I push a change to the git repo on gitea, the runner automatically pulls the changes onto my webserver and triggers the **hugo** build that generates the new content. And in less than a second (because hugo is _fast_ - around 50ms for the whole website), everything is done.
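The actual workflow file is the one linked above; just to give a sense of its shape, a stripped-down sketch might look something like this (the runner label, paths, and branch name are assumptions, not copied from the real file):
```
# .gitea/workflows/blog-deploy.yaml - illustrative sketch only
name: blog-deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    # assumes an act_runner registered on the webserver itself
    runs-on: webserver
    steps:
      - name: Pull the latest content
        run: git -C /var/lib/hugo/excipio pull
      - name: Rebuild the site
        run: hugo --source /var/lib/hugo/excipio
```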
Given how hugo deals with the front matter, I can even schedule posts just by setting a date in the future and not worrying about it again. You can see this in this very commit: the post was committed a day before, but it only becomes visible once the publication date arrives.
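This post's own front matter is the example: the `date` below is the day after the commit, so any build that runs before that moment simply leaves the post out (Hugo skips future-dated content unless you build with `--buildFuture`):
```
---
title: "The infrastructure, part 1"
date: 2025-09-12T08:00:00Z
draft: false
---
```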
And that was that. I know: at first glance this might sound a bit complicated, maybe even over-engineered. But this will be a common theme in this blog. What we're doing here is exposing the complexities of modern software - the things other frameworks hide from you, but at what cost? This whole setup takes a fraction of the disk space and RAM to run, with the exact same functionality, and it's much more reproducible and scalable. Also, absolutely no lock-in: I can swap nginx for Caddy, hugo for Jekyll, my own private git forge for GitHub (but, lol, why would I do that), and so on. And the skills you gain by doing this are probably the best part.