In case you haven’t noticed, this blog is very much a min-maxing effort for me. Minimum input and maintenance for maximum result.
I run this on kubernetes, and I'm of course not a fan of baking the photos into the hugo pods, so I dump them into a Ceph RGW bucket and reference them through the associated subdomain/ingress:
rook-values.yaml:
```yaml
ingress:
  # Enable an ingress for the ceph-objectstore
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  host:
    name: cdn.nih.earth
    path: /web-assets
  tls:
    - hosts:
        - cdn.nih.earth
      secretName: ceph-objectstore-tls
  ingressClassName: nginx
```
In the past I've compressed these in batches by hand and organized them up front, which can be very time-consuming.
To eliminate that workflow and organization effort, I've written imgproxy-lite, a VERY simple and admittedly naive real-time image converter microservice.
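The core of it boils down to a handler along these lines. This is a sketch rather than the actual source: the Flask framing, the `ORIGIN` constant, and the quality clamping are my assumptions; only the `img`/`q` query parameters and the bucket URL come from this post.

```python
import io

import requests
from flask import Flask, Response, abort, request
from PIL import Image

app = Flask(__name__)

# Assumed origin: the RGW bucket exposed at cdn.nih.earth/web-assets above.
ORIGIN = "https://cdn.nih.earth/web-assets"


@app.route("/")
def convert():
    # Handles requests like /?img=main.jpg&q=50.
    # Naive on purpose: no caching, no input validation beyond the basics.
    name = request.args.get("img")
    quality = request.args.get("q", default=75, type=int)
    if not name:
        abort(400)
    # Fetch the original from the object store...
    upstream = requests.get(f"{ORIGIN}/{name}", timeout=10)
    if upstream.status_code != 200:
        abort(404)
    # ...and re-encode it as JPEG at the requested quality.
    out = io.BytesIO()
    Image.open(io.BytesIO(upstream.content)).convert("RGB").save(
        out, format="JPEG", quality=max(1, min(quality, 95))
    )
    return Response(out.getvalue(), mimetype="image/jpeg")
```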
These pods are deployed alongside hugo:
```console
kubectl get pods | egrep 'img|hugo'
hugo-5b479655f-56t4k             1/1   Running   0   4m56s
hugo-5b479655f-z8m6k             1/1   Running   0   5m
imgproxy-lite-687dbb7dcb-4rg7t   1/1   Running   0   15m
imgproxy-lite-687dbb7dcb-52qvr   1/1   Running   0   15m
imgproxy-lite-687dbb7dcb-fp2tz   1/1   Running   0   15m
imgproxy-lite-687dbb7dcb-kw5tk   1/1   Running   0   15m
imgproxy-lite-687dbb7dcb-tnp7x   1/1   Running   0   15m
```
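The deployment itself is boilerplate. Stripped to its essentials it amounts to something like the following, where the image reference and container port are placeholders, not the real values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imgproxy-lite
spec:
  replicas: 5
  selector:
    matchLabels:
      app: imgproxy-lite
  template:
    metadata:
      labels:
        app: imgproxy-lite
    spec:
      containers:
        - name: imgproxy-lite
          image: example.registry/imgproxy-lite:latest # placeholder image
          ports:
            - containerPort: 8080 # placeholder port
```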
There's a new subdomain and ingress fronting the imgproxy-lite deployment, so I can reference images like this from hugo content:
```markdown
![](https://images.nih.earth/?img=main.jpg&q=50)
```
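The ingress is the same boilerplate as the RGW one above, just pointed at images.nih.earth. Roughly the following, with the service name and port being assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: imgproxy-lite
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - images.nih.earth
      secretName: imgproxy-lite-tls # assumed secret name
  rules:
    - host: images.nih.earth
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: imgproxy-lite # assumed service name
                port:
                  number: 8080 # assumed port
```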
## on simplicity, platforming
One might say "you're over-engineering your blog". I'd argue not. For what's really just a bit of embarrassingly simple python and some boilerplate terraform, image management going forward is extremely low effort: all I have to do is dump the originals into that ceph bucket and pick whatever compression level is appropriate, case by case, when writing posts. Furthermore, this lets readers find their way to the originals if they want them.
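For example (the quality values here are illustrative; only q=50 appears earlier in the post):

```markdown
![](https://images.nih.earth/?img=main.jpg&q=85) <!-- light compression for a hero shot -->
![](https://images.nih.earth/?img=main.jpg&q=30) <!-- heavy compression is fine elsewhere -->
```

And the untouched original stays one hop away at https://cdn.nih.earth/web-assets/main.jpg.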
What this exercise really speaks to is the value of what I call 'platform building'. It is this environment (or platform) I've built for myself - nixos, kubernetes, rook-ceph, cert-manager, opentofu - that enables such rapid, low-commitment development (a couple of hours) of a new fault-tolerant, load-balanced service complete with subdomain, certificate, SNI routing, etc. Such a task would take months of misery in some orgs: request the VMs, request the certificate, request the 'app gateway' modifications, request the subdomain, and so on.
On simplicity - no, if this were all running on vmware VMs and sitting behind some F5 load balancer product, I would not suggest 'let's add a new layer of VMs and load balancers to handle the assets' and call it 'simpler'. What makes this simple is the elegance of the abstractions it sits on - 90% of the 'real' load-bearing components were already there.