No, at the very worst I could have done it on a different thread (though I don't expect I would have needed to). Since I initially planned on supporting 3 different init systems (systemd, Shepherd, and OpenRC), which are all used on wildly different systems, the shelling-out approach seemed way too brittle. Paths, for example, are going to be significantly different between systemd on Ubuntu and on NixOS, and then there are distros like Alpine that sometimes like to put binaries into /usr/lib instead of /usr/bin for some unknown reason (had that one happen to me with Ninja, which caused problems with some CMake scripts I was using). So ultimately, I decided that the project had too many unknowns. A bit of a shame, since I had planned to use it to really get to grips with Haskell.
You abandoned your dreams of mastering Haskell by making a system operator UI using Qt, but gave up out of concern that `journalctl` might not be in PATH?
I gave up because I couldn't see a way to make the program consistent. The path issues were the most immediate concern, yes, but not the only one. I really wanted a consistent UI, because there are few things that irritate me more than UI layouts switching up on me, but I just couldn't see a way of achieving that. For example, OpenRC stores the complete description of a service in 2 separate files, one of which is optional. You have a shell script that contains the code to run in /etc/init.d, but you also have a config file in /etc/conf.d that describes the environment that code is run in. If I wanted services to be editable live in the GUI (and that was one of the main features I wanted out of it), I would have had to split the text editor into two panes for OpenRC.
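For anyone unfamiliar with the OpenRC split described above, a hypothetical service "mydaemon" would look roughly like this (the name, paths inside the files, and variable are all made up for illustration):

```sh
# /etc/init.d/mydaemon -- the runscript: *what* to run
#!/sbin/openrc-run
command="/usr/bin/mydaemon"
command_args="${MYDAEMON_OPTS}"

depend() {
    need net
}

# /etc/conf.d/mydaemon -- the (optional) config file: the environment
# the runscript is sourced with. Edit this, not the script, to tweak flags.
MYDAEMON_OPTS="--listen 127.0.0.1:8080"
```

OpenRC sources the conf.d file before running the init.d script, which is exactly why a live service editor would need two panes: editing either file alone shows only half the definition.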
Overall, I guess this is a kind of perfectionism, or choice paralysis. I have continued to write Haskell since, and am now comfortable with the base language, if not the ecosystem (I have no idea what a lens is).
I often have to remind myself that we have the programs we have today only because somebody wanted them to exist more than they wanted them to express abstract properties like consistency.
As someone who’s an experienced coder but sorta new to LLMs, I’ve learned that my intuition about what is and isn’t machine-readable for an LLM isn’t well-tuned. Better to do the benchmarking so you can be sure you’re not fooling yourself about whether use of YAML is warranted.
From the perspective of the “physics” of code (for lack of a better term), this feels right. We all know that if you have a generated asset, you basically mark that asset as read-only or you’re entering a world of pain. As more and more code is generated, the same forces will apply.
Harder for brownfield projects of course (and it basically replaces Jira, which would cost me all my social capital if I tried), so I won't be adopting it now. But I will be leaning into long-lived, organic, grass-fed assets like conceptual-model.md that are critical for evolving the product forward but don't strictly map to the types in the code.
I really enjoyed engaging with the article. Loved the humor sprinkled in, and the humility demonstrated throughout. Some great first-principles clear thinking here.
Every now and then someone shares a small-web link here (Kagi is one aggregator). It's like survivors picking up the shards of civilization after the apocalypse. Of course such a project can remain viable and useful as long as it remains niche, which is virtually guaranteed as long as there's no money in it.
It’s incredible how too much money corrupts everything it touches.
Some of my best work has been done as a labor of love. I do have the vague impression that we as a society have taken a wrong turn in selling the sacred. I am not in favor of collapsing society down to hippie communes or anything, but it does seem to me that we told better stories back when stories were freer.
I sometimes imagine gathering up some number of like-minded electrical and software engineers, and founding some sort of monastery in which everyone was fed and taken care of and built the best technology they could, as a gift to humanity. I do wonder if the robber barons of the day would find a way to shut us down, of course, but I still remember a bright and optimistic time when technology was made to serve people, not to oppress them, and it seems to me like a bright expression of the human spirit that oughtn't to have been sold.
I switched to Linux in 2018, and I hear you on the quirks. But now, an LLM can fix basically any quirk you hit. I’ve been surprised multiple times. I’m also on Pop OS, which feels more batteries-included than Ubuntu.
They confirmed it is, but their reply is marked dead and vouching didn't do anything.
What I'm really interested to see is if that work ends up being very Steam Frame specific, or if we also see Linux doing better for the higher end non-standalone headsets.
Not that I would buy one (a GPU to run it is not in the gaming budget); I just think it would be cool to have Linux open up as an option for more use cases.
There are differences across vendors, but there’s a way to build with k8s where the benefit far outweighs the cost.
We run a bunch of services in two very different cloud vendors (one of which used to be DIYed with kubeadm), and also on dev machines with k3s. Takes a while to figure this out and to draw the kustomize boundaries in the right place, but once you do, it’s actually really nice.
Two things work in our favor:
- we’ve been at this for around 8 years, so we didn’t have to deal with all the gotchas at once
- we aggressively avoid tech that isn't universal (so S3 is OK, but SQS or DynamoDB is not; use haproxy instead of ingress controllers; etc.)
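For anyone wondering what "drawing the kustomize boundaries in the right place" can look like, one common shape is a shared base plus a thin overlay per environment. This sketch is purely illustrative; every name in it is made up:

```yaml
# Layout:
#   base/                 deployment.yaml, service.yaml, kustomization.yaml
#   overlays/cloud-a/     patches image registry, storage class
#   overlays/cloud-b/
#   overlays/dev-k3s/     lower replica counts, local storage

# overlays/dev-k3s/kustomization.yaml (hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replicas-patch.yaml   # e.g. drop replicas to 1 for laptops
```

The point is that vendor-specific differences live only in the overlays, so the base stays portable across both clouds and k3s dev machines.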
That part was really surprising to me because for the kind of compute lake he’s talking about building, k8s seems like a pretty good fit for the layer that sits just above it.
We run k8s with several VMs in a couple different cloud providers. I’d love it if I could forget about the VMs entirely.
Is there a simpler thing than k8s that gets you all that? Probably. But if you don’t use k8s, aren’t you doomed to reimplement half of it?
Like these things:
- Service discovery or ingress/routing (“what port was the auth service deployed on again?”)
- Declarative configuration across the board, including for scale-out
- Each service gets its own service account for interacting with external systems
- Blue/green deployments, readiness checks, health checks
- Strong auditing of what was deployed and mutated, when, and by whom
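Several of the items above map directly onto fields of a single manifest. A minimal, hypothetical sketch (the service name, image, and ports are all made up):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 3                      # declarative scale-out
  selector:
    matchLabels: { app: auth }
  template:
    metadata:
      labels: { app: auth }
    spec:
      serviceAccountName: auth-sa  # per-service identity
      containers:
        - name: auth
          image: registry.example/auth:1.2.3
          readinessProbe:          # gate traffic until the pod is ready
            httpGet: { path: /healthz, port: 8080 }
          livenessProbe:           # restart the container if it hangs
            httpGet: { path: /healthz, port: 8080 }
---
apiVersion: v1
kind: Service
metadata:
  name: auth                       # discovery: "auth" resolves in-cluster
spec:
  selector: { app: auth }
  ports:
    - port: 80
      targetPort: 8080
```

Reimplementing even this much outside k8s (DNS registration, health-gated rollout, identity wiring) is exactly the "half of it" being referred to.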
Was just listening to the Lenny’s Podcast interview with Simon Willison, who mentioned another such incantation: red/green TDD. The model knows what this means and it just does it, with a nice bump in code quality apparently.
I’m trying out another, what I call the principle of path independence. It’s the idea that the code should reflect only the current requirements, and not the order in which functionality was added — in other words, if you were to rebuild the system from scratch tomorrow, the code should look broadly similar to its current state. It sort of works, even though this isn’t a real thing that’s in its training data.