"Powered by Jekyll w/modified Slate+Simple theme."
Kind of funny seeing that at the bottom of this article. Especially given that static site generators are probably one of the biggest roll-your-own categories of software.
I was about to argue with "Old timey", but yeah, I guess any band that has existed for over 40 years is old timey. I've been a fan since '96. I guess we're all old now.
The late 90s was the first time I'd heard of them. I was a university student at the time, and one of my friends was a foreign student, from America, and a fan. He also liked this other band called The Grateful Dead. I'd never heard of them either. They sucked too.
(Though, that said: many years later - i.e., quite recently - I listened to a bunch of their stuff on YouTube, while thinking back to my student days. And, actually, some of it is pretty good. At least, it was when they had Pigpen singing.)
I know language designers have a lot of trade-offs to consider... But I would say that if you know a value will logically always be >= 0, it's better to have a type that reflects that.
The potential bugs listed would be prevented by, e.g., making "x--" refuse to compile unless a case for x==0 is explicitly supplied, or by requiring more verbose methods like "decrement_with_wrap".
The trade-off is losing C-like concision, but the code becomes safer and more explicit.
> But I would say if you know a value will logically always be >= 0, better to have a type that reflects that.
Except that's not quite what unsigned types do. They are not (just) numbers that will always be >= 0, but numbers where the value of `1 - 2` is > 1 and depends on the type. This is not an accident but how these types are intended to behave because what they express is that you want modular arithmetic, not non-negative integers.
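A minimal sketch of that modular behaviour in C++ (assuming a 32-bit unsigned, as is typical):

  #include <cstdio>
  #include <limits>

  int main() {
    // On unsigned, 1 - 2 wraps modulo 2^32 rather than producing -1:
    unsigned x = 1u - 2u;
    std::printf("%u\n", x);                                          // 4294967295
    std::printf("%d\n", x == std::numeric_limits<unsigned>::max());  // 1, i.e. UINT_MAX
  }

So `1 - 2 > 1` really does hold, precisely because the type is modular.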
> e.g. "x--" won't compile without explicitly supplying a case for x==0
If you want non-negative types (which, again, is not what unsigned types are for) you also run into difficulties with `x - y`. It's not so simple.
There are many useful constraints for which you might think it's "better to have a type that reflects that" - what about variables that can only ever be even? - but it's often easier said than done.
This is true, which means that a language has to be designed from the ground up to deal with these problems or there will always be inscrutable bugs due to misuse of arithmetic results. A simple example in a C-like language would be that the following function would not compile:
  unsigned foo(unsigned a, unsigned b) { return a - b; }
but this would:
  unsigned foo(unsigned a, unsigned b) {
    auto c = a - b;
    return c >= 0 ? c : 0;
  }
Assuming 32 bit unsigned and int, the type of c should be computed as the range [-0xffffffff, 0xffffffff], which does not fit in int [-0x80000000, 0x7fffffff]. Subtle things like this are why I think it is generally a mistake to type annotate the result of a numerical calculation when the compiler can compute it precisely for you.
First, your code is about having unsigned types represent the notion of non-negative values, but this is not the intent of unsigned types in C/C++. They represent modular arithmetic types.
Second, it's not as simple as you present. What is the type of c? Obviously it needs to be signed so that you could compare it to zero, but how many bits does it have? What if a and b are 64 bit? What if they're 128 bit?
You could do it without storing the value and by carrying a proof that a >= b, but that is not so simple, either (I mean, the compiler can add runtime checks, but languages like C don't like invisible operations).
That's true for signed numbers too though? `int_min - 2 > int_min`
I agree they're a bit more error-prone in practice, but I suspect a huge part of that is because people are so used to signed numbers because they're usually the default (and thus most examples assume signed, if they handle extreme values correctly at all (much example code does not)). And, legitimately, zero is a more commonly-encountered value... but that can push errors to occur sooner, which is generally a desirable thing.
> That's true for signed numbers too though? `int_min - 2 > int_min`
As someone else already pointed out, that's undefined behaviour in C and C++ (in Java they wrap), but the more important point is that the vast majority of integers used in programs are much closer to zero than to int_min/max. Sizes of buffers etc. tend to be particularly small. There are, of course, overflow problems with signed integers, but they're not as common.
Which makes them even less safe than unsigned, where it is defined, yes? The optimizations that UB can lead to are incredibly hard to predict.
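For example, a sketch of the kind of folding this permits (exact behaviour depends on compiler and flags):

  // Because signed overflow is undefined, an optimizer (e.g. GCC/Clang at -O2)
  // may fold this whole function to `return true`:
  bool always_true(int x) { return x + 1 > x; }

  // Unsigned overflow is defined to wrap, so the test has to survive:
  bool usually_true(unsigned x) { return x + 1 > x; }  // false when x == UINT_MAX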
Besides, for safety there are much clearer options, like wrapping_add / saturating_add. Aborting is great as a safety tool though, agreed - it'd be nice if more code used it.
You can have the trap during production, and then it is safer. If you need to catch the problem at run-time, there are checked integer options in C that you can use.
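A minimal sketch of one such option, assuming GCC or Clang (the wrapper name add_checked is made up):

  #include <cstdio>
  #include <cstdlib>

  // Checked add: detect the wrap instead of silently using a wrong value.
  unsigned add_checked(unsigned a, unsigned b) {
    unsigned sum;
    if (__builtin_add_overflow(a, b, &sum)) {
      std::fprintf(stderr, "unsigned add overflowed\n");
      std::abort();  // the production-time trap mentioned above
    }
    return sum;
  }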
If you have "uint x" and "uint y", then for "x - y", the programmer should explicitly write two cases (a) no underflow, i.e. x >= y, and (b) underflow, x < y. The syntax for that... that is an open question.
> what about variables that can only ever be even
Yes, maybe you should have an "EvenInt" type, if that is important. Maybe you should be able to declare a variable to be 7...13, just like a "uint8" declares something 0...255. Of course, the type-checker can get complicated, and perhaps simply fail to type-check some things. But having compile-time constraints on what you know your variables will be is good, IMHO.
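A rough sketch of what such a type could look like in C++, with the caveat that the check here runs at construction time rather than at compile time (all names made up):

  #include <stdexcept>

  template <int Lo, int Hi>
  struct Ranged {
    static_assert(Lo <= Hi, "empty range");
    int value;
    explicit Ranged(int v) : value(v) {
      if (v < Lo || v > Hi) throw std::out_of_range("value outside Lo...Hi");
    }
  };

  using SevenToThirteen = Ranged<7, 13>;  // the 7...13 example from above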
Note that in Zig, unsigned integers have the same semantics as signed integers on overflow (trap or UB, depending on build mode).
You also have operators that wrap explicitly (+%, -%, *%).
That is the correct solution.
I think it should be like in Pascal, where you have subranges as types, and you can then declare that a value falls in a given range (and, very nicely, you can make it an enum).
They lost me at the first gif. Scrolling around a large graph that looks like mostly empty space... it seems you could convey the same info in compact, screen-sized text with nested <ul>.
I'm curious what they do with various kinds of credentials if they get access.
I can see trying to steal crypto, but what do they do if they get some AWS credentials? Try to run some crypto mining instances? Try to use your account for other types of crimes? Or is it mainly trying to steal data and then ask for ransoms?
It's always crypto. A client got some AWS credentials stolen, and without anyone checking the account, the hacker managed to spin up big EC2 instances across many regions. The bill after a month, as I recall, was around $100K. Since the activity was clearly fraudulent, the bill was eventually forgiven. So remember to lock down your AWS keys' permissions...
When that happened to a former employer, AWS was calling us within a day. It's worth making sure a real phone number is on the account, as that's how they contact you for anything serious (and also if your finance dept changed the credit card without telling anyone).
That, and also: enable the various monitoring and audit features in AWS now, starting with CloudTrail. Nothing is worse than being hit by this attack and AWS not having any audit trail available.
I agree it's not a genetic algorithm, but it's also not stochastic gradient descent. There is no gradient. The "step direction" (code modification) is chosen by an LLM, which is "smart enough" to guess something that might be an improvement.
I think this Twitter question and response (from the author) are helpful for understanding the problem:
Question:
So, I'm also annoyed with GitHub's stability (especially lately), but I'm curious: Ghostty has only a handful of PRs per day (excluding robot contribs); how is this a real problem? (And yes, I read your blog article.)
Response:
(1) The robot contribs don't auto-close if GH is down (because they rely on GHA). We have retries but it's pretty annoying.
(2) A PR isn't one and done. We need to comment, we need to run tests (~80 per run), and we do this multiple times per commit (due to review back and forth). So one PR has a lot of GH reliance right now.
(3) PRs tend to batch up, e.g. we don't do PR review constantly because all of us have other things to do, so we usually try to review/merge multiple at one time. 3 PRs per day is ~20 per week, which is a ton for volunteer time!
(4) We try to coordinate merge parties across maintainers in China+US+EU and if GH is down during our small time slice we just can't do any meaningful merging for 24 hours. We could alter our process here but that's just gaslighting.
(5) We get an order of magnitude more issue and discussion comments, which are affected by all of the above except CI. These are particularly affected by GHA/API outages.
(6) Dev work by maintainers happens in non-PR branches that run CI, and if CI is down we can't test our code (since Ghostty relies on a lot of testing we can't run locally, e.g. for platforms we don't have). It effectively pauses work on that branch.
(7) I've had multiple days in that 30-day window where Git operations themselves failed for different reasons. So I couldn't push a branch or whatever.
It all just adds up to be WAY too work-impacting. The Ghostty maintainer channel is a stream of "oh, GH is down again."
It's good to expose fraud, and it does sound like this set the field back, but "why has there been so little progress?" Probably because it's very hard? We barely understand how the brain stores memories.
I'm dealing with someone with this disease now and it's absolutely hell.