This is true, which means a language has to be designed from the ground up to deal with these problems, or there will always be inscrutable bugs caused by misuse of arithmetic results. A simple example in a C-like language: the following function would not compile:
unsigned foo(unsigned a, unsigned b) { return a - b; }
but this would:
unsigned foo(unsigned a, unsigned b) {
    auto c = a - b;
    return c >= 0 ? c : 0;
}
Assuming 32-bit unsigned and int, the type of c should be computed as the range [-0xffffffff, 0xffffffff], which no standard type covers: 32-bit int is [-0x80000000, 0x7fffffff]. Subtle things like this are why I think it is generally a mistake to type-annotate the result of a numerical calculation when the compiler can compute it precisely for you.
First, your code uses unsigned types to represent the notion of non-negative values, but that is not the intent of unsigned types in C/C++. They represent modular arithmetic types.
Second, it's not as simple as you present. What is the type of c? Obviously it needs to be signed so that you can compare it to zero, but how many bits does it have? What if a and b are 64-bit? What if they're 128-bit?
You could do it without storing the value and by carrying a proof that a >= b, but that is not so simple, either (I mean, the compiler can add runtime checks, but languages like C don't like invisible operations).
In my reading, what Stroustrup is saying is that, given other problems in C/C++, signed sizes are less bad than unsigned ones, but both have clear and significant deficiencies. A new language doesn't have to inherit all of these deficiencies.
No. He says that signed/unsigned arithmetic is a universal problem. And in the context of std::span, using signed arithmetic is the correct choice rather than shoehorning in size_t to make it more cosmetically consistent with the rest of the STL.
I also expect that most side projects made with AI end up abandoned within three months and contribute next to nothing to the user's personal development. The use of AI also prevented the kind of deliberate practice that could have led to durable skill growth, which ultimately leads to much better work (side or main projects).
This depends heavily on the language and your skill as a compiler writer. You can write a single-pass assembler that generates great code, but of course you then have to write the low-level code yourself (including manual register assignment). To do decent automatic register assignment, I agree you need at least two passes, but not 10 or more.
You don't need a REPL for this workflow, and it can be easily implemented in any language: `ls *.MY_LANG | entr -c run.sh`. You get feedback whenever you save the file.
Personally, I find waiting more than 200ms unacceptable, and really, under 50ms is ideal. When the feedback loop is that short, it becomes practical to save the file on every keystroke and get nearly instantaneous results with every input character.
> You don't need a REPL for this workflow, and it can be easily implemented in any language: `ls *.MY_LANG | entr -c run.sh`. You get feedback whenever you save the file.
As in restarting the entire program, re-running every query and state-changing step that got you there in the first place, being careful not to accidentally run other parts of the program, setting everything up just so, and then rewriting parts if you want to try something somewhere else?
Perhaps I'm misunderstanding you, because that sounds horrible unless you're scripting something small with one path.
The whole point of the REPL is that you evaluate whatever functions you need to get into your desired state and work from there. The ad hoc nature of it is what makes you able to just dig into a part of the code base and get going, even if that project is massive.
> Personally, I find waiting more than 200ms unacceptable and really < 50ms is ideal.
Calling a function on a running program takes microseconds; you're not restarting anything.
It is easily possible to parse at more than 1M lines per second with a well-designed grammar and a handwritten parser. If I'm editing a file with 100k+ lines, I likely have much bigger problems than the need for incremental parsing.
It's not just speed: incremental parsing allows for better error recovery. In practice, this means your editor can highlight the code as you type, even though what you're typing has broken the parse tree (especially the code after your edit point).
> I recognize that it is reminiscent of a few decades ago when old timers complained about the proliferation of high level programming languages and insisted they would lead to a generation of programmers lacking a proper understanding of how the system behaves beneath all that syntactic sugar and automatic garbage collection. They won’t have the foundational skills necessary to design and build quality software. And, for the most part, they turned out to be wrong.
What if the old timers were actually right? I tend to think they were.
It's absolutely necessary that there is a line of people somewhere who understand the path from garbage collection down to assembly instructions. We can't build only upon abstractions as long as we still run stuff on physical CPUs.
But it's also unequivocally true that once we have enough long-bearded old-timers and new-timers who do understand how writing a Python expression somewhere ends up as a register write elsewhere, all the others simply don't have to.
In the old days, all you had was hardware, and to program you had to understand hardware. But those who did program and did understand were the few smart people who had access to hardware. Everyone else was left out. Now we have high-level languages, scripting languages, AI, and whatnot. As long as some people maintain the link to hardware, the rest can build on that.
Yeah, people in the past were always more right. The people who built the first processors were much more aware of low-level stuff than those so-called low-level programmers with their fancy compilers.
And before them came the electrical and mechanical engineers; without them we wouldn't even have these processors. We all ultimately depend on them.
> In the meantime, there's nothing stopping you from using the agent to write the code that is every bit as high quality as if you sat down and typed it in yourself.
If you have well defined boundaries, you can move the stack to an arbitrarily large chunk of memory before the recursive call and restore it to the system stack upon completion.