I commuted by bike 70 miles a week for a few years. Bike lane obstruction was far down my list of concerns, behind: drivers looking at phones while driving, drivers looking at phones while stopped, drivers running stop signs without even noticing (probably looking at a phone), drivers speeding, cars belching smoke, etc.
> so many of our modern problems are because our UI toolkits suck
The remainder of the problems are caused by overly complex deployment/hosting setups. Compiling a binary from source looks like a breeze by comparison.
This concept won't reach that point because when you chisel too hard it crumbles. There are countless lower level tasks that typical programmers no longer learn how to do. Our capacity for knowledge is not unlimited so we offload everything we can to move to the next level of abstraction.
AI coding isn’t an abstraction, though. You can’t treat a prompt like source code because it will give you a different output every time you use it. An abstraction lets you offload cognitive capacity while retaining knowledge of “what you are doing”. With AI coding either you need to carefully review outputs and you aren’t saving any cognitive capacity, or you aren’t looking at the outputs and don’t know what you’re doing, in a very literal sense.
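A toy sketch of that distinction (the `llm` function here is just a random sampler standing in for a model, not a real API call):

```python
import random

def compiler(source: str) -> str:
    """Deterministic: the same source always maps to the same output."""
    return source.upper()

def llm(prompt: str) -> str:
    """Toy stand-in for an LLM: sampling makes each call's output differ."""
    words = ["sort", "items", "with", "quicksort", "mergesort", "heapsort"]
    return " ".join(random.choice(words) for _ in range(4))

src = "write a sort function"
assert compiler(src) == compiler(src)  # an abstraction you can rely on
print(llm(src))  # varies from run to run
print(llm(src))
```

The point isn't the implementation, it's the contract: the first function is a pure mapping you can build on top of; the second is a distribution you can only sample from.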
Non-determinism is not as much of a problem as the lack of a spec. C++ has its standard; Python has its reference manual. One can refer to it to reliably predict how a program will behave without thinking about the generated assembly. LLMs have no spec.
"You can’t treat a prompt like source code because it will give you a different output every time you use it"
But it seems we are heading there. For simple stuff, if I write a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.
So either way, this is what I focus my thinking on right now, something that was always important and is now, with AI, even more so: crystal-clear language describing what the program should do and how.
Of course it saves me overhead: I don't have to read all the necessary docs myself, and I only check the resulting code instead of typing it all myself.
> if I made a very clear spec - I can be almost sure
That "almost" is doing a lot of heavy lifting here. This is just "make no mistakes" "you're holding it wrong" magical thinking.
In every project, there is always a gap between what you think you want and what you actually need. Part of the build process is working that out. You can't write better specs to solve this, because you don't know what it is yet.
On top of that, you introduce a _second_ gap of pulling a lever and seeing if you get a sip of juice or an electric shock lol. You can't really spec your way out of that one, either, because you're using a non-deterministic process.
Well, unfortunately it is the same with real humans, who happen to be non-deterministic as well. If I give them a task, I can be almost sure they will do it. But even humans can have unexpected psychotic breakdowns and do destructive things like deleting important databases.
So right now, humans are for sure more reliable. But that is changing. There are already things where I trust an LLM more than a random human, or even certain known ones.
Isn't it an abstraction similar to how an engineering or product manager is? Tell the (human or AI coder) what you want, and the coder writes code to fulfill your request. If it's not what you want, have them modify what they've made or start over with a new approach.
No, because software engineering is more than <insert coin, receive code>. I've never had a full spec dropped on my desk lol. There's no abstraction.
Software engineering is a lot more social and communication-heavy than people think. Part of my job is to _not_ take specs at face value. You learn real quick that what people say they need and what they actually need are often miles apart. That's not arrogance, that's just how humans work.
A good product manager understands the biz needs and the consumer market and I know how to build stuff and what's worked in the past. We figure out what to build together. AIs don't think and can't do this in any effective way.
Also, if you fuck up badly enough that you make your engineers throw out code, you're gonna get fired lol
With an abstraction, you literally move your thinking up a level. You move up a floor in the tower and no longer have to think about what's happening below. The moment something leaves your floor, its course is set. If a result comes back, it's something familiar, not something from the lower floor.
A human coder can be seen as an abstraction level because they will talk to the PM in product terms, not in code. And the PM will be reviewing the product. What makes this work is the underlying contract that only a small number of iterations is needed before the product is done, and each later one should take less of the PM's time.
We've already established that using an LLM tool that way does not work. You can spend a whole month going back and forth, never looking at code, and still not have something that can be made to work. And as soon as you look at the code, you've breached the abstraction layer yourself.
It's staggering to me how many times I've heard this argument that LLMs are just the next level of abstraction. Some people are even comparing them to compilers.
> Some people are even comparing them to compilers.
A lot of people are using them as such, too. All the people talking about "my fleets of agents working on 4 different projects": they aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 consistently get things backwards at the margins, mess up, and introduce bugs. We wouldn't accept a compiler that did any of this at this scale or level lol
It's awful, and seeing even engineers I respected become so AI pilled they're shipping slop without review has made me lose respect for them. It also can't help but make me wonder: what am I missing? Am I holding it wrong? Am I too focused on irrelevant details?
So far, my conclusion is that while LLMs can be a productivity boost, you have to direct them carefully. They don't really care about friction and bad abstractions in your codebase and will happily keep piling cards on top of the crooked house of cards they've generated.
Just like before AI, you need a cycle of building and refactoring running on repeat with careful reviews. Otherwise you will end up with something that even an LLM will have a hard time working in.
That's true, but I think it's beside the point. The flip side of that argument, which is equally true, goes something like, "not doing cognitive push-ups leads to cognitive atrophy."
There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators), primarily because those are well-bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still-growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.
> There are skills we're losing that are probably ok to lose (e.g. spacial memory & reasoning vs GPS, mental arithmetic vs calculators)
I'd argue these are not at all OK to lose. You live in an earthquake zone? You sure better know which way is north and where you have to walk to get back home when all the lines are down after a big one. You need to do a quick mental check if a number is roughly where it should be? You should be able to do that in your head.
There might be better examples that support your point more effectively, e.g. cursive writing.
Yep, there are tons. Growing food, building shelter, etc. But, for pretty much all of the skills we've allowed to atrophy in response to the advances of capitalism, technological & scientific progress, and societal changes, one COULD make the same basic argument, which is that losing that skill is detrimental to the individual, and yet here we are, not growing our own food, not building our own shelter, etc.
The arguments you make ≤ the values you actually hold ≤ the actions you take in support of those values.
I'm only interested in any such argument to the extent to which you've personally put it into practice. Otherwise, you're living proof of the argument's weakness. (To be fair, it's extremely hard to be internally consistent on this stuff! We all want better for ourselves than we have time and energy for. But that's my point: your fully subconscious emotional calculus will often undercut at least some of your loftier aspirations. Skills that don't matter anymore invariably atrophy due to the opportunity cost of keeping them honed.)
> "not doing cognitive push-ups leads to cognitive atrophy"
This is one of the points being made in the post, at least in reference to people who already have some mastery of their craft. If they outsource their thinking without elevating it, they aren't exercising that metaphoric muscle between their ears.
I get your point, I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tool allows one to operate at a higher level than assembly. But I always have to wade through the output from AI so I’m not sure I got to move to the next level of abstraction. But maybe that’s just me.
I'm sure someone, somewhere, once wrote one that wasn't, but in general, yes, they are.
The ones I use certainly are. And with a bit of training you can reason and predict how they will respond to a given input with a large degree of accuracy without being familiar with how the particular compiler under question was implemented.
Not so with the AI tools. At least with the ones I use anyway.
Given the same compiler, I believe the outputs would be the same between runs given the same inputs. I suppose that might not be true at the margins, but I would expect correctness out of whatever path it chose.
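One quick way to sanity-check that claim, using CPython's own bytecode compiler as a stand-in (a sketch; `marshal` just serializes the code objects so they can be compared byte for byte):

```python
import marshal

# Compile the same source twice with the same compiler and compare
# the serialized code objects byte for byte.
src = "def add(a, b):\n    return a + b\n"
run1 = marshal.dumps(compile(src, "<demo>", "exec"))
run2 = marshal.dumps(compile(src, "<demo>", "exec"))

assert run1 == run2  # identical input, bit-identical output
```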
For all intents and purposes, yeah. It's really about the variance in actual outcomes vs. expected ones. The variance is not much, is it? With LLMs that absolutely isn't the case.
The idea that a tool intended to replace all human cognitive work is the next level of abstraction is so fundamentally flawed, that I'm not sure it's made in good faith anymore. The most charitable interpretation I can think of is that it's a coping mechanism for being made redundant.
Nevermind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of inevitable consequences of the current hype cycle, there will be a great crash, followed by industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before already.
It is not necessary to draw a sharp line that clearly divides everything before saying “this is too far” about something that has, in fact, gone too far.
Good question. Maybe not cook, but consider someone who picked just the right ingredients and preparation for a sandwich. Combining flavors and textures in novel ways that are as surprising as they are delicious. I would ascribe more of the creative credit to that person vs. the one cutting the bread.
So you are taking the ridiculousness of calling someone a 'sandwich artisan' (slapping together assembly-line products is slapping together assembly-line products) but trying to spin it into a positive label?
Actually, the fact that inference of a SOTA model is completely Nvidia-free is the biggest attack on Nvidia carried out so far. Even American frontier AI labs may start to buy Chinese hardware if they need to continue the AI race; they can't keep paying so much money for GPUs, especially once the training versions of Huawei's GPUs ship.
By "completely Nvidia-free" do you mean Nvidia wasn't used for training nor inference? Because if it's only inference, we know that Opus already can run on TPUs. Not to mention Gemini.
Yep, but those don't run on Chinese hardware that is going to be available to everybody and will cost a lot less than Nvidia's. So now you have a full non-US pipeline for AI, and soon they'll have the training GPUs as well.