I think the author is describing the new incarnation of the Death March. In a Death March, contributors know that an active project is dead-on-arrival, or cannot be redeemed. Maybe a small difference here is that the AI-equipped contributors won't be aware of the project's status (i.e. futile).
Maybe this means AI has democratized Death Marches.
Good for the author for finding some success. I'd recommend seeking a significant other; somehow that sounds less daunting than making friends past 30. Cool roommates are friend-adjacent and help with loneliness; I had a cool roommate for a while until he moved in with his girlfriend, after which I was deeply lonely until I met my now wife.
'Don't believe self-serving lies about technologies being "inevitable" or "here to stay". You don't have to just go along with the dominant narrative. You can make deliberate choices and help others to do the same.'
> One engineer captured this shift perfectly in a widely shared essay, describing how AI transformed the engineering role from builder to reviewer.
I stopped here. Was this written by an LLM? This sentence in particular reads exactly like the author supplied said essay as context and this sentence is the LLM's summarization of it. Nowhere is the original essay linked, either, further decreasing trust. Moreover, there's an ad at the bottom for some BS "talent" platform to hire the author. This article is probably an LLM-generated ad.
My trust is vacated.
This makes me feel that the SWE work/identity crisis is less important than the digital trust crisis.
Yeah, agreed. I probably wasn't going to delete my OpenAI account (à la the link that is also being upvoted on HN); it just seemed like a hassle vs. simply ceasing to use OpenAI. But when the staff at OpenAI employ mental gymnastics, selective hearing, willful ignorance, or plain ignorance to justify compliance with manmade horrors, I think it's probably important to vote with our feet.
I use agentic LLMs (for side projects, properly sandboxed) as much as the next guy, but collectively the normalization of deviance is pretty apparent and shocking (https://en.wikipedia.org/wiki/Normalization_of_deviance).
Backdooring your own machine, sending your .env files, normalizing slop in code review, leaking IP (which is trained on and sent to RLHF), YOLO mode: these things would have been unconscionable two years ago.
Tangentially related: "slop" really isn't a negative enough term for unwanted LLM garbage. "Slop," which is fed to pigs, has utility. "Slop" as a verb doesn't necessarily have a (strong) negative association ("It was slopped on the plate, but it was tasty").
I use the term "barf" more often. Barf has no utility.* Barf is always seen in a negative context. Barf is forcibly ejected from an unwilling participant (the LLM), and barf's foulness is coerced upon everyone who witnesses it. I think it's a better metaphor.
I know that this is just semantics, but still.
* Even though LLM output __can__, and often does, have utility, we are specifically referring to unwanted LLM output that does not have utility. I'm not trying to argue that LLMs are objectively useless here, only that they are sometimes misused to the users' detriment.
This is an interesting observation. One could argue that some AI-generated or AI-driven things do have utility, and thus qualify as "slop" (although not for those on the receiving end). For example, when used to drive clicks and generate revenue, to troll, or to spread propaganda. You get the idea.
In this instance however, I agree, barf is more accurate.
Seems like a catch-22. For codebases that I'm highly familiar with and regularly perform code review in, I'd say "thanks, LLM, but I don't trust you, I'm more familiar with this codebase than you are, and I don't need your help." For codebases that I'm not familiar with, I'm not really performing code review (at least not approving MRs/PRs or doing the merging).
But still, this is very creative and a nice application of LLMs that isn't strictly barf.
At 60%, it highlights significantly more test code than the material changes that need review. Strike one.
At no threshold (0-100) does it highlight the deleted code in UniqueBroadcastEvent.php, which seems highly important to review. The maintainer even comments about the removal in the actual PR! Strike two.
The only line that gets highlighted at >50% in the material code diffs is one that hasn't changed. Strike three.
So, honest attempt, but it didn't work out for me.