

My kids’ school uses a web portal to add money to their lunch accounts. My only task when I open this website is to pick an amount and click submit and give them my money.

Whose idea was it to show me a “what’s new” popup of all the jira tickets they closed in the last sprint?

What’s new? Nothing is new. It works just like it used to. Just take my money and leave me alone, please.


But if you have (through whatever process) sent them a complaint that, say, "it doesn't work right using Firefox X.Y running on Windows 7", then those release notes might in fact be interesting to you. So there actually is a reason for you to be able to see them. Not for them to get in your way, though. 99% of the people won't care.

99% won’t care, and 59% will find the what’s-new popup actively confusing, distracting, and hostile. Bad trade.

No it won't, because either you're already using some other browser and won't care, or you'll try Firefox X.Y again and discover it does work.

It explains what it does, but not _why_, maybe I’m just not in the target audience.

Can anyone enlighten me? If GitHub Actions workflows are stored in the repo as YAML then they're already tracked - what's this thing for?


This seems to convert mutable action tags like actions/checkout@v3 to pinned actions/checkout@&lt;sha&gt; references, to avoid malicious retags.
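The core of such a pinning pass can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: it assumes a precomputed tag-to-SHA map (in practice a pinning tool resolves tags by querying the action's repository, e.g. via `git ls-remote`), and the SHA below is a placeholder, not the real commit for checkout v3.

```python
import re

# Hypothetical tag -> commit SHA map; the SHA here is a placeholder.
TAG_TO_SHA = {
    "actions/checkout@v3": "0123456789abcdef0123456789abcdef01234567",
}

def pin_uses_lines(workflow_text: str) -> str:
    """Replace mutable action tags with immutable commit SHAs,
    keeping the human-readable tag as a trailing comment."""
    def repl(match):
        ref = match.group(2)
        sha = TAG_TO_SHA.get(ref)
        if sha is None:
            return match.group(0)  # unknown ref: leave untouched
        repo, tag = ref.rsplit("@", 1)
        return f"{match.group(1)}{repo}@{sha}  # {tag}"
    return re.sub(r"(uses:\s*)(\S+@\S+)", repl, workflow_text)

print(pin_uses_lines("    - uses: actions/checkout@v3"))
```

The point of the trailing comment is that the workflow stays auditable by humans (you can still see "v3") while the runner only ever fetches the exact commit, so a later retag of v3 can't swap in malicious code.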

That makes perfect sense! I knew I was missing something. Thanks.

Because of the way training works, is this possible/feasible? Not saying it shouldn’t be done, just wondering _how_ it could be done?

It’s hard to give up, but likely necessary. That doesn’t mean quality has to suffer, we can still gate with deterministic quality tooling where it matters. But yeah, at some scale it stops mattering how human readable the code is, as long as AI can effectively and efficiently (token-wise) make edits or add features.

The point is not human readability, but good structure. Spaghetti code is as bad for an LLM as for a human, because structural complexity and the amount of coupling are fundamental limits, not human-specific.

Amazing tweet.

https://x.com/stevesi/status/2050325415793951124

Here's how history rhymes with this logic. The development of compilers vs. writing assembly language was not without a very similar "controversy" — that is, are the new tools more efficient or less efficient?

The first compilers were measured relative to hand-tuned assembly language efficiency. The existing world of compute was very much "compute bound" and inefficient code was being chased out of every system.

The introduction of the first compilers generally delivered code "within 10-30%" as efficient as standard professional assembly. This "benchmark" was enough for almost a generation of Fortran programmers to dismiss the capabilities of compilers.

Also worth noting, early compilers (all through the 1980s) routinely had bugs that generated incorrect code. Debugging a compiler is a nightmare (personal experience). This only provided more "ammo."

With the arrival of COBOL the debate started to shift. COBOL generated decidedly "bloated" code, so there was no way to win the efficiency argument. But what people started to realize was that a "modern" programming language made it possible to deliver vastly more software and for many more people to work on the same code (ASM is notorious for being challenging for multiple engineers on the same portion of code). So the metric slowly started to move from "as good as hand-tuned assembler" to "able to write bigger, more sophisticated code in less time with more people." Computers gained timesharing, more memory, and faster CPUs, which made the efficiency argument far less compelling (only to repeat with the first 8K or 64K PCs).

This entire transition is capped off with a description in Fred Brooks' "The Mythical Man-Month," one of the seminal books in the field of programming and a standard-issue book sitting in my office waiting for me on my first day at Microsoft. (See the full book free here: https://web.eecs.umich.edu/~weimerw/2018-481/readings/mythic...)

It is very early. I was not a programmer when the above happened though I did join the professional ranks while many still held these beliefs. For example, I interned writing COBOL on mainframes while PCs were using C and Pascal which were buggy and viewed as inefficient on processor/space-constrained PCs.

The debate would continue with C++, garbage collection, interpreted vs. compiled (Visual Basic), and more. As a fairly consistent observation over decades, every new tool is viewed at first by experienced programmers through a lens of what is worse, while new programmers use the tool and operate in a new context (e.g. "more software" or "bigger projects"). The excerpt below shows this debate as captured in 1972.


> Also worth noting, early compilers (all through the 1980s) routinely had bugs that generated incorrect code.

Incorrect. They had bugs that generated incorrect code. They didn't routinely have bugs that generated incorrect code :-/

And the bugs they had were reproducible.


Didn't someone say LLMs memorize Harry Potter books? You can't have it both ways.

> Didn't someone say LLMs memorize Harry Potter books? You can't have it both ways.

What both ways? You can't consistently get it to output Harry Potter verbatim. That's my point - not reproducible.


There are many ways to write a for loop. Why do you need exact reproducibility for achieving a task?

> There are many ways to write a for loop. Why do you need exact reproducibility for achieving a task?

Didn't you just claim above, with the Harry Potter angle, that it is reproducible?


Not "exact reproducibility", but simple reproducibility at all. You need that to fix bugs, improve things and reason about them.

That’s where the tooling comes in!

3.25% is whole milk, they absolutely sell it in Canada.

> You can pay $20 a month and use $10k in api tokens.

Do you have a source? I would be interested to read more about any hard figures that have been posted like this.


I suddenly feel compelled to post about openclaw

You’re very defensive in these comments - are you the author?

Isn’t the biggest rule to have working backups with 3-2-1 strategy?
