Salgat's comments

Local models are much less energy efficient right?

It's a good question, although I think it's hard to quantify.

If you are simply measuring watt cost per token, you are missing the mark drastically. You have to measure quality of output per watt.

It sounds reasonably difficult to benchmark this, though maybe I'm wrong.


To set an example, as a deterrent to larger operations. If they go after even the smallest-impact counterfeiters, it leaves no plausible scale below which an operation can safely run, no matter how small.

Same. I don't care about the online connectivity or whatever, I just print a few personal things every month so the convenience and reliability far outweigh any cons for me.

My favorite is when Claude will build a completely new application to load and inspect a .dll file using reflection instead of just googling the library's interfaces.

It did this for me during one of the recent outage periods. It was unjarring deps left and right instead of googling for them. What an easy way to own the tokenmaxxing leaderboard, I remember thinking.

“Use all of the tools at your disposal, including searching the internet” is my claude-specific common instruction.

So the chimpanzees on the keyboard thing is real.

I disagree. Assuming code complexity is roughly fixed, more sophisticated code analysis will result in a smaller surface area for bugs. Bugs will still be found, but there will be fewer bugs to find and fewer opportunities to exploit.

I'd expect the bigger shift to be toward secure-by-construction building blocks: less custom code that needs to be audited from scratch, and more hardened or verified components where common bug classes are already designed out.

A similar concept is used in Rust, where code that needs unsafe sections is generally confined to small, isolated, and heavily scrutinized modules.
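Roughly the same idiom can be sketched in C#, which also has unsafe regions (gated behind <AllowUnsafeBlocks> in the project file); the class and method names here are made up for illustration:

    using System;

    // All pointer work lives in one small, reviewable method; callers
    // only ever see the safe API and never touch the unsafe region.
    public static class FastFill
    {
        public static unsafe void Fill(byte[] buffer, byte value)
        {
            if (buffer is null) throw new ArgumentNullException(nameof(buffer));

            fixed (byte* p = buffer)   // pin the array so the GC can't move it
            {
                for (int i = 0; i < buffer.Length; i++)
                    p[i] = value;
            }
        }
    }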

I disagree. I routinely hit the 5 hour limit on Pro with Opus 4.7 just trying to have it do one design task or a comprehensive code review on a large PR, and the worst part is that the overhead of bringing all that context back into another 5 hour window blows through 30%+ of my usage limit.

Claude absolutely improves code review quality, but it still misses a lot. It's a second pair of eyes; it doesn't replace or remove the work you have to put in to fully review the code yourself.

It's like saying you code reviewed faster just because someone else also reviewed the code; that's not how it works.


Agree, and with CC my volume and quality of PR review have substantially increased since 4.5. Without CC for review we would have a ridiculous bottleneck in our dev/qa pipeline.

Awaiting allows you to efficiently yield the thread to other tasks instead of blocking it. That's one of its biggest advantages.
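A minimal C# sketch of the difference, with illustrative names and a placeholder URL:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class Demo
    {
        static readonly HttpClient Http = new HttpClient();

        // While the response is in flight, the current thread returns to
        // the pool and can run other tasks; the method resumes later as a
        // continuation.
        static async Task<string> FetchAsync(string url) =>
            await Http.GetStringAsync(url);

        // .Result parks the calling thread for the entire round trip; it
        // does nothing useful until the response arrives.
        static string FetchBlocking(string url) =>
            Http.GetStringAsync(url).Result;
    }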

When you block, the OS does the same thing: it yields to other threads.

Yes, and it is extremely expensive. This is a well-known design problem in database engines.

The computational cost of context-switching threads at yield points is often many times higher than the actual workload executed between yield points. To address this you either need fewer yield points, which reduces concurrency, or you need to greatly reduce the cost of yielding. An async architecture reduces the cost of yielding by multiple orders of magnitude relative to threads.
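To make the thread side of that cost concrete, here's a sketch: a thousand concurrent one-second waits, each parked on its own OS thread. Every thread reserves its own stack (roughly 1 MB by default on Windows), and every block and wake-up is a trip through the kernel scheduler:

    using System;
    using System.Linq;
    using System.Threading;

    class ThreadPerWait
    {
        static void Main()
        {
            var threads = Enumerable.Range(0, 1000)
                .Select(_ => new Thread(() => Thread.Sleep(1000)))  // simulated blocking I/O
                .ToList();

            threads.ForEach(t => t.Start());
            threads.ForEach(t => t.Join());
            Console.WriteLine("done");
        }
    }

The async equivalent further down this thread does the same thousand waits on a handful of pool threads.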


> The computational cost of context-switching threads at yield points is often many times higher than the actual workload executed between yield points.

I would say this "often" is 1% of cases. As for the Rust ecosystem, it doesn't make much sense to add so much complexity and such inconvenient abstractions to cover 1% of use cases.


It approaches 100% of cases for systems that care about software performance, since software performance is bandwidth-bound. If almost everyone agrees that software performance is already optimally fast, then I agree with you.

There is perfect performance and there is performance that is good enough; the latter is 99% of cases, where adding complexity is not justified.

And how much slower is that? What happens when I run a thousand async tasks? I'll give you a hint: with async/await, there is barely any overhead.
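A quick C# sketch of that claim: a thousand concurrent one-second waits complete in about a second on a handful of thread pool threads, because each task is a small heap-allocated state machine rather than a dedicated OS thread:

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class ManyTasks
    {
        static async Task Main()
        {
            var tasks = Enumerable.Range(0, 1000)
                .Select(async i =>
                {
                    await Task.Delay(1000);   // simulated I/O wait
                    return i;
                });

            int[] results = await Task.WhenAll(tasks);
            Console.WriteLine($"completed {results.Length} tasks");
        }
    }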

The vast, vast majority of programmers are going to be writing software where there are only a handful of threads (if that). The "I need thousands of concurrent executions" case is simply not relevant to most people.

You do realize what servers do in parallel, right? Async/await allows ASP.NET to scale beyond 1 thread per request.
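For a concrete sketch, here's a minimal ASP.NET Core endpoint (the route and the example.com upstream are placeholders). While the awaited call is in flight, the request's thread goes back to the pool to serve other requests, so a few threads can handle many concurrent connections:

    // Program.cs in a web SDK project (implicit usings enabled).
    var http = new HttpClient();
    var app = WebApplication.CreateBuilder(args).Build();

    // The thread is not held while the upstream response is pending.
    app.MapGet("/orders/{id}", async (int id) =>
        await http.GetStringAsync($"https://example.com/orders/{id}"));

    app.Run();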

Are you going to put multiple customers’ data in the same OS process?

Did you know you can get even more performance if you manually manage memory and don’t use virtual functions?


My ti-89 titanium got me through two engineering degrees. Loved that thing.
