Hacker News | barbegal's comments

Yeah I couldn't get past so many issues in the AI generated illustrations. Not useful at all when they are completely wrong.

Even the video is generated. The whole thing is just slop upon slop, I'm amazed that it got to the top of the front page here. I suppose it's a genuinely impressive amount of fakery all integrated together.

right

Yes, I would imagine lots of those types of services would be vulnerable if they hadn't updated to the latest kernel versions.


As of this comment, Debian Stable ("Trixie", though I hate codenames) doesn't have a fix in place and remains vulnerable, or at least their CVE tracker shows it as such:

https://security-tracker.debian.org/tracker/CVE-2026-31431


"Debian Stable ("Trixie", though I hate codenames)"

You can also call it Debian 13.


I choose not to call it Debian 13 because that carries less context than Stable/Testing/sid. I'd rather not require the reader to maintain that extra mental mapping.

Anyone who knows anything about this subject immediately understands what is connoted by "Debian Stable". I run Trixie on most of my personal boxes and I had no idea what version number it is, nor do I particularly care.


> I run Trixie on most of my personal boxes and I had no idea what version number it is

It's not that hard to find though:

  $ cat /etc/debian_version 
  13.4

13.4 since 3/14


Yes, you can pin it to core 1 whilst pinning all other tasks to core 0. Then it will never be interrupted or preempted (except by interrupts raised on core 1).
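On Linux this kind of pinning can be done from userspace with the sched_setaffinity syscall, exposed through Python's os module. A minimal sketch (the core numbers are illustrative, and core 0 is used here only because it always exists):

```python
import os

# Pin the current process (pid 0 = "this process") to a single CPU.
# The comment's scheme would pin the latency-sensitive task to core 1
# and everything else to core 0.
os.sched_setaffinity(0, {0})

print(sorted(os.sched_getaffinity(0)))  # -> [0]
```

Note that affinity alone only keeps *other tasks* off the core; kernel housekeeping and interrupts can still land on it unless the core is also isolated (e.g. via the isolcpus or nohz_full boot parameters), which matches the "except by interrupts" caveat above.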


The actual research paper shows it's pure nonsense

https://cdn.skoda-storyboard.com/2026/04/Skoda-DuoBell-Resea...

As expected, ANC headphones cancel less noise at low frequencies, so I guess 780 Hz is a trade-off: high enough in frequency to sound like a bell, but low enough that it gets attenuated somewhat less than higher frequencies would be.

The research paper is pretty poor quality and this is mainly a marketing exercise.


I don't understand why people get so hung up on Chrome using so much memory. A lot of this memory is "discardable", so it will get dropped when the system is under memory pressure, and the amount allocated for this type of usage depends on how much memory your system has available. If Chrome is using lots of memory, it's almost always because your system has lots of memory available. It allows the browser to cache large images and video assets that would otherwise have to be re-downloaded over the internet.


Or another process will die at random instead, which might be your desktop environment, the main browser process, Signal (10% chance of corrupting message history each time), or a large image you were working on in Gimp...

Firefox has gotten very good at safely handling allocation failures, so instead of crashing, it keeps your memory snugly at 100% full and renders your system entirely unusable until the kernel figures out (2-20 minutes later) that it really cannot allocate a single kilobyte anymore and decides to run the OOM killer.

but also

It's not cheap? Why should everyone upgrade to 32 GB of RAM to multitask when all the text, images, and data structures in open programs take only a few megabytes each? How can you not get hung up over the senseless exploding memory usage?


That's not how it works. Process killing is one of the last ways memory is recovered; Chrome starts donating memory back well before that happens. Try compiling something and watch how Chrome's RAM usage changes when you do. Most of your tabs will be discarded.


I've already described the browser's behavior above. That your browser works differently is good for you; I'm not using a Google product as my main browser. There are also other downsides that this behavior does not fix, mentioned in sibling comments.


This is not a Chrome problem but an OS problem. Android does a much better job here by comparison. Desktop Linux is simply not well optimized for low-RAM users.


"your tabs will be discarded" is not an excuse for using 2.4GB for a tab


I dunno, I have 96 GB of RAM and I still get the whole "system dies due to resource exhaustion" thing. Yesterday I managed to somehow crash DWM through handle exhaustion. Man, people really waste resources...


Reclaiming memory is not free.

It's better not to use 2.4 GB of RAM in the first place. Imagine if LinkedIn weren't so hostile to users and instead actually cared about user experience.


Well, a few GB here and a few GB there, soon you’re talking about real RAM issues.

The other day Safari was using over 50GB with only a few tabs open.

Maybe we should also acknowledge that some companies simply have no compassion for users (and their desires or needs) and see them as hurdles in the way of taking their money.


It's memory that the kernel cannot use to cache other applications' files.


This isn't true for OSes like Windows, where the kernel is informed that the memory is discardable and can prioritize discarding it as necessary. It's a shame that Linux doesn't have something similar.


Linux supports it too through madvise():

       MADV_FREE (since Linux 4.5)
              The application no longer requires the pages in the range
              specified by addr and size.  The kernel can thus free these
              pages, but the freeing could be delayed until memory
              pressure occurs.
and

       MADV_DONTNEED
              Do not expect access in the near future.  (For the time
              being, the application is finished with the given range, so
              the kernel can free resources associated with it.)

              After a successful MADV_DONTNEED operation, the semantics
              of memory access in the specified region are changed:
              subsequent accesses of pages in the range will succeed, but
              will result in either repopulating the memory contents from
              the up-to-date contents of the underlying mapped file (for
              shared file mappings, shared anonymous mappings, and shmem-
              based techniques such as System V shared memory segments)
              or zero-fill-on-demand pages for anonymous private
              mappings.
Does Chrome use it, though?
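Whatever Chrome does internally, the zero-fill semantics for private anonymous mappings quoted above are easy to try from Python's mmap module (Linux, Python 3.8+ for mmap.madvise; this is a standalone sketch, not Chrome's allocator):

```python
import mmap

length = 4 * mmap.PAGESIZE

# Private anonymous mapping, standing in for a discardable cache arena.
m = mmap.mmap(-1, length, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
m[:] = b"x" * length            # dirty the pages
assert m[0:1] == b"x"

# Hint to the kernel that the contents are no longer needed.
m.madvise(mmap.MADV_DONTNEED)

# Per the man page, private anonymous pages now read back zero-filled.
assert m[0:1] == b"\x00"
print("ok")
```

Note the shared-mapping case behaves differently: as the man page says, shmem-backed pages are repopulated from the underlying object rather than zeroed.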


...still not an excuse for using 2.4GB for a tab.


I want my compiler and language-server IDE to do that, not LinkedIn.


Um.

The websites are jam-packed with trackers and ads. I am utterly concerned about Chrome's memory usage because it's passively allowing all of this to occur.

How about letting me automatically blacklist sites that use too much memory? All that usage means is that those website owners FUCKING HATE THE REST OF US.

Any solution to this epic fucking problem would be wonderful.


uBlock Origin on Firefox or Brave will block most of the tracker bloat that causes the RAM spikes. It's not a perfect fix, but it will cut out a significant chunk of it. Tab Wrangler also helps by suspending inactive tabs automatically. You should try out both.


Step 0- don't use a browser created by an ad company


Nope - I have to close Chrome in order to compile.


I use a Mac, which has really good memory management, but still, seeing 10 GB of my SSD clogged up with useless crap just because modern development systems are complete and utter crap feels bad.

March is "MARCHintosh" month for retro Macintosh computing, so for fun I wrote a networked chat client. It has some creature-comfort features like loading chat history from the server, mentions, user info, background notifications, and multiple sessions. It runs in 128 kilobytes of RAM.

Automatic garbage-collected memory management was a mistake. The memory leaks we had when people forgot to free memory were nothing compared to the memory leaks we have now that people don't even consider what memory is.


Does the KV cache really grow to use more memory than the model weights? The reduction in overall RAM relies on the KV cache being a substantial proportion of the memory usage, but with very large models I can't see how that holds true.


For long context, yes this is at least plausible. And the latest models are reaching context lengths of 1M tokens or perhaps more.


This series of graphs https://www.bmj.com/content/bmj/387/bmj-2024-082194/F1.large... shows that whilst those two professions are at the bottom of the distribution, they are not particular outliers, and cherry-picking of those professions has occurred. The statistical analysis should have adjusted for picking the best two of the 443 occupations in the study; that would likely leave very little statistical significance.
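A rough illustration of why that adjustment matters (the 0.001 p-value is hypothetical, and Bonferroni is the crudest possible correction):

```python
n_occupations = 443
p_nominal = 0.001   # hypothetical unadjusted p for the "best" occupation

# Bonferroni: scale by the number of implicit comparisons.
p_bonferroni = min(1.0, p_nominal * n_occupations)

# Chance that at least one of 443 occupations looks this extreme by luck.
p_family = 1 - (1 - p_nominal) ** n_occupations

print(round(p_bonferroni, 3))   # 0.443
print(round(p_family, 3))       # 0.358
```

Either way, a result that looks striking in isolation is unremarkable once you account for having searched 443 occupations for it.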


Total receipts were over 11,000, so more like 100 hours, or around $2,000, a similar price to the LLM.


This is good work. I wish branch predictors were better reverse engineered so CPU simulation could be improved. It would be much better to be able to accurately predict how software will perform on other processors in simulation, rather than having to go out and buy hardware to test on (which is still the way we have to do things in 2026).


The memory overhead is fairly significant: it uses between 1.5 and 3 times the space of the data stored.

