To consumers. You can sell to businesses.

They don't seem to be trying to stop me from placing an order as a consumer.

OP, the Badger Badger link leads to a wiki page about the dead Internet theory.

Here's the correct link for all to enjoy: https://www.youtube.com/watch?v=EIyixC9NsLI

It used to be just a Flash animation on Weebl's Stuff, but it's preserved on YouTube by the OG :)


Thank you, fixed!

There is precedent for this kind of trickery.

For example, Honeywell acquired Garrett AiResearch, a well-known manufacturer of turbochargers for combustion engines, through a series of mergers.

Later on, it loaded the company up with debt (over $1.5 billion, mostly asbestos-related indemnity obligations from other parts of the business) before spinning it out as an independent entity again. Two years later, Garrett filed for bankruptcy, claiming it was succumbing to the unsustainable debt burden placed upon it by its former owner.


So you mean... marrying someone, transferring all the personal debt to the other person, then divorcing so that I have no responsibility whatsoever? Not even an obligation to settle the debt, as if it just disappeared through an expired relationship?

Is there a legal term for this kind of restructuring of debt?

I vibe asked it on Kagi Assistant and it said the closest relevant result is https://en.wikipedia.org/wiki/Texas_two-step_bankruptcy

To me it seems more like leveraged buyouts + debt restructuring all at once. I'd rather coin the term "debt offloading", which could also cover the tactics Enron used about 25 years ago.


As soon as you mention the Texas Two-Step, you'll get a chorus of people who argue that it's a good thing, that it's really the defendants doing the plaintiffs a favor and making it easier and cheaper to sue them, and that the fact that each company that has done this has snuck out of between 90 and 99% of its obligations and washed its hands of them is a complete coincidence, and that just wait, you'll see, the next time a company does it (using the same law firm that has handled all of these) they'll really, truly cover all of their obligations, perhaps even more, and at less cost and effort to their beleaguered plaintiffs.

In personal terms there is the "deathbed divorce", a uniquely American construct where couples, often elderly, get divorced while one is in hospital or hospice in an attempt not to saddle their soon-to-be-widowed partner with six digits of medical debt.

In another uniquely American construct, that won't stop hospitals from calling up all the relatives, either implying that they are now responsible for those debts, or that it would be a mark of respect and honor if the relative were willing to settle them anyway, even though they are not obligated to.


Scamming the state through private debt issuance.

"Private Equity"

Welcome to late stage capitalism

This is early 1990s capitalism.

1980s even. It takes a while to siphon off all the value built up by multiple generations.

The hidden truth about economics in my lifetime.

I believe this is what they call the 'Texas Two-Step'

Sure it's not a Kansas City shuffle?

Pretty sure you're just making a joke, but the specific legislation exploited here is only present in Texas and Delaware, IIRC. Not a lawyer though.

"Perplexity wants to buy Google Chrome" vibes.

What is the point of hosting a GitHub repo[0] with nothing in it but some links to your domain? There's no code, no license, no nothing.

[0] https://github.com/ctxrs/ctx


It is for issue reporting, similar to Claude Code.


> Open-source Agentic Development Environment (ADE) for teams using multiple coding agents.

Is it open source?


Ah I missed that, my mistake. It is fixed now. To confirm, it is not open source.


Any particular reason not to open source it, other than you plan on selling it as a PaaS?


The plan is for the app itself to remain free, no account, fully local, bring-your-own agents/endpoints/tokens, etc.

We will add paid Team/Enterprise options when we exit our beta, which will include features for policy enforcement, collaboration, etc.


It's wild that none of these are set by default.

I know 90% of people I've worked with will never know these options exist.
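
For anyone who wants to turn this on, here's a minimal sketch of what opting in looks like with pnpm. I'm assuming pnpm's minimumReleaseAge setting (added around pnpm 10.16, value in minutes); treat the exact key name and units as an assumption and check your own package manager's docs, since others have been adding equivalents:

    # Hedged sketch: append a minimum release age to pnpm-workspace.yaml.
    # Assumes pnpm >= 10.16; the value is in minutes, so 10080 = only
    # install versions that have been on the registry for at least 7 days.
    cat >> pnpm-workspace.yaml <<'EOF'
    minimumReleaseAge: 10080
    EOF

As the replies below point out, if everyone delays, discovery just shifts later too, so treat it as a damage limiter rather than a fix.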


That would likely mean the same number of people get the vulnerability, just 7 days later.


The compromised packages were removed from the registry within hours.


Because everyone got updates immediately. If the default were 7 days, almost no one would get updates immediately; they'd get them after 7 days, and people would only find out about a compromise after 7 days. Unless there is some poor soul checking packages as they are published who can alert the registry before the 7 days pass, though I imagine very few do that, and hence a dedicated attacker could influence them not to look too hard.


If I remember correctly, in all the recent cases it was picked up by automated scanning tools in a few hours, not because someone updated the dependency, checked the code and found the issue.

So it looks like even if no one actually updates, the vast majority of the cases will be caught by automated tools. You just need to give them a bit of time.


If everyone or a majority of people sets these options, then I think issues will simply be discovered later. So if other people run into them first, better for us, because then the issues have a chance of being fixed once our acceptable package/version age is reached.


I bought 192GB (4x 48GB) of DDR5-6400 for 299 euro in September but returned it because I couldn't get 4 DIMMs to run at decent speeds in the system.

Six or so weeks after I returned it, the kit was listed at 1499.


Yeah, the only way to run 4 sticks of DDR5 decently is with Intel. It's a bit of a shame that you can't cram in enough RAM to run big models.

The most I could get running on 10GB VRAM + 96GB RAM was a REAP'd + quantized version of MiniMax-M2.5.


Got it running at 4800MT/s and with literally 30-minute boot times on an AM5 machine. The 30-minute boot time could be worked around by enabling the (off-by-default) Memory Context Restore option in the BIOS, but it really made me think something was broken, and it wasn't until I found other people talking about 30-minute boot times that I stopped debugging and just let it sit for an eternity.

It's so bad. I don't get why they sell AM5 motherboards with 4 RAM slots.

At least that system has been running well for like two years. But had I known that the situation was so much more dire than with DDR4, I would've just gotten the same amount of RAM in two sticks rather than four.


I'm in the same situation! My machine takes 2-5 minutes to POST every few reboots; it seems random. The messed-up part is that the marketing material says these things can handle 256GB of RAM or whatever absurd number, f me for thinking 128GB should be no problem. Honestly, this whole thing has soured me on AMD. Yeah, they have bigger numbers than Intel, but at what cost, stability?


Check that you have MCR (Memory Context Restore) enabled; otherwise you retrain the RAM way more often than you need to (every boot).


Your machine takes 30 minutes to boot because of the RAM? Or it takes 30 minutes to load a model?


It's the RAM. It needs to be "trained", which takes some time, but for some reason these boards seem to randomly forget their training, requiring it to happen again.


I've never had memory training be forgotten on my AM4 machine or my LPDDR5-based laptops and NUCs. Is this a new thing with AM5 or something? Or just a certain brand of BIOSes?


It's a common issue on consumer boards with DDR5 and more than two DIMMs installed.

Doesn’t affect soldered memory or lower speed memory (like DDR4). Many memory controllers fail to achieve good speeds and timings at all on 4 DDR5 DIMMs, and fall back to running DDR5 at 3600MHz instead.


OK, so the user selects too high a speed, the controller tries for ages and fails, but doesn't save the result since it's overridden by the user in the BIOS?

I distinctly recall thinking my LPDDR5 NUCs were broken since they seemingly didn't boot the first time, until I recalled the training stuff. It took up to 15 minutes on one of them. But neither has had any issues since, hence my question.


Wonder if DDR5 ECC RAM has the same problem? I mean the real ECC stuff, not the "on-chip only" ECC that all DDR5 has.


The controllers which support ECC are usually a lot better and able to handle more channels. They also typically require active cooling.


Interesting. Didn't know about the active cooling requirement.

That being said, it's not hard to get a hold of a reasonably modern DDR5 EPYC board. Something like this: https://www.phoronix.com/review/gigabyte-mz33-ar1

Expensive though.


Huh, it's been a decade since I built a PC, what's changed?


DDR5 is much, much more fickle than DDR4 and earlier standards. I think it's primarily due to pushing clock speeds (6000 MT/s would be insanely fast for DDR4, but kinda slow for DDR5).

Memory training has always been a thing: during boot, your PC runs tests to work out what slight signal adjustments it needs to make to suit the specific characteristics of your particular hardware. With DDR4 and earlier, that was really fast because the timings were relatively loose. With DDR5, it can be really slow because the timings are so tight.

That's my best understanding of it at least.


My guess is bigger numbers, higher voltages, tighter timings.


It's an AMD thing


You need to enable MCR (which trains the memory once and caches the result for (IIRC) 30 days); otherwise, yeah, booting is horribly slow. Even the 64GB I have can take several minutes, but with MCR it boots basically instantly.

Some motherboards have it off by default.


Memory training seems to be getting faster with each BIOS update. In 2024, when I upgraded to AM5, 64GB memory training took like 15 minutes. Now the same setup takes about a minute when it needs to retrain, then it's near instant with MCR (Windows 11 takes significantly longer to load than the POST process does).


From my comment:

> The 30-minute boot time could be worked around by enabling the (off-by-default) Memory Context Restore option in the BIOS


I'm running 128GB on a 9550X now with 4x 32GB sticks and it's terrible. It's unstable, POST time is about 2 minutes (not exaggerating), and I'm stuck at a lower speed. I'm considering just taking 2 of the sticks out, working with 64GB, and increasing my swap partition. The NVMe drive is fast, at least.

This is my first time off Intel and I have to say I don't understand the hype.


> It's unstable, POST time is about 2 minutes (not exaggerating)

The long POST times must mean it's retraining the memory each time, which is not normal. Just in case you haven't tried it yet, I'd start by reseating the sticks; I've had weird issues with marginally seated RAM before.

Also, you definitely have to go much slower with 4 sticks compared to two, so lower the speed as much as you can. If that doesn't help, I'd verify them in pairs.

If they work in pairs but not in quad at the slowest speed, something is surely wrong.

Once you get them working in quad, you can start bumping up the speed; you might need a voltage boost as well.


What DDR5 speed are you running? 6000 is technically an overclock; AMD only guarantees being able to run at something like 4800 or 5200.

You may need to bump up voltages slightly for your CPU's IMC (I needed to on my Ryzen 8700F to run 6000 stable). It's CPU-sample dependent.

Also, as another commenter pointed out, 4 sticks will typically achieve lower stable clocks.


I just yanked two of the sticks out. Who knows, maybe I'll sell them. 64GB is sufficient most of the time anyway, and now I'm running at 4800 instead of 3600 and the boot is much faster. Thanks, AMD!


Threadripper is a good alternative. There's no point having a lot of dual-channel RAM for LLMs; it's too slow.


I had the same issue with Intel. It's not guaranteed there either.


It's been a long time since I came across Nim. I thought it was really interesting about 12 years ago. What made you land on Nim instead of any of the more obvious alternatives?


I was looking for something that allows easy access to direct memory, with a syntax that's a little easier to explain than C. Frankly, Zig was not actually a viable option given that syntax requirement, but I still wanted to explore it.

Nim really is clean and simple.


Yeah, for a language that claims to be a better modern alternative to C, Zig's verbose syntax is really an eyesore compared to the very same codebase written in C...

I immediately lost any interest in it.


Nim is really incredible. The only things I cannot get over are the fact that it goes the inheritance route in a way I find hacky and fragile (no more than one level, really?) and that traits are not a core feature. If Nim's primary approach were composition + Rust-style traits (that work at compile time and runtime), I'd have a hard time wanting to use anything else.


How about Odin?


Yeah, Nim is great for that... much easier to explain to others than C or Zig, especially for math code.


I haven't bought an 8GB laptop since probably 2012, when I got a Sony Vaio that they upgraded to 12GB for free because of a delivery delay. I wouldn't buy an 8GB device in 2026, but this device isn't targeted at either of us.

For a lot of people who are looking at sub-$800 laptops, the option to get an Apple will probably be enough to convince them. And apart from the limited memory, it really isn't a bad buy.

I also fully expect most budget devices to ship with 8GB of memory until the end of the DDR5 crisis anyway.


You might be surprised; with NVMe swap, 8GB is surprisingly capable. ~1.6GB/s read/write.
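
If you want to see how hard such a machine actually leans on swap, macOS reports it directly (real command; the numbers will obviously vary per machine):

    # Total, used, and free swap, plus whether the swapfile is encrypted
    sysctl vm.swapusage

Linux users get the same picture from free -h.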


Apple has a great zram implementation as well.


Flash has finite write endurance, and NVMe swap can burn through it pretty quickly. Which isn't that bad, because if it wears out you can replace it... unless the drive is soldered.


Mac SSDs are expected to last 8-10 years, even with high use. Though Apple doesn't publish these values specifically, it's possible to start to extrapolate from the SMART data when it starts showing errors.

A good SSD ought to be able to cope with ~600TBW. My ~4.5-year-old MBP gives the following:

    smartctl --all /dev/disk0
    ...
    Data Units Read:                    1,134,526,088 [580.8 TB]
    Data Units Written:                 154,244,108 [78.7 TB]
    ...
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      0
    ...
I'm sure an 8GB RAM machine would use more swap than my 16GB one, but probably not much more, given that mine has had heavy use for development and most people don't use their laptops for anything like that. Even so, that would still put it well within the expectation of 8-10 years, and that's for a $600 laptop.


> I'm sure an 8GB RAM machine would use more swap than my 16GB one, but probably not much more

It's non-linear. If you have a 17GB working set size, a 16GB machine is actively using 1GB of swap, but the 8GB machine is using 9GB. If you have a 14GB working set size, the 16GB machine doesn't need to thrash at all, but the 8GB machine is still doing 6GB.

Meanwhile "SSDs are fast" is the thing that screws you here. Once your actual working set (not just some data in memory the OS can swap out once and leave in swap) exceeds the size of physical memory, the machine has to swap it in and back out continuously. Which you might not notice when the SSD is fast and silent, but now the fact that the SSD will write at 2GB/sec means you can burn through that entire 600TBW in just over three days, and faster drives are even worse.

On top of that, the write endurance is proportional to the size of the drive. 600TBW is pretty typical for the better consumer 1TB drives, but a smaller drive gets proportionally less. And then the machines with less RAM are typically also paired with smaller drives.


Most people using these things aren't going to be using more than 8GB on an ongoing basis, and if they do, they'll not be swapping it like mad as you suggest, because it's only on application-switch that it will matter.

As for 600TB in just over 3 days, I want some of what you're smoking.


> Most people using these things aren't going to be using more than 8GB on an ongoing basis, and if they do, they'll not be swapping it like mad as you suggest, because it's only on application-switch that it will matter.

To begin with, a single application can pretty easily use more than 8GB by itself these days.

But suppose you are using multiple applications at once. If one of them actually has a large working set size -- rendering, AI, code compiling, etc. -- and then you run it in the background because it takes a long time (and especially takes a long time when you're swapping), its working set size is stuck in physical memory because it's actively using it even in the background and if it got swapped out it would just have to be swapped right back in again. If that takes 6GB, you now only have 2GB for your OS and whatever application you're running in the foreground. And if it takes 10GB then it doesn't matter if you're even running anything else.

Now, does that mean that everybody is doing this? Of course not. But if that is what you're doing, it's not great that you may not even notice that it's happening and then you end up with a worn out drive which is soldered on for no legitimate reason.

> As for 600TB in just over 3 days, I want some of what you're smoking.

2GB/s is 7200GB/hour, which is 172.8TB/day. It's the worst-case scenario if you max out the drive.

In practice it might get hot and start thermally limiting before then, or be doing both reads and writes and then not be able to sustain that level of write performance, but "about a week" is hardly much better.
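
For anyone checking the arithmetic, here's that worst case worked out (illustrative only; real workloads interleave reads and idle time):

    # Days to exhaust a 600 TBW rating at a sustained 2 GB/s of writes
    python3 -c 'print(600e12 / (2e9 * 86400), "days")'   # ~3.47 days

So it's a ceiling on how fast wear can accumulate, not a prediction of typical use.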


Yeah dude, "Rendering, AI, code compiling,..." is not the target market for this device. It's just not.

> 2GB/s is 7200GB/hour, which is 172.8TB/day. It's the worst-case scenario if you max out the drive.

Right, which is completely and utterly unrealistic. As I said, I want what you're smoking.

I have an 8GB M1 mini lying around somewhere (I just moved country) which was my kid's computer for several years before he got an MBP this Xmas. He had the sort of load that would be more typical: web browsing, playing games, writing the occasional thing in Pages, streaming video, etc. If I can find it (I was planning on making it the machine that manages my CNC), I'll look at the SMART output from that. I'm willing to bet it's not going to look much different from the above...


> Yeah dude, "Rendering, AI, code compiling,..." is not the target market for this device. It's just not.

None of the people who want to do those things but can't afford a more expensive machine will ever attempt to do them on the machine they can actually afford then, is that right?

> Right, which is completely and utterly unrealistic.

"Unrealistic" is something that doesn't happen. This is something that happens if you use that machine in a particular way, and there are many people who use machines in that way.

> He had the sort of load that would be more typical - web-browsing, playing games, writing the occasional thing in Pages, streaming video, etc. etc.

Then you would have a sample size of one determined by all kinds of arbitrary factors like whether any of the games had a large enough working set to make it swap, how many hours were spent playing that game instead of another one etc.

The problem is not that it always happens. The problem is that it can happen, and then they needlessly screw you by soldering the drive.


> The problem is not that it always happens. The problem is that it can happen

Ah. So, FUD, then. Gotcha.

“This ridiculously unlikely scenario is something I’m going to hype up and complain about because I don’t like some aspects of this company’s business model”.

600 TBW in 3 days. Pull the other one, it’s got bells on.


I’ve never had an SSD crap out because of read/write cycle exhaustion, and I’ve been using SSDs almost exclusively for over a dozen years. I’ve had plenty of spinning rust drives croak, though. You don’t solder those in, so it’s not really a fair comparison.

I did have one of those dodgy SanDisks, but that was a manufacturing defect.


But how much RAM did you have?

If you have 24GB of RAM and a 12GB working set then it's fine. Likewise if you have 8GB of RAM and a 4GB working set. But 8GB of RAM and a 12GB working set, not the same thing.


Most flash memory will happily accept writes long after passing the TBW 'limit'. If write endurance were that much of a problem, I'd expect the second-hand market to be saturated with 8GB M1 MacBooks with dead SSDs by now. Since that's obviously not the case, I think it's not that bad.


> Most flash memory will happily accept writes long after passing the TBW 'limit'.

That's the problem, isn't it? It does the write, it will read back fine right now, but the flash is worn out and then when you try to read back the data in six months, it's corrupt.

> If write endurance were that much of a problem, I'd expect the second-hand market to be saturated with 8GB M1 MacBooks with dead SSDs by now.

That's assuming it's sufficiently obvious to the typical buyer. You buy the machine with a fresh OS install and only newly written data, everything seems fine. Your 30 day warranty/return period expires, still fine. Then it starts acting weird.


> That's the problem, isn't it? It does the write, it will read back fine right now, but the flash is worn out and then when you try to read back the data in six months, it's corrupt.

SSD firmware does patrol reads and periodically rewrites data blocks. It also does error correction. Cold storage is a known issue with any SSD, but I don't have any insight into how bad this problem is in reality. Of course it will wear out eventually, but so will the rest of the system components. There's nothing to be gained by making SSDs that last 30 years when the other components fail in 15.

> Then it starts acting weird.

Is that speculation or do you have any facts to back that up?


The slowest DDR4 (DDR4-1600) is capable of ~12.8GB/s per channel.

Nowhere near the same performance.
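
The arithmetic, in case anyone wants to check it (a standard 64-bit channel moves 8 bytes per transfer):

    # DDR4-1600, the slowest JEDEC speed grade: 1600 MT/s x 8 bytes per channel
    python3 -c 'print(1600e6 * 8 / 1e9, "GB/s")'   # 12.8 GB/s

Against the ~1.6GB/s NVMe figure quoted upthread, that's roughly an 8x gap per channel.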


The ratio between RAM speed and SSD speed is unimportant. Useful swap just needs a fast drive.


I used to run Linux (JLime Linux) and NetBSD on those. I preferred the bigger NEC MobilePro competitors, though. Still, I spent so much time on those Jornadas in college.


At some point, some Kristoff guy was developing flash ROM boards so that JLime Linux on the HP Jornada could have actual suspend-to-RAM.

On one hand, it would’ve been cool.

On the other hand, at the time netbooks were becoming common and were essentially taking the spot of those kinds of devices (Jornada 728, etc.).


Kristoffer Ericson was the driving force behind JLime Linux.

Along with OpenZaurus, these early hobbyist efforts to run Linux on embedded devices formed the basis of what became OpenEmbedded, which now underpins the Yocto Project, still one of the most commonly used embedded Linux development platforms.

I miss those days.


Same. I was on macOS for work for about 3 years. It never gelled with me.

I was on an M2 MacBook Pro with Asahi and it was great. It's really hard to fault Apple's hardware for most use cases.

I'm currently on a Strix Halo laptop (HP ZBook), which is about as expensive, and the hardware is great, but power efficiency and build quality lag leagues behind Apple's. A 4000 euro laptop still feels like a cheap toy.


One of us! :)

Currently in a brief macOS phase before I can be issued my Linux laptop at work. It's so clunky. A major annoyance for me right now is the lack of MST multi-screen support over USB, which means my nice daisy-chained home setup is fine on my near-decade-old Dell but doesn't work at all on the fancy MacBook. They have the hardware to support it; they just don't.

Generally the hardware with Apple is amazing but I'll take the hit on that and things like battery life just to get an OS that feels like it's on my side.

I'd maybe consider Asahi for home use but I'd be wary of it for work. Perhaps in a few years.

