Hacker News | saghm's comments

Building Ubuntu does not produce identical binaries to Debian, so no, that's not what they're asking for

I remember in the original there was something you could type when the game was playing the starting sequence (I think it was "hidden test"?) to be able to move the ball with your cursor. I'm curious if this works in this version so I'll probably try it out later when I'm at a computer if no one else has.

edit: It does! I installed the AUR version of it that was linked in the repo README and tested it out, and typing "hidden test" during the game startup sequence lets me drag the ball


> If you don't trust Apple, why the hell are you buying their computers at all?

This is the exact same false dichotomy they mentioned; it's perfectly reasonable to have a set of trusted software vendors that includes Apple but also some others, while the only choices that they support are either just Apple or literally anyone in the universe. You're conflating "trusting Apple" with "trusting no one but Apple" to make it sound like the opposite of the latter is somehow also contradictory with the former.

Claiming it's "not worth the effort" is a lot easier when you've already muddied the waters like this.


Usage of the internet to sow dissent among citizens of a government by a power that's at odds with said government, presumably

You think that a child being bullied should have to suck it up until they're forced to commit violence against another child? Maybe beating up a bully is something that happens when adults abdicate all responsibility, but it's insane to claim that's better than "the adults should just do their fucking jobs".

> The problem is that most schools don't do that, would likely argue they don't have time to do that, and also probably spend a fair amount of resources and time on relatively ineffective bullying prevention.

Most schools just care more about how their sports teams are doing, so they'd have no interest in a punishment that involved removing a player who otherwise was good enough to make the cut. Look at how many people looked the other way for stuff way worse than bullying at Penn State.


Ships can sail in both directions

Yeah, and then the eldest twin is born on the 31st and the youngest on the 2nd, same as before.

There are a lot more straightforward implementations of "shuffle this list of things" than "craft an RNG that doesn't repeat so that you can get the index of each song one at a time".

I've never understood why so many people like to cite that as an example of "not wanting true randomness", when it seems like people do want true randomness, just as an ordered set rather than "pick a random song from scratch each time".


> No bounds check. free_count is incremented before the write, and the write uses the pre-increment value as the index. When free_count == num_niovs at entry, the write goes to freelist[num_niovs], one slot past the end.

"No way to prevent this", Says Only Language Where This Regularly Happens


> "No way to prevent this", Says Only Language Where This Regularly Happens

   clang -fbounds-safety ...
also see lib0xc etc.: https://news.ycombinator.com/item?id=47978834

NOTE: This is a design document and the feature is not available for users yet.

https://clang.llvm.org/docs/BoundsSafety.html


It has been available in Apple's version[1] for several years, and it appears to be migrating into upstream as well.

On macOS you can try it with:

    clang -Xclang -fbounds-safety program.c
Microsoft also seems to be using it (see above link regarding lib0xc).

[1] https://github.com/swiftlang/llvm-project


Obviously the way to prevent this is by bounds checking, which is literally in the `770594e` patch. It's just a bug and they happen routinely in all languages. Since this is doing pointer arithmetic, it could just as easily happen in unsafe Rust, for example.

Like they said, "no way to prevent this" (kind of bug from happening again).
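To make that concrete (an illustrative sketch, not the kernel code — all names here are hypothetical): the same increment-then-write pattern in unsafe Rust compiles happily, while the safe version turns the double-return into an error instead of an out-of-bounds write.

```rust
// Hypothetical freelist mirroring the kernel pattern; names are illustrative.
struct Freelist {
    slots: Vec<u32>,
    free_count: usize,
}

impl Freelist {
    // Unsafe version: like the C code, nothing stops free_count from
    // running past slots.len() if an element is returned twice.
    unsafe fn return_unchecked(&mut self, idx: u32) {
        let i = self.free_count;
        self.free_count += 1;
        *self.slots.as_mut_ptr().add(i) = idx; // no bounds check
    }

    // Safe version: the same double-return bug becomes an Err
    // instead of a silent write one slot past the end.
    fn return_checked(&mut self, idx: u32) -> Result<(), &'static str> {
        let i = self.free_count;
        *self.slots.get_mut(i).ok_or("freelist overflow")? = idx;
        self.free_count += 1;
        Ok(())
    }
}
```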

Static analysis and other tools can find this, but they're expensive; I wonder what the kernel team has access to.

If static analysis could actually find these issues with a reasonable false positive rate, the companies behind them would be running them on Linux to get the publicity of having found the issues like all the AI companies are doing now. Imo the good static analysis heuristics are already built into compilers or in open source linters.

The cheap, low-hanging "fruit" lint rules have been added to today's C/C++ compilers. But these rules can be fragile, depending on the level at which the static analysis occurs: source-code-level textual pattern matching versus use of an AST/parse tree.

Possible problems within a function should be discoverable.

This particular bug would be hard to discover for a typical linter unless they knew/remembered that there are two execution paths for cleanup of a given element.


If not static analysis, what would AI tools be considered? They're operating off the same source code.

Also, nice Onion reference by OP.


"static analysis" is usually deterministic rules you can e.g. put in CI. AI is also somewhat dynamic in that it can execute commands to try stuff out. The best AI vuln finding harnesses work that way, by essentially putting the AI inside of a fuzzer-like environment and telling it to produce a crash.

It's a reference to Xe Iaso's blog (e.g. https://xeiaso.net/shitposts/no-way-to-prevent-this/CVE-2025...), which is itself a reference to The Onion.

It's possible I had seen that blog post and not remembered! I was intending to reference the Onion though (and even googled to make sure I had the wording right), but seeing someone else make the same joke and forgetting is certainly something I would do

Coverity scans several open source projects for free. See https://scan.coverity.com/faq and https://scan.coverity.com/projects

See https://scan.coverity.com/projects/linux for the Linux-specific scan results; you need to create an account to view the reported defects.

The past couple of weeks aren't a good look for them, with the disclosures of defects found in Linux and Firefox.


Linus himself wrote a static analyzer. https://en.wikipedia.org/wiki/Sparse

There are other free ones, I don't know if they're run as a matter of course.


Technically, the kernel team is sufficiently competent to design and build bespoke tools for themselves. It's probably a question of risk assessment and priorities.

Sure, but with unsafe Rust you have a very clear marking for the section of code that requires additional care and attention. It is also customary to include a "SAFETY" comment outlining why using unsafe is OK there.

You actually kind of don't, I use like a zillion crates which have unsafe Rust in them and it's not like I'm sitting here reading every single line of their code. I like Rust for various reasons, but its memory safety is (imo) overstated, especially when doing low-level stuff.

Almost all Rust (95%) is safe Rust. You can opt out of array bounds checks with unsafe { array.get_unchecked(idx) } instead of just typing array[idx]. But I can't remember the last time I saw anyone actually do that in the wild. It's not common practice, even in most low-level code.

Rust is bounds checked by default. C is not. Defaults matter because, without a convincing reason, most people program in the default way.


But one would have to explicitly choose to use unsafe Rust for this instead of ordinary safe Rust. And safe Rust has no particular difficulty writing to slots in an array or slice or vector specified by their index.

except nearly everyone uses unsafe rust

No they really don't. 95% of rust is safe rust[1].

Also unsafe rust doesn't remove bounds checks. arr[idx] is bounds checked in every context.

You can opt out of array bounds checking by writing unsafe { arr.get_unchecked(idx) }. But that's incredibly rare in practice.

[1] https://cs.stanford.edu/~aozdemir/blog/unsafe-rust-syntax/
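A minimal sketch of that default (illustrative, not from the linked survey): safe indexing either panics or returns `None` past the end, and nothing writes out of bounds.

```rust
fn main() {
    let v = vec![10u32, 20, 30];

    // Safe indexing is bounds checked: v[3] would panic, not scribble on memory.
    assert_eq!(v[2], 30);

    // The non-panicking safe API just returns None past the end.
    assert_eq!(v.get(2), Some(&30));
    assert_eq!(v.get(3), None);

    // Opting out must be spelled out explicitly:
    // unsafe { v.get_unchecked(3) } — which is the rare case in practice.
}
```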


> 95% of rust is safe rust.

Based on the raw number of assorted crates, which has no bearing on kernel code. The more relevant question is, can a performant, cross-architecture, kernel ring-buffer be written in safe Rust?


Hubris, an embedded RTOS-like used in production by Oxide, has ~4% unsafe code in the kernel last I checked. There’s a ring buffer implementation that has one unsafe, for unchecked indexing: https://github.com/oxidecomputer/hubris/blob/master/lib/ring... (this of course does not mean that it is the one ring buffer to rule them all, but it’s to demonstrate that yes, it is at least possible to have one with minimum unsafe.)

It’s always a way lower number than folks assume. Even in spaces that have higher than average usage.


I've always had the impression that people who haven't actually tried to write low-level code in Rust, to find out where the boundary of needing unsafe actually lies, tend not to realize how far you can push things and build safe abstractions on top. Almost every time I've had to wrap an unsafe API, I've been able to find a way to keep at least one of the invariants documented as needed for safety from propagating upwards, and there have been plenty of times when the specific circumstances of my use case let me eliminate it entirely.

The entirety of safe Rust is built upon unsafe Rust that's abstracted like this. The fact that you sometimes need unsafe isn't a mark against Rust, but literally the entire premise of the language and the exact problem it's designed to solve.
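As an illustrative sketch (all names here are made up), the pattern usually looks like checking an invariant once at a safe boundary, so the unsafe access inside can never be misused by callers:

```rust
/// Safe wrapper that establishes its invariant up front so the
/// unchecked access inside can't be reached with a bad index.
struct Checked<'a> {
    data: &'a [u32],
    idx: usize, // invariant: idx < data.len(), enforced in `new`
}

impl<'a> Checked<'a> {
    // The only way to construct the type: the bound is verified once here.
    fn new(data: &'a [u32], idx: usize) -> Option<Self> {
        (idx < data.len()).then_some(Checked { data, idx })
    }

    fn get(&self) -> u32 {
        // SAFETY: `new` guaranteed idx < data.len(), and both fields
        // are immutable, so the invariant still holds here.
        unsafe { *self.data.get_unchecked(self.idx) }
    }
}
```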


I doubt it, but you can probably get pretty close.

This is something a lot of people misunderstand about unsafe rust. The safe / unsafe distinction isn't at the crate level. You don't say "this entire module opts out of safety checks". Unsafe is a granular thing. The unsafe keyword doesn't turn off the borrow checker. It just lets you dereference pointers (and do a few other tricks).

Systems code written in rust often has a few unsafe functions which interact with the actual hardware. But all the high level logic - which is usually most of the code by volume - can be written using safe, higher level abstractions.

"Can all of io_uring be written in safe rust?" - probably not, no. But could you write the vast majority of io_uring in safe rust? Almost certainly. This bug is a great example. In this case, the problematic function was this one:

    static void io_zcrx_return_niov_freelist(struct net_iov *niov)
    {
        struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);

        spin_lock_bh(&area->freelist_lock);
        area->freelist[area->free_count++] = net_iov_idx(niov);
        spin_unlock_bh(&area->freelist_lock);
    }
At a glance, this function absolutely could have been written in safe rust. And even if it was unsafe, array lookups in rust are still bounds checked.
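A rough sketch of what that could look like in safe Rust — the type and function names are hypothetical, with a Mutex standing in for the kernel spinlock:

```rust
use std::sync::Mutex;

// Hypothetical safe-Rust analogue of the freelist return path.
struct ZcrxArea {
    freelist: Mutex<FreelistInner>,
}

struct FreelistInner {
    slots: Vec<u32>,
    free_count: usize,
}

fn return_niov_freelist(area: &ZcrxArea, niov_idx: u32) {
    // Lock replaces spin_lock_bh; unlocking happens when the guard drops.
    let mut fl = area.freelist.lock().unwrap();
    let i = fl.free_count;
    // An over-return (the actual bug) panics here instead of
    // silently writing one slot past the end of the array.
    fl.slots[i] = niov_idx;
    fl.free_count += 1;
}
```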

"unsafe Rust" is not a binary; you don't opt into it for every single line of code. Given that the entire premise behind the idea that using C instead of Rust is fine is that people should be able to pay close attention and not make mistakes like this, having the number of places you need to look be a tiny fraction of the overall code that's explicitly marked as unsafe is a massive difference from C where literally every line of the code could be hiding stuff like this.

> except nearly everyone uses unsafe rust

Really? Why? I've not used Rust outside of some fairly small efforts, but I've never found a reason to reach for unsafe. So why is "nearly everyone" else using it?


Let's say you want to call win32 (or Mac) OS functions, all of a sudden you're doing all kinds of wonky pointer stuff because that's how these operating systems have been architected. Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.

> Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.

So the vast majority of Rust projects involve writing at least one unsafe block? Is that really your claim?


And even if you do end up writing an unsafe block, that should be a massive flag that the code in said block should deserve extra comments on why it is safe, and extra unit tests on verifying that it does not blow up.

How do you know the unsafe operation is safe? What are the preconditions the code block has? Write it down, review it, test it.
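For example (illustrative, with hypothetical names): the unsafe function documents its precondition, the caller justifies it with a SAFETY comment, and a debug assertion double-checks it in test builds.

```rust
/// Writes `val` into `buf[i]` without a bounds check.
///
/// # Safety
/// The caller must guarantee `i < buf.len()`; otherwise this is UB.
unsafe fn write_unchecked(buf: &mut [u32], i: usize, val: u32) {
    // Cheap belt-and-suspenders check in debug/test builds only.
    debug_assert!(i < buf.len(), "precondition violated: i out of bounds");
    *buf.get_unchecked_mut(i) = val;
}

fn main() {
    let mut buf = [0u32; 3];
    // SAFETY: 2 < buf.len() == 3, so the precondition holds.
    unsafe { write_unchecked(&mut buf, 2, 9) };
    assert_eq!(buf, [0, 0, 9]);
}
```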


Exactly; I feel like a lot of people seem to misunderstand what Rust is trying to solve. It's fundamentally not trying to make unsafe code impossible; it's making the number of places you need to audit a tiny fraction of your codebase, compared to needing to audit the entirety of a C or C++ codebase. When I'm doing code reviews, you'd better believe I'm going to spend some extra time on any unsafe block I see to figure out if it's necessary and, if so, if it's actually safe (with the default assumption for both of those being that they're not until I can convince myself otherwise).

The thing is you can actually write quite good C code (see OpenBSD project). The power of C is that it's pragmatic. It lets you write code with you taking the full responsibility of being a responsible person. To err is human, but we developed a set of practices to handle this (by making sure the gun is unloaded and the safety is on before storing it to avoid putting holes in feet).

I like type checking and other compile-time checks, but sometimes they feel very ceremonial. And all of them are inference based, so they still rely on the axioms being right and on the chain of rules not being broken somewhere. And in the end they are annotations, not the runtime algorithm.


> To err is human

Yes, which is precisely why I write in Rust, because the compiler errs less than I do.


It may, but it still requires careful annotations. So you should hope that you have not made an error there and described the wrong structure for the code.

It seems like you have this backwards. Messing up lifetimes in safe Rust can't cause unsafety; the compiler checks if the lifetimes are valid, and if they're not, you get a compiler error. You don't need to "hope" you did it right because the entire point is that you can't compile if you didn't.

On the other hand, when you're relying on your ability to "actually write quite good C code"...you'd better hope that you have not made an error there. In practice, some of the most widely used C libraries in the world still seem to have bugs like this, so I don't really understand why you'd think that's a winning strategy.


Making use of win32 functions doesn't turn off bounds checking in your rust code.

A tiny fraction of programs need to use win32 or Mac OS functions beyond the standard library or other safe wrappers for said functions.

And even in those programs, only a fraction of the code in them is actually directly making calls to those APIs! Having everything else in safe code still makes it easier to audit than if the entire codebase is in C or C++.

So what? Just because you used the keyword `unsafe` to call an unsafe API does not mean that you are going to use unsafe pointer access to write to a vector.

That's not prevention. That's remediation.

Surely nobody could create a better language in 50 years. Surely we can't fix these issues.

And you see a lot of other languages being used to create operating systems with complicated multiprocessor and locking semantics?

It's been almost half as long since the creation of the operating system under discussion as since the creation of the language under discussion, and there haven't really been any new mainstream operating systems created since then. I don't think it's nearly as obvious as you're implying that C would be a good choice for a new operating system created today. If we're talking about non-mainstream OSes, then I'd argue there's already more than enough evidence that languages safer than C are more than capable of it[1]

[1]: https://hubris.oxide.computer/


A lot of the really expensive cards are also foils, which for the card stock they use for English cards ends up warping quite a bit over time. I knew multiple people who refused to buy foils ever because of this.

I used to be deep into the competitive MTG scene. It goes deeper than this. Everyone knew that foils caused warping, which would lead to different theories of the “best way” to foil your deck to get an edge, while also being plausibly deniable that you were essentially marking your deck with foils if someone called a judge on you.

At the high level, MTG is as much about rules lawyering as it is about actual skilled play. If you're curious to learn more about this aspect of the game, go read about the 1997 Pro Tour with Mike Long, who infamously took the win through mind games, causing his opponent to concede when Long had no path to victory.


I've only played at the lowest possible levels (at local card shops), so my experience is probably not super representative of the competitive scene. Pretty much everyone I played with was more concerned about keeping their cards in good condition than trying to angle-shoot their way to a free win (getting a few more bucks of store credit isn't enough motivation to ruin one's standing in what was essentially a social community).

I'm a bit more familiar with rules lawyering mostly because incidents where people got wins from it are somewhat common topics that people would bring up for fun. Someone might play a Griselbrand deck, and someone else would ask if they knew about the Borborygmos incident that led to the rule where naming a card doesn't require literally knowing the exact name, etc.


It’s a flex to have a fully foiled deck that you pull out once in a blue moon.
