Hacker News | wahern's comments

> and those are regularly treated as sacred

They indeed are treated as sacred; it's enshrined in the Takings Clause of the US Constitution. The big problem in the American West is that the model of property rights in water sources makes it very difficult, as a technical matter, to put a price on a specific claim and to adjudicate disputes without triggering a cascade of pricing and rights dilemmas upstream and downstream (figuratively and literally).

Western states could in theory exercise eminent domain to take back water rights, and I think they occasionally do, but it's very fraught from countless legal angles even before getting into the politics of it, which compound the headaches a hundredfold (partly because of the interdependent nature of everybody's rights). Most of the time Western states try to hack around the issues with complicated regulatory and taxing schemes to claw back some semblance of control over water resources. But it's very inefficient and ineffective.

Property rights are useful because you don't need to centralize all pricing and usage decisions, or when you do--e.g. regulation, taxation, eminent domain--the mechanisms for applying those decisions are simpler and more mechanical; but Western water rights are just a different kind of beast. What's needed is comprehensive reform that shifts the American West to a better water rights model, specifically a better model for how property rights inhere in water resources, to drastically improve transactional efficiency from both a legal and a market perspective. But there's no simple way, and in particular no cheap way from a budgetary perspective, to get there even if the motivation existed to get around the monumental collective action problem, which it doesn't.


> But there's no simple way, and in particular no cheap way from a budgetary perspective, to get there even if the motivation existed to get around the monumental collective action problem, which it doesn't.

It seems like maybe there is though.

The first problem is the "use it or lose it" provisions where someone has the rights to use water but not sell it, thereby encouraging waste. That one has a solid solution: If they have the right to use it, they get the right to sell it. Make sale inalienable from use. Then you don't have to pay them anything because you're giving them something instead of taking it. But you get higher water availability as now all these people wasting "free" water start selling it because the opportunity cost of not selling it is now worth more than the wasteful use. The only "problem" here is that they get a windfall, but we can solve that in the same way as the second "problem".

Which is the takings clause. The purpose of that is to prevent unequal takings. If the government needs your land to build a railroad, they have to pay you for it, because they're taking yours but not anyone else's. Whereas when they take everyone's property at the same rate it's called property tax, and that's allowed. So if you just got a windfall of water rights in a dry place, congrats, you now have a valuable property right which is subject to property tax. Not using the water and don't want to pay the tax? Then sell the water. Since the buyer values it at more than you do, and the tax is less than 100% of the value, everyone comes out ahead compared to the status quo. The previous inefficient user gets $100 in money instead of $10 worth of inefficient use, the government gets some proportion of that in new tax revenue (variously property tax on the rights and income tax on the sale), the buyer gets water it values at >$100.


Can you explain the issue at a more basic level for people who don’t know? What I’m imagining is that, like, an aquifer might connect over a very large area and every property owner in the area has the right to extract as much water as they want from it? Leading to a tragedy-of-the-commons situation that states are unable to regulate for some reason?

Short answer: it's complicated. A somewhat longer answer: "Cadillac Desert". Marc Reisner. 1986.

> Only 15% is going into artillery and ammunition.

That simply reflects, in part, the cost differential. It's difficult to find the most recent data, but as of mid-2025 and, AFAICT, still today, Ukraine and Russia were still exchanging 10,000+ shells every day. Moreover, Russia fires 5 shells for every 1 Ukrainian shell; it used to be 10:1. It's a huge reason Ukraine can't break through Russian lines and, during Russian offensives, struggles to hold off their advances. Ukraine still has major supply issues with shells, and that's also likely one reason for their emphasis on drones. It's certainly the reason Germany and others are still pursuing increased artillery production.

Ceteris paribus, the marginal effectiveness of deploying more drones may be superior to more artillery, but that's against the backdrop of existing artillery usage. If Ukraine switched to only drones, they'd lose the war in weeks if not days.


155mm artillery (the type of ammunition discussed in the article) has a maximum effective range of 30km. This means the artillery gun and ammunition can only operate within 30km of the front line.

The FPV drone "kill zone", which used to extend several km from the front line when the war started, has recently pushed to ~15-25km. Some FPV drone strikes are now reaching 50-100km(!) past the front line. This means that artillery must operate at the edge of its effective range and will soon be completely enveloped by drones. I predict this will happen by early 2027.

Once the kill zone crosses 30km, artillery will be effectively unusable. Artillery needs a constant source of ammunition, and if this ammunition cannot reach the front, artillery is useless. Ukraine understands this, and that's why they're investing in drones over technology like artillery.

Germany, meanwhile ...


Ukraine is still investing in artillery plants, and it's why several other NATO countries are building more capacity as we speak (for their own and Ukraine's use). Artillery isn't any more obsolete than bullets; it's just not sexy, and at the margins isn't as strategically important.

You can't win a war without controlling ground. It's why the US lost the Iraq War, and Vietnam before it, despite having an unfettered ability to pummel forces from the air. To control ground, artillery is essential. Not sufficient, but absolutely necessary.


Artillery is only viable on Ukraine's side because Russia is too incompetent to manufacture long-range (30-50km) FPV strike drones at scale. Currently only Ukraine has pushed the "kill zone" to the 15-25km mark; Russia is behind. But this situation will not last: drone technology will improve on all sides, China will innovate on drone range, and Russia will buy those drones. The drone "kill zone" will surpass the range of artillery for all sides.

This will happen within a few years.


A good artillery team should be able to fire and scoot before counter-battery fire arrives--and that's fire arriving at supersonic speed. You won't be able to do counter-battery with current drones, and a hypothetical supersonic drone is not going to be cost-effective.

Drones are likely still great for opportunistically picking off artillery that is spotted by surveillance drones, but they're probably not the best solution for attacking a battery that is currently firing, unless a drone happens to already be in position.


The latest strategy is to pre-position drones deep in enemy territory, then activate them when a target is found. This (combined with surveillance drones) would cause artillery teams to be targeted by drones within minutes of firing.

One of the premises behind the CAHSR is that the existing airports and runways in LA and SF are nearing capacity. There's no room to expand, and the metropolitan regions are so sprawling you'd end up driving an additional hour or more to any newly built airport. Cars and buses don't solve the problem, either, precisely because of the sprawl and traffic--it can take longer to traverse Bay Area and LA sprawl than it does to zoom the hundreds of miles down I-5.

From an engineering and planning perspective HSR makes sense any way you look at it. The problem is our inability to build major infrastructure projects. Even highway construction and expansion in these regions is becoming absurdly expensive, along with all other forms of development. Completely independent of HSR, we need to fix our regulatory policies. The ballooning price tag for CAHSR shouldn't inspire ire against HSR; it should inspire ire against our regulatory policies and governance.


Is that really true? LAX and SFO are near capacity, at least during certain times of the day. But we still have room to increase flights at SJC, STS, OAK, ONT, BUR, SNA, and LGB. With a little more work it should also be possible to shift some cargo flights to NUQ in order to free up SFO capacity.

These questions have been exhaustively studied. For example, https://hsr.ca.gov/wp-content/uploads/docs/about/business_pl...

That analysis may be biased, but it does a decent job sketching out how to compare modes and capacities. Also, regulatory obstruction and cost inflation apply just as much to airport and highway expansion as they do to HSR. In some ways it's much worse. SFO should build a new runway, and it'd be incredibly cost-effective, but NIMBY opposition was so intense that SFO repudiated the idea years ago and dares not even speak of it now.


No, that report doesn't address my point. Did you even read it? OAK in particular has been losing flights lately and is well below capacity. We don't need to build a new runway there; the existing ones are underutilized.

> that report doesn't address my point

It gives numbers for how much airport (gates and runways) and highway expansion would be needed to meet future demand without HSR. And it does so while apportioning traffic across the 3 major Bay Area airports and 5 LA airports.

The OAK situation only drives home the point. It's not as convenient, at least for the segment of people flying. Location matters. If you want to optimally utilize existing airports, we need more infrastructure, including better mass transit systems. OAK is connected to BART, but it's yet another transfer with about an extra 15 minutes of travel time, not including walking through the stations. And BART itself isn't an effective mass transit system; it's more commuter train than subway.

I'm traveling to LA next month from SF and still haven't decided whether to fly or drive. With CAHSR point-to-point the decision would be easier--take the train and rent a car.


> The use of HTTP was basically the web equivalent of the End-to-End Principle [1] for TCP/IP.

I don't think the analogy works, not in the context of connection caching and multiplexing. An intermediate gateway multiplexing multiple HTTP requests over another HTTP channel, where that channel is the terminal leg directly to the listening service (i.e. requests aren't demultiplexed before hitting the application socket), fundamentally violates the logic of end-to-end in multiple ways. The analogy only works, if at all, if you preserve 1:1 connection symmetry.

All the reverse proxy exploits can be traced directly back to violating end-to-end.
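
The canonical example is request smuggling via a framing desync. A hypothetical request (not from any particular CVE) where the two framing mechanisms disagree:

    POST / HTTP/1.1
    Host: example.com
    Content-Length: 13
    Transfer-Encoding: chunked

    0

    SMUGGLED

A front-end that honors Content-Length forwards all 13 body bytes; a back-end that honors Transfer-Encoding stops at the zero-length chunk and treats "SMUGGLED" as the prefix of the next request on the shared upstream connection. Once requests from different clients are multiplexed over that one connection, the desync crosses trust boundaries.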

If the analogy were true, then SMTP delivery across multiple MXs would be end-to-end as well. It's not, and you see many of the same issues as with reverse proxies, including messaging boundary desync'ing.

I guess you're trying to analogize HTTP requests to messages, but that falls apart almost immediately once you get into the hairy details. The mismatch between TCP semantics, HTTP semantics, and the various concrete protocol details throws a wrench into things, with predictable consequences.

The end-to-end principle doesn't permit playing fast and loose with semantics. It demands very hard, rigid boundaries regarding state management and transport layering. That's the whole point. "Mostly" end-to-end is not end-to-end, not even a little bit.


> What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing

They knew how to write Rust, but clearly weren't sufficiently experienced with Unix APIs, semantics, and pitfalls. Most of those mistakes are exceedingly amateur from the perspective of long-time GNU coreutils (or BSD or Solaris base) developers, issues that were identified and largely hashed out decades ago, notwithstanding the continued long tail of fixes--mostly just a trickle these days--to the old codebases.


Reading that Canonical thread was jaw-dropping. Paraphrased: "Rust is more secure, security is our priority, therefore deploying this full-rewrite of core utils is an emergency. If things break that's fine, we'll fix it :)".

I would not want to run any code on my machines made by people who think like this. And I'm pro-Rust. Rust is only "more secure" all else being equal. But all else is not equal.

A rewrite necessarily has orders of magnitude more bugs and vulnerabilities than a decades-old well-maintained codebase, so the security argument was only valid for a long-term transition, not a rushed one. And the people downplaying user impact post-rollout, arguing that "this is how we'll surface bugs", and "the old coreutils didn't have proper test cases anyway" are so irresponsible. Users are not lab rats. Maintainers have a moral responsibility to not harm users' systems' reliability (I know that's a minority opinion these days). Their reasoning was flawed, and their values were wrong.


This leaves such a bad taste in my mouth. If you fucking found 44 CVEs with some relatively amateurish ones (I'm no security engineer but even I've done that exact TOCTOU mitigation before) in such a core component of your system a month before 26.04 LTS release (or a couple months if you count from their round 1), surely the response should be "we need to delay this to 28.04 LTS to give it time to mature", not "we'll ship this thing in LTS anyway but leave out the most obviously problematic parts"?
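
(For anyone unfamiliar, the mitigation being alluded to is "check the handle, not the path". A minimal Rust sketch of the pattern--not uutils' actual fix:)

    use std::fs::File;
    use std::io;

    // Racy (check-then-use): stat(path) followed by open(path) leaves a
    // window in which the file can be swapped out from under you:
    //   if std::fs::metadata(path)?.is_file() { File::open(path) } // TOCTOU
    //
    // Mitigated: open first, then interrogate the handle you actually hold.
    fn open_regular(path: &str) -> io::Result<File> {
        let f = File::open(path)?;
        let meta = f.metadata()?; // fstat(2) on the open fd; no window
        if !meta.is_file() {
            return Err(io::Error::new(io::ErrorKind::InvalidInput, "not a regular file"));
        }
        Ok(f)
    }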

The snap BS wasn't enough to move me, since I was largely unaffected once I stripped it out, but this might finally convince me to ditch.


It's insane that this is going into an LTS. It's the kind of experiment I'd expect them to play with in a non-LTS and revert in LTSes until it's fully usable, like they did with Wayland being the default, which started in 2017

Ubuntu has been doing careless shit like that their entire existence, it's nothing new

What do you mean that the people who use Debian Unstable as the basis of their OS would break things?

This is a people problem and Canonical just isn't good at hiring people

I’ve gotta agree. Some horror stories were going around about their interview process. It seemed highly optimized to select people willing to put up with insane top-down BS.

Agree with the point. Asking sincerely: how do I avoid installing any Rust-rewrite packages on my machines? Does anyone know a way?

If you don't want Canonical's packages, you should probably just be using Debian rather than Ubuntu. It's not 2008 anymore, stock Debian is quite user-friendly.

Worth noting is that in Debian experimental coreutils defaults to coreutils-from-uutils [0]. This came as a big surprise and as far as I can tell there's been no discussion. A Canonical developer seems to have unilaterally overwritten the coreutils package without discussing with the maintainer. All the package renames that are in Ubuntu aren't in Debian so you can't switch to GNU utils either without deep trickery in a separate recovery environment.

I'm used to running experimental software, but I wasn't ready for my computer to not boot one day because of uutils. The `-Z` flag for `cp` wasn't implemented in the 9-month-old version shipped in Debian at the time, so initramfs creation failed...

[0] https://packages.debian.org/experimental/coreutils


that... seems newsworthy on its own merit.

It's in experimental only, not unstable or testing. That said, I'm surprised it hasn't even prompted discussion on debian-devel (aside from [0]). I would've thought that enough Debian developers run experimental to have noticed and raised the issue, but no. I thought about starting a thread myself but couldn't be bothered.

[0] https://lists.debian.org/debian-devel/2026/04/msg00004.html


Considering how Ubuntu seems to influence Debian development, this is only slightly surprising.

See: https://lists.debian.org/deity/2025/10/msg00071.html - Hard Rust requirements from May onward - by a Core Ubuntu Developer


Or Fedora.

I feel like Fedora has the same pragmatic approach (allows non-free drivers, packages, etc.) and is just as easy to use.


Or use a sane distribution like Arch or Gentoo instead of Ubuntu based systems.

Alpine Linux has a better shot at acceptable compile times.

Some FOSS software seemed to max out kernel I/O last time I ran Gentoo.


I'm unaware of any Rust rewrites outside of coreutils, so:

    sudo apt install coreutils-from-gnu
https://computingforgeeks.com/ubuntu-2604-rust-coreutils-gui...

There aren't true 1:1 clones, but there's ripgrep (inspired by GNU grep) and fd (inspired by GNU find). Those two I like, though. I think they're thoughtfully designed and in ripgrep's case at least (I just haven't read posts/comments by fd's author), it was developed with some close study of other grep implementations. I still use GNU grep and GNU find as well, but rg and fd are often nice for me.

The other nice thing about rg and fd is that they work natively on Windows.

More than that: it seems that the Rust stdlib nudges the developer towards using neat APIs at an incorrect level of abstraction, like path-based instead of handle-based file operations. I hope I'm wrong.

Nearly every available filesystem API in Rust's stdlib maps one-to-one with a Unix syscall (see Rust's std::fs module [0] for reference -- for example, the `File` struct is just a wrapper around a file descriptor, and its associated methods are essentially just the syscalls you can perform on file descriptors). The only exceptions are a few helper functions like `read_to_string` or `create_dir_all` that perform slightly higher-level operations.

And, yeah, the Unix syscalls are very prone to mistakes like this. For example, Unix's `rename` syscall takes two paths as arguments; you can't rename a file by handle; and so Rust has a `rename` function that takes two paths rather than an associated function on a `File`. Rust exposes path-based APIs where Unix exposes path-based APIs, and file-handle-based APIs where Unix exposes file-handle-based APIs.
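
To make that concrete, here's a minimal sketch using only stable std::fs (hypothetical file names):

    use std::fs::{self, File};
    use std::io;

    fn demo() -> io::Result<()> {
        let f = File::open("data.txt")?;  // wraps open(2) and owns the fd
        let _len = f.metadata()?.len();   // fstat(2) on that same fd
        // There is no f.rename(...): renaming mirrors rename(2) and goes
        // by path, so the handle above offers no protection here.
        fs::rename("data.txt", "data.old")?;
        Ok(())
    }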

So I agree that Rust's stdlib is somewhat mistake-prone; not so much because it's being opinionated and "nudg[ing] the developer towards using neat APIs", but because it's so low-level that it's not offering much "safety" in filesystem access over raw syscalls beyond ensuring that you didn't write a buffer overflow.

[0]: https://doc.rust-lang.org/std/fs/index.html


> So I agree that Rust's stdlib is somewhat mistake-prone; not so much because it's being opinionated and "nudg[ing] the developer towards using neat APIs", but because it's so low-level that it's not offering much "safety" in filesystem access over raw syscalls beyond ensuring that you didn't write a buffer overflow.

`openat()` and the other `*at()` syscalls are also raw syscalls, which Rust's stdlib chose not to expose. While I can understand that this may not be straightforward for a cross-platform API, I have to disagree with your statement that Rust's stdlib is mistake-prone because it's so low-level. It's more mistake-prone than POSIX (in some respects) because it is missing a whole family of low-level syscalls.
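
For reference, here's roughly what the *at pattern buys you, sketched with the libc crate's raw bindings (Unix-only, hypothetical names, error handling abbreviated):

    use std::ffi::CString;
    use std::io;
    use std::os::fd::{FromRawFd, OwnedFd};

    // Resolve `name` relative to a directory we already hold open, so the
    // directory path can't be re-resolved (and swapped) between calls.
    fn open_in_dir(dir: &str, name: &str) -> io::Result<OwnedFd> {
        let dir_c = CString::new(dir)?;
        let name_c = CString::new(name)?;
        unsafe {
            let dfd = libc::open(dir_c.as_ptr(), libc::O_DIRECTORY | libc::O_RDONLY);
            if dfd < 0 {
                return Err(io::Error::last_os_error());
            }
            let fd = libc::openat(dfd, name_c.as_ptr(), libc::O_RDONLY | libc::O_NOFOLLOW);
            let result = if fd < 0 {
                Err(io::Error::last_os_error())
            } else {
                Ok(OwnedFd::from_raw_fd(fd))
            };
            libc::close(dfd); // the directory handle has served its purpose
            result
        }
    }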


openat() is there, but it's unstable (because the dirfd-related syscalls are not all fully implemented and tested across all platforms Rust supports yet): https://doc.rust-lang.org/std/fs/struct.Dir.html#method.open...

There are lots of unstable things in Rust that have been unstable for many years, and the intentional segregation of unstable features means they're a nonstarter for most use cases, like libraries. It's unstable because there are significant enough issues that nobody wants to mark it as stable, no matter what those issues are.

As long as it's unstable it's totally fair to say Rust's stdlib does not expose them. You might as well say it's fixed because someone posted a patch on a mailing list somewhere.


There are lots of unstable things in Rust that have been unstable for many years, but this isn't one of them. openat() was added in September, and the next PR in the series implementing unlinkat() and removeat() received a code review three weeks ago and is currently waiting on the author for minor revisions.

> As long as it's unstable it's totally fair to say Rust's stdlib does not expose them. You might as well say it's fixed because someone posted a patch on a mailing list somewhere

Agreed. My comment was intended to be read as "it's planned and being worked on", not "it's available".


They're not missing, Rust just ships them (including openat) as part of the first-party libc crate rather than exposing them directly from libstd. You'll find all the other libc syscalls there as well: https://docs.rs/libc/0.2.186/libc/ . I agree that Rust's stdlib could use some higher-level helper functions to help head off TOCTOU, but it's not as simple as just exposing `openat`, which, in addition to being platform-specific as you say, is also error-prone in its own right.

But those are all unsafe, taking raw strings.

Why can I easily use "*at" functions from Python's stdlib, but not Rust's?

They are much safer against path traversal and symlink attacks.

Working safely with files should not require *const c_char.

This should be fixed.


> But those are all unsafe, taking raw strings.

The parent was asking for access to the C syscall, and C syscalls are unsafe, including in C. You can wrap that syscall in a safe interface if you like, and many have. And to reiterate, I'm all for supporting this pattern in Rust's stdlib itself. But openat itself is a questionable API (I have not yet seen anyone mention that openat2 exists), and if Rust wanted to provide this, it would want to design something distinct.

> Why can I easily use "*at" functions from Python's stdlib, but not Rust's?

I'm not sure you can. The supported pattern appears to involve passing the optional `opener` parameter to the built-in `open`, but while the example of this shown in the official documentation works on Linux, I just tried it on Windows and it throws a PermissionError exception, because AFAIK you can't open directories on Windows.


I took parent's message to be asking why the standard library fs primitives don't use `at` functions under the hood, not that they wanted the `at` functions directly exposed.

> which Rust's stdlib chose not to expose

i.e. expose through things like `File::open()`.


> why the standard library fs primitives don't use `at` functions under the hood

In this case it wouldn't seem to make sense to use `at` functions to back the standard file opening interface that Rust presents, because it requires different parameters, so a different API would need to be designed. Someone above mentioned that such an API is being considered for inclusion in libstd in this issue: https://github.com/rust-lang/rust/issues/120426


> AFAIK you can't open directories on Windows.

You can, but you have to go through the lower-level API: NtCreateFile can open a directory, and you can pass in a RootDirectory handle to subsequent calls to make them handle-relative.


You can open directories using high level win32 APIs. What you need NtCreateFile for is opening files relative to an open directory.

The nix crate provides the safe wrappers. https://docs.rs/nix/latest/nix/fcntl/fn.openat2.html

The correct comparison is to rustix, not libc, and rustix is not first-party. And even then the rustix API does not encapsulate the operations into structs the same way std::fs and std::io do.

The correct comparison to someone asking for first-party access to a C syscall is to the first-party crate that provides direct bindings to C syscalls. If you're willing to go further afield to third-party crates, you might as well skip rustix's "POSIX-ish" APIs (to quote their documentation) and go directly to the openat crate, which provides a Rust-style API.

If I have to use unsafe just to open a file, I might as well use C. While rustix is a happy middle ground that is usually enough and more popular than the openat crate, libc is in the same family as the "*-sys" crates and, generally speaking, it is not intended for direct use outside other FFI crates.

I agree it’d be nice if there were a safe stdlib openat API, but

> If I have to use unsafe just to open a file, I might as well use C.

is a ridiculous exaggeration.


I agree it is an exaggeration, in that of course you could write a wrapper. The point was that if everyone had to write their own FFI wrappers, Rust wouldn't get far, and openat is not an exception.

There is code available at the right level of abstraction (the rustix or openat crates), and while it's not managed by the Rust team, uutils already has many third-party dependencies. Bringing up libc just because it's first-party, instead, is comparing apples to oranges.


> For example, Unix's `rename` syscall takes two paths as arguments; you can't rename a file by handle

And then there’s renameat(2) which takes two dirfd… and two paths from there, which mostly has all the same issues rename(2) does (and does not even take flags so even O_NOFOLLOW is not available).

I’m not sure what you’d need to make a safe renameat(), maybe a triplet of (dirfd, filefd, name[1]) from the source, (dirfd, name) from the target, and some sort of flag to indicate whether it is allowed to create, overwrite, or both.
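
Something in this direction, perhaps--purely hypothetical names and signatures (Linux's renameat2(2) gets partway there with its RENAME_NOREPLACE/RENAME_EXCHANGE flags, but the source direntry is still identified only by name):

    use std::fs::File;
    use std::io;

    enum RenamePolicy { CreateOnly, ReplaceOnly, CreateOrReplace }

    // Hypothetical: every object is named relative to a held handle, and the
    // source file handle lets the implementation verify that (src_dir,
    // src_name) still denotes the inode we mean before the move happens.
    fn rename_entry(
        src_dir: &File,   // handle on the source directory
        src_file: &File,  // handle on the file itself, for identity
        src_name: &str,   // a direntry name within src_dir, not a path
        dst_dir: &File,
        dst_name: &str,
        policy: RenamePolicy,
    ) -> io::Result<()> {
        unimplemented!("sketch only")
    }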

As the recent https://blog.sebastianwick.net/posts/how-hard-is-it-to-open-... post discusses (just for opening files, but it applies to everything), secure file system interaction is absolutely heinous.

[1]: not path


How about fd of the file you wanna rename, dirfd of the directory you want to open it in, and name of the new file? You could then represent a "rename within the same directory" as: dfd = opendir(...); fd = openat(dfd, "a"); rename2(fd, dfd, "b");

I can't think of a case this API doesn't cover, but maybe there is one.


The file may have been renamed or deleted since the fd was opened, and it might have been legitimate and on purpose, but there’s no way to tell what trying to resolve the fd back to a path will give you.

And you need to do that because nothing precludes having multiple entries to the same inode in the same directory, so you need to know specifically what the source direntry is, and a direntry is just a name in the directory file.


After reading this article, I'm inclined to think that the right thing for this project to do is write their own library that wraps the Rust stdlib with a file-handle-based API, along with one method to get a file handle from a Path; rewrite the code to use that library rather than Rust stdlib methods; and then add a lint check that guards against any use of the Rust standard library file methods anywhere outside of that wrapper.

If that's the right approach, then it would be useful to make that library public as a crate, because writing such hardened code is generally useful. Possibly as a step before inclusion in the rust stdlib itself.
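
A skeleton of such a wrapper might look something like this (hypothetical API, sketch only); the lint could then be a clippy.toml disallowed-methods entry banning the std::fs free functions everywhere outside this module:

    use std::fs::File;
    use std::io;
    use std::path::Path;

    /// The one blessed crossing from the path world into the handle world.
    pub struct Dir(File);

    impl Dir {
        /// Sole path-based entry point; would open with
        /// O_DIRECTORY | O_NOFOLLOW under the hood.
        pub fn open(path: &Path) -> io::Result<Dir> {
            unimplemented!("sketch only")
        }

        /// Everything else is handle-relative (openat-style), taking
        /// direntry names rather than paths.
        pub fn open_file(&self, name: &str) -> io::Result<File> {
            unimplemented!("sketch only")
        }

        pub fn remove_file(&self, name: &str) -> io::Result<()> {
            unimplemented!("sketch only")
        }
    }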

Agreed. (This approach feels like a cousin of Parse, Don't Validate.)

Yeah. The idea is: if you're consistently making mistakes because the most convenient API at your disposal is error-prone (here, the Rust standard library file/directory APIs that are based around Paths), then after you fix the actual bugs you should write a better abstraction, and then deliberately add friction around bypassing it, to constrain future developers (including future-you) from using the more error-prone abstraction.

Parse, don't validate is also a principle that encourages people to use a less-error-prone abstraction (the parsed data structure or an error representing invalid input), rather than a more-error-prone one (the original untyped data with ad-hoc validations at various call sites).


Unfortunately, it's not just the Rust stdlib; it's nearly every stdlib, if not every one. I remember being disappointed when Go came out that it didn't base the os module on openat and friends, and that was how many years ago now? I wasn't really surprised--the *at functions aren't what people expect, and probably people would have been screaming about "how weird" the file APIs were in this hypothetical Go continually up to this very day... but it's still the right thing to do. Almost every language makes it very hard to do the right thing, with the wrong thing so readily available.

I'm hedging on the "almost" only because there are so many languages made by so many developers and if you're building a language in the 2020s it is probably because you've got some sort of strong opinion, so maybe there's one out there that defaults to *at-style file handling in the standard library because some language developer has the strong opinions about this I do. But I don't know of one.


Openat appeared in Linux in 2006 but not in FreeBSD until 2009; Go started being developed in 2007, so it probably missed the opportunity by a year. It would have been the right thing to change the os module at some point in the last 18 years, however.


If anything, I find the Rust standard library defaults to Unix too much for a generic programming language. You need to think very Unixy if you want to program Rust on Windows, unless you're directly importing the windows crate and forgoing the Rust standard library. If you're writing COBOL-style mainframe programs, things become even more forced, though I suspect the overlap between Rust programmers and mainframe programmers who don't use a Unix-like is vanishingly small.

This can also be a pain on microcontrollers sometimes, but there you're free to pretend you're on Unix if you want to.


If you want to support file I/O in the standard library, you have to choose _some_ API, and that API either is limited to the features common to all platforms, or it covers all features but calls that cannot be supported return errors, or you pick a preferred platform and require all other platforms to try as hard as they can to mimic it.

Almost all languages/standard libraries pick the latter, and many choose UNIX or Linux as the preferred platform, even though its file system API has flaws we’ve known about for decades (example: using file paths too often) or made decisions back in 1970 we probably wouldn’t make today (examples: making file names sequences of bytes; not having a way to encode file types and, because of that, using heuristics to figure out file types. See https://man7.org/linux/man-pages/man1/file.1.html)


You have to choose something, and I'm glad they didn't go with the idiotic Go approach ("every path is a valid UTF-8 string, or we just garble the path at the standard library level"). You can usually abstract away platform weirdness at the implementation level, but programming in non-Unix environments is more like programming against Cygwin.

A standard library for files and paths that lacks things like ACLs and locks is weirdly Unixy for a supposedly modern language. Most systems support ACLs now, though Windows uses them a lot more. On the other hand, the lack of file descriptors/handles is weird from all points of view.

Had Windows been an uncommon target, I would've understood this design, but Windows is still the most common PC operating system in the world by a great margin. Not even considering things like "multiple filesystem roots" (drive letters) "that happen to not exist on Linux", or "case-insensitive paths (Windows/macOS/some Linux systems)", is a mistake for a supposedly generic language, in my opinion.


As far as I can tell from Microsoft's documentation, WinAPI access for ACLs was added in Windows 10, which Rust 1.0 predates. And std::fs attempts to provide both minimalist and cross-platform APIs, which in practice means (for better or worse) it's the lowest common denominator between Windows and Unix, with the objective being that higher-level libraries can leverage it as a building block. From the documentation for std::fs:

"This module contains basic methods to manipulate the contents of the local filesystem. All methods in this module represent cross-platform filesystem operations. Extra platform-specific functionality can be found in the extension traits of std::os::$platform."

Following its recommendation, if we look at std::os::windows::fs we see an extension trait for setting WinAPI-specific flags, like dwDesiredAccess, dwShareMode, and dwFlagsAndAttributes. I'm not a Windows dev, but AFAICT we want an API to set lpSecurityAttributes. I don't see an option for that in std::os::windows::fs, likely complicated by the fact that it's a pointer, so acquiring a valid value for that parameter is more involved than just constructing a bitfield like for the aforementioned parameters. But if you think this should be simple, then please propose adding it to std::os::windows::fs; the Rust stdlib adds new APIs all the time in response to demand. (In the meantime, comprehensive Windows support is generally provided by the de-facto standard winapi crate, which provides access to the raw syscall.)
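
For the curious, the extension-trait pattern in practice looks like this (Windows-only; the numeric values are the raw GENERIC_READ and FILE_SHARE_READ constants passed through as dwDesiredAccess and dwShareMode, and the path is hypothetical):

    #[cfg(windows)]
    fn open_shared() -> std::io::Result<std::fs::File> {
        use std::fs::OpenOptions;
        use std::os::windows::fs::OpenOptionsExt;

        OpenOptions::new()
            .access_mode(0x8000_0000) // GENERIC_READ -> dwDesiredAccess
            .share_mode(0x1)          // FILE_SHARE_READ -> dwShareMode
            .open("C:\\some\\file.txt")
    }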


> WinAPI access for ACLs was added in Windows 10

I'm not sure which docs you mean, but that's not true. The NT kernel has had ACLs since long before Rust was invented. But it's indeed true that Rust adds platform-specific methods based on demand. The trouble with ACLs is that supporting them means either creating a large API surface in the standard library or else presenting a simple interface but having to manage raw pointers (likely using a wrapper type, but even then it can't be made totally safe).

> the de-facto standard winapi crate, which provides access to the raw syscall

Since the official Microsoft `windows-sys` crate was released many years ago, the winapi crate has been effectively unmaintained (it accepts security patches but that's it).


> I'm not sure which docs you mean

I was looking at these: https://learn.microsoft.com/en-us/windows/security/identity-...

> the winapi crate has been effectively unmaintained

Shows how much of a Windows dev I am. :P


You'd want to be looking at these[1] instead, especially SetFileSecurity[2].

As noted, the "minimum supported" version means exactly that, and does not reflect when the API function was introduced.

[1]: https://learn.microsoft.com/en-us/windows/win32/secauthz/low...

[2]: https://learn.microsoft.com/en-us/windows/win32/api/winbase/...


As far as I can tell even NFS got ACL support before the first Rust release. NFSv4.1 in 2010 vs Rust in 2012.

SetFileSecurityA is listed as Windows XP+ (https://learn.microsoft.com/en-us/windows/win32/api/winbase/...) but Microsoft has deprecated all pre-XP documentation.

According to https://www.geoffchappell.com/studies/windows/win32/advapi32..., the function was available first in advapi32 version 3.10, which was included in Windows NT 3.10 (14th July 1993): https://www.geoffchappell.com/studies/windows/win32/advapi32...

lpSecurityAttributes just refers to a SECURITY_ATTRIBUTES struct (Rust bindings here: https://microsoft.github.io/windows-docs-rs/doc/windows/Win3...). Annoying pointers for sure, but nothing a Rust API can't work around with standard language features.

And sure, Rust could add the entire windows crate to the standard library, but my point is that this isn't just Windows functionality: getfacl/setfacl has been with us for decades but I don't know any standard library that tries to include any kind of ACLs.


You misunderstand the documentation. Microsoft doesn't provide online documentation for versions of Windows that are no longer supported. Functions like SetFileSecurity have existed since Windows NT 3.1 back in 1993.

But the documentation I'm using claims that it applies to Windows 10, which stopped being supported last year.

Windows 10 support is still available to people who pay for it.

> I'm glad they didn't go with the idiotic Go approach ("every path is a valid UTF-8 string" or we just garble the path at the standard library level")

Can you expound a bit on this? I haven't been able to find any articles related to this kind of problem. It's also a bit surprising, given that Go specifically did not make the same choice as Rust to make strings be Unicode / UTF-8 (Go strings are just arrays of bytes, with one minor exception related to iteration using the range syntax).


Go's docs put it like this: Path names are UTF-8-encoded, unrooted, slash-separated sequences of path elements, like “x/y/z”. If you operate on a path that's a non-UTF-8 string, then Go will do... something to make the string work with UTF-8 when passed back to standard file methods, but it likely won't end up operating on the same file.

Rust has OsStr to represent strings like paths, with a lossy/fallible conversion step instead.
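
For example, on Unix arbitrary bytes round-trip losslessly into a Path, and the fallibility only surfaces when you ask for a String (a minimal sketch):

    use std::ffi::OsStr;
    use std::os::unix::ffi::OsStrExt;
    use std::path::Path;

    fn main() {
        // \xC3\xA9 is valid UTF-8 ("é"); the lone \xFF is not.
        let raw: &[u8] = b"caf\xC3\xA9-\xFF.txt";

        // Lossless: no encoding is assumed or enforced.
        let path = Path::new(OsStr::from_bytes(raw));

        // Only a conversion to String forces the encoding question;
        // to_string_lossy() substitutes U+FFFD for the bad byte.
        println!("{}", path.to_string_lossy());
    }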

Go's approach is fine for 99% of cases, and you're pretty screwed if your application hits the 1% issue. Go has a lot of those decisions, often to simplify the standard library for the use cases most people usually run into (like its awful, lossy, incomplete conversion between Unix and Windows permissions/read-only flags/etc.).


> Path names are UTF-8-encoded, unrooted, slash-separated sequences of path elements, like “x/y/z”

This is only for the "io/fs" package and its generic filesystem abstractions. The "os" package, which always operates on the real filesystem, doesn't actually specify how paths are encoded, nor does its associated helper package "path/filepath".

In practice, non-UTF-8 already wasn't an issue on Unix-like systems, where file paths are natively just byte sequences. You do need to be aware of this possibility to avoid mangling the paths yourself, though. The real problem was Windows, where paths are actually WTF-16, i.e. UTF-16 with unpaired surrogates. Go has addressed this issue by accepting WTF-8 paths since Go 1.21: https://github.com/golang/go/issues/32334#issuecomment-15500...


The `os` package, which is the main way everyone I've seen opens and reads files in Go, doesn't specify any restriction on its path syntax (except that it uses `string`, of course). I've tried using it on Linux with a file name that would be invalid UTF-8, and it works without any issues.

I for one hadn't even heard of the io/fs package that has the problems that you mention, and I don't remember ever seeing it used in an example. I've looked in a code base I help maintain, and the only uses I could find are related to some function type definitions that are used by filepath.WalkDir and filepath.Walk - and those functions explicitly document the fact that they don't use `io/fs` style paths when calling these functions - they don't even respect the path separator format:

  // WalkDir calls fn with paths that use the separator character appropriate
  // for the operating system. This is unlike [io/fs.WalkDir], which always
  // uses slash separated paths.
  func WalkDir(root string, fn fs.WalkDirFunc) error {
Where fs.WalkDirFunc is defined like this:

  type WalkDirFunc func(path string, d DirEntry, err error) error

> Go strings are just arrays of bytes,

https://go.dev/ref/spec#String_types: “A string value is a (possibly empty) sequence of bytes”

https://pkg.go.dev/strings@go1.26.2: “Package strings implements simple functions to manipulate UTF-8 encoded strings.”

So, yes, Go strings are just arrays of bytes in the language, but in the standard library, they’re supposed to be UTF-8 (the documentation isn’t immediately clear on how it handles non-UTF-8 strings).

I think this may be why the OP thinks the Go approach is “every path is a valid UTF-8 string”


That's the same for the C or Python standard libraries. The difference is that in C you tend to use the Win32 functions more because they're easily reached for; but Python and Rust are both just as Unixy.

Indeed, though for C it makes sense given its origins, and Python sort of grew from a fun project into a massive ecosystem by accident.

That's the norm in most languages; it's just a more convenient way to operate.

> They knew how to write Rust, but clearly weren't sufficiently experienced with Unix APIs, semantics, and pitfalls.

The point of Rust is that you shouldn't have to worry about the biggest, easiest-to-fall-into pitfalls.

I think the author's point in this article is that a proper file system API should do the same.


Having panics in these is pretty amateur hour, even just at the Rust level. I could see it if they were, say, alloc errors, which you can't handle, but expects and unwraps are inexcusable unless you are very carefully guarding them with invariants that prevent that code path from ever running.
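
The difference being, e.g. (hypothetical config path):

    use std::fs;
    use std::io;

    // Amateur hour in a core utility: panics the whole process if the
    // file is missing or unreadable.
    //   let data = fs::read_to_string("/etc/foo.conf").unwrap();

    // Instead, propagate and let the caller produce a diagnostic and a
    // nonzero exit status.
    fn load_config() -> io::Result<String> {
        fs::read_to_string("/etc/foo.conf")
    }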

Someone once coined a related term, "disassembler rage". It's the idea that every mistake looks amateur when examined closely enough. It comes from people sitting in a disassembler and raging at the high-level programmers who had the gall to, e.g., use conditionals instead of a switch statement inside a function call a hundred frames deep.

We're looking solely at the few things they got wrong, and not the thousands of correct lines around them.


Thing is, these tools are so critical that even one error may cause systems to be compromised; rewriting them should never be taken lightly.

(Actually, ideally there would be formal verification tools that could accurately test for all of the issues found in this review/audit, like the very timing-specific path changes, but that's a codebase on its own.)


Is formal verification able to find most of these issues? I'm no expert on formal analysis, but I suspect most systems are not able to handle many of these errors. It seems more likely that the system will assume the file doesn't change between two syscalls - which seems to be the majority of issues. Modeling that possibility at least makes the formal system much harder to make.

When I read the article I came away with the impression that shipping bugs this severe in a rewrite of utils used by hundreds of millions of people daily (hourly?) isn’t ok. I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.

Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.

Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.


I think that legitimate real-world issues in Rust code should be talked about more often. Right now the language enjoys a reputation that is essentially misleading marketing. It isn't possible to create a programming language that doesn't allow bugs to happen (even with formal verification you can still prove correctness based on a wrong set of assumptions). This weird, kind of religious belief that Rust leads to magically, completely bug-free programs needs to be countered and brought in touch with reality IMO.

Nobody believes Rust programs are bug free, though. Rust never promised that. It doesn't even promise memory safety; it only promises memory safety if you restrict yourself to safe APIs which simply isn't always possible.

> it only promises memory safety if you restrict yourself to safe APIs which simply isn't always possible.

Less than that actually, considering Rust has its own definition of what "safe" means.



The NSA believes it's a memory safe language.

Or... the NSA wants you to think the NSA believes that rust is a memory safe language.

Or... the NSA wants you to think that the NSA wants you to think that the NSA believes that Rust is a memory-safe language, so that everyone who distrusts the NSA keeps using C.

Is it possible you’ve misunderstood what Rust promises?

> It isn't possible to create a programing language that doesn't allow bugs to happen

Yes, that’s true. No one doubts this. Except you seem to think that Rust promises no bugs at all? I don’t know where you got this impression from, but it is incorrect.

Rust promises that certain kinds of bugs like use-after-free are much, much less likely. It eliminates some kinds of bugs, not all bugs altogether. It’s possible that you’ve read the claim on kinds of bugs, and misinterpreted it as all bugs.

I’ve had this conversation before, and it usually ends like https://www.smbc-comics.com/comic/aaaah


"Rust" obviously does not promise that.

On the other hand, there are too many less-experienced Rust fans who do claim that "Rust" promises this, that any project that does not use Rust is doomed, and that any of the existing decades-old software projects should be rewritten in Rust to decrease the chances that they have bugs.

What is described in TFA is not surprising at all, because it is exactly what has been predicted about this and other similar projects.

Anyone who desires to rewrite in Rust any old project, should certainly do it. It will be at least a good learning experience and whenever an ancient project is rewritten from scratch, the current knowledge should enable the creation of something better than the original.

Nonetheless, the rewriters should never claim that what they have just produced has currently less bugs than the original, because neither they nor Rust can guarantee this, but only a long experience with using the rewritten application.

Such rewritten software packages should remain for years as optional alternatives to the originals. Any aggressive push to substitute the originals immediately is just stupid (and yes, I have seen people trying to promote this).

Moreover, someone who proposes the substitution of something as basic as coreutils, must first present to the world the results of a huge set of correctness tests and performance benchmarks comparing the old package with the new package, before the substitution idea is even put forward.


Where are these rust fans? Are they in the room with us right now?

You’ve constructed a strawman with no basis in reality.

You know what actual Rust fans sound like? They sound like Matthias Endler, who wrote the article we’re discussing. Matthias hosts a popular podcast, Rust in Production, where he talks with people about sharp edges and difficulties they experienced using Rust.

A true Rust advocate like him writes articles titled “Bugs Rust Won’t Catch”.

> Such rewritten software packages should remain for years as optional alternatives to the originals.

This project was started a decade ago. (https://news.ycombinator.com/item?id=7882211)

> must first present to the world the results of a huge set of correctness tests and performance benchmarks

Yeah, you can see those in https://github.com/uutils/coreutils. This project has also worked with GNU coreutils maintainers to add more tests over time. Check out the graph where the total number of tests increases over time.

> before the substitution idea is even put forward

I partly agree. But notice that these CVEs come from a thorough security audit paid for by Canonical. Canonical is paying for it because they have a plan to substitute in the immediate future.

Without a plan to substitute it’s hard to advocate for funding. Without funding it’s hard to find and fix these issues. With these issues unfixed it’s hard to plan to substitute.

Chicken and egg problem.

> less bugs

Fewer.


Those Rust fans exist on almost all Internet forums that I have seen, including on HN.

I do not care about what they say, so I have not made a list with links to what they have posted. But even only on HN, I certainly have seen much more than one hundred such postings, more likely at least several hundred, even in threads that did not have any close relationship with Rust, so there was no reason to discuss Rust.

Since Sun's shameless promotion of Java with false claims, during the last years of the previous century, no other programming language has been affected by such a hype campaign.

I think that this is sad. Rust has introduced a few valid innovations and it is a decent programming language. Despite this, whenever someone starts mentioning Rust, my first reaction is to distrust whatever is said, until proven otherwise, because I have seen far too many ridiculous claims about Rust.


Could you find one such person on this thread? Someone making ridiculous claims about what Rust offers.

I’ll tell you what I think you’ve seen - there are hundreds of threads where you’ve seen people claim they’ve seen this everywhere. That gives you the impression that it is universal.



Perfect. Because that’s exactly what I’m saying.

The comment you linked says something specific about a specific kind of bug being eliminated: memory safety bugs. And they’re not making a claim, they’re repeating the evidence gathered from the Android codebase. So that’s a fact: memory safety bugs truly did not appear in the Rust parts of Android.

The comment you linked is not claiming Rust code is bug-free. That’s a strawman I’ve seen many, many times. Haters will claim that this happens all the time, but all I see are examples of the haters claiming this. You had to go back 5 months and still couldn’t find anything similar to the strawman.

> This one probably covers it

No, probably not.


The only language I've ever seen users make that claim for is Haskell. Rust users have never made the claim, but I've seen it a lot from advocates who appear to find "hello world" a complex, hard-to-write program.

> On the other hand, there are too many less-experienced Rust fans who do claim that "Rust" promises this

Link some comments like this? Because I've been reading Rust discussions for years and never seen them.


I understand the (narrow) hard guarantees that Rust gives. But there are people in the wider community who think that the guarantees are much, much broader. This is a pretty widespread misconception that should be rectified.

Who are these people? Care to share examples?

Because all I see are examples of people claiming it happens all the time. Not the examples of it actually happening.


I have never seen a comment claiming that Rust leads to magically completely bug free programs.

Could you please link one? Because I doubt it exists, or if it does, it is probably on some obscure website or downvoted to oblivion.

On the other hand, I see comments in every Rust thread that are basically restatements of yours attacking a strawman.

The reality: Rust does not prevent all bugs. In fact, it doesn't even prevent any bugs. What it actually does is make a certain particularly common and dangerous class of bugs much more difficult to write.


If I'm not mistaken, in the Cloudflare case both the Rust rewrite and the original C++ version crashed, the primary cause being the bad config file.

Yes, but the point was that rewriting something in Rust is not sufficient per se to prevent such bugs.

The goal claimed by all these rewrites is the elimination of bugs.


The "elimination of bugs" is not synonymous with "the elimination of all bugs". The way you're presenting it, any single bug in a rewrite would be grounds to consider the the entire endeavor a failure, which is a ridiculous standard.

There are plenty of strong arguments to be made against rewriting something in Rust, but this is a pretty weak one.


I find it hilarious that this comment is being downvoted.

Exactly what is the controversial take here?

> I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.

Nope, this is fine.

> Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.

Maybe this?

> Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.

Nope, this is fine too.


Because the bugs were caused by programmer error, not anything inherent to rust. It was more notable due to cloudflare being a critical dependency for half the internet, but that particular issue could've happened in any language.

This kind of melodramatic reaction to rust code is fatiguing, honestly. Rust does not bill itself as some programming panacea or as a bug free language, and neither do any of the people I know using it. That's a strawman that just won't go away.

Rust applies constraints regarding memory use and that nearly eliminates a class of bugs, provided safe usage. And that's compelling to enough people that it warrants migration from other languages that don't focus on memory safety. Bugs introduced during a rewrite aren't notable. It happens, they get fixed, life moves on.


> caused by programmer error, not anything inherent to Rust

Your argument does not work as praise for Rust, because the bugs in any program are caused by programmer errors, except in the very rare cases when there are bugs in the compiler toolchain, which are caused by the errors of other programmers.

The bugs in a C or C++ program are also caused by programmer errors; they are not inherent to C/C++. It is rather trivial to write C/C++ carefully, in order to make out-of-bounds accesses, numeric overflow, use-after-free, etc. impossible.

The problem is that many programmers are careless, especially when they might be pressed by tight time schedules, so they make some of these mistakes. For the mass production of software, it is good to use more strict programming languages, including Rust, where the compiler catches as many errors as possible, instead of relying on better programmers.


I'm neither praising nor admonishing Rust. Did you read the parent comment, or its parent's comment that I was responding to, at all?

(grandparent comment): "Cloudflare crashed a chunk of the internet with a rust app a month or so ago"

The actual bug had nothing to do with rust, yet rust is specifically brought up here.

(grandparent comment): "Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are."

No Rust programmer thinks it's a panacea! Rust has never advertised itself this way.


The Cloudflare bug was the equivalent of an uncaught exception caused by a malformed config file. There's no recovery from a malformed config file--the software couldn't possibly have done its job. What's salient is that they were using an alternative to exceptions, because people were told exceptions were error-prone and using this thing instead would make it easier to write bug-free code. But then they did the equivalent of not catching them!

And then, it turned out to not really be any better than exceptions.

Most Rust evangelism is like this. "In Rust you do X and this makes your code have fewer bugs!" Well, no it doesn't. Manually propagating errors still makes the program crash, requires more typing, and doesn't emit a stack trace.


That was why I brought it up. I wasn't trying to be snarky or haughty. Thank you for filling in the gaps, I should have done that instead of the 1-liner.

I didn't downvote, but I feel the last two points show a lack of nuance. It's saying "Rust doesn't prevent 100% of the bugs, like all other programming languages", while failing to acknowledge that if a programming language prevents entire classes of bugs, it's a very significant improvement.

Nobody disputes that Rust is one of the programming languages that prevent several classes of frequent bugs, which is a valuable feature when compared with C/C++, even if that is a very low bar.

What many do not accept among the claims of the Rust fans is that rewriting a mature and very big codebase from another language into Rust is likely to reduce the number of bugs of that codebase.

For some buggier codebases, a rewrite in Rust or any other safer language may indeed help, but I agree with the opinion expressed by many other people that in most cases a rewrite from scratch is much more likely to have bugs, regardless in what programming language it is written.

If someone has the time to do it, a rewrite is useful in most cases, but it should be expected that it will take a lot of time after the completion of the project until it will have as few bugs as mature projects.


As other people have mentioned, the goal of uutils was not "let's reduce bugs in coreutils by rewriting it in Rust", it was "it's 2013 and here's a pre-1.0 language that looks neat and claims to be a credible replacement for C, let's test that hypothesis by porting coreutils, giving us an excuse to learn and play with a new language in the process". It seems worth emphasizing that its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme, it was just some people hacking on a codebase for fun.

Whether or not it was wise for Canonical to attempt to then take that codebase and uplift it into Ubuntu is a different story altogether, but one that has no bearing on the motivations of the people behind the original port itself.

You can see an alternative approach with the authors of sudo-rs. Rather than porting all of userspace to Rust for fun, they identified a single component of a particularly security-critical nature (sudo), and then further justified their rewrite by removing legacy features, thereby producing an overall simpler tool with less surface area to attack in the first place. It was not "we're going to rewrite sudo in Rust so it has fewer bugs", it was "we're going to rewrite sudo with the goal of having fewer bugs, and as one subcomponent of that, we're going to use Rust". And of course sudo-rs has had fresh bugs of its own, as any rewrite will. But the mere existence of bugs does not invalidate their hypothesis, which is that a conscientious rewrite of a tool can result in fewer bugs overall.


But are the current uutils developers the same as the 2013 developers? At least based on GitHub's contributor graphs, they're not (the activity looks fairly bimodal to me), so it wouldn't be unreasonable to treat the 2013-era project differently from the 2020-era project, and judging the 2020-era project for its current and ongoing failures seems fair.

Similarly, sudo-rs dropping "legacy" features leaves a bad taste in my mouth. Multiple privilege-escalation tools already exist (doas being the first that comes to mind), and doing something better without claiming the name "sudo" (instead providing a compat mode a la podman for docker) would seem to me a better long-term path than causing more breakage (and, as uutils shows, breakage in "core" utils can very easily lead to security issues).

I personally find uutils' lack of care concerning because I've been writing (as a very low-priority side project) a network utility in Rust, and while it isn't aiming to be a drop-in replacement for anything, I would much rather not attract the same drama.


doas and sudo-rs occupy different niches; doas aims for extreme minimalism and deliberately sacrifices even more compatibility than sudo-rs, which represents a middle ground.

> It seems worth emphasizing that its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme, it was just some people hacking on a codebase for fun.

What the motivation and intent was in 2013 is not necessarily relevant to what the motivation and intent is now.

It's even less relevant to what the effect is: the goal may be to replace $FOO software with $BAR software, but as things stand right now $FOO is "GPL" and $BAR is "MIT".

So, yeah, I don't want them to succeed at their primary goal, because that replaces pro-user software with pro-business software.


> its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme

No, they openly refuse to accept any GPL code, and they have a strict policy against even reading GPL code.


No, once you have an MIT-licensed codebase without a copyright assignment scheme, you no longer have the freedom to relicense it at will. You could attempt to have a mixed-license codebase, which is supported by the GPL, and specify that all new contributions must accept the GPL, but this is tantamount to an incompatible fork of the project from the perspective of any downstream users, and anyone who insists on contributing code under the GPL has the freedom to perform this fork themselves.

This is simply false. You can accept GPL contributions and clearly indicate the names of the contributors as required by MIT. There is no "incompatible fork".

No, GPL and MIT have significantly different compliance requirements. You cannot suddenly begin shipping code with stricter compliance requirements to downstream users without potentially exposing them to legal liability.

It's not a low bar when C/C++/D are basically the only languages in which you can write certain kinds of programs.

Memory safety catches buffer overflows. CI catches logic bugs. Neither catches the Unix API gotchas nobody documented.

They're not API gotchas in most cases.

And writing comprehensive tests for this behaviour is very difficult regardless of which language you are using.

I am all for Rust rewrites of things. But in this case these are mistakes that were encouraged by the lazy design of `std::fs` and by the developers' lack of relevant experience.
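
A sketch of the kind of `std::fs` gotcha I mean (hypothetical code of mine, not necessarily one of the actual uutils bugs):

    use std::{fs, io};

    fn list_dir(path: &str) -> io::Result<Vec<String>> {
        let mut names = Vec::new();
        // Each directory entry is itself a Result. Calling .flatten()
        // on the iterator would compile fine and silently skip entries
        // that failed to read; `?` propagates the error instead, which
        // is what a coreutils-style tool needs in order to report it.
        for entry in fs::read_dir(path)? {
            let entry = entry?;
            names.push(entry.file_name().to_string_lossy().into_owned());
        }
        Ok(names)
    }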

And to clarify, I don't blame the developers for lacking the relevant experience. Working on such a project is precisely the right place to learn stuff like this.

I think it's an absurdly dumb move by Canonical to take this project and beta-test it on normal users' machines though…


How does CI catch logic bugs?

That depends on what tests you are running. In any significant project you need a test suite so large that you wouldn't run all the tests before pushing to CI; instead you run the targeted tests covering the area of code you changed, while broader "integration tests" that exercise your code, and thus could break, are left for CI to run.

You can also run static analysis that takes too long to run locally every time, but that once in a while will point out "this code pattern is legal but is almost always a bug".

It is also possible to do some formal analysis of code on CI that you wouldn't always run locally; I'm not an expert on these.
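
As a concrete sketch (the local/CI split here is my assumption about workflow, not a rule):

    // Expensive integration test: skipped by a plain `cargo test`
    // during local development, but CI can run it explicitly with
    // `cargo test -- --ignored` (or `-- --include-ignored` for both).
    #[test]
    #[ignore = "slow: exercises the full pipeline"]
    fn full_pipeline_roundtrip() {
        // ...work a developer wouldn't wait for locally...
    }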


That's true in general. In this case, where the logic bugs come from not understanding the API being implemented (and in any similar case), tests wouldn't catch them either (even integration tests), because good tests require understanding the contract of the unit being tested.

CI catches all kinds of bugs.

LLM account

Seems pretty impressive that they rewrote the coreutils in a new language, with so little Unix experience, and managed to do such a good job with so few bugs or vulns. I would have expected at least an order of magnitude more.

Shows how good Rust is, that even inexperienced Unix devs can write stuff like this and make almost no mistakes.


Yes, it's the lack of Unix experience that's terrifying. So many of the mistakes listed are rookie mistakes, like not propagating the most severe errors, or the `kill -1` thing. Why were people who apparently did not have much experience using coreutils assigned to rewrite coreutils?
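
The error-propagation convention, for instance, is simple enough to sketch (hypothetical code, my own illustration of the GNU-style behavior):

    use std::fs;
    use std::process::ExitCode;

    // Coreutils-style error handling: report each failure, keep
    // processing the remaining operands, and exit nonzero if
    // anything at all went wrong.
    fn main() -> ExitCode {
        let mut status = ExitCode::SUCCESS;
        for path in std::env::args().skip(1) {
            if let Err(e) = fs::metadata(&path) {
                eprintln!("cannot stat '{}': {}", path, e);
                status = ExitCode::FAILURE; // remember the failure
            }
        }
        status
    }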

> Why were people who apparently did not have much experience using coreutils assigned to rewrite coreutils?

From what I understand, "assigned" probably isn't the best way to put it. uutils started off back in 2013 as a way to learn Rust [0] way before the present kerfuffle.

[0]: https://github.com/uutils/coreutils/tree/9653ed81a2fbf393f42...


Yeah, perhaps learning Unix APIs and Rust at the same time doesn't lead to a drop-in replacement ready to be shipped in major distributions. Who would have thunk it.

Strictly speaking it doesn't preclude eventually producing a production-ready drop-in replacement either, though evidently that needs a fresh set of eyes.

Exactly this. I wrote one of them back then as a learning experience. Some of the code I wrote is still intact, incredibly.

Why is it even possible to represent a negative PID, let alone treat the integer -1 as a PID meaning "all effective processes"? This seems like a mistake (if not a rookie mistake) in the Linux kernel API itself.

-1 is a special case, a way to represent a PID with all bits set in a platform-independent way. It's not very clean, and it dates from ancient times when writing some extra code and storing an extra few bytes was far more expensive.
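
A sketch of the dispatch rules as the kill(2) man pages give them (using the `libc` crate purely for illustration):

    fn main() {
        unsafe {
            libc::kill(1234, libc::SIGTERM);  // pid > 0: that one process
            libc::kill(0, libc::SIGTERM);     // pid == 0: caller's own process group
            libc::kill(-1, libc::SIGTERM);    // pid == -1: every process we may signal
            libc::kill(-1234, libc::SIGTERM); // pid < -1: process group 1234
        }
    }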

No, -1 is simply the process group with pgid 1:

https://stackoverflow.com/questions/392022/whats-the-best-wa...

The problem is that -DIGIT doubles as both a signal number and a process group. The right way to invoke kill for a process group, however, is "kill [OPTS]... -- -PGID".


It feels a bit like a "better is better" language hitting all of the quirks of a "worse is better" environment.

Pretty much all the rough edges being discussed here are design mistakes in Linux or Unix, and/or a consequence of using an unsafe language with limited abstractions and a weak type system. But because of ubiquity, this is everyone’s problem now.

You are right, but those who set for themselves the goal of substituting for a Linux/UNIX package must implement programs that correctly handle all the quirks of the existing Linux/POSIX specifications.

If they do not like the design mistakes, great: they should set for themselves the goal of writing a new operating system together with all its base applications, with all these mistakes corrected.

As long as they have not chosen the second goal, but the first, they are constrained by the existing interfaces and they must use them correctly, no matter how inconvenient that may be.

Anyone who learns English may be frustrated by many design mistakes of English, but they must still use English as it is spoken by the natives, otherwise they will not be understood.


Rewriting perfectly good code was a colossal mistake.

Not necessarily, but was the reasoning sound, and have the tradeoffs been weighed? The website (https://uutils.github.io/) gives some reasonable "why"s, although I disagree with "Rust is more appealing" being a compelling reason, but that's just me (disclaimer: I don't like C and don't know Rust, so take this comment as you will). What I think is missing is how they will ensure both compatibility and security / edge-case handling, which requires deep knowledge of and experience with the original code, plus "tribal knowledge" of deep *nix internals.

I do wonder whether people got far enough down the article to see the list of bugs patched in GNU coreutils.

That "perfectly good code" that it sounds like no one should question included "split --line-bytes has a user controlled heap buffer overflow".


Yes, perfectly good code can have bugs. It's ridiculous to scrap a codebase because it isn't bug-free, replacing it with one riddled with behavioral differences that break everything that uses it.

The irony here being that GNU's coreutils themselves originated as rewrites, from back when BSD's copyright status was still legally unclear.

Understandable as GNU was founded on software freedom. I guess one could argue that the Rust rewrite is to establish some kind of higher standard for correctness.

The article might disagree. See the subsection, "The importance of tacit knowledge". OTOH, if that tacit knowledge is indeed so critical, then there's less risk (e.g. regarding future investment incentives) to narrowing patent protections. OTOOH, ASML's supply chain is deep and complex, and the patent portfolio is presumably similarly diffuse, which makes it difficult to analyze or even, short of a complete patent regime overhaul, identify which patents to open up to accelerate adoption.

ASML's supply chain is deep and complex - and secret. But if it were F/OSS (just imagine it) from sand to chip, that complexity would have a wider scope of human attention applied to it.

What is happening with ASML now, once happened with the wheel.

Think about that.


Patents are supposed to be the antidote to industrial secrets. Of course, it doesn't really work out that way because, in addition to patent writers hiding the ball or strategically layering patents and secrecy, things like tacit knowledge and organization play a huge role in exploring, building, and applying solutions. FOSS doesn't really help with the tacit stuff. It's partly why it's so difficult for projects to survive after the original authors move on. With software that's not necessarily immediately fatal as long as the software works well and is easy enough to tweak around the edges to keep it compiling and interfacing well, qualities which FOSS is meant to foster and preserve. But outside software, and especially in the industrial sphere, the loss of that tacit knowledge and organization is often immediately fatal. You can't just copy stuff; you have to rebuild all that tacit knowledge and process. Oftentimes, as in software, the resulting product that nominally achieves the same results is built around an entirely different technical approach.

... all the more reason to support approaches which reduce dependency on an elite group of knowledge-holders.

A wider scope of attention yes. I'm not convinced there would be nearly as many multi-billion dollar research projects.

RFC 1855, Netiquette Guidelines[1], specifies underscores for underlining. However, it says asterisks are for emphasis, not bold per se. They just happened to (often?) display as bold because italics in terminals weren't a common thing. For the same reason, using /'s for italics didn't make much sense except maybe in word processors. I also suspect underscores became conflated with asterisks because some people preferred using the former for emphasis; people weren't usually trying to adhere to professional style guides, and some may have preferred underlining to impart emphasis, or just got into the habit without thinking about it.
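
If I remember the RFC's own examples right (quoting from memory, so treat this as approximate), they read:

    Use symbols for emphasis. *That* is how it is done.
    Use underscores for underlining. _War and Peace_ is my favorite book.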

I don't know how well RFC 1855 reflected common practice, though. It might be worthwhile to check the rendering code in clients like tin and mutt.

[1] https://datatracker.ietf.org/doc/html/rfc1855


In California, cops, family members of cops, and related personnel (e.g. police union officials) can get a special insignia on their license. So when they're pulled over and asked to present their license....

The FOP (Fraternal Order of Police). Also a thing in NY and NJ.

Fun facts... the insignia you put on your license or on your car also has a thing like a registration tab. The FOP says it's "to show your ongoing support"; everyone else with a room-temperature IQ knows it's "to show you're 'paid up' on your protection money for the year".

Oh, and some enterprising souls have created "counterfeit" FOP insignia, stickers, and other regalia (including those tabs) and sold them on eBay... only to have the weight of the police union's attorneys come down on them with cease-and-desists, etc.

Biggest gang, etc.


It's probably for the better that they were taken down. In California, and perhaps NY and NJ too, the status shows up on your DMV records, so when a cop runs your license or your plate (and I presume plates are scanned and run automatically), they'll see the discrepancy immediately. So someone using fake stickers is just asking for trouble, like flashing a gang sign when you're not actually a member.

That in itself blows my mind. Why on earth should anyone see your membership in this order? It's not an LE agency, and in many states the FOP allows membership for retired cops.

I do agree with what you're saying, but the issue to me is why that's even something that should show up when your plates are run: "Oh, you're a cop somewhere, or used to be".


I don't remember if the DMV status is actually FOP, or something else, but I knew a lawyer who worked with a police union who had this status. But that's just icing on the cake compared to stuff like https://en.wikipedia.org/wiki/Law_Enforcement_Officers%27_Bi...

I have a friend who's a union leader (as in actually runs a sizeable union) and, in the eyes of most people, a straight-up socialist. He convinced me public sector unions are a horrible idea precisely because of the above. I had known about the above, but I always had trouble squaring my support for the right to unionize with the problems with public sector unions. He basically gave me permission to call a spade a spade.


What about the federal register of LEOs who have been terminated, or who resigned to avoid termination? Very useful concept for transparency...

... but the police unions that represent approximately 70% of the nation's police have negotiated it into their CBAs that this register "cannot be used for hiring or promotional purposes".


I think the FOP stickers are quite bad, but it's obviously not a "protection racket"; virtually nobody around here has them, and for a protection scheme to work there has to be some pressure to buy in.

I think you're generalizing too much. Rural communities take gun safety seriously. Farming communities take farm-equipment safety seriously. Kids grow up internalizing the seriousness of these things, which is communicated expressly and tacitly their whole lives by countless people around them, including their friends. Plus they encounter walking examples of what can go wrong: a missing finger, burn scars (from not being careful around bonfires or burn pits), or bullet holes (I knew at least 2 or 3 kids growing up with scars from shot). But put those same kids or adults, careful as they are with those machines, in a similarly dangerous but novel situation, and they'll do dumb shit like anyone else. I'm tempted to argue they're more likely to do something dumb, because they have a false confidence from their experience with other dangerous situations, whereas suburban and city kids may be more likely to be too scared to play around with any dangerous machine or situation.

I lived on a farm for a year as a young kid (the farmer rented out a couple of trailers on his land). I remember one day I was hanging around the hog pen watching the giant hogs mill about, probably contemplating trying to pet one. Mr Austin came by and sternly told me not to reach through the fencing, then knelt down and showed me his ear, which was missing a big chunk.


On the flip side, plenty of rural and suburban people are terrified by the city, which kids growing up in the city shrug off.

Rural folks might learn to respect a PTO or the varmint rifle by age 10, but city kids learn how to navigate the bus routes and the subway. They learn how to walk crowded streets and how to live among a lot of different people, including dangerous people (and how to avoid conflict).

It's all quite interesting. Different kinds of toughness, different kinds of mental fortitude.


I think that there's a major difference in the resulting mindsets that the two types of experiences form, though.

The first learn that nature is always present and doing its best to kill you / wreck your harvest, and that it is only through man's intelligence and social bonds that we thrive. I would argue a corollary of this is that one cannot tolerate malicious or grossly neglectful people around.

The second group learns that other people are a liability and that bad actors are just a fact of life to be tolerated and worked around.

Both approaches are clearly optimal for their respective environment. The former seems like a stronger foundation for building a civilization on, though.


This is becoming such a weird romanticisation of rural Americana!

Your civilisation is being destroyed because a largely rural constituency can clean a rifle in 60 seconds but appears to have no critical-thinking skills when it comes to a certain New Yorker.

Yes, it's good to learn how to be resilient in nature, but it's also important to learn how to get along with, and manage relationships within, larger groups who are not always to be trusted.

The point missing from this discussion is that, because of hysteria over stranger danger (not borne out by any real evaluation of, or change in, risk) and because we allow cars to dominate our urban spaces, city kids are being denied opportunities for independence they previously had. That's the real change that's happened... and we're replacing real urban experience with corporate attention economies.


City kids can get on the bus or urban rail in actual big cities. Even in places like the urban Philippines or Mexico, where there is [often] no public transport, colectivos fill this niche. Kids abound in these places, even somewhere like Manila where traffic is far worse and far more homicidal, and they take the jeepney to the next barangay.

It's really mainly in the suburbs that neighborhoods are choked off by bike-unfriendly freeways with no for-hire transit.


> The first learn that nature is always present and doing its best to kill you

> The second group learns that other people are a liability

Sounds like nature is simply survival + entropy, and sometimes that leads to mixed incentives. Rural folks also understand that people are dangerous: per-capita violent crime and murder are higher in rural areas.

That's why I find it interesting, they're different expressions of common survival needs.


San Francisco doesn't have alleys either, any more than NYC does. In older buildings, including older apartment buildings, trash cans are kept under stairways, in service rooms, in ground-level hallways, or, for single-family homes, in garages or backyards, then wheeled out to the sidewalk the night before collection day, blocking pedestrians. Then the garbage men have to roll those bins into the street, maneuvering around parked cars, etc. NYC doesn't have trash cans because New Yorkers perennially chose to keep throwing their trash on the ground like they always had. Blame unions, blame habituation, but you can't blame NYC's architecture and layout; nothing about it is unique compared to other cities, globally or even nationally.

Chicago has alleys. Trash goes in the alley. Streets smell nice.

NYC has no alleys. Trash goes on the sidewalk. Streets smell stinky.

