This is the effect of "every vulnerability is a bug" and "we can't rate the severity of any vulnerabilities".
Which very clearly results in "bugfixes" (security patches) not making it everywhere in time, because it's simply ridiculous to ask each downstream consumer to rate the severity of everything on their own. It's easy to shit on CVEs, and yes, some people file shit CVEs, but the critics contribute absolutely nothing towards providing a better alternative.
It's quite clear that both the Linux project and the Linux CNA need to take some responsibility and put in some effort towards communicating better and making triage easier.
They can't. Linux has too high a profile. Any additional "in group" that had access to embargoed critical security information would have a much higher chance of being compromised.
The solution is not to tell more people that patch xxxxxx is a critical security bugfix that needs distros to roll new kernel versions immediately.
Major vendors (all the cloud providers) have security teams that can mitigate a bug within minutes of being notified.
For everyone else...
Part of the solution is that distros need to stop believing their distro kernel branches are any better than linux-stable, use linux-stable instead, and engage with the linux-stable list and patchsets if they're concerned about what goes into them.
Part of the solution is that each distro needs a process for pushing critical updates (module blacklists, eBPF patches) to address things like this without forcing all distro users to reboot, which many won't do promptly anyway.
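A minimal sketch of what such a push could look like on the receiving machine, assuming the fix is to keep a vulnerable kernel module from loading; "vulnmod" is a made-up name for illustration, not a real mitigation:

    # Prevent the module from loading from now on
    printf 'blacklist vulnmod\ninstall vulnmod /bin/false\n' \
        >/etc/modprobe.d/blacklist-vulnmod.conf
    # Unload it from the running kernel, if loaded, without rebooting
    modprobe -r vulnmod 2>/dev/null || true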
These things were caught, and basically none of them were covered by any test suite (not even GNU coreutils' own). It's a bit bold to claim that it's actively making things worse when it's not an LTS.
Tons of Unix scripting bugs come from the fact that data and metadata are interspersed in the same stream (you can mitigate this somewhat with stderr vs. stdout, but hardly anyone does). A classic example is trying to handle arbitrary filenames coming out of * expansions.
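A minimal sketch of that failure mode, with contrived filenames: a filename is data, but the shell happily delivers it into the option channel.

    touch -- '-rf' 'innocent.txt'
    rm *          # the glob expands this to: rm -rf innocent.txt
    rm -- *       # safe: '--' ends option parsing (or use: rm ./*)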
It’s a bit more annoying to deal with sometimes, but for actual scripts it’s far more robust.
xargs is one of the programs that is designed to work around the original issue.
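For example (the pattern and paths here are illustrative), NUL-delimited streams keep arbitrary filenames intact end to end, and xargs splits the arguments across as many invocations as needed, so ARG_MAX is never exceeded:

    find . -name '*.log' -print0 | xargs -0 grep -l 'ERROR'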
Yes, structured data between scripts and programs. No xargs, tee, awk, sed, grep mangling. No "argument list too long" errors.
So many problems are avoided, but at the same time the Windows ecosystem is just so far from providing a properly usable terminal experience. Things are still really not designed to be used from PowerShell.
> but they'll still hold the vast swathes of average white collar workers hostage with Office, people that don't care at all about technology as long as they have Word and Excel.
I can't wait for the anti-trust lawsuits. M365 and O365 are already super shady when it comes to migrating out or interoperating with other solutions. "Accidental" roadblocks almost everywhere.
Basically, Microsoft furiously bribed its way into formally standardizing the utterly broken MS Office formats so the EU and potentially other regulators couldn't mandate interoperability with existing standards (e.g. OpenDocument, based on OpenOffice's format, which was standardized the normal way, with no fast-tracking and no bribery). They even called theirs "Office Open XML" to foster confusion.
They can do whatever they want and get away with it because a big part of their business model, much like Oracle's and SAP's, is based on bribing government bodies across the world.
Yes, but this time there’s the additional driving force of countries trying to become more self-reliant and avoid getting locked into US software giants (France and Germany, for example). There’s a long way to go, but it’s gaining more traction than the past half-assed attempts.
> See also: modern practices and sanitizers and tools and test frameworks to avoid writing memory errors in C, and the reality that we keep writing memory errors in C.
I think there's a real difference between how trivial some of these things are to detect and how difficult others are. IDOR and SQLi aren't nearly as complex as C memory unsafety.
My understanding of the context is that the author is no longer using Emacs and is very excited about the productivity gains from AI.
My experience with LLMs is that they make generating the code the quick part. It may be reasonable to spend much more time specifying things up front (rather than emergently, as you would by hand). I mean, if you've got a well-crafted description of what you want, you'll get a working program MUCH quicker with an LLM today than by writing it out by hand.
Would it really be surprising/shocking if an LLM were able to rewrite (most of) the features of an existing piece of software into a new one?
It seems like the reality today is that we've gone from one maintained piece of software in a niche ecosystem with happy users to a more fragmented landscape where everyone has an LLM write their own half-baked version.
Probably because it's closer to a reimplementation than anything else, and in Emacs you can use libraries with much less friction than in self-contained languages.
I also suspect it allows easier consolidation: moving from a deprecated lib to a new (and better) one, for example.
Implementations will likely homogenize a bit as well, but on the other hand, boy am I glad not to see an ever-growing pile of bizarre, naïve, hand-rolled implementations of some things.