> If you can't write it down, why would you expect it to be universal and enforceable?
and this is the problem. It used to be the case that if you were smart enough to find an exploit you were also smart enough to realise what would happen if you irresponsibly disclosed it. I guess these tools have made that pattern no longer apply.
From my point of view, they told the kernel security team which is in charge of fixing this. If it’s important for them to tell other people, then it should’ve been written down and further reiterated when they made their report.
The skills to detect code exploits are not the same as the skills to navigate an informal org chart to the satisfaction of an amorphous audience of end users (i.e. us on HN).
That said… as they are a company that supposedly specializes in this field, and is trying to sell a product, I do believe they should do better. Right now, I don’t have much confidence in their product.
and it's your opinion that it doesn't. Shall we continue stating the obvious? We are communicating using glyphs. This language is English. We are on Hacker News. This branch of the conversation is extremely unproductive.
I asked a question and you replied with a statement. Your statement didn’t frame itself as an opinion but as fact.
The hilarious bit is that the idea that they needed to coordinate is clearly broken even in just this example. They did give prior notice to the Linux developers, who issued a patch. And they’re still getting raked over the coals in this comment page by armchair quarterbacks who have decided they needed to coordinate with specific distros. If they’d coordinated with those distros, somebody would have a pet distro that didn’t make the cut and they’d be pissed about that.
There are risks no matter how they do it, and there will be people who are pissed no matter how they do it. Security researchers don’t owe anybody a specific methodology.
you seemed to suggest with your initial statement that any disclosure was acceptable because people would have been using the exploit prior to the disclosure. I don't think that's a strong argument, given that the people who were using the exploit prior to disclosure have now been joined by people who learned of it as a consequence of the disclosure happening before all the distributions were ready.
So I feel like the argument reduces into "why is it a problem that now anyone could exploit it, if some people were exploiting it already". Which imho isn't a sensible argument because the issue is clearly the amount of people capable of using the exploit for nefarious purposes, which has increased.
Idk why you felt the need to use quotes to wrap something I didn’t say, and that is a pretty uncharitable attempt at reframing my question. If you wanted a quote, here’s what I’d say:
“Because we can’t know if there was exploitation by existing parties who had discovered the vulnerability on their own, there are upsides to disclosing earlier so that affected users can take mitigating steps and review their systems for indicators of compromise. Additionally, the more projects the researchers pull into the loop for coordinated disclosure, the higher the likelihood that they further leak the vulnerability to more attackers.”
Idk why you felt the need to use quotes to wrap something I didn't say. Despite the fact I didn't say that, it's a much more interesting argument than your original statement implies, and it is unfortunate we didn't start there.
However, the issue is that we cannot know whether the attack space has been broadened or lessened as a consequence of this disclosure, because of how eager it was. If it hadn't been so eager, we could be much more comfortable in suggesting that the attack space has probably been reduced.
Given that the exploit had been living in the Linux code base undetected for so long, and given that the distributions are the principal attack vector of the exploit, I think it's fair to state that by disclosing before the distributions were ready the researcher has made the situation worse, and should reflect on their actions.
… I used quotes to wrap something that I was saying. I even called out that it was something I was saying, as a more accurate variant of what you’d claimed I meant.
and I prefaced my quotes with the statement "So I feel like the argument reduces into". I mean, idk what punctuation I'm supposed to use there that doesn't offend you, but I figured we can all read words and it was clear that I wasn't saying you said that, but rather that, as I read it, the argument was reducible to that, and I took issue with that potential reduction.
The idea about the available exploit space and how the actors within it might, or might not move is a much more interesting avenue of conversation and I thank you for elaborating on your initial comment. <3
I do however feel that it's hard to be confident about whether the attack space has been increased or reduced as a consequence of the eager disclosure. I feel we could make the case either way.
You could try to make that case either way, but as has been pointed out by others all over this thread, the system we've landed on (90/+30) is industry standard after over two and a half decades of experimentation.
Anything else is inevitably worse for the public good.
Having spent that entire time and then some on both offensive and defensive teams, I assure you longer delays after notification do NOT decrease the overall risk to the public.
There's a reason we've landed where we have as a security community.
it didn't have to be like this. If we had trusted NGOs with strong funding and a track record of independence and integrity they could shim between token generation and application. Allowing governments to produce identity tokens and applications to verify them with the shim blocking each side from knowing of the other.
I think it's specifically the opinionated within current generations who mostly hate it. Celebrity Love Island shows that plenty of younger generations are entirely content to consume its content. It's possible that most of the current generations would as well. Trash TV is extremely popular, and LLMs are well suited to produce that type of content.
Nicki Minaj has a factor of 10 more listens than a band like Tool. The money will pick the close-to-zero production cost for 9/10ths of the viewers every time, and the platforms will prioritise the many.
I very much doubt a boycott has enough weight in the long run to hold back generated content from taking over most of the bigger spaces. We've already seen this happen in recent years, with staged content mopping up a lot of the most-viewed slots by manipulating potential viewers. Cheap influencer content has similarly squeezed cultivated content's ad revenue through volume and consistency on YouTube.
If you want change then the route would have to be a legal one, not a social movement. Especially since we've mostly forgotten how to do the groundwork for social movements, leaving us all hand wringing and shouting into the void.
> Someone said it as a joke, but I want AI to be doing my dishes and sorting my laundry while I write books and compose music. I don't want AI writing books and composing music so I have more time to do my dishes and sort my laundry.
Well then we should maybe ask ourselves why reality TV gets more views than well-written work.
and you'll blow the context over time and send it to the LLM sanatorium. It doesn't fit like the human brain can.
If a junior fucks production, that will carry extraordinary weight because they appreciate the severity and the social shame, and they will have nightmares about it. If you write some negative prompt to "not destroy production", then you also need to define some sort of non-existent watertight memory-weighting system and specify it in great detail. Otherwise the LLM will treat that command as only as important as the last negative prompt you typed in, or ignore it when it conflicts with a more recent command.
> and you'll blow the context over time and send it to the LLM sanatorium. It doesn't fit like the human brain can.
The LLM did have this capability at training time, but weights are frozen at inference time. This is a big weakness in current transformer architectures.
> the employer can hire a new senior engineer at below market rates to accommodate the specific learning they have to do for their new role.
Money doesn't cleanly convert into time.
Having juniors and mid-levels is about being able to promote an existing mid-level that knows the team and the system, with zero downtime. It's much easier to replace a junior than a senior because of the lower expectations and risk.
Furthermore, a lot of companies are struggling to hire right now because the market conditions create a flood of applications, and it's quite hard to discern who is a waste of time, which leads to hiring processes taking longer.
> Furthermore, a lot of companies are struggling to hire right now because the market conditions create a flood of applications, and it's quite hard to discern who is a waste of time, which leads to hiring processes taking longer.
Hiring is the most important function in any company, full stop.
If they aren't good at hiring, well then they shouldn't be running a company. There are no excuses. If HR aren't up to the task, then they should be replaced, and so on up to the level of the CEO, until whatever incompetence has been flushed out. Shareholders have to demand this.
A company not being able to hire is just as ridiculous as a restaurant not being able to serve food.
If they are receiving a flood of applications which is hard to sift through, they are already doing everything wrong. Shareholders have to nuke these kinds of people.
> If they are receiving a flood of applications which is hard to sift through, they are already doing everything wrong. Shareholders have to nuke these kinds of people.
That's why some companies will contract with temp agencies for relief.
> Having juniors and mid-levels is about being able to promote an existing mid-level that knows the team and the system, with zero downtime. It's much easier to replace a junior than a senior.
Yeah, but the point of this post is that it assumes your company doesn't have mid-levels or juniors.
as knowledge is commoditized, the bar for junior rises; what was advanced math research a century ago is now undergrads' homework. I don't see why code is so special in that regard that it cannot progress beyond artisanship.
> This isn't a story about one bad agent or one bad API. It's about an entire industry building AI-agent integrations into production infrastructure faster than it's building the safety architecture to make those integrations safe.
Are they really so clueless that they cannot recognise that there is no guardrail to give an agent other than restricted tokens?
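To make the restricted-token point concrete, here is a minimal illustrative sketch (class and scope names are hypothetical, not any real vendor's API): the scope check lives in the service, outside the model, so no prompt injection can widen what the agent's token permits.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedToken:
    """A capability token whose scopes are fixed at issuance time."""
    scopes: frozenset  # e.g. frozenset({"repo:read"}) -- never "repo:delete"

class Service:
    """Enforcement happens server-side: the agent cannot talk its way past it."""
    def handle(self, token: ScopedToken, action: str) -> str:
        if action not in token.scopes:
            raise PermissionError(f"token not scoped for {action}")
        return f"performed {action}"
```

Whatever the agent hallucinates or is injected with, the worst it can do is bounded by the token it was handed, which is the whole point.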
Through this entire rant (which, by the way, they didn't even bother to fucking write themselves), they point blank refuse to acknowledge that they chose to hand the reins over to something that can never have guardrails, knowing full well that it can never have guardrails, and now they're trying to blame the supplier of the can't-have-guardrails product, complaining that the product that literally cannot have guardrails did not, in actual fact, have guardrails.
They get exactly the sympathy that I reserve for people who buy magic crystals and who then complain that they don't work. Of course they don't fucking work.
Now they're blaming their suppliers for not performing the impossible.