Hacker News | Intralexical's comments

Not only is the study testing something which only vaguely resembles how doctors diagnose patients, but isolated accuracy percentages are also a terrible way to measure healthcare quality.

If 90% of patients have a cold, and 10% have metastatic aneuristic super-boneitis, then you can get 90% accuracy by saying every patient has a cold. I would expect a probabilistic token-prediction machine to be good at that. But hopefully, you can see why a human doctor might accept scoring a lower accuracy percentage, if it means they follow up with more tests that catch the 10% boneitis.
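The arithmetic behind that base-rate trap is easy to check. A toy sketch (the 90/10 split is the hypothetical from above, not anything from the study): a classifier that always predicts "cold" scores 90% accuracy while catching exactly zero of the rare cases.

```python
# Toy population: 90 patients with a cold, 10 with the rare condition.
patients = ["cold"] * 90 + ["rare"] * 10

# The degenerate "doctor" that diagnoses everyone with a cold.
predictions = ["cold"] * len(patients)

# Overall accuracy looks great...
accuracy = sum(p == t for p, t in zip(predictions, patients)) / len(patients)

# ...but sensitivity for the rare condition is the number that matters.
caught = sum(1 for p, t in zip(predictions, patients) if t == "rare" and p == "rare")
sensitivity = caught / 10

print(accuracy)     # 0.9
print(sensitivity)  # 0.0
```

This is why isolated accuracy percentages say so little: the 90%-accurate strategy is also the one that misses every cannot-miss diagnosis.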


What percentage of patients have blood clots in their lungs and a history of lupus, like the article described? That's not on the same level as a common cold at all.

> One experiment focused on 76 patients who arrived at the emergency room of a Boston hospital.

> In one case in the Harvard study, a patient presented with a blood clot to the lungs and worsening symptoms.

That's a single anecdotal fluke from the study, which is misleadingly used to represent the headlining percentages.

If you read the linked paper, it says the LLMs did not outperform any group of doctors in the most important cases:

> The median proportion of cannot-miss diagnoses included for o1-preview was 0.92 [interquartile range (IQR) 0.62 to 1.0], although this was not significantly higher than GPT-4, attending physicians, or residents.

And again, the bigger issue is that skimming nurse's notes and predicting the next tokens, as the study made the doctors do, is not how doctors diagnose medical conditions.


But that's not what I was responding to. "Oh, all of the cases are probably just common colds, so it just guessed cold and was right by sheer luck" is not what happened in the article.

Do you know how examples work? Or methodology? The claim I made is that statistical accuracy percentage ≠ healthcare outcomes, and you will mislead yourself in dangerous ways if you believe a headline that implies they're interchangeable. Not that the model literally guessed common colds when the patients had... boneitis...

The lupus anecdote on its own is irrelevant to whether the statistics are being interpreted in valid ways or not. Also, I said nothing about luck.


Does it still work, though?

Where else would you put the repository domains?


I would put them into a configuration file. You know, so people can configure which repositories are being searched.

Generally I advise against hardcoding stuff that changes often and may need to be adjusted for different users or organizations.


The search APIs are separate from the repository URLs, and the different distros' APIs need to be parsed in different ways. And before you ask, the search APIs have to be separate from the repositories, if you don't want to waste disk, network, and time keeping hundreds of local index files up-to-date every week.

They can't just be "configured" by changing a URL. I guess maybe you could self-host the search page for some of the distros, and reuse the parser, but are people really doing that? Otherwise, you'd have to write new code to parse the results, at which point you might as well soft-fork the script anyway.
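To make the coupling concrete, here's a hypothetical sketch (not the script's actual code; the URLs are real search pages, the parsers are stand-ins): each distro's search URL only means something alongside the parser for that distro's result format, which is why a bare URL in a config file buys you nothing.

```python
# Hypothetical sketch: a search URL is useless without the parser
# that understands that site's result markup, so the two are bound
# together in code rather than exposed in a user-editable config.
def parse_debian(html: str) -> list[str]:
    # packages.debian.org results have their own markup...
    return [line for line in html.splitlines() if "package" in line]

def parse_arch(html: str) -> list[str]:
    # ...and archlinux.org/packages formats results differently.
    return [line for line in html.splitlines() if "pkg" in line]

DISTROS = {
    "debian": ("https://packages.debian.org/search?keywords=", parse_debian),
    "arch":   ("https://archlinux.org/packages/?q=", parse_arch),
}

def search(distro: str, term: str, fetch) -> list[str]:
    """fetch is any callable mapping a URL to the response body."""
    url, parser = DISTROS[distro]
    return parser(fetch(url + term))
```

Swapping a URL in that table without also swapping the parser just breaks the lookup, which is the point: the "configuration" is really code.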

> Generally I advise against hardcoding stuff that changes often and may need to be adjusted for different users or organizations.

YAGNI. And if your org does need it for some reason, you're probably better off running something specifically tailored to your own needs instead of whatever implementation makes it into the script.

The whole script's only 1,300 lines. Would spending 150 lines on configuration and littering the user's dotfiles be worth it? Now what happens if the configuration's missing/corrupted? When you update the script, do you keep the old dotfile that might be using a deprecated API, or do you replace the old configuration and clobber any customization the user's done? Oops, there go another 1,000 lines, on edge cases, option flags, conf merging, warning messages... And good luck getting bug reporters to explain their configuration changes!

Also, this stuff doesn't "change often". The distros literally can't change it often, because doing so might break LTS stability. I know it's fun to point out perceived flaws in other people's work, but in this case, the URLs are tightly bound to the parsing logic, which is the right place to put them IMO.


Are you asking if this tool can find something on Ubuntu 26.04 when the URLs it has were hardcoded 11 years ago?

The URL to search for packages in Ubuntu for example hasn't changed to my knowledge. Are you assuming it's only looking for packages in releases that were current at the time?

The site it hardcodes is https://packages.ubuntu.com, so yes, I would expect it to work fine.
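The key detail is that the hardcoded base URL doesn't pin a release, so newer releases are covered for free. A minimal sketch of what such a query URL looks like (parameter names are assumed from packages.ubuntu.com's search form, not taken from the script):

```python
# Build a packages.ubuntu.com search URL. No release is baked into
# the base URL, so the same endpoint covers releases that didn't
# exist when the URL was hardcoded.
from urllib.parse import urlencode

BASE = "https://packages.ubuntu.com/search"
query = urlencode({"keywords": "bash", "searchon": "names", "suite": "all"})
print(f"{BASE}?{query}")
# https://packages.ubuntu.com/search?keywords=bash&searchon=names&suite=all
```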

In about a hundred or so separate microservices, of course…

I like the idea of fuel cells, but hydrogen's going to have an image problem as soon as people see the failure mode, if it's just being stored as H2 in compressed tanks. Liquid fossil fuels and electric batteries burn with a gradual flame. Hydrogen suddenly detonates, with a supersonic, shattering shockwave, if it's mishandled.

Even with Cold War money, Lockheed's famed Kelly Johnson couldn't make the logistics work for the CL-400.


Copying my own comment below, with GH links and my (non-AI) summary after skimming:

> https://github.com/zed-industries/zed/issues/7054

> https://github.com/zed-industries/zed/issues/12589

> TL;DR: Mix of language tooling, unsigned proprietary blobs, corrupted and/or GLIBC-dependent files, redundant copies of already-installed executables. The Node packages especially are able to run scripts on install. Personal preference aside, might also create issues with security laws, certifications. All without user consent.

> Issues opened in January and June 2024. They've been rejected, closed, and opened a couple times since then. No changes directly improving this yet as of April 2026.

So... If you want broad language support via LSP servers, then you're going to have to bring in other ecosystems, and Node/Typescript is a big one that doesn't always have alternatives. [0] That's not a Zed-specific problem.

IMO the real issue with Zed is the "runs them by default without asking" part. Plus the questionable practices with binary blobs and the cavalier attitude in the discussions, when I can just use an editor that... Doesn't do any of that.

[0] https://microsoft.github.io/language-server-protocol/impleme...


What are they doing with proprietary binary blobs? I thought it's open source.

If you need an education in law to be able to trust a business isn't trying to steal from you, then maybe you just shouldn't trust that business at all.

Especially for something like a code editor, where plenty of less-shady competitors are available.


> Especially for something like a code editor, where plenty of less-shady competitors are available.

On what basis are you claiming Zed is shady? I seek evidence, not feels.

If you don't understand the contract language, it seems rather presumptuous to make that kind of claim. See what I mean?

If you want to make a _relative_ claim, then I have to ask: have you read the licenses of VS Code, JetBrains, Cursor, WindSurf?


https://github.com/zed-industries/zed/issues/7054

https://github.com/zed-industries/zed/issues/12589

TL;DR: Mix of language tooling, unsigned proprietary blobs, corrupted and/or GLIBC-dependent files, redundant copies of already-installed executables. The Node packages especially are able to run scripts on install. Personal preference aside, might also create issues with security laws, certifications. All without user consent.

Issues opened in January and June 2024. They've been rejected, closed, and opened a couple times since then. No changes directly improving this yet as of April 2026.
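The install-script vector mentioned above is a standard npm mechanism, not something Zed invented: any dependency can declare lifecycle scripts that run arbitrary commands at install time. A hypothetical package.json illustrating the shape (the package and script names are made up):

```json
{
  "name": "some-language-server",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node scripts/download-platform-binary.js"
  }
}
```

So when an editor fetches Node-based language tooling automatically, it can end up executing code like that `postinstall` hook without the user ever being asked.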

Personally, I think even if they eventually fix this, given the attitude shown towards their users' machines, I should probably just use an editor where I don't have to worry about it.


Asbestos... Lead, CFCs, mercury, cadmium, radium, petroleum, DDT, BPA, microplastics, PFAS, organophosphates, pyrethrins... The more wonder materials turn out to be devastating for human health or environmental stability, the more I think maybe the "no (synthetic) chemicals" crowd have a point.

Or rather, that maybe we're learning the wrong lesson each time. Maybe instead of "asbestos is bad" or "DDT is bad", the real lesson should have been "biological and ecological systems are incredibly fragile outside of the exact combination of environmental conditions and chemical inputs they've specifically evolved to handle".

Too much complexity, too many delicate mechanisms and feedback loops. Can't afford to keep playing whack-a-mole, every generation we replace the old poisons and add some new ones. If we keep introducing new molecules and quantities of substances that evolution hasn't had a chance to adapt to, then we shouldn't be surprised that we keep breaking things.

But let's not pretend we don't use pesticides for a reason. People gotta eat, and pyrethrins are already an improvement AFAIU, less toxic to mammals, similar to molecules that exist in nature. But still, a cudgel. Maybe we need to take ecological engineering seriously, control pest species by simultaneously cultivating stable ecosystems of insectivores/predators and hyperparasites, poison spray not required...


Small but important nitpick: I think most commonly their worldview was already built, and would have been the same regardless. Milgram just provides a veneer of legitimacy, losing which would cause problems for them.


6 of the 7 "replications" mentioned in that Wikipedia section are literally TV shows and performance artists.

...Which is a good metaphor for the "experiment" as a whole.


Actually, all of them are BS. There are no records of the experiment in Australia. I would guess it's just a hoax by the author of "Behind the Shock Machine"; if not, it still certainly doesn't count as a replication.


It should have been rejected from the outset. What Milgram did in his experiments was nothing less than construct an elaborate setup so he could psychologically torture dozens of well-meaning people. The ethical violation was already recognized at the time, and given that, nothing else he claims about method or implications can be trusted.

