sdwr's comments | Hacker News

Same as it ever was

And the EU wants to insulate itself from Russia with friendly, ideologically-compatible countries. Can't put a price tag on safety

That also works the other way around: Eastern European countries wanted to join the EU (ok, more importantly NATO, but also the EU) to make sure they never ever again slid into Russia's "sphere of influence". Notwithstanding certain populist EU-skeptic right wing parties that don't seem to mind that anymore (some would say because they are financed by Russia), that's generally still true...

Emergency medicine is the coding of medicine. Fast feedback loop, requires broad rather than deep judgement, concrete next steps.

The AI coding improvement should be partially transferable to other disciplines without recreating the training environment that made it possible in the first place. The model itself has learned what correct solutions "feel like", and the training process and meta-knowledge must have improved a huge amount.


I would argue that the ED is the least similar to code. You have the most unknowns, unreliable data and history, non-deterministic options, and time constraints.

ER staff are frequently making inferences based on a variety of things like the weather, what the pt is wearing, what smells are present, and a whole lot of other intangibles. Frequently the patients are just outright lying to the doctor. An AI will not pick up on any of that.


> An AI will not pick up on any of that.

It will if it trains on data like that. It's all about the training data.


Unfortunately the training data is absolute garbage.

Diagnostic standards in (at least emergency, but I think other specialties) medicine are largely a joke -- ultimately it's often either autopsy or "expert consensus."

We get to bill more for more serious diagnoses. The number of patients I see with a "stroke" or "heart attack" diagnosis that clearly had no such thing is truly wild.

We can be sued for tens of millions of dollars for missing a serious diagnosis, even if we know an alternative explanation is more likely.

If AI is able to beat an average doctor, it will be due to alleviating perverse incentives. But I can't imagine where we could get training data that would let it be any less of a fountain of garbage than many doctors.

Without a large amount of good training data, how could AI possibly be good at doctoring IRL?


You just get 1M doctors to wear body cams for a year. Now you have a model that has thousands of times your experience with patients, encyclopedic knowledge of every ailment including ones that never present in your geography, has read all the latest papers, etc.

I don't understand how you think this doesn't win vs a human doctor.


This wouldn't solve the problem of diagnostic standards. Let's say you are a pediatrician and want to predict which kids with bronchiolitis will develop respiratory failure and need the ICU versus the ones who can go home. How do you determine from the body cams which kids had bronchiolitis in the first place? Bronchiolitis is a clinical diagnosis with symptoms that overlap with other respiratory illnesses such as asthma, bacterial pneumonia, croup, foreign body ingestion, etc.

You would have footage of the doctors diagnosing them. I don't understand what you're asking. The body cams have microphones too, in case that wasn't clear.

In healthcare, HIPAA/GDPR equivalents would block this. Let's be realistic in our discussion; this is not the same as Google buying up a library's worth of books, scanning them, and destroying them.

There are other countries, and the patients in them all have similar data

Other countries actually don't necessarily have a similar mix of ailments, median patient appearance and style of communication or even recommended course of action and most of the ones with more sophisticated medical care also have strict medical privacy laws. If you're genuinely unaware of this, I'm not sure you're in a position to be making "one year with a camera, how hard can it be" arguments...

(Where AI is likely to actually excel in medicine is parsing datasets that are much easier to do context free number crunching on than ER rooms, some of which physicians don't even have access to ...)


I think you're being silly if you think the amount of money at stake here, not to mention the health of billions of people, is going to be stymied by privacy laws.

Similar data?!

We have wildly heterogeneous data just within the US!

And again, how exactly is this interface going to work? How does the AI determine how hard to press on an abdomen, and where, and how does it press there once it has that information?


How is training on bad data going to give you better results than the current system?

What kind of embedding helps the AI learn to do a physical exam?

Not to mention patient privacy, I can't even take a still photo of a patient in my current system (even with a hospital-owned camera).


The user will be adversarial and will probably learn new tricks to fool the machine; this is not solvable (only) via training data.

We have that expression: "garbage in, garbage out."

My sense is that doctors and AI would be doing a lot better if they were just doing medicine, not being a contact surface for failures of housing, mental health and addiction services, and social systems. Drug seeking and the rest should be non-issues, but drug seekers are informed and adaptive adversaries.


To give this more credit than it perhaps deserves: training aside, getting the situational data into the context is a more significant problem here.

Pt's chart is complex/wrong? Gotta ingest that into context.

Chart contains images, or scanned text that hasn't been OCR'd? Gotta do an image recognition pass.

Diagnosis needs to know what the pt's wearing (e.g. a radiation badge)? Gotta do an image recognition pass.

Diagnosis needs to know what the weather's like? Internet API access of some kind. Hope the WAN/API are all working! If they're not, do you fail open or closed? (A rough sketch of that choice is at the end of this comment.)

Patient might be lying? Gotta do video/audio analysis to assess that likelihood--oh, and train a model that fully solves one of the holy grails of computer vision/audio analysis reliably and with a super low false-positive rate before you do. And if it guesses wrong, enjoy the incredibly easy-to-prosecute lawsuit.

Patient might be lying, but the biggest clue is e.g. smell of alcohol on their breath? Now you need some sort of olfactory sensor kit and training for it--a lot more than just "low quality body cam and a mic".

Patient's ODing on a street drug that became abundant in the last few months? Gotta somehow learn about recent local medical/police history that post-dates the training set, or else you might be pouring gas on a fire if you give them Narcan. And that's assuming you know enough to search for information about that drug, and that they didn't lie to you about what they took. Addicts never do that.

Failures in each of those systems bring down the chance of an effective diagnosis, so they need a fairly obsessive amount of model introspection/thinking/double-checking, and humans on standby as a fallback if the AI's less than confident (assuming that LLMs can be given a sense of a confidence level in the future, versus the current state of the art of "text-predict a guess about what your confidence level might be").

Put that all together, and even with the AI compute speed available years from now and a perfectly trained futuristic model that's preternaturally good at this stuff, I'm not sure that the reliability and, more importantly, the turnaround time of that diagnostic pass are going to be any good compared to a human ER doc.
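
To make the "fail open or closed" question above concrete, here's a minimal sketch of just the weather-lookup dependency. It's hypothetical end to end -- the endpoint, function names, and escalation path are all made up to illustrate the design choice, not any real integration:

    import requests  # the external HTTP dependency is the whole point of the example

    FAIL_OPEN = False  # the design choice: proceed without the data, or refuse and escalate

    def fetch_weather(lat, lon, timeout_s=2.0):
        """Best-effort lookup of current conditions; any failure returns None."""
        try:
            # Hypothetical endpoint; a real deployment would use whatever service is licensed.
            resp = requests.get(
                "https://weather.example/v1/current",
                params={"lat": lat, "lon": lon},
                timeout=timeout_s,
            )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            return None

    def weather_context(lat, lon):
        data = fetch_weather(lat, lon)
        if data is not None:
            return {"weather": data, "source": "api"}
        if FAIL_OPEN:
            # Fail open: the diagnostic pass continues, but the model is told the field is missing.
            return {"weather": None, "source": "unavailable"}
        # Fail closed: stop and hand the case to a human rather than guess.
        raise RuntimeError("weather lookup failed; escalate to a clinician")

Fail open risks the model quietly reasoning without a signal it was supposed to have; fail closed turns every WAN hiccup into a human interruption. Neither is free, and that's just one of the ingestion paths listed above.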


Yeah, fractal means you see the same structure, or an equally complex structure, at smaller scales. This is just details; there's no sustained complexity.

Fractal means that something has a fractional spatial dimension, i.e., a fractal curve in the plane can have a dimension somewhere between 1 and 2.
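
For self-similar shapes there's a standard way to compute that number, the similarity dimension (textbook formula, not something from the comment above): if a shape breaks into N copies of itself, each scaled down by a factor r, then

\[
D = \frac{\log N}{\log(1/r)}
\]

The Koch curve, built from 4 copies at 1/3 scale, gets D = log 4 / log 3 ≈ 1.26, i.e. strictly between 1 and 2.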

So poor people and historical people are in the same bucket as dogs, got it.

How lucky we are to be the only generation in history capable of thinking and reflecting!


Economy of scale: Lego can invest the billions(?) in machines and molds that don't leave connection points(?), partly by reusing pieces between sets.


This is a peak midwit, popsci, plausible-sounding but foundationally-bereft article.


Yeah it's fascinating to have an alternate source for intelligence, feels like a mental Rosetta Stone


Recklessness is based on effort, likelihood, and consequence. If you live in a small town, you might not lock your front door. No matter where you live, you probably don't lock your second floor windows.
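
One back-of-the-envelope way to frame that (my framing, not the commenter's): a precaution is worth the trouble roughly when

\[
p \cdot C > E
\]

where p is the likelihood of an attempt, C the consequence if it succeeds, and E the effort of the precaution. Second-floor windows have a p so low that almost no E clears the bar, which is why nobody bothers locking them.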


Are we making enough of an effort, though? The AI era invites us to get our shit together as well. We are all guilty of it, but we must also understand that if you live in an area with a high crime rate, you adapt and lock your door. The same must apply online, now that we will have 24/7 rogue agents whose sole purpose is ransoms and attacks of all kinds.


Is it healthy? Maybe every company is a profit-maximizer wearing a skin suit, and people support their siblings exactly four times as much as their cousins.

When you slice down to the game-theory-optimal bone, you are, in some sense, cutting off their wiggle room to do anything else


I take your point, but the AI race is a strange environment. We see wild claims being thrown out all the time from other companies and executives with little to no evidence. It's cut-throat; there's a ton of money at stake.

All I'm saying is that Anthropic isn't unique here. Their claims may be more measured by comparison and come with anecdotal evidence, but the hype is still there behind the scenes.

