Hacker News

> to eliminate "useless eaters"

It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin to saying that a random number generator can make decisions and replace people.

It's just an automation tool, and just like all automation tools before it, it will create more jobs than it destroys. All the CEOs' talk about labor replacement is a fuss, a pile of lies to justify layoffs and a worsening financial situation.

[1] https://www.pcmag.com/news/meta-security-researchers-opencla...




People have this misconception that first it was one way, then <tech was released>, and one day they'll wake up and suddenly it's another. It's a slow creep. 10 years ago there were 5 of us on a team, each responsible for something specific. Now I can do all of that. Teams and companies will downsize. How do you see AI creating more jobs? (I need some hope right now lol).

My hope is that there is a sort of Cambrian explosion of small software projects built by people who have absolutely no clue what they're doing. Many such projects will go nowhere, but some percentage of them will see success and growth. My second hope is that there will always come some threshold of complexity beyond which AI cannot effectively iterate on a project without (at minimum) the prompting of an expert in the field.

The combination of these two things could lead to a situation where there is a massive, startup-dominated market for engineers who can take projects from 0.5 to 1, as well as for consulting companies or services that help founders to do the same.

Another pair of hopes is that a) the LLM systems plateau at a level where any use on complex or important projects requires expert knowledge and prompting, and b) that because of this, the hype of using them to replace engineers dies down. This would hopefully lead to a situation where they are treated like any other tool in our toolbox. Then, just like no one forces me to use emacs or vim (despite the fact that they unambiguously help me to be at least 2x more productive), no one will force me to use LLMs just for the sake of it.


> My hope is that there is a sort of Cambrian explosion of small software projects built by people who have absolutely no clue what they're doing

It doesn’t even have to be people with no idea what they’re doing. If you lay off enough smart people from big tech companies, those people might put together small companies that directly compete with larger ones at a fraction of the cost.


Lots of small companies pumping out software to solve problems. What will happen is companies will switch to their software, get burnt, and then never again move off established large core tech. Similar to what we see today, but far worse.

These small companies will only be able to sell through basically scam marketing.


Why? That seems like a huge leap. It's not even uncommon for small companies to make better products and take customers from bigger ones. That's how a lot of companies get started.

That all implies that AI won't get (much) better anymore.

I doubt that


It's made it cheaper to do whatever it is you did, therefore the demand for it will go up. It's somewhat of an open question where the new equilibrium is. Historically that can go either way. We have fewer farmers than we once did because there's a limit to how much food people will eat. But we probably don't have fewer carpenters as a result of power saws and nail guns. We probably have more, because the demand to build things out of wood is effectively unbounded.

Massive job loss from AI requires one of two things: actual human-equivalent AGI or no increase in demand.

Focusing on option 2 and software development, teams and companies will only downsize if the demand for software doesn't increase: make the same amount of stuff you do now, but with fewer people.

What I think will happen is that enough companies will choose to do things that they couldn't afford or that weren't possible without AI (and new companies will be created to do the same) to offset the ones that choose to cut costs, and actually increase the number of people making software.

I am pretty sure these are well known economic ideas but I don’t know the specific terminology for it.


> no increase in demand.

We are already hitting the limits of demand in many areas of life. The fundamental currency that is not growing is human attention.

Sure, now you can be a musician and use AI to help you make an album in a weekend. Great. So can a million other people. Who's going to listen to them? Everyone is already inundated with more music than they could ever listen to in a lifetime.

Now someone who's never written a line of code can vibe code an app and upload it to an app store. Great. So can a million other people. Who's going to install those apps? When was the last time you found yourself thinking, "I wish I had more unmaintained apps on my phone!"?

Now someone who aspires to be a "writer" but lacks the willpower to craft sentences can throw a couple of bullet points at an AI and get a thousand word article out. Great, so can a million other people. Who wants to read more AI slop text on the web? There are already a million self-published authors whose books never get read. That's not going to get better when there are a billion of them.

All of us, every single one of us, is already drowning in information overload and is stressed out because of it. The last thing any of us want is more stuff to pay attention to. All of this AI generated stuff will just be thrown into the void and ignored by most.


Also, I said it way before LLMs, when X started firing people. Software is more or less in maintenance mode. You don't need that many developers anymore, nor so many new features.

You don't need to create the next Facebook, Shopify, X etc.... Because it already exists and controls the market.


There are more options:

Mass unemployment, consolidation of all AI-related benefits in the hands of a few, an increase in demand that doesn't outpace the loss of employment, an increase in capabilities (not AGI) that means a few chosen people can do most things without hiring other people, etc.


If there is mass unemployment, who is going to buy anything from anyone? The "few" don't need or want us to be scraping in the dirt. They want us spending lots of money on their products, so their wealth increases.

I know it is the classic sci-fi dystopia where somehow despite endless advances in tech and automation, the masses can't figure out how to make it work for themselves and end up living in shanty towns on top of each other waiting for gifts from the elite, or scraping in dirt outside the cities, but come on... I just don't see that as being credible.


> If there is mass unemployment, who is going to buy anything from anyone? The "few" don't need or want us to be scraping in the dirt.

> They want us spending lots of money on their products, so their wealth increases.

If we're considering scifi scenarios, imagine this: if full blown automation of everything is achieved, why would the "haves" need the "have-nots" buying anything at all? Why would they need them to exist, at all? Think about it. It's an extreme and we're not near it... yet.

> despite endless advances in tech and automation, the masses can't figure out how to make it work for themselves

If the tech (or the really helpful tech) is guarded behind a lock, and they don't hold a key, it's not a matter of figuring things out. Unless by figuring out you mean revolt?


> If we're considering scifi scenarios, imagine this: if full blown automation of everything is achieved, why would the "haves" need the "have-nots" buying anything at all? Why would they need them to exist, at all? Think about it. It's an extreme and we're not near it... yet.

So we reach this post scarcity society, where everyone could be living a life of luxury, but this whole group of "haves" as you call them (who would they be?), somehow form this uniform view that they just don't want 99.9% of other people around and let them all die off while they guard themselves in gated cities or something.

It just makes no sense at all to me. Like in a sci-fi novel or movie where it is a plot requirement, ok, but in reality, I just cannot see the path and all the things required to get to that particular reality. So many ways it would work out differently.


80% of “serious” discussion on contemporary LLMs is no better than sci-fi. Worse, even, because it’s by the readers and not the writers, who ostensibly made some effort to make their works realistic.

I'll add to this that 80% of any discussion of LLMs is instigated by CEOs of AI companies, and they themselves seem to believe scifi is a real-world education.

So yes, it's a bunch of scifi-addled selfish amateurs guiding and predicting the future. The AI people.

(Remember the "do not build the Torment Nexus" meme? It has a point).


> So we reach this post scarcity society

A full automation society, where the implied post scarcity is not necessarily for everyone. Maybe it needs most of the population not to exist in order for the few to enjoy the lack of scarcity. Resources aren't infinite, but greed is.

I mean, resources and wealth could be far better distributed right now, no need for AI, yet most times this is attempted the wealthy fight tooth and nail against it, even though the impact for them would be very small. What makes you think having AI will magically make them better people?

> [...] this whole group of "haves" as you call them (who would they be?) somehow form this uniform view that they just don't want 99.9% of other people around

A uniform view on this matter is easier to achieve by an extremely small subset of people.

And really, do you need to ask "who are they"? I mean, the billionaires and owners of concentrated capital of the world?

> I just cannot see the path and all the things required to get to that particular reality.

You cannot see a path from unchecked capitalism and extreme concentration of capital, via total automation, to this particular reality?

It sounds like a failure of imagination. I see the people at the top as lying sociopaths, and I have no trouble believing this.


Powerful people like to wield power over others. They want the masses to exist specifically so that they can feel superior and exercise their authority over others. They simply want the masses to be forever below them.

This is actually an argument I find convincing. The powerful need the less powerful to exist, because otherwise in relation to whom would they be powerful? Who would show them they are powerful?

But even then, how many of the others would they need to exist?


> It sounds like a failure of imagination.

I see it as the opposite. Doomerism is the easy path. It takes no imagination to repeat doomer memes and sci-fi dystopian tropes without articulating exactly how we get there. I think what is far more likely is that as these tools proliferate, we continue on the path we've always taken: some discomfort, probably negative impact on some, but ultimately a better life when measured at the median. I don't see a way the billionaires take all power away from 99.999% of the rest of humanity without literally murdering them. And why would they want to murder them? It's much easier to just let everyone benefit.


> Doomerism is the easy path. It takes no imagination to repeat doomer memes and sci-fi dystopian tropes, without articulating exactly how we get there.

It's not "doomerism" because there is a call to action, impractical as it may seem. TFA is stating one possible, if flawed course of action. There may be others. Doomerism just cries "the comet is coming, end your lives now!". Also, if you're honest, there is some articulation of how this may come to be, it's just that nobody is an oracle and the particulars are shifting.

> I don't see a way the billionaires take all power away from 99.999% of the rest of humanity without literally murdering them. And why would they want to murder them?

They don't need to actively murder them, they just need to restrict access to resources required for living (maybe made worse by the climate crisis) and this would alone cull the population "naturally".

Imagine a world of full, total automation of everything. The rich always needed the less rich to work for them, make things for them, pick up raw materials for them, take care of them, even be their security forces. But all of this would be unneeded with an inexhaustible force of robot labor [1]. This is one of my worries if they ever go all-in with the automation of the military... who will be there to have a crisis of conscience if given immoral orders? We're not there yet, but this is something to ponder.

> It's much easier to just let everyone benefit.

There are things right now that would be easy to do that do not get done. And in any case, I don't think anybody is arguing about what would be easier? Also, before you say it: who cares if it's self-destructive? There's a current subset of rich people who don't care if we're destroying the planet, presumably they don't care that much about their children or their children's children. Or maybe they hand wave it away, "someone, somehow, will take care of this problem in the future".

----

[1] a funny tangent, obligatory Bob the Angry Flower: https://www.angryflower.com/atlass.gif


I just object to your reasoning on so many levels. I regard it as the current zeitgeist of anti-capitalism. Just lazy blame.

We are objectively living in the best times of human history, ever. The global median person in the world is much better off than their predecessors.

Is wealth inequality growing? Yes! This makes people angry. Does that automatically extrapolate to billionaires will murder people (actively or inactively) simply because they can?

A resounding, emphatic, NO. It doesn't extrapolate to that.

What will almost certainly happen is the same as every other time. The technology will disrupt, cause short term pain for some, but ultimately become just another commodity and push up the standard of living for the median person. Billionaires will continue to be billionaires, normal people will adjust, we'll find out ways to put human productivity to use, life will go on.


> What will almost certainly happen is the same as every other time.

This is what seems to me like a failure of imagination. As I said, I envision other possible and even likely futures. I'm not an oracle so I don't guarantee them, I'm just saying we should be aware of those possible futures, and if possible do something about them.

There's no inevitability of progress. That's just wishful thinking.

I respect that you come from a different ideological perspective, but don't disregard mine as lazy. Chalking this up to "lazy anti-capitalism" is, in itself, a lazy position to adopt.


Like you said, it is a failure of imagination. When someone says, "the billionaires and trillionaires won't need anyone else," the dystopian scenario is not necessarily "therefore other people won't exist, or will eventually become extinct or be killed"; it's that other people will be outright enslaved. With all the torture and suffering that entails. You know, the dystopian scenario that is more in line with centuries of recorded human history... The point is the rich won't need to listen to anyone else.

Why on earth would billionaires want to do this?!

It is complete dystopian fantasy.


They don't just wake up one day and want to do this. They fear losing their power and want and try to maintain it at considerable cost to others due to that fear. The dynamics of society become such that the power imbalance and wealth inequality continues to increase, until eventually the threshold to something that is indistinguishable from slavery is passed.

Edit: By the way, just the other day the Trump admin trotted out a Doordash grandma in front of the cameras and asked her what she thought of trans women in sports. This grandma is doing doordash to pay off the medical debt of the cancer treatment of her dead husband because the US of A does not provide the minimum healthcare befitting of the richest country on Earth. We are already living in a dystopian fantasy.


Eh, nope.

We’ve had economies where rich people lived in one economy and everyone else lived in a different one. Class mobility was poor.

Take the current K-shaped economy, where the majority of retail spending comes from rich people, not from the majority of the population.


> How do you see AI creating more jobs? (I need some hope right now lol).

More armies of one. That single team of five now becomes 30 teams of one or two each.

Which is largely how automation resulted in more jobs: the cost decrease induced demand. Think about how cheap cameras, laptops, and internet up-ended traditional media. We went from 3-4 channels on TV in the 60s, to 300-400 channels on cable by the 90s, to 115 million channels on YouTube right now. Because anyone with a basic phone can record and edit content, which used to require millions of dollars in equipment and took years to learn to do. And people are happy to do so for a fraction of the revenue a TV station would require.


A few hundred years ago it took a team of 5 plus draft animals to plough a field. Now one guy with a tractor can do it. Some teams and companies will downsize. New companies will appear doing things that we can't even imagine yet.

> New companies will appear doing things that we can't even imagine yet.

I read this take a lot but I don't buy it. This isn't guaranteed by any means. And even if it does happen, isn't it just as likely that AI is deployed into those companies too and they don't actually result in any job growth?


[flagged]


That's not the rebuke you think it is. You made a claim (not an original one, I've read it before), someone expressed doubts about your claim (which, if proven false, will have dire consequences), and you cannot wave it off with "there are no guarantees in life".

Sorry, you made a claim, there's good reason to believe your claim may not pan out, and if it doesn't the consequences are dire.


I don't think it's a rebuke. I'm just explaining the reality of the situation.

You said

> New companies will appear doing things that we can't even imagine yet

I have a really big imagination, so I will believe it when I see it. If you have any real idea what these new companies might be doing in the future then I'm all ears. But until then maybe stop trying to claim some kind of future knowledge based on some handwaved nonsense like "we can't even imagine what the future will look like"

And then trying to claim that's "the reality of the situation", please be serious

Edit: Maybe if you think the future is so unimaginable, you should take a look around at the present. Can you identify anything in our lives today that was not imagined by anyone in the past? Think about how, for every piece of technology ever made, nowadays someone can say "it's like the Torment Nexus from Famous Piece of Literature!"


This comment equates to saying “I don’t care what you think”, and is a perfect example of something that is literally never justified to say on a forum where you have no requirement to interact with them.

If you don’t care what individual people think then simply don’t talk to them.


No, I'm giving him actionable advice to improve his life. Take it or leave it.

Yes, if we were computers that could be reprogrammed to new skills.

In the old days change was slow enough that few people got displaced from jobs requiring any substantial skill (although there was local devastation: for example, court reporters.)

Now, however, we are seeing change happening faster than people's careers. You cannot realistically retrain into another high-skill job; you're going to be the last to be hired. (There's a good reason Social Security Disability has cutoffs at 50 and 60 for how much change can be required!) And, likewise, someone who has worked a desk for decades is not going to be hired for a physical job. (Assuming they even can do it. I can't think of any physical job that wouldn't have me in a lot of pain within weeks at the most.)


Are SWEs the farmers or the draft animals in this analogy?

The SWEs are the draft animals, to be put out to pasture in the AI future.

Here's some hope: You will figure it out. You are already a person who is very curious, competent and experienced with making stuff.

> It can't. It can't even deal with emails without randomly deleting your email folder [1].

And early cars were expensive, dangerous, highly unreliable, uncomfortable, belched foul exhaust, and required knowledge of how to drive AND maintain them. We are far, far from that scenario these days.


That's not proof that it will ever do those things in the future either, however.

We have no proof what it will do in the future. I'm just maintaining the car analogy theme.

> Saying that it can make decisions and replace humans is akin of saying that random number generator can make decisions and can replace people.

There are many studies concluding that for some tasks, experts make decisions that are no better than a dice roll, sometimes worse. So the game here is not to make good decisions, but to make a convincing argument. And it is something LLMs are really good at.

And it is ironic because it matches the job of a CEO pretty well. CEOs often make decisions with high uncertainty, the kind where it is hard to beat random, and they are expected to communicate with authority.


This is where the logic behind AI goes against conventional wisdom.

It doesn’t have to be effective. It has to make CEOs believe it is effective.


Ah, you mean, like lithium or Prozac?

A year ago it couldn't do tasks like this at all; what makes you believe it can progress only this far but no further?

Random number generators can't solve open math problems, but it looks like AI agents can? [1]

[1] https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...


> It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin of saying that random number generator can make decisions and can replace people.

I don't think the comment you're replying to is saying that an evil AI bot will kill people. They are saying something along the lines of: mass job loss doesn't bother the AI companies because in the AI-powered future they envision, population reduction is a positive side effect.



