Hacker News
Do I belong in tech anymore? (ky.fyi)
168 points by patrikcsak 2 days ago | 129 comments



> The point of a code review is not simply for good code to make it into a codebase, but to build institutional knowledge as people debate and iterate and compromise, slow as it may be.

I feel like this is a very profound insight.

Of course processes like this can become about the immediate utility. Reviewing then becomes checking work so it can be merged and used.

But the process is more about us than the code. And we lose the deeper part when we only care about the superficial one.


There's a Dutch idiom: no shine without friction.

AI is all about losing every possible bit of friction, severely underestimating the value that friction brings.


> AI is all about losing every possible bit of friction, severely underestimating the value that friction brings.

And it's not just AI, the removal of friction seems to be pursued mindlessly in all areas, with not even an attempt to understand what value it might be providing.


It’s almost as though some people fall over themselves trying to achieve maximum possible speed without giving any thought to where they want to be heading.

It boils down to "Can we? vs. Should we?"

People in tech seem to almost NEVER consider "Should we?"


"No friction no shine" is easier to remember and get since it's analogous to "no pain no gain"

It shows that previously he likely worked only at companies which catered to him, honestly.

That was pretty widespread during 2005-2015, but it's been dropping extremely quickly now.

Developers are generally seen as replaceable cogs. Middle management loves to talk about "scaling" - by which they don't mean scaling as devs understand it, but multiplying headcount - because surely throwing n times as many devs at the same software will multiply the velocity by the same factor, amiright?

The biggest value you can get is by having a very small team of extremely capable people (with extremely high bus factor) being fully in control of everything they do.

Realistically speaking, that'd be impossible to "scale" from the perspective of an MBA, however, hence the industry at large doesn't do that.

You may notice that some employers do, however.

You're just unlikely to get a job there, because their team is already established.


For fuck's sake. Has no one read:

Brooks, Frederick P., Jr. The Mythical Man-Month: Essays on Software Engineering. Reading, Mass.: Addison-Wesley, 1982.

Those who ignore history are doomed to repeat it?


I'm on round 3 of arguing with my boss's LLM about a terrible PR he refuses to review manually. I can tell from the PR that it was 100% generated by Claude code because I've seen identical suggestions in PRs from juniors. But this man is my boss. He won't listen.

Honest to god, were the programming job market like 5% better than it is (so, y'know, years away) i would already have quit. I've been applying places but it's a slaughterhouse out there. I got ghosted after a fourth round interview at a non-tech company over the winter.

Shit sucks.

I'm immensely jealous of the author; i have savings as a safety net, but not enough to take a year off work. But this next year of my role is guaranteed to be hell and the last year of applying for jobs has not been better.


it's an extension of the principle that the purpose of writing code is to write it for your successor.

I'm beginning to think that the only reason code reviews still exist is that "all changes are reviewed before going into production" is probably a checkbox on some security certification checklist.

It’s for accountability. They still need a human to blame when it fails. Meanwhile, the message from management is: with AI, we expect 2x-3x delivery speed now.

Devs cannot reasonably be expected to have ownership over code that is generated overwhelmingly faster than it can be reviewed.

The way AI is being used feels like it is proving that, in many orgs, what has always mattered has been the appearance of work, not results of work. Will we wake up in a few years and find out we’ve fired all the doers and are now overloaded with the fakers?

I find that to be a very defeatist take. It always mattered how much value you provide to the business. Writing pretty code or arguing about some implementation detail never really mattered. If you are good at coming up with solutions to problems AI is just one additional tool in your toolbox and personally it allows me to do much more than before.

There were fakers before, and there will be fakers after.


> Writing pretty code or arguing about some implementation detail never really mattered.

True, in the same sense that sharpening your tools if you're a tradesman doesn't matter to your customers: what matters is that the job you deliver is good.

Making sure you put all electrical wiring in conduits rather than buried in plaster is not what most customers care about, but it will mean easier repairs and quicker improvements in the future.

Writing good (not necessarily "pretty") code and arguing about implementation details means you will have an easier time delivering your work, both now and in the future. You have a better chance of delivering code that can be maintained and understood by yourself and others, including the people who come after you.

Furthermore, when done right, these discussions leave a trail for understanding bugs and for code archeology: when, in the future, you're trying to understand how decisions were made and which tradeoffs were considered, that record can massively help refactors, rewrites and decisions to drop certain parts of the code base.

Of course, you can sharpen a tool too much or at the wrong angle, or you can make a mistake and fill up your conduits with plaster, but you stand a much better chance of ending with a better, cleaner, more maintainable and understandable product if you do practice those steps than if you skip them altogether.


Are you willing to wake up at 3 AM when that "valuable" AI-written code pages on-call?

I agree there is some value in AI tools, but implementation details do matter. People shouldn't be pushing unread code to prod. That's how you end up with security holes and other bugs. That's how you end up dropping millions of orders on Amazon.com.


I think the last ten-plus years have taught us that massive security breaches are mostly an insurance-claim problem plus some $4/mo credit-monitoring payouts.

And major corporations certainly don’t seem to care that much about leaving massive amounts of money on the table from jr level tech issues. I see it all the time. I mentioned a few from Walmart, Meta, and Amazon recently.

Everyone talks like these things matter, but the results say everyone is just playing pretend.


Excuse me? Amazon lost more money in one day than most companies have in revenue, from dropped orders. I would say that matters. Believe it or not, the systems we work on do things that matter in the real world.

Seems to be an instance of the prevention paradox: security (in general) is taken seriously enough that major incidents are rare enough that people think security doesn't matter that much.

I would too. I’m saying businesses don’t seem to. At least not like we assume.

The quality of our work is too subordinated to business leadership, which sees the forms of technical insurance we build into software development processes as fat, and is fundamentally opposed to doing things right. Besides solidarity, this is the major reason for tech workers to unionize. We won't, because we don't have any sense.

People pushed unread and buggy code to production long before AI.

There has been a shift toward software mass production over the last decade or two. AI is now speeding this process up dramatically. Most software will be produced with AI and "cog coders", similar to a production line in manufacturing.

Some few (good ones) will find niches and "hand craft" software, similar to today when you still can buy hand forged axes etc. Obviously the market for these products will be much smaller but it will exist.

If you love programming, you should try to get into the second category. Be a master craftsman.


Actually i think we will see a faker takeover and then a doer reconquest. All those leaving now take the recipe with them and are capable of cooking it elsewhere. Elsewhere being a place without AI management.

It feels like it but this is not true.

Imagine that you're given a business problem to solve. You represent the process of writing the code with a graph - each vertex is a git commit. We consider the space of all possible git commits, so the graph is infinite. All vertices are connected with directional edges, and each edge has a value, "cost". If you are at commit A and you want to go to commit B, you have to pay the cost from A to B. Your goal is to find a relatively short path from the empty git commit to any vertex whose code has some specific observable business properties.

You might notice that not everyone is equally smart, so when giving this task to real people, we'll associate "speed" with each person. The higher the speed, the lower the paid costs when traversing the graph. I'll leave the specifics vaguely undefined.

Since part of the task is to discover information about the graph, we also need to specify that every person has some kind of heuristic function that evaluates how likely a given node is to get you closer to some vertex that can be considered a goal. Obviously, smarter people have heuristic functions that are closer to ground truth, while stupid people are biased more toward random noise. This also models the fact that it takes knowledge to recognize what a correct solution is.

This model predicts what we intuitively think - smart specialists will quickly discover connections that take them towards the goal and pay low costs associated with them, while idiots will take the scenic route, but by and large will also eventually get to some vertex that satisfies the business requirements, even if it's a vertex that contains mostly low-quality code, because for idiots the cheap edges that seem good at first glance are the only edges they can realistically traverse.

Obviously, if you have a group of people working on the same task, you'll reach the business goal faster. Therefore, a group of people is equivalent to one person with higher speed, and some better heuristic.

This conclusion suddenly creates a well-known, but interesting situation - each smart specialist can be replaced by a group of idiots. Or, the way I heard it, "the theorem of interns - every senior can be replaced by a finite number of interns".

What AI does is it increases people's speed. Not the heuristic function, but the speed. Importantly, the better the heuristic function, the smaller the speed gains. Makes sense - an idiot who doesn't know shit and copy-pastes things from ChatGPT will have massive speed gains, while a specialist will only modestly benefit from AI.

From business perspective though, by having more idiots write more slop with more AI we traverse the graph significantly faster. Sure, we still take the scenic route, and maybe even with AI we take the really fucking long scenic route, but because the speed is so high, it doesn't matter.

And because AI supercharges idiots more than smart specialists, we have a situation where the skill of working with idiots is more valuable on the job market than the skill of doing your job right. Your goal isn't to find the shortest path, or the prettiest code, your goal is to prompt AI as quickly as possible to get you to any vertex that satisfies the business requirements.
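The model in this comment can be sketched as a small best-first search. This is a toy illustration, not anything from the thread: the graph, costs, and names are all made up. "Speed" divides every edge cost, and the heuristic slot is where skill (or noise) plugs in:

```python
import heapq

def find_path(edges, start, goal, speed=1.0, heuristic=None):
    """Best-first search over a commit graph.

    edges: dict mapping node -> list of (neighbor, cost) pairs.
    speed: divides every edge cost, modeling a faster worker.
    heuristic: optional estimate of remaining cost to the goal;
               a noisier heuristic models a less skilled worker.
    Returns (total paid cost, path) or None if the goal is unreachable.
    """
    h = heuristic or (lambda n: 0)  # no heuristic = plain Dijkstra
    frontier = [(h(start), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, paid, node, path = heapq.heappop(frontier)
        if node == goal:
            return paid, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in edges.get(node, []):
            if nxt not in seen:
                new_paid = paid + cost / speed
                heapq.heappush(frontier, (new_paid + h(nxt), new_paid, nxt, path + [nxt]))
    return None

# Toy graph: "slop" looks cheap up front but costs more overall.
edges = {
    "empty": [("slop", 1), ("clean", 5)],
    "slop": [("ships", 9)],
    "clean": [("ships", 1)],
}
```

On this toy graph, doubling `speed` halves the paid cost without changing which path wins; only a different heuristic (or different true costs) changes the route, which is the comment's point about speed vs. skill.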


Your graph model lacks the aspect of increasing complexity. As you traverse the graph, every available node gets increasingly more distant. In some areas of the graph less so than others; a good heuristic function not only identifies a single shortest path, but also dense areas of possible value in the graph.

The question is whether blind speed scales faster than distances grow.


That's true, and I guess the reason we're building so many datacenters is to answer the question of exactly how far blind speed will take us, assuming we fail to make substantial improvements to AI architecture.

Inshallah.

I work at MSFT and I feel burnt out too and am in a similar situation where I feel like resigning would be better for my mental health but AI isn’t a big contributing factor. I do have some arguments against speculative uses of AI though.

Experimenting with speculative uses is fine; technological breakthroughs require a lot of iterations, and some would naturally never make it. But with the enormous amounts of capex that companies are investing, these have to impact the top line and eventually the bottom line as well. I just don’t see that happening now; I could be wrong.

1. To me, speculative uses of AI like meeting-notes summarisers seem to add little value, if any. First off, most meetings are performative work, especially at big companies. Add to this, when someone just casually pastes the meeting notes from an AI summary and asks the meeting organiser to “pls check for correctness”, my blood just boils. Are we spending billions of dollars of capex for this?

2. Every team builds their own “agent” for diagnosing incidents, which is announced with huge fanfare, but people rarely end up using it irl.

3. Devs and PMs chasing “volume” of work. You prompt GPT about an issue and it is bound to give you pages of text that you can use to show how much output you can churn out. I have seen excessively verbose design docs that only the writer (and prompter) could understand, and all this was accepted because “Hey, I used AI for this, so it must be good”.

There are legit uses of AI, and I do have a $20 Claude subscription which I like and use, but at big companies they are shoving AI into every nook and cranny hoping it shows up in the top line and bottom line, and so far it doesn’t add up.

Lot of these uses are driven by fear, by repeated exhortations from upper management about shoving AI into every nook and cranny when they are just as much clueless as us. People’s mortgages, their children’s education and their retirement, in short their whole livelihoods are at stake even more so when companies will happily lay off workers without a second thought. So people have to use AI even when it adds questionable value, if at all.

I am not resistant to change and am not an AI Luddite. I am happy to use AI to become a better developer but most current use cases seem to add questionable value.


CEO see performative work happening as the cut is still not deep enough.

Can you add in the missing words that make this comment make sense, please?

CEO observes performative work, and his inference will be that means more people need to be fired. Let only the AI native, customer obsessed 10x engineers(/ AI swarm managers) remain.

Before LLMs, a friend of mine lamented that all the juniors at his gig were really fast at producing buggy code. The greater lament was that his bosses loved it. And as a dev, you're getting paid to do what your bosses want.

LLMs can really help you get what your bosses want a lot faster.

As an older dev, myself, I'd already been bitching about the state of software quality before all of this. Companies just didn't give a shit. Sure, people within them did, but as a whole companies will do the bare minimum to not lose your business (because that's what's best for the bottom line). Can't really fault them for their nature.[1]

And then I step back and look at something like Linux or GNU. Perfect and bug-free? Certainly not. But they're damn fine pieces of software. Many open-source projects have historically been damn fine pieces of software. Because they don't care if they lose your "business". They just want to build something cool that they can be proud of.[2]

It's why so many of us agonize over the details of the things we produce and give away for free. It might not even net us another user, but we have pride in our craft and want to do the best we possibly can.

But that way of thinking is a money loser, at least in the short-term. And companies live in the short-term.

So what's going to stop software from just collapsing into a massive pile of crap?

I don't know. Maybe it just has to get so bad that people start going to the marginally-better competition. Isn't exactly a great consolation to me, that.

[1] Small companies are often idealistic and try to do the Right Thing, admittedly. But big ones who tend to be market leaders tend to not.

[2] Insert the entire GNU philosophy here because I just glossed over it completely and I don't want to get called out on it. :)


Maybe this is obvious, but if my name were attached to well-known public OSS code, I'd definitely put forward more effort to ensure it's of as high quality as possible. Within a company you can write somewhat crappy code and have your close team members review it, and honestly there isn't much at stake as long as it works and you haven't done anything egregious. But yeah, you don't get promoted for following the style guide.

Hey OP, I quit my job and said "screw it" at the start of the Year for very similar reasons.

I had a "good" job, it was extremely stable and in the public sector, the work hypothetically mattered... I was miserable because it didn't matter. If I would have died in my study, the system would have happily churned on accomplishing nothing without me. There were so many many obstacles to accomplishing anything too, like I'm all about "perfect shouldn't be the enemy of good" - but hypothetically we should do something. I went on vacation in November and when I got back the latest ServiceNow update nuked a bunch of the changes I had worked for months trying to get done.

I quit at the start of the year and honestly, it's been great? Not fast, not suddenly lucrative, but I've been taking it slow. I'm literally building little vibe-engineered tools for local companies. I can now do what would have taken me a team to do by myself, it is paying (albeit slowly), it's fun, and I have time to do the things I care about in this life.

Don't work for the man. Your job cannot love you back, in fact, it actively hates you.


> I'm literally building little vibe-engineered tools for local companies.

Sounds interesting. Care to elaborate?


I have some of my old contacts from my prior life flying airplanes for a living. I started there because I know the field extremely well. These are my first customers so far.

The first thing was just some really simple stuff a bush airline I used to work for needed. Their software runs through a DB operated by another company, and they wanted a status board customers could view. That shouldn't be a huge lift, but the company that runs the enterprise software doesn't have time to build it.

I sent a series of emails, got permission to hit the API, and was able to connect things so now this little bush airline has a customer-facing schedule app and people don't call the office 30 times an hour to see if the flight is late or on time or early. Even in the middle of nowhere, if they have Wifi they can check the flight schedule on their phone. That has spread to "hey, do you think you could use this data to auto-populate flight and duty logs?" Yup, not a huge deal. Then onto the next one. Every month it seems I take on a new project for them and the scope of their tooling keeps growing, and the recurring costs I charge to maintain things are low enough that I'm worth it. There's a dashboard of data science stuff, then a compliance auditing tool, and the list of bespoke features that are critical to them continues to grow, and they continue to pay me. It's pretty cool.

This has led to another customer pinging me who wants me to work on an app for their factory floor to help their technicians. Nothing crazy, just a kind of wrapper over a USB tool they have and a CRUD app. 99% of the real work is going to be testing out like 30 different layouts and making sure it works properly in practice, but a big company would never bother to do this. I will go down to their factory this week, set up a computer, and talk with their technicians while I vibe code it out with Codex and draw process diagrams and think. 90% of it is really just thinking about what's a prudent choice.

The SaaS the first company is paying for is incredibly necessary to run their business; those guys will probably have their hooks into that operation for many more years because of the inertia to change, but there is tons of room to fix some of the little annoyances that not having bespoke custom software creates. Also, the software they are kind of locked into is tens of thousands of dollars a month. I reckon in the long run I'll end up trying to build a replacement for it entirely, then charging way less to give them exactly what they need.

Then there's the existential angst of vibe coding this stuff. The truth is, I could write all this code myself. It's mostly Python and JS, but it would take me a month to do what I can do in a week and I'd be working myself to the bone. Instead, this is more like an extremely fun part-time job that's growing in scope and pay but not growing in time required of me. Seriously, these tools are cool! It's like I have a team of idiot savants/interns working for me, but the entire company so far is literally just me and my wife (and she isn't really involved in the technical stuff at all). Codex is dumb and does not understand the use case at all, but good lord does it churn out boilerplate code that solves real engineering problems for customers. My job is largely playing "software plumber foreman" and making sure all the lego pieces fit together nicely and that they're good architectural choices.

For example, I was skimming the code base last week and noticed a ton of unused code from an early iteration. I spent a bunch of time pruning that as a human, then also had Codex refactor code smells I didn't like. "This file is ridiculous, it's like a monolith of 30 different concepts hammered into one place - refactor all this stuff and spread it out, move function X to a separate file, use a functional style", etc. Stuff like that is kind of mandatory; otherwise your codebase will give you a stroke, and it can grow to an extraordinary size that will hurt your ability to iterate because you'll be running into context-length issues. But the robot doesn't do too horrible a job.

I could write all of the code, but the customers don't care if it's written by a human or not? They just want it to work. So I spend a lot of the time coming up with test-cases, then interactively evaluating what the robot is building? Kind of like a really slow REPL? But I'm definitely less of an engineer and more of an architect now. That pains me a bit? But all things must come to an end.

One thing I'd say is important if you're going to do this... use the dumbest possible solution you can. You'll need to specify that to these tools otherwise they'll build you a cathedral? You probably do not need some monster system with 80 layers of abstraction. KISS is important.


Thank you for the detailed response! My background is Ruby on Rails web development, but I started vibe engineering add-ons to my wife's dental practice software lately. Tools to reconcile payments and open invoices, by reading straight from DBF database files of the ancient Windows desktop app. Windows tray apps cross-compiled in Go from my Mac. Things that would have taken me weeks to learn the boilerplate previously. Only possible since about December. Wild times.
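Reading those legacy DBF files is surprisingly tractable from the standard library alone. A minimal sketch of a reader for the classic dBASE III layout (simplified and illustrative: character/numeric fields only, no memo files or codepage handling; the offsets come from the published dBASE format, not from this comment):

```python
import struct

def read_dbf(data: bytes):
    """Minimal dBASE III reader: returns a list of row dicts.

    Header: record count (uint32 LE) at offset 4, header length and
    record length (uint16 LE) at offsets 8 and 10. Field descriptors
    are 32 bytes each starting at offset 32, terminated by 0x0D.
    """
    n_records, header_len, record_len = struct.unpack_from("<IHH", data, 4)
    fields, off = [], 32
    while data[off] != 0x0D:
        name = data[off:off + 11].split(b"\x00", 1)[0].decode("ascii")
        length = data[off + 16]          # field width in bytes
        fields.append((name, length))
        off += 32
    rows = []
    for i in range(n_records):
        rec = data[header_len + i * record_len: header_len + (i + 1) * record_len]
        if rec[:1] == b"*":              # '*' flags a deleted record
            continue
        pos, row = 1, {}                 # byte 0 is the deletion flag
        for name, length in fields:
            row[name] = rec[pos:pos + length].decode("ascii", "replace").strip()
            pos += length
        rows.append(row)
    return rows
```

From there, reconciling payments is mostly dict wrangling; a production version would want codepage and memo-file support, which third-party libraries like dbfread provide.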

Good luck!


Least I can do! And that's awesome! You could totally make money at that.

I think the key is to not think that you need to build some crazy big SaaS app that you're going to charge $1000 a month for or whatever for a product that isn't specifically tuned to the specific customer. That world is fundamentally dead outside of some niches, so your value comes in building some bespoke tools that EXACTLY solve a problem the customer has, and charging for maintenance on it.

If I can keep this going, I figure I'm physically capable of supporting about 5 to 10 small businesses with this strategy, which by the time the dust settles probably generates something around $3,000-$10,000 a month depending on how much they want, etc. And a lot of this is going to be recurring revenue to maintain various things and produce new creative solutions to problems, so even though I won't be clacking out the code on my keyboard as much, I'll be thinking about things much more than I was at my old job.

You could probably "scale" this model or whatever, but I honestly don't really want that? Everything that gets big like that turns to crap in my experience. So, I'm going to try to be a boutique and do a good job. The other thing I'm doing along the way is reinvesting in myself with education. If you pour your wallet into your head, nobody can ever take it away from you. This sort of "bespoke applications engineering" or whatever is going to grow in scope too.

Right now it's just software, but in a few years someone is going to need to set up the robots at Bob's grass cutting service or configure the drones for Steve's delivery service. That's going to be me.

After that though, the economy probably breaks? And I'm fine with that too - it'd be nice to step off the hamster wheel, but I can't yet.


I think a lot of people relate to this but kind of sit with it silently, for reasons the author mentioned:

“Would initiating these discussions result in interpersonal stress? Should I just let things slide? Would I become known as a “difficult” coworker for pushing back on AI use? Does any of it really matter? Does anyone really care?”


A lot of this is about knowledge debt if I’m reading it correctly (people not knowing things that they should know, or knowing the wrong things). In my last few jobs, I’ve maintained an Anki deck about facts relevant to my job (who certain people are, how certain systems work, details of the programming languages we use, etc.)

I’ve started kind of a funny rule, which is that when I make a change now, I can use Claude or not. But if I use Claude, some cards have to go into the deck. Both about how the implementation works, and also about anything within the implementation I didn’t know about. It does force you to double-check things before committing them to memory.


This resonates a lot with me.

Long breaks help. Take your mind off of things that bothered you. Do things you enjoy. Which may include tech work, but on your own terms.

I wouldn't be surprised if you decide to not go back. The status quo of most organizations is grim. But there are still people who care about the same things as you. You can seek them out and work together, much like you did 15 years ago. This is more difficult now among the noise, but you can tune that out. The industry will never recover altogether, but this current period is a blip of high insanity, which will subside in a few years.

Good luck!


Can definitely relate. It is no more complicated than I really enjoyed designing and writing code by hand, and get very little joy out of agentic processes. I use the tools and see the velocity increase, but it has just become… bland work. I completely get others’ excitement around the tools and the newfound “super powers”, but it hasn’t much resonated with me.

That’s ok! I was fascinated by coding when many others weren’t and found a great career as a result. A different cohort will love Development 2.0.


"The psychic toll of AI" -- It's sad, but each of these scenarios (barring the AI notetaker, which I haven't found to be an issue personally, but ymmv) is indicative more of the culture of the company than of the tool itself. From my experience it seems like the most frontier companies have the best AI-use culture.

I work at a very 'AI-pilled' company, but:

- Everyone reads and reviews every PR and leaves human comments

- Documentation is written well and tended to by humans

- There's no 'AI mandate'

- Whether a feature is possible is first explored by an agent, but then manually traced by a human through the codebase

You can treat AI like a very powerful tool to augment you and run your agent swarms at the same time.


Are there any companies that aren't AI-pilled at this point?

Odoo, Belgium, cloud ERP. Not very AI-pilled, even if AI is considered and used somewhat.

Odoo suffers from other issues though. Not sure if this is still the case, but the mix of inline Python 2 Flask + XML was basically tech-debt-as-a-service.

Also the very ugly death they gave OpenERP/Odoo on-premise.


It's Python 3, no Flask (but Werkzeug) and XML templates. It works for hundreds of thousands of clients, and you can install Odoo on-premise as you like. I'm 90% dedicated to that. So... explain the "tech debt" thing, as I don't get it. You don't need Rust or microservices for every use case. Don't be fooled by marketing-style "old technology" bias; set up an account. PostgreSQL with synchronous workers works perfectly for most people.

I am absolutely not a fan of "new style technology" as you might have understood.

I used to run Odoo on-premise for a small company about 3-4 years ago. The upgrade path (with the OpenUpgrade fork) was awful, many features (that WYSIWYG editor, Odoo Studio?) were locked to the cloud version, and there was little to no documentation. IIRC we even had to drop it because the delay between on-prem updates & cloud updates was too high.

And there were mentions of Flask in the logs, so no it wasn't just Werkzeug (which is synonymous with Flask since its inception, anyways).

I do not have fond memories of editing invoice templates blindly.


most are not, e.g. if your company has any of these you're probably not ai pilled

- mandatory ai usage

- ai usage tied to kpis or performance reviews

- trainings on how to use claude code

- restrictions on what tools you can use

- layoffs

- engineers still typing every line of code by hand


Wait, I don't get it. Some of those are a bit contradictory, and for others I don't see how they _don't_ mean your company is "AI pilled"?

sorry, i should've defined it better. my point of view is that an 'ai pilled' company is one that has a realistic understanding of the benefits and limitations of ai productivity, where leadership + employees are fully bought in, and there's a general high-trust environment

if ai has to be enforced (mandatory usage, kpis, training, restrictions on tools) -> clearly the execs think the employees are not bought in

typing every line by hand -> self explanatory

layoffs -> this one is a bit of a stretch, but from what i've seen the best companies at leveraging ai are not laying people off, instead continuing to hire more to capture the market or capitalize on the demand. could be confounding variables though


Thank you for writing this. I didn't realize it, but I feel a lot more of this than I thought.

Good article.

I want to zoom in on the rise of AI notetakers. AI that generates transcripts alongside recorded video that you can watch later? Amazing. I can catch up later and ping people async if I need more info; the videos are discoverable/shareable and anyone who needs to be in the know can be. AI notetakers that give you a summary and nothing else? Useless. These generate vague overviews and tend to miss small but key details.

I'd rather (and often do) take notes manually than turn on the notetaker.


I'd like for these tools to:

* Cut the video down into chapters, e.g.

    - 0:00 - Introductions
    - 3:30 - Joe gives a summary of the problem and shows diagrams
    - 7:52 - Kim asks clarification questions and introduces relevant infrastructural concepts
    - 10:25 - People waffling about unrelated stuff
    ...
* Put the video and the transcript in the same GUI, where I can scrub through the timeline, choose chapters, or click the transcript to be taken to the relevant part of the video.

* Bonus points if it highlights the relevant part of the summary as the video is playing.
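None of this needs heavy machinery once you have timestamped chapter boundaries. A minimal sketch in Python, with entirely hypothetical data and function names (a real tool would get the boundaries and titles from a topic-segmentation step over the transcript):

```python
# Hypothetical sketch: render a chapter index from (start_seconds, title)
# pairs, and look up which chapter is playing at a given time so the GUI
# can highlight the matching part of the summary/transcript.

def fmt(seconds):
    """Format a second count as M:SS for a chapter marker."""
    return f"{seconds // 60}:{seconds % 60:02d}"

def chapter_index(chapters):
    """chapters: list of (start_seconds, title), sorted by start time."""
    return "\n".join(f"{fmt(start)} - {title}" for start, title in chapters)

def chapter_at(chapters, t):
    """Return the title of the chapter playing at time t (for highlight sync)."""
    current = chapters[0][1]
    for start, title in chapters:
        if start <= t:
            current = title
        else:
            break
    return current

demo = [(0, "Introductions"),
        (210, "Joe summarizes the problem"),
        (472, "Kim asks clarification questions")]
print(chapter_index(demo))
print(chapter_at(demo, 300))  # -> Joe summarizes the problem
```

For the web player itself, WebVTT chapter cues on a `<track>` element give you the timeline/chapter navigation part for free; the sync highlighting is the only piece you'd have to wire up yourself.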


I see this as a temporary phase driven by AI hype.

In the long run, strong senior specialists — in design, development, and other IT fields — will likely be more valuable than ever. Meanwhile, those who rely entirely on AI without developing fundamentals may never reach that level.

AI isn’t really capable of creating truly complex solutions or top-tier UI/UX — it mostly recombines existing ideas.

So it’s probably better to focus on your craft and avoid burnout — that’s what will matter.


This report lists failures of some AI systems. They look consequential - but the company does not seem to care. This is very strange - how can it be? I really like AI products, they help me all the time - but I know I need to take their failure modes into account and be careful. Lots of organisations don't seem to do that calculation. Will competition root them out? I don't know - I am so enthusiastic about AI - but ever since the LangChain situation I can see that what gets adopted is always something that has a lot of flaws. The more careful developers who notice the flaws and try to find true workarounds fail, because it takes time to do the design well. It is not a new thing - there were Betamax mourners for decades - but it seems the hype machine is now more and more powerful.

Which "LangChain situation" are you talking about? Anything specific, or just everything that's happened in the past year or so?

What I meant was how LangChain dominated the LLM frameworks scene because it was loaded with VC money. That was just at the beginning - it has since normalised - but I believe it did a lot of damage at that early stage by sucking up all the oxygen.

Obviously the author's experience is a nightmare but what was this place like pre-AI? I have a hard time believing people who are this willing to hand over all of their thinking to LLMs were doing anything productive beforehand.

I think you must be right to _some_ degree. The article illustrates that this org doesn’t know why they are doing certain things.

But there's something psychologically powerful happening in the interaction with AI. I think we overestimate our ability to be rational and underestimate how easily influenced we are.


I want to focus on the "colleagues submit thousands of AI generated lines of code for review" comment.

Humanity developed code and programming languages for people. They are supposed to provide sufficient expressiveness, so that we people can understand what is happening, and zero ambiguity, so that the machine can perform its instructions.

Computer code has also been a way for us people to communicate our intentions to one another (what we intend the machine to do). Otherwise, we would still be writing in assembler.

But now, computers are generating code, A LOT of code. So much, that it's becoming more and more difficult to stay on top with our verbose languages.

We will need to develop a better way for the computers to a) produce the instructions to perform the tasks we give them, and b) produce reports, or some other accessible way for us people to understand and share what those instructions are doing.


I'm asking myself the same question for a different reason: nobody will even interview me. I've been out of work for a while. Savings are running out. I apparently don't even know how to look for a job anymore.

Yeah. Got word I was being laid off in November. Officially because of restructuring, but after having had some conversations it's clear I've been replaced by a junior with a Claude subscription.

20 years coding experience. Gone through the sweaty junior years, senior, founding engineer, CTO (and back to software Engineering again because it's my preference) -- and now I can't even get an interview with a human.

Due to unfortunate life events my savings are now all but gone and I don't even know how if I will be able to keep a roof over our heads. It's messed up.

If anyone is hiring, send me a message. I'm an EU citizen but have residency in and work out of Mexico.


Use AI to mass-apply to all available job postings. It's a numbers game.

The era of anyone interested in programming for fun being able to make upper 10% incomes is drawing to a close. You'll unfortunately have to join the rest of us who work for money and program for fun. I suggest engineering (the real kind, not software 'engineering')

Unfortunately, I have a visual-spatial processing disability. You don't want me near anything mechanical, and I can't do visualization-based tasks because I literally can't visualize. That eliminates most engineering jobs.

There's also the matter of going back to school, and the associated debt I'd have to take. I'd never be able to pay the loans off if I did that.


Electrical engineering doesn't need much in the way of mechanical aptitude, has a substantial overlap with what you already know depending on specialization, and might not have as much new schooling required as you would think.

Something like industrial controls engineering might be right up your alley.


The best way to find out: just start. You’ll improve along the way. Questions like this (and anxiety) are best fixed by action.

I mean, I am. How else would I know nobody wants to interview me? :)

Fair enough :) wasn’t clear to me from your first comment. It’s definitely pretty tough out there right now.

It was completely clear from the first comment, which is why yours was so clearly unhelpful.

Yes absolutely. It’s even scientifically proven to be helpful advice.

Look up the work by Seligman et al. on resilience.


When someone says “no one will interview me” this is a pretty unhelpful response.

My response is probably controversial. But I genuinely think it’s generally helpful advice. Ofc I don’t have any other information than the comment about this person.

You literally said they should do something.

Yes exactly. I stand by that advice. What’s the alternative? Do nothing?

So you advise that they do not need to change their approach at all, since they’re already doing something: posting on hacker news.

Ok, so comments like this are helpful then?

Yes, by explaining to you why yours was bad and hopefully preventing future such comments.

I have no advice to offer, I only wish you good luck. I am still lucky enough to be employed, but when this whole parade ends, I have no idea what comes next - my only skill is programming and related knowledge work. I think the only path forward is to try to jump ship to another white or blue collar industry…

I thought along those lines as well. The only thing I could come up with that would be semi-viable was medical school, and I'm not sure I'd survive residency. I definitely would never be able to pay back the debt, if I had to take any.

[flagged]


Bay Area, 9 YoE primarily backend, US citizen. I'm familiar with AI coding tools. I've done real work on real systems.

What is your experience in? The company I work for is constantly hiring

Huh, weird that you can't reply.

I'm tailoring my resume to individual postings a good portion of the time. My "default" resume is by ChatGPT; it's essentially my human-written resume, jazzed up a bit for ATS-friendliness. There are no hallucinations in it, and I feel it accurately represents my experience.


> Huh, weird that you can't reply.

It happens to many, it's happened to me three times so far - the mods rate limit (only X comments per Y time period) people who have been flagged, judged, and found to be a bit prone to get in rapid back n forth exchanges that have crossed guidelines.

It can generally be reversed on request via hn email, sometimes it's a blessing, sometimes it's not even something that impacts a user very often unless they find themselves in an interesting exchange.


By hand written, I think he means something like a letter written by hand, or anyway sent via post. Not "chatgpt that is basically handwritten"

Nope. I mean text created by a human not an LLM.

At this conversation depth there is no reply button here, but you can open the comment by clicking the time ("8 hours ago") and then reply.

Most of the job openings for humans are remote and not in big tech, but the pay in absolute terms is significantly lower (same wage percentile for the area you live though).

It's important to understand the world beyond your bubble. If those jobs seem unrealistic as an option, you may need to consider if your cost of living is unrealistic.


I'm fine with "not big tech," along with a "not big tech" salary. In fact, I prefer "not big tech." My cost of living is not absurd for the Bay Area. I'd even be willing to take a little less than what I made before. After all, less than before is still better than 0. I'm using AI to tailor my resume to every posting, and still not getting calls.

You’ve got nine years of experience, so work your network and get referrals. It’s very hard to get mid-career jobs through the front door; most people want someone they trust to vouch for you.

I've tried that. They don't have anything for me.

> not absurd for the Bay Area

Yeah I was implying you might need to move to optimize for cost of living, but I don't know your situation and am not really asking. It's actually surprising sometimes to hear how long this took to affect some tech workers. You're lucky it's now that housing prices have stabilized (everyone else has stopped moving), and not a few years ago.

Remote work doesn't necessarily mean you aren't still tethered to some radius. Otherwise I'd be living in Monaco or something haha.


> Most of the job openings for humans are remote and not in big tech

where do you find these?


While I certainly relate to some of your points, and I'm not an AI maximalist by any means, a few thoughts:

> You join a meeting with a coworker. Your coworker has enabled an AI tool to automatically take notes and summarize the meeting. They do not ask for consent to turn it on. The tool mischaracterizes what you discuss.

Asking for consent to what is more or less meeting transcription (already enabled, presumably) seems a little odd. If you don't like it, why not just talk to the coworker and ask them not to use it? Offer to take notes yourself, perhaps.

> A team lead adds an AI chatbot to a Slack channel. Anyone can tag the bot to answer questions about the company’s products. Coworkers tag the chatbot many times a day. You never see someone check that the bot’s responses are correct.

Why would that happen in the Slack channel? Presumably you'd be googling it or reading documentation to do this, not posting in the channel.

> An engineer adds 12,000 lines of code affecting your app’s authentication. They ask that it be reviewed and merged same-day. Another engineer enlists a “swarm” of AI agents to review the code. The code merges with no one having read the full set of changes.

This is an insanely reckless thing to do with or without AI. If this actually happened at your company...I think there were deeper issues than overuse of AI.

> One of your pull requests has been open for a few days. You ask other engineers to leave a code review. Minutes later, an engineer pastes a review that was generated by an AI tool. There are no additional thoughts of their own.

Again, I think you should communicate with your coworkers on this. Possibly even bring it up in 1 on 1s with your manager. Not "I want to discourage use of AI" but "copying and pasting AI responses shows a lack of respect for others' time" and "lack of due diligence," show a horror story of an AI deleting someone's PROD database, etc. it's a useful but imperfect tool, not a replacement for thought.


@arcfour - you are absolutely correct and you will be PIP'd and pushed out if you try this in FAANG company today, where *everything* is about AI.

The worst part so far has been that some people have Claude write tickets and don't check what the very detailed piece-of-crap ticket says. Just tell me the few pieces of true knowledge you have, rather than a full page of AI slop with multiple errors in it that cause me to waste hours trying to figure out what's true.

i never got along with tickets, anyway.

No comment on the ethics; however, I think when people's instincts to survive kick in, many of these larger goals get sidelined. There's a growing belief that it's now or never as far as accumulating wealth, securing a house, etc. go because people think once AGI comes their chances of having the lives they want will diminish. The bay area has only gotten more expensive to live in, and that's where all of the AI folks are, so no surprise.

I think in general, if it were cheaper to live, we would see a shift in priorities, what people focus on, etc. More art, less grift.

Genuinely good people get caught up in rat races trying to reach their ceiling while they can. If they didn't feel that pressure, maybe they'd be doing something else.


I genuinely enjoy software development, but if I could provide for my family, I’d also enjoy selling croissants at a local bakery or filling up shelves at the supermarket.

You can just... not live in California. Most other places are doing just fine and experiencing the usual moderate economic instability that happens every decade or two along with the rest of the world.

If we do consider the ethics, there's a lot of contradictions built into why someone would want to live there so badly to do the kind of work the blog post is concerned with.

Their efforts are better rewarded moving their passion into an open source project while keeping a job in tech that they don't care so much about and are qualified for. This is a normal part of growing up. Some people switch careers while others stay in it while decoupling their passions from their paycheck.


I actually considered that, myself. The thing is, California is where the jobs are for me. If I move out of California, I may never be able to come back. That could cost me a lot.

Who cares about California? If you dont have family there, just head to Europe as fast as you can, one way ticket, don't ever pay the IRS to come back.

I don't think the now or never thinking is healthy, but I certainly understand the motivation. I myself have never really fit into a career path climbing the corporate ladder, and entrepreneurship is a skill that takes time to develop. When you're oscillating between stability and bleeding money, it's natural to want to go all in on an opportunity when it presents itself.

I feel like all this hype around generated code overlooks a distinct opportunity for enterprising focus on excellent, clean, maintainable, curated code - baked by humans, for other humans.

We also haven't really seen how large volumes of generated sourcecode will stand up over time (like, decades) in terms of maintainability. My prediction is you'll encounter a lot more disposable software. That's fine for making general code more of a commodity (cheap and accessible), but where you get commodities you eventually find demand for more premium flavors of product. Those tend to derive from taste and opinion (attributes which, for example, were major success factors of the iPhone at its peak design).

The act of software development formalizes paradigms, surfaces unknowns and forces their resolution. Traditionally the work product gets better over time as you iterate. My own coarse rule of thumb is that on average it takes until version 3 or so - i.e. 3 rewrites - until you land at the kind of high caliber product that stems from really understanding the problem space and having worked in it extensively enough to have a good mental model, uncovered the edge cases, and hammered out an optimal solution.

While AI is famous for fast iteration, I expect in cases where the designers wielding the tool lack a deep understanding of what's going on, potentially exacerbated by never actually having to work with the codebase, it may actually turn out to impede their ability to reach that plateau. Not saying this will be true for all use cases, just that the tool makes it seductively easy to fall into that trap.


What would that look like? In my experience, real production codebases tend to have lots of bugs. Most of them never get prioritized, because features matter more than fixing obscure bugs.

Indeed - one of my biggest pet peeves is when organizations chronically avoid budgeting the time and resources to deal with their technical debt. Or when they lack leadership that is confident and bold enough to make the hard decisions to do so (which requires experience and reputation), or suffer a culture that doesn't tolerate some degree of risk-taking, with contingencies (particularly in schedule and blast radius containment) to safely deal with occasional failure on the road to improvement.

I'd love to reinvent computing from the ground up, stripping away the many patchwork layers of complexity we've accreted over time and applying an obsession for making each individual component uncommonly robust and engineered for clarity. I feel that kind of project would be a great candidate for human-written code. I think AI tools would make a great sounding board / linter / reviewer in such a scenario, but since they were trained on existing examples and legacy patterns I'm not convinced they'd be as good as a human at the actual constructing, in terms of what I'm optimizing for.

I personally tend to favor longer lead times and slower public ship pace (but not slower betas or delay in customer feedback) in order to maintain a higher bar of quality. Even if saying so out loud risks branding me heretical by some corners of Silicon Valley!


> Do I belong in tech anymore?

A valid question.

After 42 years as an electrical engineer and embedded software developer, I chose to retire rather than transition to an LLM based workflow...


This happened once with open source; now this behavior has been turned up to 11. People take dependencies they don't even understand, full of incorrect code and vulns (intentional or not), delegate everything, and take no responsibility.

This is all caused by incompetent, panicked leadership. Where are the managers? Are they barricaded in their mansions?

I'm going through something similar. All the symptoms described in the article are present in the company where I work. But I don't blame AI. AI is just a tool. I blame the company culture, because it's the source of those problems.

Exactly, although it increasingly is the default culture at too many companies. As you noted, AI is not the problem here. Clueless VCs encourage clueless bosses to encourage clueless engineers to produce buggy garbage that lives long enough to pass the buck to the next clueless VC. No human in this chain cares about the virtues of good engineering. It is a grift all around, and grift ultimately results in systemic doom.

what a clear reason to never use Vercel or next.js

Another problem the author may be facing is that if they decide to get back into the tech market and get a new job, it may be difficult with tech still moving forward - not in a meaningful way, as computers still compute as before, but enough that lack of experience with a new tool or framework will make them unattractive compared to other candidates.

Otherwise, if they decide to go into another field, one they will be starting from scratch in, it will pay only a small fraction, and whatever lifestyle they were used to will have to change.


Bluntly, no, you probably don't belong in tech.

This is what tech has always been. A never (yet) ending race to automate. Our job will be done when there's nothing left to automate.


Automation seems like a very surface-level reading of this article.

Outsourcing your thinking, especially uncritically, is. There is a very obvious cognitive bias in the most vehement AI advocates, where the one time a tool worked really well for them makes it worth the dozen times it blows up in their face and becomes someone else's problem. The gain is romanticized and the losses set aside, without checking the balance or how badly the losses wear on morale.


I’m not part of the owner class, so what tech jobs have been and always will be for me is a paycheck. Why should I be excited about automating myself into homelessness?

I really don't see this getting better, at least from the sound of all the headlines at the moment. The spending by these big tech companies is alarming, and by a large percentage it is centralized in a single category. We still don't have a clear picture of the landscape for tech yet, even though there is some great tech innovation taking place in the US.

Being cut off from China - a market that is also advancing in the same sectors as the US - and not allowing competition to enter the west is a recipe for disaster in the future. The current government is not focused on growth, despite what is being said publicly. Where this will take the US is a place where stagnation is okay, so to make up for it there is a surge of investment in the AI craze at the moment.

Feedback is required in order to grow - and that goes for companies too, not just the junior-varsity wrestler at your local high school. Taking an abundance of data and using a summarize tool to auto-complete a prompt was bound to happen sooner or later. Take Elasticsearch, for example: it's a search bar that, as you type, shows what the database has to offer, with either a weighted or an indexed response depending on settings, along with images and information related to the search query. All that needed to happen was for something to compute this mess of abundant data and project a response from it, not just a search result. Marvelous, you might say, but it has been around for a while now. The idea was there; it just needed an actor to execute it.

The firings alone tell you the health and implications of the actions taking place. There were promises behind these investments that this war is interrupting, severing the deals even post-conflict.

The DotCom bubble was a push on society to use the web and to digitize parts of our lives, and the few companies that survived the DotCom era are what's driving the push to the next era of tech. The AI idea seems to have been born without a guardian or ownership, left open to any takers with the courage to act upon it. The overwhelming spillover of data had to go somewhere. The useless data ("how fast does a 2001 Porsche 911 go?") had become tiresome to search for.

The education system has already fallen apart in the US, and this only makes things worse. Where is education heading with the adoption of AI all around us? How will you argue with your children? How will you learn new things? I don't think I'm the only one thinking this at the moment, by any means. The solution? Well, I'm not sure there is one. Companies want to see results from their spending, and they will not stop until that is evident.

optimism is clearer without fog.


> Generative AI tools, .. supercharge the spread of disinformation and fascism, ... and concentrate wealth in fewer hands

People caught up in this line of beliefs generally tend to be more neurotic and unhappy about most things.


Can you elaborate? What "line of beliefs" is it?

OP is just blaming someone for the audacity of having a morality.

There's no such thing as a single universal morality. TFA was quite political, therefore it's only natural for disagreements to happen. That's kinda the whole point of political debate.

Probably. I hate the AI boom too and see no need to get all political, or even outright blame the politicians. What'd you expect, politicians with a master's degree in every field there is? Not gonna happen.

If we're putting the blame on anything, it's on us hacker types for going where the money flows and not fighting the corporate overlords tooth and nail.


[flagged]


It’s just a title. The job seemed real.

All kind of interesting point and then suddenly a wild Trump card appears.

What does Trump have to do with AI?


Absolutely everything. Did you see the coterie of CEOs behind him during his second inauguration?


