> I wonder what it means for an agent to “agree” to terms of service
It's already not clear what it means for humans to do it, but that doesn't stop every single service from asking. At least an AI has a chance to actually ingest it all.
There are many projects that should not be built agentically.
For things that are appropriate to build with agents, I have come to hold the strong opinion that you need to go all-in. If you built it with an agent, then you fix it with an agent, you debug it with an agent, and you change it with an agent.
In that case you should not consider yourself the steward of the source code or worry about “cognitive debt”- it’s literally not your job anymore. Your job is to be the keeper of the specification, and the care and feeding of the agents.
If you adopt the mindset that “I’m not going to build the documentation for me, I’m going to build it for the agent”, and “I’m not going to try to use my development skills to debug something I didn’t write, I’m going to make specific interfaces for the agent to understand the state and activity of the running code”, etc.- you’ll be a lot happier and more successful.
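As a minimal sketch of what such an agent-facing interface could look like (the app object and every attribute on it are hypothetical placeholders, purely for illustration):

```python
# Hypothetical sketch: instead of debugging by reading the source yourself,
# give the agent a structured snapshot of the running program's state.
# "app" and its attributes are made-up placeholders for your own system.
import json
import time

def dump_state(app, path="agent_state.json"):
    """Write a machine-readable snapshot the agent can inspect."""
    snapshot = {
        "timestamp": time.time(),
        "queue_depth": len(app.queue),                # placeholder attribute
        "active_jobs": [job.id for job in app.jobs],  # placeholder attribute
        "last_error": repr(app.last_error),           # placeholder attribute
    }
    with open(path, "w") as f:
        json.dump(snapshot, f, indent=2)
    return snapshot
```

The point is that the observability surface is designed for the agent's consumption rather than for a human stepping through a debugger.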
If you are using agents for autocomplete in your editor, or you open a separate chat window to ask a question about your code- that’s a very low level of agent usage and all your existing dev skills and responsibilities still apply.
If you’re using a planning framework like superpowers (the skill) and just laying out the spec for the program, then keep your fingers out of the source code, and don’t waste your time reading it. Have the agent explain it, showing you in the IDE, and make the agent make any changes you want.
This is correct, but it misses an important dimension.
You can inject a philosophy into the agent and ensure that it sticks to it. With sufficient drilling, the LLM will begrudgingly implement it. The most important principle is SIMPLE>COMPLEX on all levels, and you have to continuously monitor for it, either manually or agentically.
Alternatively, the LLM will use its tiny context window to build true spaghetti that even it can no longer fix. This is the default path, and the path that far too many have taken.
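As one hypothetical shape that injection could take, here is an invented excerpt from an agent instructions file (e.g. a CLAUDE.md for Claude Code); the specific rules are made up for illustration:

```
# Engineering philosophy (non-negotiable)
- SIMPLE > COMPLEX, at every level. Prefer the boring solution.
- No new abstraction until the third concrete use case exists.
- Every function should be explainable in one sentence; if it can't be, split it.
- Before adding a dependency, explain why the standard library can't do it.
- After each task, list anything you added that could now be deleted.
```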
> Alternatively, the LLM will use its tiny context window to build true spaghetti that even it can no longer fix.
And this is (probably) what is happening to the Claude Code product itself. The harness has regressed and is increasingly unstable. I get lots of weird glitches:
- I scroll back in the conversation and keep seeing the same sections repeated; I am not actually able to see the earlier parts of the conversation because of this.
- The whole CLI UI glitches out such that you can't even make sense of what you are seeing. This is usually fixed by resizing the terminal window
- The previous edit in the conversation history gets lost when I press escape to provide direction
- The CLI sometimes consumes huge amounts of memory (more than 10GB per window, multiplied by the number of windows I'm working in)
90-95% of all projects suffer this fate, and it didn't start with LLMs. These projects include major commercial successes, such as a certain popular desktop operating system, and it is essentially the standard state for many web services.
The projects that keep things simple and bare-necessity are either ones that have scaled to enormous size (where complexity had to be removed for them to work at all), or ones that had strong, philosophically opinionated guardians. They are quite rare in practice.
I wonder if there is some easy way to have a "spaghetti index" of a codebase. One could then have different criteria for different projects / parts of projects. If it's a business critical thing that also expects to see a lot of future development, one could then communicate to management that the spaghetti index is too high and that must be first lowered before further development can happen.
And on the other hand, if it's a throwaway tool that's used for a limited time, no matter.
Yes, sure, it won't solve everything and won't be perfect.
In a corporate environment, if you can measure something with a number, then you can set a target and pass/fail criteria and so on. Developers' personal opinions of some code base's quality are harder to build corporate processes around.
There are things like integration tests and static analysis. Again, they're not foolproof and don't solve all problems. But they help, especially in a corporate environment where you need every mechanism you can get to prevent skimping on quality.
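As a toy illustration of the idea (not a real metric; actual static-analysis tools measure cyclomatic complexity, import cycles, and much more), a crude “spaghetti index” over Python source using only the standard library might look like this:

```python
# Crude "spaghetti index": average number of branch points per function.
# Purely illustrative; the threshold below is an arbitrary assumption.
import ast
import sys

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.With, ast.BoolOp, ast.ExceptHandler)

def spaghetti_index(source: str) -> float:
    """Average branch count per function; higher means more tangled."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    if not funcs:
        return 0.0
    branches = sum(sum(isinstance(n, BRANCH_NODES) for n in ast.walk(f))
                   for f in funcs)
    return branches / len(funcs)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        score = spaghetti_index(open(path).read())
        print(f"{path}: {score:.1f}")
        if score > 8.0:          # arbitrary project-specific threshold
            sys.exit(1)          # fail the CI gate: index too high
```

Because the output is a single number, it can be wired into CI as a pass/fail gate, which is exactly the kind of mechanism corporate processes can be built around.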
Not my experience at all. I’ve been using Claude Code on a large “hand-written” project and it’s genuinely excellent at finding bugs and generating new methods or classes.
That said, it still frequently introduces subtle bugs, so I have to review every change carefully.
The real trick is learning when to use it. Some tasks are much faster to do myself, while others are faster with Claude Code.
Same experience here. It produces code that I would not accept if it were written by a human, but it also produces a lot of completely fine code. It let me clear tasks that I had been procrastinating on for a long time.
Flock is doing something I find unethical, even immoral, but maybe not illegal.
I want people who break the law to go to jail. I don’t care if they’re cops or c-suite execs.
But what I really want is laws (preferably federal) that make it illegal to build systems that can be used for mass surveillance, and I want law enforcement to HAVE to get a warrant to receive data from surveillance companies, even if they offer it without a warrant, because I want oversight.
> make it illegal to build systems that can be used for mass surveillance
Is such a law realistically enforceable? A lot of the surveillance done today piggybacks on benign services like push notifications, SMS, and online file-sharing sites. A sufficiently motivated threat actor (like the NSA, Unit 8200, Salt Typhoon, etc.) would have no problem appropriating that data for themselves.
Something like an oversight committee might work better, but there would be a bipartisan effort to neuter them the moment they take action.
We live in a strange time politically, where the consensus on ethics is incredibly detached from justice. There is a danger in giving in to mob rule when it comes to the legal system, but at this point we've wandered too far in the other direction, with clear corruption around Flynn, Ticketmaster and others.
I simply don't find the argument that something isn't illegal compelling anymore since our justice system is so deeply misaligned with society. We live in the era of grift.
What sorts of things are you personally doing that aren't illegal, but that someone would want to imprison you for anyway because they find you unsympathetic and no longer consider “it isn't illegal” a compelling argument?
I disagree with the premise. It's interesting, but I interpret the same fact pattern differently.
The history of technology is the replacement of manual processes with automated ones.
Consider a very basic process: checkout at a restaurant.
Writing the price of each item on a sheet of paper, manually adding them up, and writing the total was replaced with typing in the prices, and eventually with just pushing a button for the item. Paper still exists for jotting down your order, but within seconds of leaving the table it’s transitioned to a computer.
This has enabled lots of desirable advances: speed, accuracy, new payment rails, and increasingly the elimination of the server from checkout entirely- you tap a credit card on a tabletop device.
Did we “forget” how to do checkout? No. We purposely changed it.
But if the internet connection goes down, or the backend server powering the cash register app goes down, there is an atrophied, rarely exercised skill set (maybe not even trained, IDK) that has to be improvised on the fly, and it’s slow and frustrating for everyone.
Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.
Military procurement of weapons systems is hardly the place to point to as a technological tradition. There are lots of cases where no one pays the money to keep a production process in place; the reasons are all related to shortsighted “cost savings” or failing to anticipate changing needs.
With coding today, we are seeing the same kind of shift in priorities as my restaurant example. Having humans write code in the 2020 (pre-GPT) tradition was extremely inefficient in terms of time-from-idea-to-implementation.
We’ve found a new way to do the mundane part of that task (the mechanics of translating spec to implementation).
We are figuring out how to do that while preserving quality (and a lot of it is learning how to specify appropriately).
Will we “forget” how to “build” code?
No, but the skills to generate source code by hand will atrophy just as the skills to draw blueprints by hand atrophied with the advent of CAD.
Will we find examples where someone prematurely optimized away knowledge of a skill or process, incorrectly thinking it was no longer needed? Of course.
But the productivity gains we get will be so great on average that no one will go back to doing things the old way.
There will be old-timers and hobbyists who will preserve some of that knowledge; for most it will just be a curiosity.
Everyone is taught at a young age how to do basic addition and multiplication. That's all checkout requires. People are not taught at a young age how Rust lifetimes work or how to write human-maintainable code.
I agree that, as with everything in 2026, the reality lands somewhere in the middle of the online discourse. But pretending this is in practice anything like the checkout example is wrong.
Though I do believe you are making them in good faith, I find those comparisons do not hold.
CAD still requires you to know what to do, and without CAD you can still draw blueprints by hand because you know what the result should be. Checkout is basic arithmetic you can do on paper or even on your phone. In both cases it is clear what the process is and what the output should be, and neither replaces knowledge, training, and certification.
With coding, none of that is true. By and large, there is a trend of people who don’t know what they’re doing shitting out software, or people who should know better not verifying the very flawed output they get. That is already having negative consequences in people’s lives.
My father, who was a mechanical engineer, has noted an instance of "brainrot" occurring with younger engineers: they are instructed in how to design parts, but not how to machine them, so they lack physical intuition about what kind of finish and tolerance is appropriate for a given part. This isn't really the fault of the young engineers, nor is it the fault of CAD, which is still mainly a more efficient, more expensive draftsman's pencil. It's a consequence of the fact that engineering curricula have largely optimized away the craftsmanship aspects of actually building things, leaving mechanical design work a mainly theoretical exercise.
With AI-assisted development we are at risk of something similar happening; the promise of LLM-based programming assistance is the ability to very rapidly knock together something according to a high-level specification without developing the craftsman's "feel" for how it actually runs. The scope of what's passed on in the discipline is narrowing, and people are forgetting essential skills they used to rely on in order to craft quality software.
The point you seem to be missing is that focusing only on optimization makes us all fragile to system shocks.
> Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.
Until a crisis hits: Covid and the supply-chain failures. The Iran war and the Strait of Hormuz. A prolonged war in Europe with no production pipeline available. Banks collapsing after unsustainable overleveraging in supposedly "safe" mortgages.
For every optimization and cost-saving measure that is deployed, there should be a backup plan in place. MBA types and "technologists" keep missing this. What is the backup plan for the case where most economic activity is built on software produced by businesses that overleveraged on LLMs for code generation?
AI layoffs are very shortsighted IMO and should be viewed by investors as a sign of weakness in management or the business itself.
If everyone is going to increase productivity by some factor k per employee, then kx is the new norm of overall productivity for x employees.
If you lay off some percentage y of your workforce, then your expected output will only be kx(100-y)/100. In other words, you will not realize the same productivity gains as your competitors that chose not to lay anyone off.
Yes, I realize it is more complex than that because of reduced opex, but the diminishing returns kick in very quickly.
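To make the arithmetic concrete, a toy calculation with assumed numbers (k, x, and y here are purely illustrative):

```python
# Toy numbers, purely illustrative:
# k = per-employee productivity multiplier from AI tooling
# x = original headcount, y = percentage laid off
k, x, y = 1.5, 100, 20

keep_everyone = k * x                    # 150.0 units of output
after_layoffs = k * x * (100 - y) / 100  # 120.0 units of output

print(keep_everyone, after_layoffs)
# The firm that kept its staff out-produces the one that cut 20%,
# even though both got the same per-employee multiplier k.
```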
Mixing in geothermal and hydro really distorts the story. Although technically correct, the common connotation of “renewable energy” today is “wind and solar”.