Not all code is cheap. Some code remains very expensive.
But the idea that some code is cheap and some code is expensive is not new.
The only new thing is that there are some adjustments to how you assess the value of the code you're presently working on, or about to work on.
AI has absolutely expanded the set of code that is cheap, and if you can make a thing easily with AI, then so can someone else. That project is unlikely to result in valuable code. Which is not to say it doesn't have utility; just that its monetary value is low.
This is one of those news stories that, when you read it, you think it's so ridiculous, on so many levels, that it can't possibly be true. But then I look at the calendar, I'm reminded it's 2026, and I realize it probably is true.
Agree, but that's what we know. The man in the statue is walking into a void from his perspective because he lacks knowledge of his true predicament and is blindly marching forward.
A lot of the people who post online have no experience with the paid PCB tools and those tools already have quite a lot of automation, and the automation interfaces work between different CAD & EDA vendors. Shared, hierarchical, and repurposed schematics are also totally a thing.
I spend almost no time on boilerplate stuff. And with good constraints, which require serious thought and understanding, tons of routing & checks can be automated too. Right now.
So, IMHO, there is not a lot of fat in the process for AI to automate away without a lot more EE and physics models, and the ability to interpret multiple specs, built in. And the current AI tools are very far from that.
> those tools already have quite a lot of automation
Not to mention the level of customization and tooling that companies like Apple have themselves built out around the PCB tool. Playing around with Cadence at home is going to be a different experience than using it at a large tier1 company.
I was mostly sticking with more systemic factors against AI adoption, but I agree with you completely.
As you said, professional PCB design has largely automated the easy stuff, and the hard stuff is going to be largely illegible to an LLM. A competent engineer can route a 10L HDI board that powers on in under a week; getting it ready for mass production is what takes the other 8+ months and 5 design spins, and I don't see much opportunity for AI to help there.
If nothing else, wouldn't access to training data be a hard limit here? I doubt one could get multiple companies to provide the complete history of their product board designs _and_ the background on why each change was made.
Your analogy is more spot on than you may know.
The syntax is just a bit off ;)
"File > New Project from Template"
KiCAD comes with all the usual suspects, including Arduino and the various hats. You can get pmod templates, etc. They're actually really nice.
I use the pmod template all the time because it saves time, and the results are convenient to plug into Arty dev boards. PCBs are so cheap and quick that I'll often make one from a template just because I want a cleaner connector system. PCBs are basically breadboards these days.
What counts as AI help and therefore should be disclosed? For example, I often use Grammarly to edit some of my more important writing (but not this post, obviously) because it does find grammar mistakes, it does give good readability suggestions (I have a tendency to be wordy), and the process saves time. I don't always take its advice, as many of its suggestions are not my voice, but it is a useful tool. So do I disclose?
I get your point, I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tool allows one to operate at a higher level than assembly. But I always have to wade through the output from AI, so I'm not sure I get to move to the next level of abstraction. But maybe that's just me.
I'm sure someone, somewhere, once wrote one that wasn't, but in general, yes, they are.
The ones I use certainly are. And with a bit of training you can reason and predict how they will respond to a given input with a large degree of accuracy without being familiar with how the particular compiler under question was implemented.
Not so with the AI tools. At least with the ones I use anyway.
Technically, LLMs can be run in deterministic mode as well, but I don't think that is enough. Even a deterministic LLM is too chaotic: small changes in the prompt or the general context can result in vastly different outputs. This makes me think we need some other factor that is stronger than (or maybe orthogonal to) determinism. A notion of being well-behaved, or some other non-chaotic property, so that slightly different inputs don't lead to vastly unexpected results.
Even that doesn't feel quite correct, because a compiler does seem quite chaotic: forget a semicolon in an otherwise 99.99% identical code base and you get a vastly different output. But it is still a very understandable output. Very predictable. So while treating both compilers and LLMs as functions that map massive input strings to massive output strings, there is some property I can't well define that compilers have and LLMs still lack (and the question is whether they'll always lack it). I can't really define what it is, but it is something more than determinism.
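One way to pin down the distinction: determinism and stability are separate properties. Here's a toy sketch (purely illustrative, nothing LLM-specific is assumed) contrasting a function that is deterministic but chaotic with one that is deterministic and well-behaved under small input changes:

```python
import hashlib

# Deterministic but chaotic: the same input always gives the same output,
# yet a one-character change in the input makes the output completely unrelated.
def chaotic(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Deterministic and well-behaved: small, irrelevant input changes
# (trailing whitespace, letter case) leave the output unchanged.
def well_behaved(s: str) -> str:
    return s.strip().lower()

a, b = "Hello world", "Hello world "       # inputs differ by one trailing space
print(chaotic(a) == chaotic(b))            # False: the digests share nothing
print(well_behaved(a) == well_behaved(b))  # True: the output is identical
```

Both functions are deterministic, but only the second is stable under small perturbations of its input. Roughly speaking, that stability (outside of hard failure cases like syntax errors) is the property compilers have and current LLM prompting seems to lack.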
Given the same compiler, I believe they would be the same between runs given the same inputs. I suppose that could not be true at the margins, but I would expect correctness out of whatever path it chose.
For all intents and purposes, yeah. It's really about the variance between actual outcomes and expected ones. With a compiler, the variance is not much, is it? With LLMs, that absolutely isn't the case.