Hacker News | chrisldgk's comments

At this point, why not just write the code yourself? Defining exactly what the product is supposed to do is the hard part, writing code is the easy part. Write your specs as code and you have your product - why let your LLM do the fun part?

Just because I am capable of "writing all that code", doesn't make the option preferable to defining a vast majority of spec up front and having an LLM generate an implementation. I am already going to spend the brain power on reviewing the code. I am already going to spend the brain power on pontificating edge cases, external module interactions, and next steps. Why not fast forward to that point and save 80% of the time (and brain power/attention/motivation to boot)?

If you can define the spec up front, this is probably true.

For anything large, the spec becomes increasingly more complicated. Look at software schedules in the old waterfall days of the 80s/90s: the spec / planning period was maybe 30-70% of the project.

Unless you’re working on pretty routine stuff, the real problem is that the customer (which might be you) almost never knows what they want. The spec will change the minute a customer gets something to play with.

This was the real value of agile in my mind: letting a customer change their mind as early as possible.


> I am already going to spend the brain power on reviewing the code.

Very few devs are actually reviewing any generated code.

> Why not fast forward to that point and save 80% of the time

If you are saving 80% of time, you aren't actually reviewing the code.


> Very few devs are actually reviewing any generated code.

Just because very few devs are qualified at doing their fucking job, it doesn't make someone trying to use AI properly wrong.

> If you are saving 80% of time, you aren't actually reviewing the code.

The idea is that if you spend time in specification ahead of time, reviewing and validating will be easier and less time consuming later.

I haven't tried it myself, but the idea rings true to me.


I've never seen a spec survive first contact with implementation. The spec is refined while writing the code.

Hell, you probably couldn't even build a simple bike shed from plans without having to revise them while building, so I am skeptical that without writing you are going to pinpoint the problems in the spec.

Reading only gets you a short way towards learning.


> I've never seen a spec survive first contact with implementation. The spec is refined while writing the code.

Neither have I. This does not make the spec useless. I don't spec hoping that it will be the source of truth, I spec because planning more often than not allows me to spot inconsistencies and ambiguity ahead of time, not halfway through implementation.

> Hell, you probably couldn't even build a simple bike shed from plans without having to revise them while building, so I am skeptical that without writing you are going to pinpoint the problems in the spec.

I think you are using specification and design wrong.

It's not supposed to be a bible that implementation can't deviate from. It's a plan, not law. It's okay for the plan to be adjusted in contact with reality.

It's still useful to know ahead of time constraints, expected output, assumptions, premises, etc.


> It's not supposed to be a bible that implementation can't deviate from. It's a plan, not law. It's okay for the plan to be adjusted in contact with reality.

My point is that without writing, you can't surface the type of problems you usually surface. The AI isn't going to surface those problems for you.

It's rare when reviewing that you think "Oh shit, this approach is totally wrong, we need to throw it away", while it's common when writing code to have that reaction.

If you aren't writing, you aren't having that reaction, and you aren't going to get it from reviewing code that has thousands of green "passed" lines in the testsuite.


> It's rare when reviewing that you think "Oh shit, this approach is totally wrong, we need to throw it away", while it's common when writing code to have that reaction.

That's not my experience. It is actually very common for specifications and design to be reviewed and improved.


> It is actually very common for specifications and design to be reviewed and improved.

I think there may be crossed wires here - specs and designs are reviewed, but I've never seen a code review result in a spec+design review, while I always see spec+design review happen during the "writing code" phase.

In short, reading code does not result in a spec+design review, writing code does. If you are not writing code and only reading it, it is unlikely you will trigger a spec+design review.


I use a multimodal approach to defining my spec: different layers of criteria for how the software looks, behaves, what it produces, and under what constraints.

For the literal code:

• A healthy cocktail of /WX + /Wall, plus clang-tidy with very few suppressions

• An extremely opinionated mix of clang-format and LLM-generated bespoke formatting that AST-based tools can't express

• Hungarian notation; all stack locals pre-hoisted, declared in order of appearance, and separated from subsequent assignments

• Enforced dataflow: all memory accesses are bounded independent of branch resolution, with only data-oblivious indexing

• Functions have a single point of return

In a C89 workflow, this pushes agents to produce code where wrong business/domain decisions are unmistakably obvious, while eliminating the vast majority of bug classes before I ever read it.

So yeah, I'll reassert 80%, if not more.


i spent an inordinate amount of time thinking about this comment yesterday. i’ll cut to the crux of it in a second, but wanted to preface what i’m going to say by clearly stating that i don’t intend for this to be snarky or putting anyone down. that’s not my intent. ready? here we go…

is it possible that you might be in a job that’s not right for you?

it sounds like you want to cut out 80% of your job. if you want to cut out 80% of your job, maybe you’re doing the wrong job? y’know?

like, i read this comment and my mind goes to project/product manager who has real experience of coding. going from a spec (tickets / design docs / customer feedback notes / epics / stories / whatever) to a working implementation (team of engineers build it and you don’t have to use brain power). it sounds like you’re describing turning your job into the job of a PM.

we need more PMs like that in the industry. good PMs are few and far between, good PMs who know what it's like to code — even fewer. so maybe have a think about why you're working this way? i don't really care if you do or don't, but future you might be glad if you took some time to think about it.


I'm with you all the way here. I derive zero pleasure from simply typing out the code once the spec is clear. Having a fast forward button to skip that phase is a pure win in my book.

I do get pleasure from typing out the code in some languages (and not in others; hello javascript, java!). Similarly, I love writing text with a calligraphy or fountain pen. However, I can't dedicate too much of the work / business time to whatever is more pleasurable.

So, I "doodle" some text / ideas / planning with a calligraphy pen, and type in some code, occasionally, both mainly for the fun aspects. There are side benefits to both, too. Writing some plans slowly and "beautifully" drags them out and I get to think longer on them, so the sporadic "nice looking plans" are often more well thought. And doing the coding all by myself stops my brain from losing the ability. I was initially in the 100% AI-writes-all-code camp for a while and noticed I am getting notably slow in some personal coding skills. It is too early to treat specs as the new code and old languages as assembly (but I admit we might get there some day).

In other words, I think AI doing 90-99% of the coding, depending on the language verbosity and AI accuracy for the code at hand, is quite reasonable.


Personally, this is an experience I thought about before writing my comment. In the days before AI coding assistants, I believe you're describing the intrinsically human experience that's requisite to write code by hand. The wonder, the joy, the frustration, the confusion, the elation, the discovery. These days, the things I wonder about lie deeper and deeper behind more and more lines of code, through journeys that provide less and less joy, and thus they become more and more unreachable, since I'm human and bound by an excess of things in addition to time. AI has helped me rediscover some of this sporadic creativity, largely due to its ability to prototype recreational ideas on a whim.

Professionally, I'm employed writing safety-critical avionics software. Copious amounts of cogent tooling putting guardrails on agents has enabled me to spend far more time thinking deeply about how the software should work at a systemic level. The code by definition must be heavily criticized and battle-tested before it can go out the door to begin with. Beautiful as they are, those sporadic bursts of creativity drive less and less of the code leaving my desk, and I feel strongly that has made its quality paradoxically better, since I spend much more time on broader implications and interactions.


100% the opposite here. I derive all the pleasure from writing code, which is why I'm still writing code.

a spec can be wrong until you prove it is right.

Not a developer would come to mind.

I do this because I'm wagering that LLMs will keep getting better. I'm wagering that specs will maintain value while code will degrade in value (become commoditized).

Code lacks the surrounding theory that situates the code in the world [1]. My specs contain the theory that the code lacks, which makes specs more valuable in the future. Specs are proprietary data. Data holds value in a post-AGI world, not code.

I am defining specs to be more than just an architectural spec, to me it's more like I'm writing a booklet about a subject, and I'm using it to teach the LLM via in-context learning. It might need a different word than "specs".

[1] https://pages.cs.wisc.edu/~remzi/Naur.pdf


Piggybacking here; I'd describe it like a fish ladder. Instead of "teach" I'd say "orient." LLMs are a force whose magnitude is undeniable and increasing, but it's up to us humans to provide the theory that exerts the magnetic forces to naturally encourage them in the right directions.

> I’m wagering

So isn’t that gambling, not engineering?


It's a reductive inverse corollary, but highly skilled Blackjack players are known to hesitate to hit on 18.

>> Defining exactly what the product is supposed to do is the hard part, writing code is the easy part.

There is a massive difference between a spec, which defines what the product should do, and code, which defines exactly how it should do it. Moving from the former to the latter is not "the easy part". Anyone who genuinely believes that either works on easy and straightforward problems, or is some sort of programming god. Because translating specs to code can still be difficult and exhausting.


> Defining exactly what the product is supposed to do is the hard part, writing code is the easy part.

> There is a massive difference between a spec, which defines what the product should do, and code, which defines exactly how it should do it.

He states: the difficult part is figuring out the details, so the LLM doesn't save much time. You state: if the LLM is able to correctly fill in the details, that saves you a lot of time.

Case 1: Part of the spec describes some basic feature based on a popular framework and industry standards; everything is trivial. You are right, he is wrong.

Case 2: Part of the spec describes some niche feature, and/or uses an unpopular framework, and/or requires deviation from industry standards, and/or has cutting-edge performance/latency requirements, and/or uses a bunch of proprietary non-googlable data. You are wrong, he is right.

The more senior engineers are, the less time they spend on case 1; those are easy. It is case 2 which is much more time consuming.


One thing that I can’t seem to parse from the article is why the researchers assume that this is an unresearched part of ADHD and not a different disorder entirely. I’m sure they have their reasons, but I don’t think it’s written in the article.

To me it seems that if it's not "treatable" the same way ADHD is, I'm not sure it's useful to categorize it as such. On the other hand, I'm happy if kids with this disorder can get a diagnosis and treatment that actually helps them sometime in the future due to this research.


You have a set of diagnosis criteria, and matching those criteria gets you the ADHD diagnosis. This study takes people who fit the diagnosis, and says there's a test you can do to split those people into three groups.

But yes, once they have a better understanding of what that difference means, the next step might well be to split the ADHD diagnosis into two separate disorders, or even to recognize that, like cancer, ADHD is actually a whole range of separate but related conditions.


Diabetes has a similar issue, with type 1 and type 2 having very different causes and pathologies.

> why the researchers assume that this is an unresearched part of ADHD and not a different disorder entirely

Whether or not the extreme dysregulation is a different disorder in its own right is not relevant here. They are grouping ADHD matches: clinically recognized ADHD presentations plus MRI-recognized ADHD with a distinct brain sub-pattern occurring in people who share the same distinct behavioral traits. ADHD frequently has co-occurring conditions.

"Identifying “specific subtypes” of ADHD will make it easier to treat these children effectively". Having a more objective way to diagnosis for things like that seems to be the focus of the approach. They expect it to keep evolving, so I wouldn't say they are assuming anything about absolute labels -- just grouping what they now know to be true, that certain external traits match certain distinct brain patterns that are within the larger adhd brain structure.

Also, I think it's not that it is "not treatable" as ADHD, it's that ADHD can be treated in many different ways and currently the wide variety of responses to such is still a black box. Adderall instant release could briefly make me tired, I would sometimes break off a small piece and use it as a sleep aid. Some other `treatments` (I prefer societal alignment coping aid) resulted in what seemed like an expensive joke. Subtypes may eventually be able to show which options work best for which types and to start there first, instead of the current default iteration.

This link adds more about their research. https://medicalxpress.com/news/2026-03-distinct-adhd-biotype...


It is a good point and I also struggled with that bit somewhat. It is different in so many ways: it has different symptoms, does not respond (as well) to the same medication, and affects different parts of the brain. The jump from there to "subtype" was not too logical for me...

I would suppose it interrupts the page load after streaming the HTML and before loading and/or executing the cookie banner's javascript, meaning the content is there but the cookie banner will never open.
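If that guess is right, the effect is roughly what this sketch does (not necessarily the tool's actual mechanism): halt anything still pending once the streamed HTML has been parsed, which typically includes late-loaded consent scripts.

    // Sketch only: once the streamed HTML is parsed, abort everything
    // still in flight (async scripts, late-injected consent managers, etc.).
    document.addEventListener("DOMContentLoaded", () => {
      window.stop(); // same effect as hitting the browser's stop button
    });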


Per their own docs, D1 is primarily meant for things like Auth DBs that you have frequent read/write access to but that store limited amounts of data. If you need more storage, running Postgres somewhere else and querying via Hyperdrive is probably what you want to do instead.
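For reference, a rough sketch of the Hyperdrive route from a Worker; the HYPERDRIVE binding name, table and query are made up for illustration, and it assumes the postgres.js driver.

    // Minimal Worker sketch: query an external Postgres through a Hyperdrive binding.
    import postgres from "postgres";

    interface Env {
      HYPERDRIVE: Hyperdrive; // type from @cloudflare/workers-types
    }

    export default {
      async fetch(_req: Request, env: Env): Promise<Response> {
        // Hyperdrive hands you a pooled connection string to your own Postgres.
        const sql = postgres(env.HYPERDRIVE.connectionString);
        const rows = await sql`SELECT id, title FROM posts LIMIT 10`;
        return Response.json(rows);
      },
    };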


You sound exactly like the tens of doctors that misdiagnosed my aunt (who passed away through euthanasia late last year, after her symptoms became too severe to live with).

I understand that the symptoms of ME/CFS might be similar to being obese/depressed or housebound, but the problem is that doctors often jump to that conclusion too quickly and don't make the effort to diagnose ME/CFS, leading to situations like my aunt's. She was also obese and depressed, had been struggling with those symptoms for about 30 years, and was constantly misdiagnosed the entire time because doctors didn't figure out that those were symptoms of ME/CFS rather than two unrelated conditions coming from two different diagnoses.

Thanks to long-covid putting the symptoms of ME/CFS at the forefront lately, there's finally been some much-needed research into the disease, and people like my aunt are finally getting the diagnosis they should have received many years ago.


Sorry to hear about your aunt. My condolences. I think you misinterpreted my intent; I would dearly love a good diagnostic test for ME/CFS and agree research has been hugely underfunded.

I have followed the research closely for many years, and there have been false promises of good diagnostic tests previously. What I'm arguing for is that we need a test that is specific for ME/CFS. E.g. it will test positive for a patient with ME/CFS regardless of whether they are obese or not, but more importantly it will not test positive for everyone who is obese. This is known as the sensitivity and specificity of the test.

What I've seen in the past is that some previous ME/CFS tests showed positive for groups with related symptoms who don't have ME/CFS. That makes them worthless as a diagnostic tool. For example, such a test would not have helped your aunt.

Hope this explains my thoughts!


Reading this back, I did entirely misinterpret your comment and may have acted on that a bit more emotionally than I would like to admit.

Thank you for clarifying and I wholeheartedly agree.


My main question here is mostly why so many people still rely on axios for their fetch implementation. Native fetch has been a thing in the JavaScript world for so long, and the DX gains from using axios over it are minuscule. The only thing I can think of is axios instances, but you can easily write a tiny wrapper for fetch that would do the same.
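Something like this sketch (the base URL, headers and endpoint are made up for illustration) is what I mean by a tiny wrapper:

    // Sketch of an axios-instance-style wrapper over native fetch.
    function createClient(baseURL: string, defaults: Record<string, string> = {}) {
      return async function request<T>(path: string, init: RequestInit = {}): Promise<T> {
        const res = await fetch(new URL(path, baseURL), {
          ...init,
          headers: { ...defaults, ...(init.headers as Record<string, string>) },
        });
        // axios rejects on non-2xx responses by default; fetch does not, so mimic it.
        if (!res.ok) throw new Error(`HTTP ${res.status} for ${path}`);
        return res.json() as Promise<T>;
      };
    }

    const api = createClient("https://api.example.com", { Authorization: "Bearer ..." });
    const me = await api<{ id: string }>("/users/me");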

This is a genuine question - if you still use axios, why exactly?


People tend to forget that React != React-DOM (i.e. HTML).

React is just a framework for declaratively defining components and reactivity; the end result can be whatever you want. That's what React Native is for mobile apps, and, as another commenter pointed out, in this case it was React Native for Windows[1], which apparently calls native Windows APIs in the background.

I like to jump on the MS hate train as much as the next guy, but React itself is not the reason the start menu is bad.

[1] https://microsoft.github.io/react-native-windows/


Agreed and it's an important distinction to bring up. There are some pretty cool projects that use react like that, e.g., vicinae [0]. And one can implement a different renderer for react, here is a tui renderer [1].

0: https://www.vicinae.com/ 1: https://github.com/vadimdemedes/ink
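A minimal Ink sketch, roughly per its docs (the component and props are made up), showing the same React model rendered to a terminal instead of the DOM:

    import React from "react";
    import { render, Text } from "ink";

    // Ordinary React component; Ink's renderer draws it in the terminal.
    const Greeting = ({ name }: { name: string }) => (
      <Text color="green">Hello, {name}!</Text>
    );

    render(<Greeting name="HN" />);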


Notice the word "decriminalize", not "legalize". It's about not throwing people already struggling with addiction in jail but rather offering safe alternatives (counseling, safer use, etc.).

The government's not passing out drugs in the street, like US media likes to suggest.


It’s simple, you use innerHTML if you know for sure where the input comes from and if it’s safe (for example when you define it as a hard coded string in your own code). You use setHTML when you need to render HTML that is potentially unsafe (for example forum posts or IM messages). Honest question, which part of that isn’t clear?


They did carry over into the USB era! I specifically remember my stepdad's copy of Cubase (music production software) requiring a USB dongle to open.


Ditto - and there's also the "iLok" dongle used by loads of virtual instrument & effects plugins for DAWs.

