Hacker News

if it catches a lot of bugs, maybe you'd be better off letting it write the code in the first place :)



It also writes lots of bugs which it'll catch some of, in an independent review chat.

This is bogus. If you think LLMs write less buggy software than people do, you haven't worked with seriously capable engineers. And now, of course, everyone can become such an engineer if they put in the effort to learn.

But why not just use the AI? Because you can still use the AI once you're seriously good.


> But why not just use the AI? Because you can still use the AI once you're seriously good.

Perhaps because the jury is still out on whether one can become "seriously good" by using AI if they weren't already.


I haven't seen it.

This is definitely not correct in my opinion. You're essentially saying that instead of a person actually getting better at the craft, they should just give up and let someone else do it.

I was joking :)

No no, that is the reverse centaur. Structuring your own thoughts is the human part of the work.

Statistically, LLMs generate more bugs for the same feature.

IME, not really. When you prompt it to review its own code, it ends up finding a bunch of things that should have been done differently. And then you can add different "dimensions" to your prompt as well, like performance, memory safety, idiomatic code, etc.
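A minimal sketch of what that workflow could look like. All the names here (`DIMENSIONS`, `build_review_prompt`, `review_passes`) are made up for illustration; the point is just that each "dimension" becomes its own independent review prompt, sent in a separate chat:

```python
# Hypothetical sketch of the multi-pass self-review described above:
# re-prompt the model to review its own output, once per "dimension".
# No real LLM API is called here; this only composes the prompts.

DIMENSIONS = ["correctness", "performance", "memory safety", "idiomatic style"]

def build_review_prompt(code: str, dimension: str) -> str:
    """Compose one review pass focused on a single dimension."""
    return (
        f"Review the following code strictly for {dimension}.\n"
        "List concrete issues with line references; do not rewrite the code.\n\n"
        f"```\n{code}\n```"
    )

def review_passes(code: str) -> list[str]:
    """One independent prompt per dimension, each meant for a fresh chat."""
    return [build_review_prompt(code, d) for d in DIMENSIONS]

prompts = review_passes("def add(a, b):\n    return a - b")
print(len(prompts))  # one prompt per review dimension
```

Keeping each pass in a fresh chat matters: a model reviewing inside the same conversation that produced the code tends to defend its earlier output.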



