Hacker News | new | past | comments | ask | show | jobs | submit | ajaystream's comments

The spec-completeness problem here is the same one that bites distributed systems verification: the proof holds inside an operating envelope (no adversarial inputs, trusted runtime, bounded sizes), and the interesting failures live at the boundary. TLA+ has the same property - you can prove liveness under a fairness assumption the deployment silently violates, and nothing in the proof tells you when reality drifted outside.

What I'd actually want from the tooling is a machine-checkable statement of the envelope itself, propagated as a runtime guard rather than a compile-time comment. Then "proof holds" and "we are still inside the proof's domain" are two separate, observable properties, and the unverified-parser / unverified-runtime cases stop being invisible.
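A minimal sketch of what that could look like (all names here are hypothetical, not any existing tool's API): the verified component ships with an executable statement of its operating envelope, and "still inside the proof's domain" becomes a runtime check you can observe and alert on, separate from the proof itself.

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    """Machine-checkable statement of the proof's assumptions."""
    max_input_bytes: int      # bounded-size assumption
    trusted_runtime: bool     # whether the proof assumes an attested runtime

    def contains(self, payload: bytes, runtime_attested: bool) -> bool:
        # "We are still inside the proof's domain" as an observable property.
        return len(payload) <= self.max_input_bytes and (
            not self.trusted_runtime or runtime_attested
        )

def guarded_call(envelope, payload, runtime_attested, verified_fn):
    if not envelope.contains(payload, runtime_attested):
        # Outside the envelope the proof says nothing, so surface that
        # fact instead of silently running the "verified" path.
        raise RuntimeError("outside verified envelope")
    return verified_fn(payload)
```

The point is that the guard fails loudly when reality drifts outside the assumptions, instead of the unverified-parser case being invisible.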


As someone who has discovered a bug in a CPU that was previously unknown to our chip vendor, I would like to point out that the rabbit hole is deep.

On the other hand, I've discovered thousands of bugs that weren't hardware bugs, and dozens of bugs due to people not having read hardware errata documents, so just formally modeling what we can model would dramatically reduce the bug count.


I am building in finance, where workflows dominate -- i.e. if A then B, but if ~A then C then D, etc. Configuration is a major challenge; AI helps in getting there, but every so often, as you know, LLMs do the wrong thing even though they have been told many times before. Users tend to abandon the interface and then manually make their updates. Is there a user interaction model that gets the user to stay engaged?


I have so many slide decks from the 2010s when I was working on ideas that were way ahead of their time.

One of my realizations was that intelligent systems had to be organized entirely around workflows: you'd have steps that could be easily automated, others that had to be manual (e.g. for legal reasons, or because some physical thing has to happen), and others that could go either way. There had to be a process that makes it straightforward to route a task either way, have a person override something, and, to the maximum extent possible, patch the system to make that override permanent. If you didn't have all these things, you could have an AI as capable as we have today and... zero business value in the end.
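The routing idea above can be sketched in a few lines (a toy model, all names hypothetical): each step is automated, manual, or either; a person can override the route; and a permanent override is recorded as a patch rather than a one-off decision.

```python
AUTO, MANUAL, EITHER = "auto", "manual", "either"

class Workflow:
    def __init__(self, steps):
        self.steps = dict(steps)   # step name -> AUTO / MANUAL / EITHER
        self.overrides = {}        # persisted patches from human overrides

    def route(self, step):
        # Persisted overrides take precedence over the default policy.
        if step in self.overrides:
            return self.overrides[step]
        mode = self.steps[step]
        # EITHER defaults to automation but stays human-routable.
        return AUTO if mode in (AUTO, EITHER) else MANUAL

    def override(self, step, mode, permanent=False):
        if permanent:
            # Patch the system so the override outlives this one task.
            self.overrides[step] = mode
        return mode
```

The "patch the system" part is the piece most pipelines skip, and it's what turns a human override from an exception into training data for the process itself.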

That feeling of being ignored actually causes strong reactions that people aren't all that aware of, or don't feel it is safe to talk about -- that is, we live in family, political, and business systems that ignore us all the time, and we've learned to ignore that feeling of being ignored.

I don't know exactly how to invoke that feeling in you, but you could find it in yourself. I might imagine myself being erased with a big rubber eraser and feel a sinking feeling in my gut as it all goes dark, or the feeling of bracing myself as my foot slips and I go over the edge backwards off a 300-foot cliff.

Practically, rather than feeling these feelings (which go back to feeling ignored by your mother when you were an infant), people often have their brains short-circuit when facing situations like this, so you get the avoidance, the "not being engaged", etc. The short answer is: at any cost, don't put people in this situation. If you must, I don't have an answer, but I do know that if you push people hard on this kind of thing, you will be looking for new people to replace them and find the role has very high turnover.


I hadn't thought the problem could be answered with such poetic and psychological eloquence. I get what you are saying.


The other challenge we have found is the accuracy and completeness of the fields required to be updated across use cases. Either we have to mandate all the fields, or, when we mark them optional in the tool definition, the model sometimes blows right past them -- how are you handling that?
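One pattern that can help here (a sketch, with hypothetical names, not a claim about any particular framework): leave the fields optional in the tool definition, but keep a per-use-case list of required fields and validate the model's arguments before applying the update, re-prompting on any misses.

```python
def validate_update(args: dict, required_by_use_case: dict, use_case: str):
    """Return the fields the model failed to supply for this use case.

    An empty return means the update is safe to apply; otherwise the
    missing-field list can be fed back into a re-prompt.
    """
    required = required_by_use_case.get(use_case, [])
    # Treat None and empty string as "not supplied".
    return [f for f in required if args.get(f) in (None, "")]
```

This moves the completeness guarantee out of the tool schema (where the model can ignore it) and into a deterministic check you control.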


