Hacker News

A simulacrum of a thing is still a simulacrum of that thing, though. LLMs are trained to simulate human thinking, and while their process is not the same, you can't say for sure that the thinking output isn't necessary for them to end up where a human thought process would. If the "Interesting!" token(s) weren't there, for all you know the generation would have gone down a completely different path.
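A toy sketch of the mechanism behind that claim: in an autoregressive model, every emitted token is fed back in as context, so a seemingly throwaway token can steer everything that follows. The bigram table below is a hypothetical stand-in for an LLM's learned distribution, not a real model; the point is only that changing one early token changes the whole continuation.

```python
# Hypothetical next-token table standing in for a learned distribution.
BIGRAM = {
    "Interesting!": "Let's",
    "Let's": "explore",
    "explore": "further.",
    "Hmm,": "moving",
    "moving": "on.",
}

def continue_from(tokens, steps=3):
    """Greedily extend the sequence; each emitted token conditions the next."""
    out = list(tokens)
    for _ in range(steps):
        nxt = BIGRAM.get(out[-1])  # next token depends on the current context
        if nxt is None:
            break
        out.append(nxt)
    return out

# The same "model" produces a completely different path depending on
# whether the early marker token is "Interesting!" or something else.
print(continue_from(["Interesting!"]))  # ['Interesting!', "Let's", 'explore', 'further.']
print(continue_from(["Hmm,"]))         # ['Hmm,', 'moving', 'on.']
```

A real LLM samples from a distribution rather than following a lookup table, but the feedback loop is the same: drop the "Interesting!" token and the conditional distribution over everything after it shifts.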