
I agree that the system of OpenAI, ChatGPT, and a user entering text on their website taken together may contain knowledge of "what a bag is, what a person is, what popcorn and chocolate are", etc. I do not agree that the LLM on its own "knows" what any of those things are.


Seems like that's a consequence of the philosophical semantics of the word "know", not really a statement about the demonstrable capabilities of the LLM. In other words, why does it matter?


In the context of a discussion on whether LLMs could have a theory of mind? I think the ability to know anything at all matters to evaluate that conclusion.

More generally, what an LLM actually knows or understands is important if you're considering using one for anything other than generating first drafts which will be fact checked by humans.


If you're depending on fact-checking by any one human, I think the last few years in politics should be a sufficient warning of the dangers of that. In the end, the LLM will have to be integrated into larger systems that cross-check each other.
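One simple form of cross-checking is consensus voting: ask several independent models the same question and only accept an answer a majority agrees on. A minimal sketch, where the "models" are hypothetical stub functions rather than any real API:

```python
# Sketch of cross-checking by majority vote. The "models" here are
# hypothetical placeholder functions standing in for independent
# LLM calls, not a real library API.
from collections import Counter

def cross_check(question, models, threshold=0.5):
    """Return the majority answer if its share exceeds threshold, else None."""
    answers = [model(question) for model in models]
    answer, count = Counter(answers).most_common(1)[0]
    if count / len(answers) > threshold:
        return answer
    return None  # no consensus: escalate to a human reviewer

# Stub "models" that mostly agree, with one dissenter.
models = [
    lambda q: "Paris",
    lambda q: "Paris",
    lambda q: "Lyon",
]

print(cross_check("Capital of France?", models))  # → Paris
```

Real deployments would add more than voting (retrieval against sources, calibrated abstention), but the shape is the same: no single answerer, human or model, is trusted alone.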



