
But the systems that do that impressive work are no longer just LLMs. Look at the Claude Code leak - it’s a sprawling, redundant maze relying on tools and tests to approximate useful output. The actual LLM is a small portion of the total system. It’s a useful tool, but it’s obviously not truly intelligent - it was hacked together using the near-trillions of dollars AI labs have received for this explicit purpose.



What does this matter? You can build a working coding agent for yourself extremely quickly; it's remarkably straightforward to do (more people should). But look underneath all the "sprawling tools": the LLM itself is a sprawling maze of matrices. It's all sprawling, it's all crazy, and it's insane what they're capable of doing.
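
To make that concrete, here's a minimal sketch of the loop in Python. Everything in it is illustrative, not any vendor's actual interface: call_model is a placeholder for whichever model API you wire in, and the two tools are just examples.

    import json
    import subprocess

    def run_shell(cmd: str) -> str:
        """Tool: run a shell command, return combined stdout/stderr."""
        r = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return r.stdout + r.stderr

    def read_file(path: str) -> str:
        """Tool: return the contents of a file."""
        with open(path) as f:
            return f.read()

    TOOLS = {"run_shell": run_shell, "read_file": read_file}

    def call_model(messages):
        """Placeholder: send the transcript to your model of choice and
        parse its reply into {"tool": ..., "args": {...}} or {"answer": ...}."""
        raise NotImplementedError

    def agent(task: str, max_steps: int = 20) -> str:
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_model(messages)
            if "answer" in reply:  # model declared it's done
                return reply["answer"]
            result = TOOLS[reply["tool"]](**reply["args"])  # run requested tool
            messages.append({"role": "assistant", "content": json.dumps(reply)})
            messages.append({"role": "tool", "content": result})
        return "step limit reached"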

Again, if you want to say they're limited in some way, I'm all ears; I'm sure they are. But none of that has anything to do with "statistical text generation". Apparently, a huge chunk of all knowledge work is "statistical text generation". I choose to draw from that the conclusion that the "text generation" part of this is not interesting.


Well, hang on a second - it sounds like you may actually disagree with the user who created this thread. That user claims that these systems exhibit “real intelligence”, and that success on this Erdős problem is proof.

You seem to be making the claim that LLMs are statistical text generators, but statistical text generation is good enough to succeed in certain cases. Those are different arguments. What do you actually believe? Are we even in disagreement?


I don't have any opinion about "real intelligence" or not. I'm not a P(doom)er; I don't think we're on the brink of ascending as a species. But I'm also allergic to arguments like "they're just statistical text generators", because that truly does not capture what these things do or what their capabilities are.

(The clearer way for me to have said this is that I don't care whether they're According-to-Hoyle "intelligent", and that controversy isn't what motivated me to comment).

"But I'm also allergic to arguments like "they're just statistical text generators", because that truly does not capture what these things do or what their capabilities are."

Umm, why doesn't it capture it? Why can't a statistical text generator do amazing things without _actually_ being intelligent (I'm thinking agency here)? I think it's important to remind ourselves that these things do not reflect on or understand what they're outputting. That is 100% evident from the continuing issue of them outputting nonsense alongside their apparently insightful output. The article itself said the output was poor, but the student noticed something in it that sparked an idea, and he followed that lead.


I reject the premise. I read the outputs I generate carefully (too carefully, probably). They don't "continue to output nonsense". Their success rate exceeds that of humans in some places.

To clarify: the problem I have with "statistical text generator" isn't the word "statistical". It's "text generator". It's been two years now since that stopped being a reasonable way to completely encapsulate what these systems do. The models themselves are now run iteratively, with an initial human-defined prompt cascading into a series of LLM-generated interim prompts and tool calls. That process is not purely, or even primarily, one of "text generation"; it's bidirectional, and involves deep implicit searches.
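
Concretely, the "iteration" is just the transcript growing. Schematically (field names here are my own illustration; real APIs differ), a few steps into a run the model's input looks like:

    transcript = [
        {"role": "user", "content": "Fix the failing test in tests/test_parse.py"},
        {"role": "assistant", "tool": "run_shell",
         "args": {"cmd": "pytest tests/test_parse.py"}},
        {"role": "tool", "content": "FAILED test_parse.py::test_unicode ..."},
        {"role": "assistant", "tool": "read_file", "args": {"path": "parser.py"}},
        {"role": "tool", "content": "def parse(s): ..."},
        # Each model call sees everything above it, so information flows
        # back into the model, not just out of it as generated text.
    ]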


Do you think it's akin to Ilya's [1] claim that next-token prediction is enough to understand reality? E.g., any deeper claims about the structure of that intelligence, or comparisons to humans?

To be clear, I'm 100% with you that "next token predictor" is a stupid name for what these machines are now. We are engineers and can shape the capability landscape to give rise to a ton of emergent behavior. It's kind of amazing. In that sense, being precise about what's going on, rather than being essentialist (technically, yes, the 'actual' algorithm, whatever that even means, is text prediction), is just good epistemology.

I still think it's a very interesting question, though, to ask about deeper emergent structures. To me, this is evidence for a more embedded-cognition kind of theory of intelligence (admittedly this is not very precise). But IDK how into philosophy you are.

[1] https://www.dwarkesh.com/p/ilya-sutskever


I try really hard not to think about this stuff because I've seen how people talk when they get too deep into it. My mental model, or mental superstructure, if you will, for all of this stuff is that we've discovered a fundamentally novel and effective way of doing computing. Computer science is fascinating and I'm there for it, and prickly when people are dismissive of it. I'm generally not interested in the theory of human intelligence (it's a super interesting problem I just happen not to engage with much), which spares me from a lot of crazy Internet stuff.

Just to clarify because I’m not sure I understand:

So you agree that LLMs are in fact statistical text generators, but you don't like it when people use that fact in arguments about the capabilities of these things?


It's like a genotype/phenotype distinction: the genotype may be a statistical text generator, but the phenotype is something much more.

Not parent, but I think you're being rather dense. They are _obviously_ statistical text generators. There's plenty of source code out there; anyone can go and inspect it and see for themselves, so disputing that is akin to disputing the details of basic arithmetic.

But it is no longer useful to bring that fact up when conversing about their capabilities. Saying "well it's a statistical text generator so ..." is approximately as useful as saying "well it's made of atoms so ...". There are probably some very niche circumstances under which statements of each of those forms are useful, but by and large they are not, and you can safely ignore anyone who utters them.


It is still important to mention that because atoms have limitations and so do statistical generators. Plain and simple. People are walking around thinking organic brains are just statistical generators and they're gonna build AGI with GPUs. It's absurd.

And your evidence for these claimed limitations is ... ? I'm not aware of evidence either for or against organic brains being "just" statistical generators. Neither am I aware of evidence either for or against AGI being possible to achieve using GPUs. AFAICT you're just making things up.

> "People are walking around thinking organic brains are just statistical generators"

And your evidence that they aren't is ... ?


He does say that LLMs are just a part of the systems used these days.

I think you're actually making a point, but overall I still disagree.

I do think LLMs are evolving towards this kind of embodied-cognition type of intelligence, by virtue of how well they interoperate with text. I mean, you don't need to "make the text intelligible" to the LLM; the LLM just understands all kinds of garbage you throw at it.

Now the question is: Is intelligence being able to interoperate?

In the traditional sense, no. Well, in a loose sense, yes, because people would've said that intelligence is the ability to do anything, but that's not a useful category (otherwise, traditional computer programs would be "intelligent"). But when I hear that word, I think something like "the model can represent an objective reality well, it makes correct predictions more often than not, it's one of those fictional characters that gets anything and everything right". This is how it's framed in a lot of pop culture, and in a lot of "rationalist" (lesswrong) style spaces.

But if LLMs can understand a ton of unstructured intent and interoperate with all of our software tools pretty damn well... I mean, I would not call that "a bunch of hacks". In some sense, this is an appeal to the embedded cognition program. The brain-in-a-vat approach to intelligence fails.

But it clearly enables new capabilities that were previously only possible with human intelligence. In a very blatant negative form: the surveillance state is now 100% possible with AI. It doesn't take deep knowledge of quantum physics to implement; it takes a large amount of engineering effort, data pipelines and data lakes, and LLMs spread throughout the system, monitoring victims.

So I'd call it intelligence, but with a qualifier, so as not to slide down slippery slopes. It may even be valid to call the previous notion of intelligence a bad one, sure. But I think the issue you may be running into is that it feels like people are conflating all sorts of notions of intelligence.

Now, you can add an ad hoc hypothesis here: in order to interoperate, you have to reason over some kind of hidden latent space, in a way no human was able to before. Being able to interoperate is not orthogonal to general intelligence - it could be argued that intelligence is interoperation.

If you're arguing for embodied cognition, fine, we agree to some extent :)

The fear is that the AI clearly must be able to emulate, internally, a latent space that reflects some "objective notion of reality". If it did that, then shit, this just breaks all of the victories of empiricism, man. Tell me about a language model that can just sit in a vat, and objectively derive quantum mechanics by just thinking about it really hard, with only data from before the 1900s.

I don't think you need to be this caricature of intelligence to be intelligent, is what I'm saying, and interoperability is definitely a big aspect of intelligence.


Now this I can agree with. One thing that is extremely important to maintain with this technology is nuanced perspective. Otherwise, it will lead you astray quickly. It's also a difficult thing for us to maintain.


