MIT AI researcher Lex Fridman says ChatGPT is reasoning [video] (youtube.com)
5 points by irthomasthomas on Feb 5, 2023 | hide | past | favorite | 15 comments


100% it is reasoning. People kept telling me 'it isn't judgement or reason', but I can plug in sections from my essays and have it argue (successfully) against them. People overestimate how "special" human reasoning is, it seems.


It is 100% not reasoning. It is predicting the next token (roughly four characters) in a string, that is all. This becomes very obvious once you have used it for a few hours (depending on your luck, as it is non-deterministic). Sometimes its output is uncannily human-like, but other times it is utterly nonsensical gobbledygook. OpenAI even states this as something to be careful of:

  ChatGPT will occasionally make up facts or “hallucinate” outputs.
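The non-determinism the comment above describes comes from sampling the next token from a probability distribution rather than always taking the most likely one. A minimal sketch, using a toy hand-written distribution (purely illustrative, not real model weights):

```python
import random

# Toy next-token distribution: purely illustrative, not real model probabilities.
NEXT_TOKEN_PROBS = {
    "cat": {"sat": 0.6, "ran": 0.3, "quantum": 0.1},
}

def sample_next(token, temperature=1.0, rng=random):
    """Sample the next token; temperature > 0 makes the output non-deterministic."""
    probs = NEXT_TOKEN_PROBS[token]
    # Re-weight by temperature: higher temperature flattens the distribution,
    # making low-probability ("gobbledygook") continuations more likely.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(list(probs.keys()), weights=weights, k=1)[0]
```

Run `sample_next("cat")` repeatedly and you get different continuations on different calls; most are plausible, but the low-probability tail is always reachable, which is one way "hallucinated" output shows up.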


I am fully aware of the underlying architecture. I am saying that what you're describing is just 'how' it arrives at what appears to be reason, much like our brains do. Are you saying our brains are not stochastic parrots? Because they absolutely are, just less capable ones than ChatGPT...


Humans have the ability to think 'fast' or 'slow', with the former being predictive shortcuts. It appears that ChatGPT does not know how to think 'slow'.

That's only true currently. The problem, I suspect, is the form of the input. AlphaGo/AlphaZero/MuZero may be thinking 'fast', but the result is indistinguishable from, or better than, humans thinking 'slow'.


[I thought about this a bit more and can express it more clearly.]

With AlphaGo/etc there is an internal representation/language being created directly from the board state.

In the case of ChatGPT we are using natural (or programming) language, which is once removed from the 'thing' being discussed. A good illustration: imagine the AlphaGo folks trying to beat Atari 2600 games from English descriptions of what's happening onscreen, written out in normal style as if being explained to a person. The machine can of course ingest this information at high speed, so bandwidth is not the problem. We wouldn't be surprised if it didn't do as well as it would with a direct view, without human involvement in the observation.

As an example, AlphaGo was trained on high-level human games plus self-play, while AlphaZero learned only from self-play. The latter learns faster and plays better, without the human coloring of the data.

Btw, last I heard/read, there are still some Atari 2600 games that the machines can't beat.


Check this out and see if it changes how you think about it:

DON’T TRUST ChatGPT’s CODE! [youtube / 18 min] https://www.youtube.com/watch?v=YiokTYzA6BI

"He explores the code that ChatGPT can write, and gets ChatGPT to practice Test Driven Development (TDD)."


This comment actually sounds AI-generated. How can someone be fully aware of ChatGPT's architecture and yet still claim that it is 100% reasoning? Those statements are logically incompatible.


"The corner tables wants a beer" a person will work out the joke given a little time,

bots will throw out something connected to the word beer or table XD


Lex is going for a larger audience. He knows what attracts more clicks.

He’s drawing in an audience on a very hot topic.


so close.. ;)

(It's silly, honest, always on my side: it gives me any reasoning I wish, regardless of whether it makes sense (other than Markov chains, maybe). So there _is_ some reasoning in it, sometimes even convincing, but also things we can't explain: noise over signal, something we don't yet know about, or our own wishful thinking.)

There must be a reason behind some of its reasonings (patterns in the LLM confused with knowledge? Maybe quiet voices talk to it? Is it cultured?)

If the AI cannot reason well by itself, from choosing the right principles as a base ('options') to building only on them (to do this right it would need its own opinions, and to know the difference between light and darkness, taste ('style transfer'); but how?), then to get 100% reasoning all the non-reasonable paths would have to be replaced, or at least explained ('human labeling... ranking... options...', 'reinforcement').

.. it would be nicest if the reasoning methods and paths could just be extendable and customisable (POVs, or emulating different ways, coherent/emergent..), with some of them objectively chosen as canonical in given circumstances, by consensus ('fine-tuning.. which one seems best, so, ranking'); _then_ we could let it grow ('generate.. output'). Otherwise it will become an alternate hallucinated reality, disconnected from truth and stretched, which in the best case will eat itself.


.. that's why I'm so thrilled and excited, waiting to see other such nice options of reasoning (Turing-tested or not, I don't care): conforming to Descartes or Kant's Critiques, dissectable by simulacra, razors, worlds, trans, ency, meta, fairy, or of other volume or mass; finally available, fine-tuned, and please, included or inclusive where possible..


Is he still a researcher, or a full-time podcaster now?


They all like to have fancy-sounding titles. The way Jordan Peterson is a clinical psychologist, and that intolerable black science communicator (forgot his name) is a physicist.

Has Lex even carried out any research? A search on Google Scholar does not reveal much.


"Any sufficiently advanced technology is indistinguishable from magic" -Arthur C. Clarke


It's just regurgitating language patterns, not reasoning about them in the human way.. just a computer.



