Hacker News | nsingh2's comments

I've been using GPT-5.4, and more recently 5.5, with Codex CLI + Ghidra MCP for reverse engineering a game without many issues. Injecting code is where it usually balks, but I'm just trying to discover and parse structures from game memory.

I did get a refusal when trying to read in-game currency, even though modifying it would do nothing. It has some strange boundaries.


> a business that puts employees first and profits for owners last can often have a shit ton of profits for owners.

Owners can make 100x that shit ton if they put profits for owners first, so why wouldn’t they do that instead? Out of the goodness of their own hearts?


> The solution, if there is one, has to come from innovation from the private economy.

Why? The problems you described (offshoring, consolidation, automation) came from private-sector incentives, not to mention debt-driven consumption and the turning of basics like housing, healthcare, and education into profit centers.

Why would those same incentives magically fix the problem on their own?

> And there isn't too much the US government can do to revert this economic decline

This is ahistorical. The post-Great Depression economy that led to the “American Dream” was supported by huge public spending and actions by the government [1]. Revitalization happened before; it can happen again.

So much came from FDR and the New Deal: Social Security, labor law, housing finance, banking regulation, securities regulation. Saying the US government can't really do that much is ridiculous.

[1] https://www.archives.gov/milestone-documents/president-frank...


These plots are terrible. Why is categorical data connected across categories with lines? Why not just use bar plots?

Like in the "Web Vulns in OSS" plot, white-box data for Opus 4.7 is not available, but the absurd linear interpolation across categories implies it should be near 60.
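For categorical comparisons like these, a grouped bar chart avoids the misleading interpolation entirely: missing values simply produce no bar. A minimal matplotlib sketch (model names and numbers are placeholders, not the article's data):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Placeholder scores for illustration only
categories = ["Web Vulns in OSS", "Category B", "Category C"]
scores = {
    "Model A": [55, 40, 62],
    "Model B": [48, 35, None],  # None = not measured, so no bar is drawn
}

x = np.arange(len(categories))
width = 0.35
fig, ax = plt.subplots()
for i, (name, vals) in enumerate(scores.items()):
    # Skip missing categories instead of interpolating across them
    xs = [x[j] + i * width for j, v in enumerate(vals) if v is not None]
    ys = [v for v in vals if v is not None]
    ax.bar(xs, ys, width, label=name)
ax.set_xticks(x + width / 2)
ax.set_xticklabels(categories)
ax.set_ylabel("Score")
ax.legend()
fig.savefig("benchmarks.png")
```

A line connecting categories implies an ordering and a trend between them that doesn't exist; bars make each category stand on its own.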


It's just an ad thinly disguised as useful data.


I think the x axis is meant to be time but they screwed it up.


It's a combination of factors. Anthropic implemented rate limiting where the 5-hour usage limit would be burned through faster at peak hours. I was personally bitten by this multiple times before someone from Anthropic announced it publicly via Twitter; terrible communication. It wasn't small, either: ~15 minutes of work ended up burning the entire 5-hour limit. That annoyed me enough to switch to Codex for the month at that point.

Now people are saying the model response quality went down. I can't vouch for that since I wasn't using Claude Code at the time, but I don't think this many people saying the same thing is total noise.


This kind of sentiment, on its own, is hollow. Just more "violence bad", until the next round.

There is growing anger and discontentment in a large part of the population, driven by inequality of wealth and power. Hopelessness and a lack of control over the future.

Are the nodes of power willing to spread wealth and control more widely to stabilize the country? What are they willing to do to consolidate their power? The vast majority of violence is perpetrated by those nodes, to either consolidate power, or gain more of it.

Other people in this thread have already suggested more actionable responses: organize, unionize, understand class dynamics, and vote accordingly.


If you talk to the average individual outside of California or NYC about AI, or even Waymos, they will get increasingly irate and start spouting off about “water usage” and everyone’s jobs getting taken away—as if RLHF contract work is not available to basically anyone with a college degree. I hate to say it but you cannot trust “the masses.” Marx never said mob rule, he said rule by the proletariat, the class which knows, on account of its labor, the best integration of the human organism into mechanical production. No, there is no concern for the “masses” living in pre-industrialized agrarian communities or those who have been mystified by reactionary ideas (like this so-called majority); he was referring to those whose existence was an exception, that which was free and not predictable, contingent in the operation of the economy. It is by their exceptional circumstance that radical social change is even possible, not because of any moral need to raise humanity out of its savage condition. The masses, without the right understanding, will just become a lynch mob and start burning everything in sight, as they tend to in most circumstances.


The masses seem kind of right to be in that mindset, if you consider it from their point of view for even one second?

So, yes, RLHF work is available right now, for people with specific backgrounds. But that RLHF work is temporary, and it's going to make hundreds of thousands of people redundant. The RLHF work is actually job-negative: it is work which will later deprive others of a way to make a living.

Once that training work dries up, what happens to the people who were doing the job which AI now does? How do they pay rent? How do they feed and clothe themselves? What answers do AI proponents actually have for this, or is the intention that every person shuts off the critical-thinking part of their brain and trusts that the computer will come up with something?


I want you to trust me when I say that the RLHF work is never drying up.


Those who cannot convince, coerce. I don't trust your instinct and it doesn't seem like you can provide any evidence. Shame.


Yes, well, you are trusting your instinct; meanwhile the actual postings for RLHF work keep increasing, and the rates contractors accept keep going up. But who knows, maybe some super-AI is going to take all their jobs away soon.


> meanwhile the actual postings for RLHF work keep increasing, and the rates contractors accept keep going up

If you knew this for a fact you'd have something to corroborate it; is this just vibes? Job-loss numbers are published: at the very lowest end, the estimates are 50k across 2025 in the US alone. I don't see any evidence RLHF is creating livelihoods at the rate AI is destroying them.


The economy is not a monad, some sectors grow rapidly, others shrink precipitously, and still others are very stable for many decades. Just because AI is booming right now does not mean that other areas will not experience deficits. And the AI boom is an international phenomenon, not restricted solely to the US, so it would be hard to measure the value of any labor input strictly according to US economic data.


This isn’t my experience at all when talking to non-techies all over the country.


Maybe I have too many encounters with insecure professionals and the liberal petite bourgeoisie.


> and everyone’s jobs getting taken away—as if RLHF contract work is not available to basically anyone with a college degree.

Huh? The jobs aren't going away because a few people can get temp work as traitors to automate away the jobs of their fellows? I suppose that's technically correct (e.g. the there-exists counterexample to a for-all statement), but it totally misses the point.

> The masses, without the right understanding, will just become a lynch mob and start burning everything in sight, as they tend to in most circumstances.

BTW, totally fine. If you like nice things and have political or economic power, it's totally on you to prevent things from getting bad enough that people want to do that. That's something libertarians would do well to remember. Propaganda only gets you so far.


All productive labor, profitable labor, involves creating something that reduces labor time. The people who manufactured looms took away the jobs of the weavers.


Ah, the real Marxist constant finally rears its head. Thank you for so well demonstrating the primitive contempt for humanity which your ideology requires. What a shame none of you has actually read or studied any of that "theory" you prate about.


I would be perfectly happy to support your so-called humanity when you are capable of providing a rigorous definition of what it consists of, and one that does not require the concept of a "soul" or otherwise some basically racist, phrenological standard for the body. Because when you say humanity all I see are objects that are standardized almost too closely to the commodities they produce, a single standard that would unify and homogenize everyone in the world. That's why I don't care for "humanity," I care about power, physical power, creative power, what any individual is capable of with the right tools.


> when you say humanity all I see are objects

> I don't care for "humanity," I care about power

Yes, I know. I suppose at least you've read your Alinsky.


I’m not familiar with Alinsky, I’m more broadly influenced by the CCRU, although I suppose that makes my reading of Marx fairly idiosyncratic, though I do remain with him at the letter.


You're calling yourself a Marxist via those guys? Excuse me. Please carry on.


Nick Land moved to China, and AFAIK he teaches there now, so I wouldn't be the only one.


Nick Land has called himself a lot of things. But more interesting to me is this question: in what way may China since Deng be regarded as meaningfully Marxist? (Are they still nominally Marxist over there? Were they really ever? Maoism was its own "deviation.")



> https://www.nature.com/articles/s41586-025-09797-z

That title reeks of the paper equivalent of clickbait. The paper is about subjective well-being and mental health in the psychological sense. Broader well-being includes material conditions like income, housing, health care, safety, and social connections. So a null result on subjective well-being is not necessarily a null result on material welfare, or on the problems material deprivation leads to. The paper’s own abstract also talks about context effects rather than a simple universal null.

> https://marginalrevolution.com/marginalrevolution/2022/10/ar...

Unions are not perfect, but they have been an important check on exploitation. Organized labor helped win the 40-hour workweek. If you demand perfect solutions, you end up doing nothing. And given that you're up against people with nearly unlimited resources, you can't afford to be picky.


Serf mindset like that won't get us very far...


It's going to be expensive to serve (also not generally available), considering they said it's the largest model they've ever trained.

I suspect it's going to be used to train/distill lighter models. The exciting part for me is the improvement in those lighter models.


It seems inevitable that costs will come down over time. Expensive models today will be cheap models in a few years.


What's interesting is that scaling appears to continue to pay off. Gwern was right - as always.


Seems like HN is doing something to combat this, considering how many [dead] comments I see in every post (you can see them by enabling `showdead` in your user profile).

I've only recently enabled it so I don't know how frequent dead comments were before the LLM era.


Fair enough. I actually noticed that right after I posted this comment.


To be fair, I've been here for like 15 years and have had show dead on for most of it, and although the quality of them has certainly gotten lower, I'm not convinced that they are more frequent.


Really uncharitable take. I did stupid things at 14, and had more unrestricted internet access too.

> absent parent more concerned with his business than his son

I don't know how you came to this conclusion from the post.


This is a big exaggeration. Codex is probably one of the top two LLM programming tools, along with Claude Code. The GPT-5.4 models are strong and can hold up against Opus 4.6, unlike the initial GPT-5 ones, which were comparatively bad. In my experience, they are better at analytical work.

I cannot really see how they are "far behind," or how some plugin for Claude Code is a "last desperate bid." The tools are close enough to each other that I regularly use Codex one month and Claude Code the next without much disruption, just to try out any new models or features that might be available.

I do not have much visibility into the non-code applications, so maybe it is stickier there.

If/when the AI bubble pops and takes OpenAI down with it, I would not expect Anthropic to come out unscathed either.


They were years ahead. They managed to generate competitors (Anthropic is OpenAI refugees) by alienating their own employees, being so dishonest and immoral compared to their own founding principles and even legal documents. They experienced a coup where the primary technical visionary of the company was forced out in favor of someone who is comparatively a nontechnical dummy. That was the beginning of multiple years of stagnation while they burned tens and hundreds of billions of dollars and their competitors caught up and then passed them by.

OpenAI is floundering and can't sustain their own burn rate. Their competitors are thriving. This is a market and technology that OpenAI largely created and just a few years in they are behind, losing unprecedented amounts of money, and have no clear path to catch up.

Let's be totally clear: they were three years ahead three years ago, and now they are behind. They are literally standing still.


> They were years ahead.

Considering how fast competitors caught up to them, I'm not convinced that OpenAI was ever years ahead. LLMs and transformers were known technology; OpenAI just happened to productize them before others did (ChatGPT). That is not an advantage measured in years. Google, for example, could have caught up to them pretty easily (they invented the transformer architecture); I think it mostly came down to mismanagement that they flopped so hard with Bard. The biggest cost was high-quality data, and Google certainly had that, plus a budget for huge training runs. I really don't think OpenAI had any special sauce that made them years ahead.

One confounder here is that LLM scaling has started to hit diminishing returns recently; there have been no more GPT-3 → GPT-4/o1 jumps in recent times, making it easier to catch up to the SOTA.

That schism within the OpenAI leadership was ugly. And Sam Altman does seem a bit snakey to me. But I have no illusions about any company in this space, including Anthropic. None of these companies are moral, given what data these models are trained on.

> their competitors caught up and then passed them by

The different models are more capable in different aspects, but they are close enough that they leapfrog each other every few months.

> OpenAI is floundering and can't sustain their own burn rate. Their competitors are thriving.

Google is thriving, sure, but not because of Gemini; it's because of their existing ads business. I would not say that about Anthropic; they seem to be struggling to provide enough compute (witness the recent usage-limit changes). It's hard to know what's happening funding-wise inside these companies. Saying that their competitors are thriving is a stretch. And again, if the AI bubble pops, Anthropic is gonna hurt along with OpenAI; it's just not clear to what extent.


Their competitors caught up after about three years, though. Gemini 2.5 was more or less awful vs. even GPT-3/4. Models have more than one measure of quality, so they don't cleanly form a total order, but Gemini 2.5 was awful. Gemini 3.1 is better than GPT-5.3, competitive with 5.4, and preceded it by months.

