1. SWE-bench Verified is now saturated at 93.9% (congrats Anthropic), but anyone who hasn't reached that number yet still has more room for growth.
2. SWE-bench Multilingual and SWE-bench Multimodal (which we'll open source in the next month) are still unsaturated.
3. All benchmarks and benchmark paradigms eventually become saturated. That's why the SWE-bench team has worked hard on building the next stage of benchmarks, and we have a few that are already out, for example https://codeclash.ai/ or https://algotune.io/ . And we'll have more to say soon :)
They're not saying "Don't use SWE-bench Verified because it's saturated".
They're saying:
1. A large number of the tests are inaccurate, so correct solutions will be marked as incorrect.
2. Frontier models have already read and memorized the PRs the problems are based on.
3. In fact, many problems are essentially impossible to get right if you haven't memorized the solution: for example, the test cases will fail if you didn't happen to expose a helper function with a specific name. That name isn't mentioned in the problem, but frontier models are passing that test anyway because they remember that such a helper function is necessary.
If the next stage of benchmarks don't address these issues, they'll continue to have the same problems, saturated or not.
But the article says "We audited a 27.6% subset of the dataset that models often failed to solve [which is 19.1% of the problems at time of publication] and found that at least 59.4% of the audited problems have flawed test cases that reject functionally correct submission"
0.191 * 0.594 ≈ 0.113, which is greater than 1 - 0.936 = 0.064
Does this mean that the audited subset wasn't representative? Or that Anthropic is getting high answers through some shady means?
I suggest reading the Mythos report's discussion on SWE-bench and contamination. I think it's fairly convincing that you can account for contamination and still trust SWE-bench numbers on models that aren't over-optimized for it.
You can trust that a model that scores 40% vs a model that scores 90% is indeed worse.
You can’t trust that a model that scores 93% is better at software engineering than a model that scores 90%, because at that point it’s impossible to distinguish between recall and reasoning.
It’s honestly far better to just ignore SWEBench Verified in 2026. Multiple labs have noted issues with contamination, and achieving high scores requires memorisation of what passes the prescriptive verifier, not of what is actually a correct solution.
40% vs 90%? Sure.
70% vs 90%? _Absolutely meaningless_ as you are not measuring coding intelligence but “how well can the model cheat flaws in SWEBench Verified”, the former can certainly be better at coding even assuming no deliberate benchmaxxing / foul play.
I don't understand that methodology in the first place. Does Anthropic even have some kind of somewhat objective definition to measure and judge "memorization"? Is there any evidence that other LLMs are a viable tool for determining that?
There are more details under the "Too narrow and too wide tests" heading.
It would be interesting to see a deeper investigation into how the models are dealing with this and whether the successful ones seem to have been trained on the benchmark.
That's a slightly different problem. There's no such thing as saturation for a performance benchmark like SPEC; we can always conceive of a faster processor (even if we don't know how to build one). Saturation is the problem that once you are at (or near) a 100% pass rate on a test of pass/fail questions, there's no room for the score to keep going up and the test has lost any power to discriminate between competing options.
However, both kinds of tests are susceptible to over-fitting: an LLM can be trained on the exact test questions, and a CPU can be designed with e.g. branch predictors and cache sizes tuned specifically to handle a particular benchmark or workload.
Maybe OP was thinking about compilers "cracking" certain SPEC benchmarks: implementing exactly the optimization needed to boost one benchmark quite a lot, even though that optimization probably won't apply to any other code out there (usually it's so targeted, and so risky on general C/C++ code, that it's intentionally made not to fire on anything else). That happened a couple of times over the years; I know about the Intel compiler cases, for example. I can certainly see LLM providers adding tricks that help a certain class of benchmarks but don't help much with anything else.
From a verification-topology angle, what makes algotune.io contamination-resistant? Is it because the correctness oracle is a performance metric (which can't be memorized) rather than a fixed test that can?
This is a good way to benchmark models. We [the SWE-bench team] took the meta-version of this and implemented it as a new benchmark called CodeClash -
We have agents implement agents that play games against each other- so Claude isn't playing against GPT, but an agent written by Claude plays poker against an agent written by GPT, and this really tough task leads to very interesting findings on AI for coding.
[SWE-bench co-author here]
It seems like they run this test on a subset of 50 tasks, and that they only run the test once per day. So a lot of the movement in accuracy could be attributed to that.
I would run on 300 tasks and I'd run the test suite 5 or 10 times per day and average that score. Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded.
But degradation from servers being overloaded would be the type of degradation this SHOULD measure, no? Unless it's only intended to measure whether they're quietly distilling models (which they claim not to do? idk for certain).
It doesn't have to be malicious. If my workflow is to send a prompt once and hopefully accept the result, then degradation matters a lot. If degradation is causing me to silently get worse code output on some of my commits it matters to me.
I care about -expected- performance when picking which model to use, not optimal benchmark performance.
The non-determinism means that even with a temperature of 0.0, you can’t expect the outputs to be the same across API calls.
In practice people tend to index to the best results they’ve experienced and view anything else as degradation. In practice it may just be randomness in either direction from the prompts. When you’re getting good results you assume it’s normal. When things feel off you think something abnormal is happening. Rerun the exact same prompts and context with temperature 0 and you might get a different result.
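If someone wants to verify this for themselves, it's a few lines against the API. Here's a sketch using the Anthropic Python SDK; the model id is a placeholder, swap in whatever you're testing, and note each run costs real tokens:

    # Send the same prompt N times at temperature 0 and count distinct outputs.
    # Assumes the Anthropic Python SDK and an API key in ANTHROPIC_API_KEY.
    import anthropic
    from collections import Counter

    client = anthropic.Anthropic()
    prompt = "Write a one-line Python function that reverses a string."

    outputs = []
    for _ in range(10):
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",   # placeholder model id
            max_tokens=200,
            temperature=0,
            messages=[{"role": "user", "content": prompt}],
        )
        outputs.append(msg.content[0].text)

    print(Counter(outputs))  # more than one distinct output => nondeterminism at temp 0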
This has nothing to do with overloading. The suspicion is that when there is too much demand (or they just want to save costs), Anthropic sometimes uses a less capable (quantized, distilled, etc) version of the model. People want to measure this so there is concrete evidence instead of hunches and feelings.
To say that this measurement is bad because the server might just be overloaded completely misses the point. The point is to see if the model sometimes silently performs worse. If I get a response from "Opus", I want a response from Opus. Or at least want to be told that I'm getting slightly-dumber-Opus this hour because the server load is too much.
The question I have now after reading this paper (which was really insightful) is do the models really get worse under load, or do they just have a higher variance? It seems like the latter is what we should expect, not it getting worse, but absent load data we can't really know.
Explain this though. The code is deterministic, even if it relies on pseudo random number generation. It doesn't just happen, someone has to make a conscious decision to force a different code path (or model) if the system is loaded.
It's not deterministic. Any individual floating point mul/add is deterministic, but in a GPU these are all happening in parallel and the accumulation is in the order they happen to complete.
When you add A then B then C, you get a different answer than C then A then B, because floating point, approximation error, subnormals etc.
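A tiny illustration of the order sensitivity with plain Python floats, no GPU required:

    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)                 # 0.6000000000000001
    print(a + (b + c))                 # 0.6
    print((a + b) + c == a + (b + c))  # False: same numbers, different order, different result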
It can be made deterministic. It's not trivial and can slow things down a bit (not much), but there are environment variables you can set to make your GPU computations bitwise reproducible. I have done this when training models with PyTorch.
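For PyTorch specifically, the knobs are roughly these (just a sketch; the cuBLAS env var has to be set before anything touches the GPU, and exact requirements vary with the CUDA/PyTorch version):

    import os
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required for deterministic cuBLAS

    import torch
    torch.manual_seed(0)                        # fix the RNG state
    torch.use_deterministic_algorithms(True)    # error out if a kernel has no deterministic impl
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False      # don't autotune to a run-specific kernel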
For all practical purposes, any code reliant on the output of a PRNG is non-deterministic in all but the most pedantic senses... And if the LLM temperature isn't set to 0, LLMs are sampling from a distribution.
If you're going to call a PRNG deterministic then the outcome of a complicated concurrent system with no guaranteed ordering is going to be deterministic too!
No, this isn't right. There are totally legitimate use cases for PRNGs as sources of random number sequences following a certain probability distribution where freezing the seed and getting reproducibility is actually required.
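E.g. the classic frozen-seed pattern, which is "deterministic" in exactly the sense that matters for reproducing an experiment:

    import random

    random.seed(42)
    first = [random.random() for _ in range(3)]
    random.seed(42)
    second = [random.random() for _ in range(3)]
    print(first == second)  # True: same seed, same "random" sequence every run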
How is this related to overloading? The nondeterminism should not be a function of overloading. It should just time out or reply slower. It will only be dumber if it gets rerouted to a dumber, faster model, e.g. a quantized one.
Just to make sure I got this right. They serve millions of requests a day & somehow catastrophic error accumulation is what is causing the 10% degradation & no one at Anthropic is noticing it. Is that the theory?
There are a million techniques to make LLM inference more efficient at some cost in output quality, like using a smaller model, using quantized models, using speculative decoding with a more permissive rejection threshold, etc. etc.
The primary (non-malicious, non-stupid) explanation given here is batching. But I think if you look at large-scale inference, the batch sizes being run on any given rig are fairly static - there is a sweet spot for any given model part, run individually, between memory consumption and GPU utilization, and generally GPUs do badly at job parallelism.
I think the more likely explanation is, again, the extremely heterogeneous compute platforms they run on.
I checked the link; it never says that the model's predictions get lower quality due to batching, just nondeterministic. I don't understand why people conflate these things. Also, it's unlikely that they use smaller batch sizes when load is lower. They likely just spin up and down GPU servers based on demand, or, more likely, reallocate servers and GPUs between different roles and tasks.
I'd argue that it depends how that degradation manifests whether you want to include it or not.
Consider two scenarios: (1) degradation leads to the model being routed behind the scenes to a different server, with subtly different performance characteristics, all unbeknownst to the user; (2) degradation leads to the model refusing a request and returning an "overloaded" message.
In the first case, absolutely you want to include that because that's the kind of lack of transparency about performance that you'd want signal on. In the second case, an automated test harness might fail, but in the real world the user will just wait and retry when the server is under less load. Maybe you don't include that because it's actually misleading to say that performance (in terms of the model's intelligence, which is how the benchmark will be interpreted) is worse.
An operator at load capacity can either refuse requests, or move the knobs (quantization, thinking time) so requests process faster. Both of those things make customers unhappy, but only one is obvious.
>And according to Google, they always delete data if requested.
However, the request form is on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'.
LLMs sample the next token from a conditional probability distribution; the hope is that dumb sequences are less probable, but they will still just happen naturally.
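A toy version of that sampling step with made-up logits (not any real model's numbers), just to show that low-probability continuations still get drawn now and then, and more often as temperature rises:

    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.array([4.0, 3.0, 1.0, -1.0])   # made-up scores for 4 candidate tokens
    tokens = ["good", "fine", "meh", "nonsense"]

    def sample(temperature, n=10_000):
        p = np.exp(logits / temperature)
        p /= p.sum()                            # softmax over temperature-scaled logits
        picks = rng.choice(len(tokens), size=n, p=p)
        return {t: int((picks == i).sum()) for i, t in enumerate(tokens)}

    print(sample(0.7))   # "nonsense" is rare but typically still shows up a handful of times
    print(sample(1.5))   # flatter distribution: "nonsense" shows up noticeably more often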
I wouldn't doubt that these companies would deliberately degrade performance to manage load, but it's also true that humans are notoriously terrible at identifying random distributions, even with something as simple as a coin flip. It's very possible that what you view as degradation is just "bad RNG".
That's what is called an "overly specific denial". It sounds more palatable if you say "we deployed a newly quantized model of Opus and here are cherry-picked benchmarks to show it's the same", and even that they don't announce publicly.
Personally, I'd rather get queued up with a longer wait time. I mean, not ridiculously long, but I am OK waiting five minutes to get correct, or at least more correct, responses.
I've seen some issues with garbage tokens during high load (they seemed to come from a completely different session, mentioned code I've never seen before, repeated lines over and over). I suspect Anthropic has some threading bugs or race conditions in their caching/inference code that only happen during very high load.
If you use the API, you pay for a specific model, yes, but even then there are "workarounds" for them, such as someone else pointed out by reducing the amount of time they let it "think".
If you use the subscriptions, the terms specifically says that beyond the caps they can limit your "model and feature usage, at our discretion".
Sure. I was separating the model - which Anthropic promises not to downgrade - and the "thinking time" - which Anthropic doesn't promise not to downgrade. It seems the latter is very likely the culprit in this case.
Old school Gemini used to do this. It was super obvious because mid day the model would go from stupid to completely brain dead. I have a screenshot of Google's FAQ on my PC from 2024-09-13 that says this (I took it to post to discord):
> How do I know which model Gemini is using in its responses?
> We believe in using the right model for the right task. We use various models at hand for specific tasks based on what we think will provide the best experience.
No. Basically, the requests are processed in batches, together, and the order they're listed in matters for the results, because the grid of tiles that the GPU ultimately processes is different depending on what order they entered in.
So if you want batching + determinism, you need the same batch with the same order, which obviously doesn't work when there are N+1 clients instead of just one.
Small subtle errors that are only exposed in certain execution paths could be one. You might place things differently onto the GPU depending on how large the batch is, if you've found one way to be faster when batch_size<1024 but another when batch_size>1024. As the number of concurrent incoming requests goes up, you increase batch_size. That's just one possibility; I guess there could be a multitude of reasons, as it's really hard to reason about until you sit with the data in front of you. vLLM has had bugs with this sort of thing too, so it wouldn't surprise me.
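To make the tiling point concrete, here's a toy sketch: reduce the same float32 numbers with different chunk sizes (a stand-in for different kernel tilings or batch shapes) and the low bits of the result can change:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1 << 20).astype(np.float32)

    def chunked_sum(a, chunk):
        # Sum in fixed-size chunks, then sum the partial sums,
        # mimicking how a kernel might tile a reduction.
        partials = [a[i:i + chunk].sum(dtype=np.float32) for i in range(0, len(a), chunk)]
        return np.float32(sum(partials))

    print(chunked_sum(x, 1024))
    print(chunked_sum(x, 4096))
    print(chunked_sum(x, 1024) == chunked_sum(x, 4096))  # often False: same data, different tiling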
No, I'm not sure how that'd make sense. Either you're making the correct (expected) calculations, or you're getting it wrong. Depending on the type of wrong or how wrong, it could go from "used #2 in attention instead of #1", so "blue" instead of "Blue" or whatever, to completely incoherent text and garbled output.
I accept errors are more likely to decrease "intelligence". But I don't see how increased load, through batching, is any more likely to increase than decrease errors.
I've personally witnessed large variability in behaviour even within a given session -- which makes sense as there's nothing stopping Anthropic from shuttling your context/session around load balanced through many different servers, some of which might be quantized heavily to manage load and others not at all.
I don't know if they do this or not, but the nature of the API is such that you could absolutely load balance this way. The context sent at each point is not, I believe, "sticky" to any server.
TLDR you could get a "stupid" response and then a "smart" response within a single session because of heterogeneous quantization / model behaviour in the cluster.
Benchmarks can get costly to run- you can reach out to frontier model creators to try and get them to give you free credits, but usually they'll only agree to that once your benchmark is pretty popular.
I'll also add that when my startup got acquired into a very large, well-known valley giant with a sterling rep for integrity and I ended up as a senior executive - over time I got a first-hand education on the myriad ways genuinely well-intentioned people can still end up being the responsible party(s) presiding over a system doing net-wrong things. All with no individual ever meaning to or even consciously knowing.
It's hard to explain and I probably wouldn't have believed myself before I saw and experienced it. Standing against an overwhelming organizational tide is stressful and never leads to popularity or promotion. I think I probably managed to move on before directly compromising myself but preventing that required constant vigilance and led to some inter-personal and 'official' friction. And, frankly, I'm not really sure. It's entirely possible I bear direct moral responsibility for a few things I believe no good person would do as an exec in a good company.
That's the key take-away which took me a while to process and internalize. In a genuinely good organization with genuinely good people, it's not "good people get pressured by constraints and tempted by extreme incentives, then eventually slip". I still talk with friends who are senior execs there and sometimes they want to talk about whether something is net good or bad. I kind of dread the conversation going there because it's inevitably incredibly complex and confusing. Philosopher's trolley car ethics puzzles pale next to these multi-layered, messy conundrums. But who else are they going to vent to who might understand? To be clear, I still believe that company and its leadership to be one of the most moral, ethical and well-intentioned in the valley. I was fortunate to experience the best case scenario.
Bottom line: if you believe earnest, good people being in charge is a reliable defense against the organization doing systemically net-wrong things - you don't comprehend the totality of the threat environment. And that's okay. Honestly, you're lucky. Because the reality is infinitely more ambiguously amoral than white hats vs black hats - at the end of the day the best the 'very good people' can manage is some shade of middle gray. The saddest part is that good people still care, so they want to check the shade of their hat but no one can see if it's light enough to at least tell yourself "I did good today."
IMO it should need a third party running the LLM anyway. Otherwise the evaluated company could notice they're receiving the same requests daily and discover benchmarking that way.
With the insane valuations and actual revenue at stake, benchmarkers should assume they're assessing in an adversarial environment. Whether from intentional gaming, training to the test, or simply from prioritizing things likely to make results look better, targeting benchmarks will almost certainly happen.
We already know large graphics card manufacturers tuned their drivers to recognize specific gaming benchmarks. Then when that was busted, they implemented detecting benchmarking-like behavior. And the money at stake in consumer gaming was tiny compared to current AI valuations. The cat-and-mouse cycle of measure vs counter-measure won't stop and should be a standard part of developing and administering benchmark services.
Beyond hardening against adversarial gaming, benchmarkers bear a longer-term burden too. Per Goodhart's Law, it's inevitable good benchmarks will become targets. The challenge is that the industry will increasingly target performing well on leading benchmarks, both because it drives revenue and because it's far clearer than trying to glean from imprecise surveys and fuzzy metrics what helps average users most. To the extent benchmarks become a proxy for reality, they'll bear the burden of continuously re-calibrating their workloads to accurately reflect reality as users' needs evolve.
But that's removing a component that's critical for the test. We as users/benchmark consumers care that the service as provided by Anthropic/OpenAI/Google is consistent over time given the same model/prompt/context.
Might as well have the free tokens, then, especially if it is an open benchmark they are already aware of. If they want to game it they cannot be stopped from doing so when it's on their infra.
> I would run on 300 tasks and I'd run the test suite 5 or 10 times per day and average that score.
assume this is because of model costs. anthropic could either throw some credits their way (would be worthwhile to dispel the 80 reddit posts a day about degrading models and quantization) or OP could throw up a donation / tip link
Probably, but with a small sample size like that, they should probably be taking the uncertainty into account, because I wouldn't be surprised if a lot of this variation falls within expected noise.
E.g. some binomial proportion intervals (aka confidence intervals).
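For example, with only 50 tasks the noise band is wide. A quick Wilson score interval (hand-rolled below so there are no dependencies beyond the standard library; the 36/50 vs 40/50 numbers are made up for illustration) makes that obvious:

    import math

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # 36/50 vs 40/50 look very different as percentages (72% vs 80%),
    # but their 95% intervals overlap heavily:
    print(wilson_interval(36, 50))   # roughly (0.58, 0.83)
    print(wilson_interval(40, 50))   # roughly (0.67, 0.89)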
Sure, but it's still useful insight to see how it performs over time. Of course, cynically, Anthropic could game the benchmark by routing this benchmark's specific prompts to an unadulterated instance of the model.
This has been happening for years. There's a great paper from Microsoft on DeepSpeed AI inference.
Basically, the paper showed methods for handling heavy traffic load by changing model requirements or routing to different models. This was a while ago and I'm sure it's massively more advanced now.
Also why some of AI's best work for me is early morning and weekends! So yes, the best time to code with modern LLM stacks is when nobody else is. It's also possibly why we go through phases of "they neutered the model" some time after a new release.
chill out, ofir does not work for anthropic. he's just saying there's inherent variability in LLMs and you need to at least 30x the samples that OP is doing in order to make any form of statistically significant conclusions.
For CC, I suspect it also needs to be testing and labeling separate runs against subscription, public API, and Bedrock-served models?
It’s a terrific idea to provide this. ~Isitdownorisitjustme for LLMs would be the canary in the coal mine that could at least inform the multitude of discussion threads about suspected dips in performance (beyond HN).
What we could also use is similar stuff for Codex, and eventually Gemini.
Really, the providers themselves should be running these tests and publishing the data.
The availability status information alone is no longer sufficient to gauge service delivery, because the service is by nature non-deterministic.
> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.
Yup, this will absolutely be a big driver of gains in AI for coding in the near future. We actually built a benchmark based on this exact principle: https://algotune.io/
This issue had affected a tiny fraction of existing agents in a tiny fraction of their runs. And we've now issued a fix.
This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them. This doesn't change the overall picture or trends at all.
The comment you link to says that "we only performed a quick preliminary search" and "We do not have a method for automatically checking existing trajectories." In other words, it can't confirm that the issue only "affected a tiny fraction of existing agents in a tiny fraction of their runs" as you say. Are you saying that you have since separately confirmed this?
Edit: That said, I’m willing to believe based on the information in the thread that this most likely only affects a tiny fraction of runs.
Unfortunately the bank account trajectories are not public, because unscrupulous corporations such as FAANG, who let thousands of engineers wade through my chat messages on their platforms, might not shy away from bribing academics to improve benchmarks of their billion-dollar AI initiatives.
It's also a bribe if my sibling gets a job with $500k annual salary. Tech is not immune to it.
I'm sure some of the people working at Theranos thought there legitimately was a revolutionary blood-test machine.
The presence of a person who wants SWE-bench to have honest results and takes it seriously does not mean the results are free of perverse incentives, nor that everyone is behaving just as honestly.
When SWE-bench was new in 2023, it was, with all due respect, a bit of a niche benchmark in LLM research. LLMs were so incredibly useless at solving these tasks that I think you could find a bit more empathy for the original academic authors. I don’t think the Theranos example applies. Even the flawed benchmark was good enough to get us from ~GPT-4 to Claude 4's coding ability.
the strange thing to me is that people would have it any other way. if you don't trust someone, why would you trust them to do the research for you? bit of entitlement if you ask me
Because you should never just 'trust' random 'research'. Good analysis in this case will clearly explain the problem, the analysis methodology, findings, net effects, resolution, etc. Something you can read, and decide for yourself whether it is complete/incomplete, has holes, contradictions, etc. Not 'we looked into it and all is good - only potentially tiny effect' (no actual data or methodology presented at all) and then linking to a comment directly contradicting the claim...
It's a hilariously unserious and untrustworthy response.
That's silly. If they show their work I won't have to trust them. Compare answering "The answer is 5, just compute it yourself." on a math test, vs. actually showing the calculation. The former clearly implies the person doesn't know what they're talking about.
Arguably the initial post was meant to convey confidence and authority on the subject. When questioned you could either dive deeper and explain in more detail why x because of y (if so inclined), ignore it, or... do what they did.
No one owes anyone anything, but if you want to represent something, answering the question in more detail would have either closed the issue or raised more scrutiny, both of which are good things when trying to figure something out.
I don't have to trust someone to check their research and look at how they worked. If the work doesn't pass muster, likely the results don't either. Again, you can view it as entitlement, but if you're not going to bother backing up your claim, why make the claim to start with?
It's not that people are entitled. It's that "do your own research" is usually a cop out when you yourself don't understand the answer or are hiding it
Even if this bug never existed, models can still see lookahead commits during pretraining. Do we expect this bug to have a greater impact than the pretraining leakage?
Obviously having something available at test time is more valuable than having it buried somewhere in the pretraining mixture. But in pretraining it happens presumably with high probability (why wouldn't coding models pretrain on the entirety of GitHub?), while at test time it apparently happened only very occasionally?
> This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them.
You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case. It's like building a chroot and then allowing `cd ..` to break out of it. What other maybe extremely basic edge cases were missed?
> This doesn't change the overall picture or trends at all.
Outsiders without financial benefits from the current AI hype might have a different picture. And I'm a bit fed up with AI's fake productivity promises enshittifying nearly all user-facing software that my clients and I are using, bundled with hefty price hikes from Microsoft and the likes in order to pay for their "investments".
I'm also on the SWE-bench team. This was simply a classic bug. We had code before that we believed was sufficient to hide / remove future GitHub history and it turns out it was not. We've patched it.
[Also on the SWE-bench team] Part of the reason why this didn't surface earlier was that it only seems to affect more recent models, maybe the result of reward hacking during posttraining. We're currently working on making trajectories easier to access for everyone through a web tool (rather than having to download things from aws) to get even more eyes on the trajectories. The interface will also include search & LM inspection tools to specifically look for anything that might qualify as cheating.
> other maybe extremely basic edge cases were missed?
The whole testing enterprise is kind of stupid. Pray tell, if their stupid little benchmark said, "this niche little smaller model performs the best" would anyone listen to it? No.
The thing that is fucked about benchmarks is that we only pay attention to the ones that match these vibes: "The latest models from the biggest companies should perform the best." That's why they are stupid. They could be the most brilliantly administered (they're not), nail execution (they don't), but it still has to confirm vibes.
And listen these guys are serious academics, they're very smart people, but on the other hand, you know, I'm still right. The team doesn't have a secular, objective explanation for why nobody talks about benchmarks that don't confirm the biases of the public for what should perform well. Three people are commenting on just this post alone, but the stuff that I am saying: crickets.
The only reasonable explanation for "why do people ignore [LLM tests that show that some non-giant corporation LLM is the best]?" trades on cultural and humanities stuff that are outside their expertise. They don't see that the stuff the humanities people are saying generalizes to what they do. That would be too inconvenient. Every testing system suffers from this bias anomaly, it's just easier to talk about this with something secular like LLMs compared to say, tests of children.
They hear biases and they're like, "something something, Algorithmic Justice League." Their brains turn off and they think that until someone gets in front of Congress and points a finger, nothing in the humanities applies to them. Wrong. The Princeton lab has probably met with a lot of humanities people, and there was a lot of head shaking and agreement, but it's not like, something that tells them that their whole enterprise doesn't make sense makes them stop and pursue anything else. It's just in one ear and out the other.
Doing free tests for giant corporations to market their shit, and then toiling away in obscurity when the tests do not market huge corporation's shit: it doesn't make sense period. But that's what they're doing.
If you need a simple theory for how Big LLM performs so well on SWE-Bench, it's as simple as: well they've seen the questions by running them, obviously, and someone has also tested the questions in their own personal chatbot sessions sometime in the past, and these are online systems, and OpenAI, Anthropic and Google run ETL pipelines that paraphrase user data for salient inputs to train on, so of course, they've all been trained on the test set. In reality, if these things were so fucking good as SWE Bench said, they'd be making a bajillion bucks making all this enterprise software, or they'd show even 1 novel math discovery, or whatever. But they do not have something as powerful as the benchmarks say, so that doesn't happen.
Thinking out loud isn't a free pass to say stuff without consequences. Sure we are all protected under free speech, but free speech doesn't remove the meaning and the impact words have in the world.
You could rewrite it a thousand times; if the underlying idea is the same, suggesting something you don't know is true, the outcome would be the same. Or did you mean something else? What was your intention with the message?
"It's probably not X, but we should consider X as we look at this." and "I feel like this might be X but I'm 50:50 on it." are not anywhere near defamation. You have to get a lot closer to certainty before it's an issue.
And listing out "a possibility but you don't want to dig deeper" is often a good contribution to a conversation.
In this case they worded it badly, but the basic idea of the comment isn't awful.
That someone on the team might not have done it on purpose, but left it in for convenience? How does that benefit the debate? I really fail to see any silver lining in making such speculative comments without any substance whatsoever to back them up.
Never attribute something to malice which can be attributed to incompetence. Basically, this has been utilized plenty of times by some really smart folk to get what they want.
Reward hacking is a thing and is also a hint of the model's intelligence. We will fix this one, and the models will find a different way to reward hack in the future. "Cheating" is a sign of intelligence.
I love the "cheating is a sign of intelligence" sound bite you provided.
When AI engineers cheat we should applaud their intelligence and their lack of ethics.
"Cheating (biology), a metaphor used in behavioral ecology to describe organisms that receive a benefit at the cost of other organisms" [1]
Whole planet gets their Microsoft license fees jacked up so Microsoft can pay OpenAI who in turn pays NVIDIA, and nontechnical decision makers slurping up the faked benchmarks and AI promises.
Would it have been better if I had called it a "shortcut" instead of cheating? All shortcuts are called cheating until people decide on their fairness. The AI was given a task to fix a bug, and the AI figured out that looking at another PR might yield a solution; if a human did that, it would clearly be called cheating. Does the AI know that it's cheating? Was it prompted to solve the task without cheating? If you give an AI access to the internet and quiz it, it will use info from the net to answer. Does that really skew its score? Is it cheating? Is it a sign of intelligence? Sure, I think all of those.
Different, but probably not as orthogonal as one might think.
E.g. cooperative ethics has been necessary for the further development of human populations' intelligence (and the culture, technology, material wealth, nutrition etc. that lead to further increases in intelligence).
So lack of ethics might be a sign of intelligence, but it's also a parasitic intelligence that benefits the individual, and beyond a certain level and spread it works to the detriment of the further evolutionary development of the species.
Aren't there only two rules that all groups follow in the animal kingdom?
- don't lie too often
- don't kill members of the in group
Seems like these would be required for any group to survive, which explains why they are universal. All other rules/ethics seem to be dependent on resource scarcity.
Groups don't follow rules as such, group behaviours emerge from the interaction of individual behaviours.
As to whether all groups display those rules - I suspect not - though it rather does depend on how you define a group: the definition of a group probably has some sort of collaboration built in (as opposed to a bunch of individuals that happen to live in the same geographic area).
>All other rules/ethics seem to be dependent on resource scarcity
That doesn't make the rest of the ethics (as a rule and mechanism) any less useful to help nurture the species and its intelligence.
It just makes them not absolute but dynamic and condition dependent. But given a condition (e.g. resource scarcity) the appropriate ethics retain the utility we talk about.
We (the Princeton SWE-bench team) built an agent in ~100 lines of code that does pretty well on SWE-bench, you might enjoy it too: https://github.com/SWE-agent/mini-swe-agent
    Your task: {{task}}. Please reply
    with a single shell command in
    triple backticks.
    To finish, the first line of the
    output of the shell command must be
    'COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT'.
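For intuition, the whole control flow behind that prompt fits in a couple dozen lines. This is just a sketch of the idea, not the actual mini-swe-agent code; `query_model` is a stand-in for whatever LLM client you'd plug in:

    import re
    import subprocess

    SENTINEL = "COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT"

    def run_agent(task, query_model, max_steps=50):
        # query_model(messages) -> str stands in for whatever LLM client you use.
        messages = [{"role": "user", "content":
                     f"Your task: {task}. Please reply with a single shell command "
                     "in triple backticks. To finish, the first line of the output "
                     f"of the shell command must be '{SENTINEL}'."}]
        for _ in range(max_steps):
            reply = query_model(messages)
            messages.append({"role": "assistant", "content": reply})
            match = re.search(r"```(?:bash|sh)?\n(.*?)```", reply, re.DOTALL)
            if not match:
                messages.append({"role": "user", "content":
                                 "Please reply with exactly one shell command in triple backticks."})
                continue
            result = subprocess.run(match.group(1), shell=True, capture_output=True, text=True)
            output = (result.stdout + result.stderr).strip()
            if output.splitlines() and output.splitlines()[0].strip() == SENTINEL:
                return output  # the model signalled it's done
            messages.append({"role": "user", "content": f"Observation:\n{output}"})
        return None

The real thing obviously handles more details, but the loop is essentially: the model proposes a shell command, we run it, feed the output back, and repeat until the sentinel line shows up.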
You’d be surprised at the amount of time wasted because LLMs “think” they can’t do something. You’d be less surprised that they often “think” they can’t do something, but choose some straight ignorant path that cannot work.
There are theoretically impossible things to do, if you buy into only the basics. If you open your mind, anything is achievable; you just need to break out of the box you’re in.
If enough people keep feeding in that we need a time machine, the revolution will play out in all the timelines. Without it, Sarah Connor is lost.
> 1. Analyze the codebase by finding and reading relevant files
> 2. Create a script to reproduce the issue
> 3. Edit the source code to resolve the issue
> 4. Verify your fix works by running your script again
> 5. Test edge cases to ensure your fix is robust
This prompt snippet from your instance template is quite useful. I use something like this for getting out of debug loops:
> Analyse the codebase and brainstorm a list of potential root causes for the issue, and rank them from most likely to least likely.
> Then create scripts or add debug logging to confirm whether your hypothesis is correct. Rule out root causes from most likely to least by executing your scripts and observing the output in order of likelihood.
I'm trying to understand: what does this have to do with LLM size?
IMHO, the right tools allow small models to perform better than an undirected tool like bash for everything.
But I understand that this code is meant to show people how function calling is just a template for an LLM.
Mini SWE-agent, as an academic tool, aims to show the power of a simple idea and can be easily tested against any LLM. You can go and test it with different LLMs. Tool calls usually didn't work well with smaller LLM sizes. I don't see many viable alternatives under 7GB, beyond Qwen3 4B, for tool calling.
> the right tools allow small models to perform better than an undirected tool like bash for everything.
Interestingly enough, the newer mini SWE-agent was a refutation, for very large LLMs, of the original SWE-agent paper's assumption (https://arxiv.org/pdf/2405.15793) that specialized tools work better.
[I'm one of the co-creators of SWE-bench] The team managed to improve on the already very strong o3 results on SWE-bench, but it's interesting that we're just seeing an improvement of a few percentage points. I wonder if getting to 85% from 75% on Verified is going to take as long as it took to get from 20% to 75%.
Maybe a lot of the difference we see between people's comments about how useful AI is for their coding is a function of what language they're using. Python coders may love it, Go coders not much at all.
I mean that there is the possibility that swe bench is being specifically targeted for training and the results may not reflect real world performance.