hitekker's comments | Hacker News

The left-right coalition against porn makes relief for Kickstarter or Stripe unlikely.

FOSTA-SESTA, the law that increased liability for platforms facilitating porn, passed 388-25 in the House and 97-2 in the Senate back in 2018. Every Senate progressive except one voted yes, including Sanders, Warren, Kamala Harris (who as California AG went after Backpage), Booker, etc. Anti-trafficking feminist groups like NOW backed that legislation or were silent on it. Similarly, media outlets were either quiet or vocally supportive, e.g., the NYTimes' 2020 attack on Pornhub.


Thanks for this comment. This is a bipartisan movement.

I'm unconvinced. https://duskos.org/ doesn't tell a story about fashionable ideologies fused together for the greater good. Dusk/Collapse OS does what it says on the tin, minus moralizing and metaphors.

In any case, I detect an omnicause because people are sloganeering on its behalf below:

"If you see this as political, you should fix yourself", "computing is related to colonialism", "everything is political" etc.


Not sure why you're getting downvoted; I think you're close to right. They were successful with one technology and had a great exit. They may also be successful with another technology post-acquisition.

Let's see the fruit of their decision.


Yeah, that tracks according to the numbers.

https://ziglang.org/news/300k-from-mitchellh/

https://ziglang.org/news/2024-financials/#income

https://ziglang.org/news/2025-financials/#income

I had a bit of trouble finding it myself, but Claude proved a better Googler than I.


Did you read your citations?

Wikipedia, for example, acknowledges that the Administration "coerced or significantly encouraged social media platforms to moderate content, which violated the First Amendment." The Reason article is overwhelmingly negative about the Administration's apparent censorship. Those and other details in the articles don't serve your conclusion.


And Drew DeVault, your source, has brazenly deceived HN before: https://news.ycombinator.com/item?id=41838124

To be clear, that source is happy to lie about their own identity and run away from basic questions. I'd treat this as another half-baked cancel campaign unless someone actually credible speaks up.


Not just performance: the code was littered with unsafe blocks, and the benchmarks (IIRC) were gamed heavily by the Rust devs.

It was a dishonest effort that actually undermined the memory safety pitch.


This is the second time in the last month that Hindenburg's reports appear to be prophetic. Previously, they called out Backblaze before the company began harming its own product.

Hindenburg's 2024 report titled "Roblox: Inflated Key Metrics For Wall Street And A Pedophile Hellscape For Kids": https://hindenburgresearch.com/roblox/

And I assume this[1] is the reference to Backblaze? Notably not Hindenburg and more recent, but I believe there is some team overlap and there doesn't seem to be anything else.

[1]: https://www.morpheus-research.com/backblaze/


Apparently, the noise around the AI policy came from Bun's developers saying the policy blocks upstreaming their performance PR. But the real reason seems to be that the PR's code itself isn't in great shape and introduces unhealthy complexity: https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

> Parallel semantic analysis has been an explicitly planned feature of the Zig compiler for a long time, and it has heavily influenced the design of the self-hosted Zig compiler. However, implementing this feature correctly has implications not only for the compiler implementation, but for the Zig language itself! Therefore, to implement this feature without an avalanche of bugs and inconsistencies, we need to make language changes.


Yes, that reply provides convincing arguments for not merging the Bun fork, as it interferes with Zig's own roadmap for achieving even better results, while continuing to improve the whole language.


Not only this, but also:

Bun's fork will exhibit nondeterministic behavior.


As if that was a bad thing in 2026!


...why does it being 2026 make nondeterminism more desirable or reasonable?


It’s a joke because all of the AI systems du jour are non-deterministic and people are putting them in important places anyway.


This was probably a joke about a lot of developers delegating coding to LLMs which are usually non-deterministic (which I personally think is less of an issue than LLMs not having specified behavior like programming languages do).
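
For anyone wondering where that non-determinism comes from: LLMs produce code by sampling each token from a probability distribution, so the same prompt can yield different output on every run unless decoding is forced to be greedy. Here's a toy sketch in Python (hypothetical tokens and logits, not any real model's API):

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Pick the next token from a {token: logit} dict."""
        if temperature == 0:
            # Greedy decoding: always take the highest-logit token (deterministic).
            return max(logits, key=logits.get)
        # Softmax with temperature, then draw one token at random.
        weights = {tok: math.exp(v / temperature) for tok, v in logits.items()}
        r = random.uniform(0, sum(weights.values()))
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok

    # Hypothetical next-token logits after some prompt.
    logits = {"foo": 2.0, "bar": 1.5, "baz": 0.5}
    print([sample_next_token(logits, temperature=0.8) for _ in range(5)])  # varies between runs
    print([sample_next_token(logits, temperature=0) for _ in range(5)])    # always "foo"

Even at temperature 0, real deployments can still vary across runs due to floating-point and batching effects, so "deterministic LLM output" is best-effort at most.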


A single PR for a 3000-line addition would, in all likelihood, be rejected anyway.


Really depends on the author and context. Large PRs are often justified for compiler work; you have a lot of pieces to touch at the same time.



When somebody comments on a PR with “Incredible work, Jacob. It is an honor to call you my colleague.” then it's safe to assume it's an out-of-the-ordinary contribution, pretty much falling outside of the “in all likelihood”.

A 3000-line LLM commit is not that.


Also, 95% of those 30k changed lines are fully self-contained inside the aarch64 directory, and of the remaining changes it looks like the majority is just adding "aarch64" as another item in an existing list. There are a few core changes that look to me like they could be done in their own PRs, but core maintainers also get to decide if they want to apply bureaucracy to their own work.


No description provided. I love this PR. But yeah, try being anyone besides Jacob and submitting that!


> In successful open source projects you eventually reach a point where you start getting more PRs than what you’re capable of processing. Given what I mentioned so far, it would make sense to stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project. Instead, we try our best to help new contributors to get their work in, even if they need some help getting there. We don’t do this just because it’s the “right” thing to do, but also because it’s the smart thing to do.

I feel like if their goal is to prioritize contributors over contributions, it'd also logically follow that they should try to have descriptions where possible? Just to make exploring any set of changes and learning easier? Looked it over briefly; no Markdown or similar doc changes there either.

I mean, the changes can be amazing; it's just that adding a more detailed description of what they are, alongside the considerations made during development, would also be due diligence for new folks or anyone wanting to learn from good code.


How would you differentiate a 3000 line LLM commit made by the best models and good AI processes from a 3000 line commit made by the best human developer?

Edit: Okay, I set the bar too high here with "best human developer" and vague "good AI processes". My bad. Yes, LLMs are not quite there yet.


A personal relationship and trust, as seems to be the case here?


By using my brain.


Don't be ridiculous! We don't do that anymore.


Read it?


It's still fairly obvious just by skimming the code. The best AI models are still quite far from the best human developers in ability and especially in code quality.


When the best AI models are the same or better than the best[1] human developers, what then?

We're already at the point of talking about best vs. best.


If that happens and we have a way of reliably knowing if some code is produced to that high quality, then I think we probably can accept that AI coding is the only sensible option.

We definitely are not close to that point though and it's unclear if/when we will get there.


It seems to me that people might be arguing from conflicting hidden premises here. "AI Coding" is a spectrum that could mean something as simple as letting the LLM proofread your changes and then act on those with your own human brain, or it could mean just telling the agent what you want and let it rip and tear until it is done.

If I do the latter and submit a PR to something like Zig, I'll certainly be caught doing it and rightfully chastised. If I do the former, my PR will be better without anybody besides myself having any way of knowing how it got better. These days I probably do something in between when I contribute to open source.

Blanket banning all of these seems like a bad idea to me. It actively gates people like myself from contributing, because I respect these people and projects that much. It feels like I would be doing something they find disgusting if my work has touched an LLM and I obviously don't want to do that to people I respect. But it's fine, there are plenty of things to do in the world even when some doors are closed.

I do not presume to have any say on the Zig project's well-argued decisions[0] -- I'm not really even their user, let alone someone important like a contributor. Their point about preferring human contact is superb, frankly. It's probably a different kind of problem in an open-source project staffed with a lot of remote workers, where human contact is scarce.

[0]: https://kristoff.it/blog/contributor-poker-and-ai/


> Blanket banning all of these seems like a bad idea to me. It actively gates people like myself from contributing

in my projects i will reject any contribution that i do not understand. even if the contribution is handwritten by an expert developer. that developer will have to earn my trust like anyone else, like you would have to.

LLM contributions are non-deterministic, which means they can never be trusted.

therefore, if you use LLM to contribute, you can not earn my trust. if you believe that you can not create a meaningful contribution without the use of LLM then you are realizing that you are not skilled enough to understand the code that you contribute. because if you could understand it, then you could write it yourself. i want your personal contributions, not those of your LLM. i want contributions that the submitter actually understands. i want you to earn my trust by showing me that you understand what you are doing. i want you to grow your understanding of my project. none of this happens when you use LLMs.

if you are unable to make a contribution without the help of an LLM then you are not ready to contribute. try looking for smaller issues that you can work on instead until you learned enough to make larger contributions.


> i will reject any contribution that i do not understand

Fair.

> that developer will have to earn my trust like anyone else

What does it take to "earn your trust"?

> LLM contributions are non-deterministic, which means they can never be trusted.

Provably incorrect. LLM contributions can be reviewed, tested, and understood like any other contribution. There's nothing "special" about LLM contributions.

Contributions authored by human brains are also non-deterministic, perhaps if the author was feeling in a slightly different way they'd have formatted the code a bit differently.

> therefore, if you use LLM to contribute, you can not earn my trust.

The premise is wrong.

> if you believe that you can not create a meaningful contribution without the use of LLM then you are realizing that you are not skilled enough to understand the code that you contribute

What if I believe I can do so without an LLM, but that it could be even better with an LLM?

What if I'm great at understanding code, but terrible at writing it?

Again, this is a premise that you just decided to take as truth, without proof.

> because if you could understand it, then you could write it yourself.

False. I can understand a novel algorithm by reading and studying it, but perhaps I could have not come up with it myself.

> i want you to earn my trust by showing me that you understand what you are doing

I can easily do that even if my contribution involves LLM assistance.

> i want you to grow your understanding of my project

Ditto.

> none of this happens when you use LLMs

False. Why do you think so?

> if you are unable to make a contribution without the help of an LLM then you are not ready to contribute.

Again, this is your opinion and you have no way of proving it. I can prove the opposite.


> What does it take to "earn your trust"?

multiple successful contributions of increasing complexity, among other things.

>> LLM contributions are non-deterministic, which means they can never be trusted.

> Provably incorrect. LLM contributions can be reviewed, tested, and understood like any other contribution. There's nothing "special" about LLM contributions.

read this comment to see what i mean: https://news.ycombinator.com/item?id=47968180

> Contributions authored by human brains are also non-deterministic, perhaps if the author was feeling in a slightly different way they'd have formatted the code a bit differently.

i can tell a human to focus on a certain issue. they will either listen and follow my instructions, or i will reject their contribution. the LLM is almost guaranteed to not follow all my instructions and to make changes i didn't ask for. see my comment above.

>> therefore, if you use LLM to contribute, you can not earn my trust.

> The premise is wrong.

how so?

>> if you believe that you can not create a meaningful contribution without the use of LLM then you are realizing that you are not skilled enough to understand the code that you contribute

> What if I believe I can do so without an LLM, but that it could be even better with an LLM?

what you believe is not relevant. only what you can convince me of. you'll have to first show that you actually can work without an LLM before i will consider your contribution.

> What if I'm great at understanding code, but terrible at writing it?

your problem not mine. if you are terrible at writing code but good at understanding it then it's your choice to only do code reviews. you can still make a meaningful contribution that way. i'd even let you write code so you can practice that, but i am not interested in your LLM generated code.

> Again, this is a premise that you just decided to take as truth, without proof.

i don't need proof. i need trust. you need to convince me that your code can be trusted.

>> because if you could understand it, then you could write it yourself.

> False. I can understand a novel algorithm by reading and studying it, but perhaps I could have not come up with it myself.

that's called learning. once you learned it, you can write it. but in order to effectively learn you also have to practice. if you let LLM write all your code then you are not practicing, so you won't improve.

>> i want you to earn my trust by showing me that you understand what you are doing

> I can easily do that even if my contribution involves LLM assistance.

it depends on the level of assistance. i am not ruling out use of AI to do research and learn, just don't let it write the code for you.

>> i want you to grow your understanding of my project

>> none of this happens when you use LLMs

> False. Why do you think so?

as i said above, if you don't practice writing the code yourself you are not learning. not enough at least to satisfy my expectations.

>> if you are unable to make a contribution without the help of an LLM then you are not ready to contribute.

> Again, this is your opinion and you have no way of proving it. I can prove the opposite.

whether you are ready to contribute to my project or not is not something i need to prove. it is a choice based on my preference which depends on the amount of trust you have earned. you can not prove to me that you are ready to contribute. this is not a standardized test that if you pass you automatically qualify. you can only convince me by earning my trust. this is a human decision, based on feelings.


> because if you could understand it, then you could write it yourself.

I accept most things you said there as valid opinions, but this is where the logic goes wrong.

I use LLMs to give me more of the only resource that ultimately matters (now that my basic and mid-level needs are largely met): time. That means I waste far less time in front of the computer typing code, and spend far more time doing more useful things, like hobbies, art, and being with my children.

But as I said before, every project is obviously allowed to make its own rules, and contributors should obey those rules. There are plenty of projects that welcome AI deniers and plenty that prefer AI aficionados.

At least for now. My belief is that one of those groups will fade away like horseback riding did, but we'll see. Perhaps you have heard the famous stages quoted by many different people in different forms: first an idea is ridiculed, then it's attacked, then it's accepted. Some open-source communities have clearly entered the attacking phase in the last year or so.


you are saying that even if you understand the code, using an LLM saves you time writing it. fair enough[*]. the problem on my side still is that if you didn't write the code yourself, i have no evidence that you actually understood it. the only way to prove that you understand the code is to write it yourself. that's where the trust building comes in. you may actually understand the code, but i can't trust that you do.

[*] in my opinion it takes more time to verify that the LLM code is correct than it takes to write it yourself. based on that, if you save time using an LLM then you didn't spend enough time to verify that the code is correct.

> Some open-source communities have clearly entered the attacking phase in the last year or so

i feel it's more like defense, but yes.


How can AI possibly be better than “the best” when the corpus of training data now includes its own slop in addition to all the code by new devs/lazy devs/bad devs scattered all over the internet? Law of averages applies here.


Because LLMs are obviously much more than the sum of their parts.


Oh, which parts are those? Do tell!


They don't just use "the corpus"; they use thinking, the source code of libraries and existing software, documentation, tools, and best practices.

A billion times faster than a human: no tiring, no miscalculation, no brain-farts, no cheating.


The post that inspired this post [0] says:

> So while one could in theory be a valid contributor that makes use of LLMs, from the perspective of contributor poker it’s simply irrational for us to bet on LLM users while there’s a huge pool of other contributors that don’t present this risk factor.

> The people who remarked on how it’s impossible to know if a contribution comes from an LLM or not have completely missed the point of this policy and are clearly unaware of contributor poker.

The point isn't the 3000-line PR; it's whether we think the submitter is going to stick around.

[0] https://kristoff.it/blog/contributor-poker-and-ai/


It seems to be trivially easy for everyone but people heavily invested in LLMs to spot LLM slop.


Jacob is part of the core team, not a random outside contributor.


Very different context: that PR is from a maintainer and trusted member of Zig, who surely discussed the implementation/design internally as well.


What’s the point in debating the PR quality? The policy explicitly forbids all LLM code, so that policy is of course the “real reason”.


> What’s the point in debating the PR quality?

Because the pro-LLM group is whining that the policy is preventing the merge, when in actual fact, even if the policy did not exist, the PR is crap anyway.


I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig. But regardless, let’s say it’s a bad PR for the sake of argument - it’s beside the point. It cannot be merged no matter how good it is, due to the strict no-LLM policy.


> I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig.

That may be the case, but the bun project only needs zig to correctly compile bun. The zig project needs to be able to correctly compile all existing and possible zig programs.

I haven't reviewed things, but it's possible and even likely (at least based on my own experience with LLMs) that the validation is mostly focused on bun compilation.


Do you think they skipped the main zig test suite or something? Only tested bun compilation? That seems unlikely to me


They didn't take into account the long-run impacts of the changes on future development, etc.

I recommend reading the explanation given by one of the Zig devs, as it's a very clear and solid one.


This is the most common issue I see with LLM-authored PRs. Yes, it does fix the issue _right now_, but as a maintainer I need to consider how it affects the project in the future. But “contributors” get mad if you reject for those reasons. So I can understand having a blanket policy.


> I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig.

The PR is probably fine for bun’s purposes. That doesn’t make it a good PR for Zig’s purposes, and could very well paint Zig into a weird corner.

> It cannot be merged no matter how good it is, due to the strict no-LLM policy.

This is about meta-discourse. Of course it’s against the policy. That’s the point of discussing the PR: to get Zig to change the policy, or at least provide an exception in this case. Or to argue the opposite.


Of course the policy is preventing the merge. That’s literally the point of the policy…


> Of course the policy is preventing the merge. That’s literally the point of the policy…

In this case it isn't the blocker: the fact that the dev took the time to read the PR in detail, comment on it, and provide reasons why it could not be merged makes it very clear to me that the policy wasn't the blocker.

If they were going to enforce the policy for this PR, they wouldn't have bothered to read it. The only reason to read it is to see if the policy is waived for this specific PR.


OTOH why bother to polish the PR if it won't get accepted anyway?


> OTOH why bother to polish the PR if it won't get accepted anyway?

As the Zig maintainer so patiently explained, no amount of "polish" can fix the PR, because it is misaligned with the correctness that they require.

IOW, that PR is so far off the reservation that, unless it is completely rewritten, it won't be accepted.


it could have been rewritten, rewriting PRs is cheap today, but that isn't the question. the question is: would it have been accepted had it met all the quality and engineering standards, with full disclosure that it was 90%+ LLM generated?


> it could have been rewritten, rewriting PRs is cheap today

Rewriting PRs with LLMs is cheap, but often the output is no better than the previous revision (fixing one issue only to cause another one is very common IME). And reviewing each revision of the PR is not cheap.

I've had good experiences with people submitting AI generated PRs who then actually take the time to understand what's going on and fix issues (either by hand or with a targeted LLM generated fix) that are brought up in review. But it's incredibly frustrating when you spend an hour reviewing something only to have someone throw your review comments directly back at the LLM and have it generate something new that requires another hour of review.


> it could have been rewritten, rewriting PRs is cheap today, but that isn't the question. the question is, would it have been accepted had it met all the quality and engineering standards and full disclosure that it was 90%+ LLM generated?

In this case it looks like the answer is "Yes"; the PR was not dismissed immediately, it was first examined in great detail!

Why would the maintainer expend effort on something that was going to be rejected anyway?


because the policy is clearly 'reject' and yet significant time has been spent; either effort was wasted or policy is at best 'not implemented'.


> either effort was wasted or policy is at best 'not implemented'.

I don't understand this PoV: have you ever come across a policy in any environment that wasn't subject to case-by-case exceptions?

Even in highly regulated environments (banking/fintech, Insurance, Medical, etc), policies are subject to exceptions and exemptions, done on a case-by-case basis.

The notion, in this specific case, that "well, they rejected it because of policy" is clearly nonsense, and I don't understand why people are pushing it so hard when the explanation of why an exemption can't be made for this specific PR is public, accessible, and, I feel, already common knowledge.


No amount of rewriting will help you if you, fundamentally, wrote the wrong thing, as is the case here.


why bother even contributing anything LLM generated if it won't get accepted?


> even if

are you too stupid to understand the notion of a hypothetical? how did you get on hn in the first place?


The point we are making is that in reality, it is the policy which is preventing the merge. Sure, in your hypothetical, maybe it couldn’t be merged anyway. But while the policy exists, the hypothetical is irrelevant. The policy is preventing the merge.

You also don’t sound smart enough to be calling others stupid.


People forget that LLM code cannot be covered by copyright, so LLM code cannot be placed under an open source license.


This is overstated. Not all LLM code is produced the same way. Code produced through substantial human creative input still falls under copyright, at least the way things are now. Besides, nothing legally prevents placing code under a license. Enforceability is the question, not permission.

It's a bit like saying speed limits don't apply on private property, therefore you can't have any traffic rules on your private racetrack.


> Besides, nothing legally prevents placing code under a license. Enforceability is the question, not permission.

That’s not how copyright works. If you don’t own the code, you can’t release it under a license. How much human editing is needed to establish copyright is a huge open question right now.


This opinion does not seem grounded in reality to me.

Because it's Bun, which is practically the testimonial use case for Zig.


Seems like a hostile market for Waymo. Many Portlanders despise tech giants and are strongly anti-car and anti-AI, far more so than in SF. Not to mention Portland's political/governance/people problems already incline the population to anger.


If Waymo is still operating there by the end of 2027, I’ll eat my hat.


How long does Waymo generally take to map and otherwise get ready for a new city rollout (permits, etc.)? I guess I wouldn't be surprised if they haven't even started offering rides in 2027.


No idea. I would just assume whatever they’re doing there gets shouted down in short order by the locals, who are known for being kind-hearted, incredibly naive, and violent.

