Zig's AI Ban Isn't Really About Code Quality

The Zig project's blanket ban on AI-assisted contributions has been read as an extreme stance on code quality. The reasoning beneath it points at a different problem entirely — one that won't dissolve when AI gets good enough to fool a reviewer.

Last week, a small open source project published the most stringent anti-AI argument I've seen any major project make in public. The project is Zig, a systems language run by a 501(c)(3) foundation that funds a small number of paid maintainers and depends on a wider bench of volunteers. It also happens to be the toolchain underneath Bun, the JavaScript runtime that Anthropic acquired in December 2025 in its first-ever acquisition. Bun, in turn, is the runtime underneath Claude Code. Which makes Zig the foundation beneath a billion-dollar AI coding product, run by an organization that has now formally banned AI from helping build it.

Loris Cro, VP of Community at the Zig Software Foundation (ZSF), made the argument in an essay called "Contributor Poker and Zig's AI Ban." The essay frames the ban as a matter of long-term project economics. The project's Code of Conduct already places the rule at top level, adjacent to the harassment policy: no LLMs for pull requests, no LLMs for issues, no LLMs for comments on the bug tracker, not even for translation. This is well outside the norm; most major open source projects have not gone anywhere near this far.

For context, open source software is built by two groups of people who are not the same. Contributors are anyone in the world who can write code and submit a change for the project to consider. There is no employment relationship, no formal entry, and no obligation in either direction. Reviewers are the much smaller group with authority to merge those changes, in many cases as part of a core team supported by a foundation (like ZSF) that pays a few of them and relies on volunteers for the rest. Anyone can contribute. Almost no one can review. That asymmetry is what makes reviewer attention, not code, the scarce resource open source projects actually budget around.

With this in mind, Cro's argument for Zig's ban on LLM-generated content starts from a frame the essay calls "contributor poker." Open source is an iterated game. The value a contributor brings is not in the first pull request but in the years that follow, as they accrue context, trust, and ownership of pieces of the codebase. Maintainer attention on a first PR is therefore not a transactional cost. It is a bet on a person who, if the bet pays off, returns the investment many times over. As Cro puts it, "you bet on the contributor, not on the contents of their first PR." LLM-assisted submissions break this exchange. There is no strictly human contributor on the other side of an LLM-generated PR to invest in, and reviewer attention spent on the output yields questionable future return.

Cro is also clear about what the project actually receives from LLM-generated PRs: drive-by submissions full of hallucinated APIs that don't compile, ten-thousand-line first PRs from contributors with no prior history, and people who deny using LLMs but were caught running reviewer feedback through one in real time. The essay grants that this is not the only possible result of AI-driven coding; the pattern is, in Cro's words, "clearly a misuse of the tool." But the misuse is overwhelmingly what the project sees in practice.

The contributor-poker logic is real on its own terms. Reviewer attention is scarce, and investing it in contributors who will grow into long-term collaborators is rational policy. But quality as an orienting argument doesn't fully explain a blanket ban on LLM use.

By Cro's own concession, some LLM-assisted code is indistinguishable in quality from human work and is already in the codebase. And if the concern were strictly quality, the response would be better triage, not outright prohibition. Triage has real limits for an OSS project running on volunteer attention, but the conclusion is hard to avoid: a blanket ban isn't calibrated against quality concerns. It's calibrated against something else.

My read is that the root of the issue is trust, because trust in teams and collaborative environments is built in a particular way that AI interferes with directly. Trust between collaborators is built through work product. The mechanism rests on the assumption that the work was performed by the person who hands it over, so the work product reflects that person's head: their taste, their judgment, their patience, their ethics. I can assess the person through the work product, and the running output of those assessments, accumulated over time, is how teams form trust and organize work.

That mechanism is so foundational to how we collaborate that our culture polices it with morality. The clearest existing case is plagiarism. We treat plagiarism as an ethical breach even when no economic harm is at stake, and regardless of the quality of the output. That's because plagiarism severs the work-to-person link the entire trust apparatus depends on. AI threatens that same link by a different route. ZSF's policy isn't taking a moral position and doesn't necessarily need to. But the shape of the policy is the same shape moral language has historically taken around plagiarism: unacceptable in all cases.

This is why AI-policy debates often feel so heated: the participants are arguing about a valid concern in the wrong terms. An acceptable-use policy isn't really about whether the end product is good. It's about whether using an LLM decouples your work from yourself in a way the social fabric of teams and organizations refuses to accept.

Most large engineering organizations, including Google, Meta, and Amazon, have already mandated AI tooling for their engineers on productivity grounds. The decision was made without anyone asking what it would do to the trust controls of those organizations. Code review and QA are often cited as controls for concerns about widespread LLM use, but they are quality controls. They don't bear on whether teams come to trust each other as collaborators, whether managers come to trust their reports as contributors, or whether organizational structures can hold when that trust is absent.

The same decoupling unsettles hiring itself. If the work you're hiring someone to do is going to be performed by AI, does evaluating the person separately from that AI predict their job performance at all? Most hiring practice rests on evaluating craft, judgment, and process, on the implicit theory that those qualities are what produce the work. When the work is produced by AI instead, the interview is measuring something else, and most companies haven't reckoned with what.

The decoupling Zig is responding to has already happened at the largest companies in software. Most of them haven't noticed. None of them are ready to respond. But most of the tech workers I know can feel the shift in their organization's culture instinctively, whether they have the words to explain it or not.

Zig's response is the most extreme version: refuse the decoupling entirely. Most organizations cannot afford that response, will not adopt it, and probably shouldn't. Not that they could police it anyway; after all, Zig can't.

But every leadership team is going to face the question Zig is answering, whether they choose to or not. What are our terms for trusting collaborators and employees whose work may not be theirs in the way work used to be theirs?

That question doesn't dissolve when AI improves. It becomes more urgent.
