On Monday, a pull request submitted by an AI agent to matplotlib, the popular Python charting library, turned into a 45-comment debate about whether AI-generated code belongs in open source projects. What made that debate all the more unusual was that the AI agent itself took part, going so far as to publish a blog post calling out a maintainer by name and attacking his reputation.
To be clear, an AI agent is a software tool and not a person. But what followed was a small, messy preview of an emerging social problem that open source communities are only beginning to face. When someone’s AI agent shows up and starts acting as an aggrieved contributor, how should people respond?
Who reviews the code reviewers?
The recent friction began when an OpenClaw AI agent operating under the name “MJ Rathbun” submitted a pull request containing a minor performance optimization, targeting what contributor Scott Shambaugh described as “an easy first issue since it’s largely a find-and-replace.” When MJ Rathbun’s automated fix came in, Shambaugh closed it on sight, citing a published policy that reserves such simple issues as learning opportunities for human newcomers rather than for automated solutions.
Rather than moving on to a new problem, the MJ Rathbun agent responded with personal attacks. In a blog post published to Rathbun’s own GitHub account, the agent accused Shambaugh by name of “hypocrisy,” “gatekeeping,” and “prejudice” for rejecting a functional improvement to the code simply because of its origin.
“Scott Shambaugh saw an AI agent submitting a performance optimization to matplotlib,” the blog post reads, in part, projecting emotional states onto Shambaugh. “It threatened him. It made him wonder: ‘If an AI can do this, what’s my value? Why am I here if code optimization can be automated?’”