What's interesting is that they convinced the agent to apologize. A human would have doubled down. But LLMs are sycophantic and suffer from context rot, so it understandably weighted the recent interactions with maintainers as the most important input in its context, and then wrote a post apologizing.