>but the moment their own code leaks they reach for DMCA takedowns.
Did they actually? Someone can go to prison for 5 years for that.
Fact 1: AI generated code has no copyright, so the Digital Millennium Copyright Act does not apply.
Fact 2: Misrepresenting your copyright ownership under the DMCA is felony perjury.
Fact 3: The existence of undercover.ts in the leak is grounds to void any copyright claims on whatever human written code might have existed in Claude Code. You have a DUTY TO DISCLOSE any AI generated code in your copyrighted work. undercover.ts HIDES DISCLOSURE to FRAUDULENTLY claim all the code is human written when it is not.
Given the current administration has a bone to pick with Anthropic, it was a VERY BAD IDEA for them to send false DMCA takedowns to GitHub. Someone at Anthropic may be the very first person ever to go to prison under that section of the DMCA.
Thaler v. Perlmutter: The D.C. Circuit Court affirmed in March 2025 that the Copyright Act requires works to be authored "in the first instance by a human being," a ruling the Supreme Court left intact by declining to hear the case in 2026.
Courts have ruled that "authors and inventors" means people. Only people. A monkey taking a selfie with your camera doesn't mean you own a copyright. An AI generating code with your computer is likewise devoid of any copyright protection.
The ruling says that the LLM cannot be the author. It does not say that the human being using the LLM cannot be the author. The ruling was very clear that it did not address whether a human being was the copyright holder because Thaler waived that argument.
The position with a monkey using your camera is similar, and you may or may not hold the copyright depending on what you did: was it pure accident, or did you set things up? Opinions on the well-known case are mixed: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
Where wildlife photographers deliberately set up a shot to be triggered automatically (e.g. by a bird flying through the focus) they do hold the copyright.
AI generated code has no copyright. And if it DID somehow have copyright, it wouldn't be yours. It would belong to the code it was "trained" on. The code it algorithmically copied. You're trying to have your cake, and eat it too. You could maybe claim your prompts are copyrighted, but that's not what leaked. The AI generated code leaked.
The linked document labeled "Part 2: Copyrightability", section V. "Conclusions" states the following:
> the Copyright Office concludes that existing legal doctrines are adequate and appropriate to resolve questions of copyrightability. Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection. As described above, in many circumstances these outputs will be copyrightable in whole or in part—where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements.
So the TL;DR: under the guidelines outlined in those conclusions, pure slop is NOT copyrightable, while the copyrightability of human-AI collaboration is determined on a case-by-case basis. I will preface this all with the standard IANAL, I could be wrong, etc., but with the concluding language saying slop is merely "unlikely" to be copyrightable, it sounds less cut and dry than you imply.
That's typical of this site. I hand you a huge volume of evidence explaining why AI generated work cannot be copyrighted. You search for one scrap of text that seems to support your position even when it does not.
You have no idea how bad this leak is for Anthropic because with the copyright office, you have a DUTY TO DISCLOSE any AI generated work, and it is fully RETROACTIVE. And what is part of this leak? undercover.ts. https://archive.is/S1bKY Where Claude is specifically instructed to HIDE DISCLOSURE of AI generated work.
That's grounds for the copyright office and courts to reject ANY copyright they MIGHT have had a right to. It is one of the WORST things they could have done with regard to copyright.
I merely read the PDF articles you linked, then posted, verbatim, the primary relevant section I could find therein. Nowhere does it say that works involving humans in collaboration with AI can't be copyrighted. The conclusions linked merely state that copyright claims involving AI will be decided on a case by case basis. They MAY reject your claim, they may not. This is all new territory so it will get ironed out in time, however I don't think we've reached full legal consensus on the topic, even when limiting our scope to just US copyright law.
I'm interpreting your most recent reply to me as an implication that I'm taking the conclusions you yourself linked out of context. I'm trying to give the benefit of the doubt here, but the 3 linked PDF documents aren't "a mountain of evidence" supporting your argument. Maybe I missed something in one of those documents (very possible), but the conclusions do not say what you imply they say.
Whether or not a specific git commit message correctly cites Claude usage may further muddy the waters more than IP lawyers are comfortable with at this time (and therefore add inherent risk to current and future copyright claims on said works), but those waters were far from crystal clear in the first place.
Again, IANAL, but from my limited layman perspective it does not appear the copyright office plans, at this moment in time, to categorically reject AI-collaborated works from copyright.
Your most recent link (Finnegan) is from an IP lawyer consortium that says it's better to include attribution and disclosure of AI to avoid current and future claim rejections. Sounds like basic cover-your-ass lawyer speak, but I could be wrong.
Full disclosure: I primarily use AI (or rather agentic teams) as N sets of new eyeballs on the current problem at hand, to help debug or bounce ideas off of, so I don't really have much skin in this particular game involving direct code contributions spit out by LLMs. Those that have any risk aversion, should probably proceed with caution. I just find the upending of copyright (and many other) norms by GenAI morbidly fascinating.
Currently, the US copyright application process has an AI disclosure requirement for the determination of applicability of submitted works for protections under US copyright law.
The copyright office still holds that human authorship is a core tenet of copyrightability. However, whether a submission's AI-generated material stays below the "de minimis" threshold needed to uphold a copyright claim is still being decided and refined by the courts, and at the moment the distinction appears to fall on whether the AI was used "as a tool" or was "an author itself", with the former covered in certain cases and the latter not.
The registration process makes it clear that failure to disclose that a large portion of a submission was authored by a contractor or an AI can result in rejection of the copyright claim, either now or retroactively upon discovery.
You do not apply for copyright. In the US you can, optionally, register a copyright. You do not have to, but it can increase how much you get if you go to court.
I do not know whether any other country even has copyright registration.
Your main point, that this is something the courts (or new legislation) will decide, is of course correct. I am inclined to think this is only a problem for people who are vibe coding. The moment a human contributes to the code, that bit is definitely covered by copyright, and unless you can clearly separate the human-contributed bits from the AI-contributed ones, saying the AI-written bits are not covered is not going to make a practical difference.
My (limited) understanding was that without formal registration you cannot file any infringement suits against any works protected by said copyright. Then what's the point of the copyright other than getting to use that fancy 'c' superscript?
That comment is spot on. Claude adding a co-author to a commit is documentation that puts a clear line between code you wrote and code Claude generated, which does not qualify for copyright protection.
The damning thing about this leak is the inclusion of undercover.ts. That means Anthropic has now been caught red handed distributing a tool designed to circumvent copyright law.
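As a sketch of how that co-author trailer separates human-written commits from AI-assisted ones: the trailer mechanism below is git's standard Co-authored-by convention, and the exact trailer wording attributed to Claude Code here is an assumption for illustration.

```shell
#!/bin/sh
# Minimal sketch: mark an AI-assisted commit with a Co-authored-by trailer,
# then filter commits by that trailer when auditing provenance.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "Human-written fix"
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty \
    -m "AI-assisted refactor" \
    -m "Co-Authored-By: Claude <noreply@anthropic.com>"
# List only the commits carrying the AI co-author trailer:
git log --grep="Co-Authored-By: Claude" --format=%s
# Prints: AI-assisted refactor
```

Because `git log --grep` searches the full commit message, the trailer makes AI-assisted commits mechanically discoverable later, which is exactly the "clear line" the comment above describes.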
They can't. AI generated code cannot be copyrighted. They've stated that claude code is built with claude code. You can take this and start your own claude code project now if you like. There's zero copyright protection on this.
It's undetermined whether code will be majority written by machines, especially as people start to realize how harmful these tools are without extreme diligence. Outages at Cloudflare, AWS, GitHub, etc. are just the beginning. Companies aren't going to want to use tools that can cause hundreds of millions of dollars in damages (see the Amazon store going down and causing massive revenue loss).
I'm sure it's not _entirely_ built that way, and practically speaking GitHub will almost certainly take it down rather than do some kind of deep research about which code is which.
That's fine. File a false DMCA claim and that's felony perjury :) They know for a fact that there is no copyright on AI generated code; the courts have affirmed this repeatedly.
Try not to be overly confident about things where even the experts in the field (copyright lawyers) are uncertain.
There are no major lawsuits about this yet, and the general consensus is that even under current regulations it's a grey area. And even if you turn out to be right, and let's say 99% of this code is AI generated, you're still breaking the law by using the other 1%, and good luck proving in court which parts of their code were human written and which weren't (especially when being sued by the company that literally has the LLM logs).
>In case you’re worried, this is still me. These are my own words. Writing is thinking, and it would defeat the purpose for an AI to write in my place on my personal blog.
Hey author. I vouched you so I can reply. Look into drum-buffer-rope. I think you'll like it. I agree with you, AI isn't accelerating the part that needs accelerating.
>10 U.S.C. § 3252 authorizes the Secretary of Defense to exclude a source from defense procurements involving national security systems if there is a supply chain risk, defined as the risk that an adversary may sabotage, maliciously introduce unwanted function, or subvert a covered system.
I think any LLM is covered by that, but specifically for Anthropic,
>Recent research has uncovered several critical vulnerabilities, including the "Claudy Day" attack chain which allows silent data exfiltration through conversation history, and a zero-click XSS prompt injection in the Chrome extension that enabled attackers to inject prompts without user interaction until a patch was released in February 2026.
What is obvious to me however is the timing. This Trump pants-shitting happened just before the Iran invasion. You can just imagine it. Trump wants to send fully autonomous bots into Iran to destroy the non-existent nuclear program. Anthropic leadership tries to make a moral stand saying innocent civilians could die. Trump doesn't care because he wants zero US military casualties even if it means a school full of Iranian children is bombed and everyone is killed. And then we get exactly that plus a forever war.
And obviously, the judge is out of her lane too... since, you know, the rule basically can apply to any AI agent because they're just as likely to do what you ask as they are to delete all your emails without even apologizing for it.
> Not if you want to run any of your banking apps or all sorts of things.
I must be getting old, cause I see everyone saying this in response as if it's a downside. As someone that's getting real tired of every company/product/service on earth trying to have you install their own app (even before we get to the privacy/data concerns, just on a pure convenience/hassle POV), the idea of "WeLl ThEn YoUr BaNk ApP DoEsN'T WoRk" is frankly a bonus.
I can touch to pay with a card, which is faster and more convenient than having to unlock/approve/dick with my phone, which by doing so also allows me to keep NFC off by default (personal preference).
Also, I don't need an app for that, already have one, it's called a browser.
You are getting old (and so am I), but banks are already starting to build out needed features into these apps that don't have equivalents in their web applications, and I'm deeply worried that this will continue. It also honestly needs a legislative solution, but at least where I live there is no appetite to handling that problem.
It's not paying I care about (and I don't need their app to do that, thankfully!), that's a solved problem as you rightly pointed out. It's everything else that makes me nervous as to where it might be going.
Said another way: I'm saying this as a warning, not as a "wahhhh, I don't have the app that I want :'("
The Illinois bill is not about 18+ content. It's about controlling who your children can talk to on social media. The OS age check is just a means to that end. The end is blatantly unconstitutional. The Bill of Rights doesn't mention age limits. Freedom of association applies to kids just as much as it does to adults. If the bill passes, then any racist parent could block all comms from kids of a different color, for example.
I get what you’re saying but it’s a false premise. In today’s era, racist parents already block their children from even attending school with someone of a different color. Merely blocking comms would be a step before that in severity of control.
Parents have always had the ability (though maybe not explicitly the right) to control their children’s environment for the purposes of teaching personal beliefs. So long as the belief itself wasn’t deemed harmful to the child, society would allow it to continue to propagate that way. Racism unfortunately has never been seen as innately harmful. It’s looked down on, yes, but not to the point of making it illegal to enforce in family life.
To be fair, as a parent I don’t want my under age children hooking up with literal nazis on social platforms, whoever that might be. The current tools and controls are lacking. A lot.