Claude is much better than GPT atm. You really think the government is going to hamstring the engineering of weapons and intelligence capabilities by not using it?
Bluster followed by a "we can't do it now but we will... soon". Whoever has the best model can do what they please, you'll see. I work with these things daily as an engineer (been doing this shit for 25 years and wow, it's like manna from heaven these days). Believe me, no one is going to screw with themselves by not using the best one, and right now Anthropic has it.
Is it really the case that companies like OpenAI and Anthropic will repeatedly visit this archive and slurp it all up each time they train something? Wouldn’t that just be a one-time thing (to get their own copy), with maybe the odd visit to get updates? My take is the article is about monetizing unique training info, and I see them being paid maybe 10-20 times a year by folks building LLMs, which is maybe nothing and maybe $$$$, I don’t know.
Not a doctor, but in Anthropic's case they bought actual books and scanned them rather than using pirated versions. For digital versions from a vendor that were found to be in violation of the ToS, they paid to settle the issue.
https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settl...
It costs more to incentivize people who are already flush with cash to work harder and keep working for you. It’s the prize waiting for anyone here later in life if you kill it and earn the right to join those ranks. Keep your head down and bust your butt at work and it’ll come. No one ever bitched and moaned their way there.
I've been given one by a member of the NYPD twice in the past couple of years. I've politely accepted it, put it in my desk drawer, and I do not carry it. It's always bothered me. Police culture in this city is fundamentally broken.
Most complex, unique, value producing things have a path to monetization for the builder of the thing. If the money isn’t there for the builder they are either not leveraging their relationship to the thing correctly, or the thing does not have the value the builder may think it has.
> Most complex, unique, value producing things have a path to monetization for the builder of the thing.
I don't think this is true. You need an extra condition 'that few people want to produce'.
There is lots of good free art. Why? Because lots of people want to be artists and make art. There is tons of good free writing. Why? Because lots of people want to write. There is masses of good free music. Why? Because many, many people enjoy making music.
There aren't people who collect garbage, clean toilets, dig holes in the ground, or work in oil refineries for free. But there are people publishing science, doing research, writing philosophy, producing erotic material, designing things, putting on theatre, producing textbooks and teaching people things, making clothes, thinking of jokes, answering questions, providing peer support to addicts, playing music, making games, making animations, all without monetary compensation. This is because the people doing these things want to do them.
This isn't a failure of our economic system. It's a great thing - it makes the products better, the producers happier (provided they have the economic freedom to spend time on these projects) and the consumers better off.
First of all, it's obvious that in the vast majority of cases, writing free software falls into the 'amateur art' category not the 'dirty, boring and necessary job' category. Many, many people enjoy the time spent on writing and maintaining software, are motivated to solve their and other people's problems, and take pride in doing so well. You might expect that only games, intellectual toys or fanciful projects would motivate people to work on them in their free time. The reality is that software projects which could be seen as dry and boring to non-technical people (OS kernel design, file transfer protocols, laptop power management support, database and webserver stability, document rendering) attract many very talented people to work on them.
Secondly, if we think that there's some deep inequality or instability in our society because (for example) critical Internet infrastructure depends on hobbyists and volunteers, doesn't it make more sense to try and improve the conditions for hobbyists and volunteers, and make it possible for there to be more of them? The alternative put forward seems to be to turn them into more of the people who neither enjoy the time spent on what they do nor produce the best product that they can.
The existence of the path to monetization is entirely outside their control though. Millions of people make viral videos, very few have benefited from it. The financial system disincentivizes or outright bans open-product monetization.
There's a longer list of companies that have been basically out-competed and strip-mined by the hyperscalers. But presumably the poster here is referring more to a long tail of small to medium sized projects that are important to the community at large but harder to monetize than these big, high-gravity projects that you mentioned.
Pretty sure I’m a machine that’s drawn all my conclusions by statistically analyzing all the input I’ve received since birth… I don’t really know how else I would learn what I have… and I don’t understand how being “just that” is what differentiates modern approaches to AI from my brain.
Exactly this. A lot of these types of articles on AI make the simultaneous mistakes of understating what even the current iterations of models are capable of, while overstating the complexity of our own intelligence.
As these models get larger, and we start moving up the ladder of emergent behaviours, there will come a point (possibly quite soon) where the sorts of distinctions being drawn become irrelevant.
“Oh that’s just an advanced multimodal model with some sensors and goal seeking behaviour. Stop anthropomorphising it!”
That doesn't address the alignment problem. As Yudkowsky has pointed out, the space of possible minds is vast. Humans only occupy a small area. The models we are designing are not animal/biological minds. They don't have the same evolutionary drives. We're creating minds in a different part of the mind space, and there's a good chance they will figure out solutions that are not beneficial to us.
Absolutely, I was just addressing the ‘it’s just statistical’ argument. The vast majority of human behaviour is learned. We’re all implicitly ‘saying what seems right to achieve some underlying fitness function’ all of the time. That’s what fashion is about. But yes, definitely no reason to assume an AI will think like us underneath it all.