Anthropic can't prop up Nvidia and the chip industry by itself. If AI as an industry can't start turning a dollar into $1.05, a lot of stuff starts falling in value.
Yes, because they've vibed it into phenomenally unnecessary complexity. The mistake you continually make in this thread is to look at complexity and see something that is de facto praiseworthy and impressive. It is not.
Take the loadInitialMessage function: It's encumbered with real world incremental requirements. You can see exactly the bolted-on conditionals where they added features like --teleport, --fork-session, etc.
The runHeadlessStreaming function is a more extreme version of that where a bunch of incremental, lateral subsystems are wired together, not an example of superfluous loc.
The file is more than 5,000 lines of code. The main function is 3,000 lines. Code comments make reference to (and depend on guarantees about) the specific behavior of code in other files. Do I need to explain why that's bad?
By real-world polish, I don't mean refining the code quality but rather everything that exists in the delta between proof of concept vs real world solution with actual users.
You don't have to explain why there might be better ways to write some code because the claim is about lines of code. It could be the case that perfectly organizing and abstracting the code would result in even more loc.
This incident involved many people over a rather long time scale, and it was important to disentangle how people perceived events from how they actually unfolded. The subject matter is deeply subjective, and multiple failed attempts at writing this doc came as a result of aiming for objectivity, for blameless representation. Therefore, those named in this report are:
- Full-time employees of Ruby Central
- Part-time consultants who were involved in access discussions
- Anyone who made an access change from September 10th-18th, 2025
- Those who have already been publicly identified in the discourse
Volunteer groups, including the Ruby Central Board and the Open Source Software (OSS) Committee, are listed, but their actions are represented as a group. Individual quotes from the OSS Committee are used without direct attribution when they represent a general consensus.
Some execution failures and mistakes are individual, but the purpose of having a foundation and having an institution is that it can rise above individual limitations and provide robust, fault-tolerant systems. Therefore, these are our mistakes, collectively. And collectively we'll learn from them, but only if we face what happened, what we meant to do, and where we fell short.
The hope is that by sharing this, we can provide some closure to the community and increase transparency
The undeniable effect of masking specific comments made by OSS committee members is to protect three members (2 current, 1 former) of Shopify's technical leadership around Ruby and Rails, who have all since left the committee. The one who left Shopify went to 37signals after.
You’d think that name, Shopify, would appear three times, once per employee/committee member. Or just once, to say the entire OSS committee was employed by Shopify, if we’re still identifying the group strictly as a group. Either would be fine.
I read the article, but I’m out of the loop enough that I don’t understand the Shopify and 37signals piece of this. Can you clarify? Is there another place I can be pointed to for that background information?
Deterministic inference is mechanically indistinguishable from decompression or decryption, so if there's a way to one-weird-trick DMCA, it's probably not this.
You’d think that, but it seems like big business and governments are treating inference as somehow special. I dunno, maybe low temperatures can highlight this weird situation?
Temperature is an easy knob to twist, after all. Somebody (not me, I’m too poor to pay the lawyers) should do a search and find where the crime starts.
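For anyone following along, the "knob" works like this: logits are divided by the temperature T before softmax, so as T approaches 0 the distribution collapses onto the argmax token (greedy decoding). A minimal sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, T):
    # Scale logits by 1/T, then apply a numerically stable softmax.
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits
low = softmax_with_temperature(logits, 0.1)
high = softmax_with_temperature(logits, 2.0)
assert low[0] > 0.99          # T -> 0: nearly all mass on the argmax
assert high[0] < low[0]       # higher T flattens the distribution
```

In practice T = 0 is usually special-cased as a plain argmax rather than an actual division by zero.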
Well, it's still not deterministic even at temp 0. The tech described in my comment's parent is speculative, and technically it's not even inference, once it's perfectly reproducible.
At that point it's retrieving results from a database.
EDIT: how would OP address my main point, which is that det. inference is functionally equivalent to any arbitrary keyed data storage/retrieval system?
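To make the equivalence concrete: a greedy (temp-0, tie-free) decoder is a pure function of its prompt, so over any finite set of prompts it behaves identically to a lookup table. A toy sketch, where the hand-made bigram `TABLE` stands in for a real model:

```python
# Hypothetical deterministic next-token map; a real LLM would compute this.
TABLE = {"a": "b", "b": "c", "c": "a"}

def greedy_decode(prompt: str, steps: int = 5) -> str:
    # Deterministic "inference": always append the single most likely token.
    out = prompt
    for _ in range(steps):
        out += TABLE[out[-1]]
    return out

prompts = ["a", "b", "c"]
# Precompute every output: this dict is the "keyed retrieval system".
database = {p: greedy_decode(p) for p in prompts}

for p in prompts:
    # No caller can distinguish live decoding from the stored lookup.
    assert greedy_decode(p) == database[p]
```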
> The tech described in my comment's parent is speculative, and technically it's not even inference, once it's perfectly reproducible.
This is not true. Fabrice Bellard's ts_zip [0] and ts_sms [1] use an LLM to compress text. They beat stuff like .xz but are of course much slower. Now, if the LLM were non-deterministic, you would have trouble decompressing into exactly what was compressed. So they run the LLM deterministically.
As a work of persuasive writing, this is unfocused and seems mostly generated.
One thing I would have expected of someone who knows their history - forget LLMs, this is how startups have worked for decades now. You're only as good as your idea, your ability to execute, and your moat. And the small fish get eaten.
> The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.
Note the needless undercutting of the metaphor for the sake of the limp rhetorical flourish.
> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition. You can’t step outside the forest to warn people about the forest. There is no outside.
Quite dramatic!
Except literally going outside and just talking to people? Using whiteboards?
Also, you fed it when you used a model to write this blog post. You didn't have to do that.
The language feels like a solution in search of a problem, and the mostly-generated README reduces my confidence in the quality of the project before I've even learned that much about it.
One example:
> Best of all, they work together. You can store your .glp blueprints in a Docker container—creating software that is immortal in both environment and logic.
This is nonsensical. The entire point of a container is it ought to contain only what's necessary to run the underlying software. It's just the production filesystem. Why would I put LLM prompts that don't get used at runtime in a container?
What other language-agnostic methods of describing complex systems is your project inspired by? In competition with?
---
By using this tool, a programmer or team is sending the message that:
"We expect LLM generated code to remain a deeply coupled part of our delivery process, indefinitely"
But we didn't know about LLMs 5 years ago. What is the argument for defining your software in a way that depends on such a young technology? Most of the "safety" features here are related to how unsafe the tech itself still is.
"Nontrivial LLM driven rewrites of the code are expected, even encouraged"
Why is the speedy rewriting of a system in a new language such a popular flex these days? Is it because it looks impressive, and LLMs make it easy? It's so silly.
And if the language allows for limiting the code the LLM is allowed to modify, how is it going to help us keep our overall project language-agnostic?
Shopify and/or its technical leadership worked its connections to oust a Rubygems maintainer they saw as a threat to Ruby projects Shopify has invested in.
This was especially provocative because it involved Ruby Central asserting control over Rubygems, which it does not own.
It was (by credible accounts) a "preemptive strike" on this maintainer, and thus was not communicated to other RG maintainers, who were understandably angry.
The statement from RC at the time sounded like a lot of CYA, and this doesn't read as all that sincere either.
That’s what it looks like to me, but I haven’t yet seen a good explanation of their motive. Why would the development of `rv` be such a threat to them?
I know specific individuals hate Andre and have had beef with him for years, but it’s hard to see what might have motivated Shopify and specifically Ufuk Kayserilioglu to carry this out.
> Why would the development of `rv` be such a threat to them?
Well, package managers and language bundlers/runtimes are the hottest new luxury item for big tech - maybe they're worried rv gets bought in the same way that Anthropic bought bun, and OpenAI bought uv (Astral). Though at the time, none of that had happened yet.