The safety teams are trivial expenses for them. They fire the safety team because explicit failure makes them look bad, or because the safety team doesn't go along with a party line and gets labeled disloyal.
Every time somebody writes an article like this without any dates and without saying which model they used, my guess is that they've simply failed to internalize the idea that "AI" is a moving target, or to understand that they saw a capability level from a fleeting moment in time, rather than an Eternal Verity about the Forever Limits of AI.
Funnily enough, we have seen comments like this with every single model release: "Oh yeah, I agree Claude 3 was not good, but now with Claude 3.5 I can vibe-code anything."
Rinse and repeat with every model since.
There also ARE intrinsic limits to LLMs; I'm not sure why you deny them?
There are intrinsic limits to vanilla transformer stacks. Nobody knows where they are. We don't know how un-vanilla Opus 4.6 or GPT 5.3 are. We don't know what's in development or which new ideas will pan out. But it will still probably be called an "LLM".
It would not surprise me at all for BB(7) to exceed Graham's number. Just a Kirby-Paris hydra or a Goodstein sequence gets you to ε_0 in the fast-growing hierarchy, where Graham is around ω+2.
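For concreteness, a Goodstein sequence rewrites a number in hereditary base-n notation, bumps every n to n+1, and subtracts 1; it provably terminates, but for seeds of 4 and up only after unimaginably many steps. A minimal sketch (function names are mine):

```python
def bump(m, b):
    """Rewrite m in hereditary base-b notation, then replace every b with b+1."""
    if m == 0:
        return 0
    result, exp = 0, 0
    while m:
        digit = m % b
        if digit:
            # exponents are themselves rewritten hereditarily
            result += digit * (b + 1) ** bump(exp, b)
        m //= b
        exp += 1
    return result

def goodstein(m, max_steps=50):
    """Goodstein sequence starting at m: bump the base, subtract 1, repeat."""
    base, seq = 2, [m]
    while m and len(seq) <= max_steps:
        m = bump(m, base) - 1
        base += 1
        seq.append(m)
    return seq

print(goodstein(3))       # [3, 3, 3, 2, 1, 0] -- terminates quickly
print(goodstein(4)[:3])   # [4, 26, 41] -- terminates, but not in this universe
```

Seed 3 dies in five steps; seed 4 already outruns f_ω-scale growth, which is the point of the ε_0 comparison above.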
The 79-bit lambda term λ1(λλ2(λλ3(λ312))(1(λ1)))(λλ1)(λλ211)1 in de Bruijn notation exhibits f_ε0 growth without all the complexities of computing Kirby-Paris hydras or Goodstein sequences. Even that is over 60% larger than the 49-bit Graham exceeder (λ11)(λ1(1(λλ12(λλ2(21))))). I think one should be quite surprised if one could climb from f_4 (2↑↑2↑↑2↑↑9) to f_{ω+1} (Graham) with just 1 additional state.
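The bit counts can be sanity-checked against the standard binary lambda calculus encoding (variable i → '1'×i + '0', abstraction → '00' prefix, application → '01' prefix); the little parser below is my own sketch for single-digit de Bruijn indices, with juxtaposition read as left-associative application:

```python
def parse(s):
    """Parse a de Bruijn term like '(λ11)(λ1(1(λλ12(λλ2(21)))))' into an AST."""
    pos = 0
    def term():
        nonlocal pos
        atoms = []
        while pos < len(s) and s[pos] != ')':
            c = s[pos]
            if c == 'λ':                  # abstraction body extends maximally
                pos += 1
                atoms.append(('abs', term()))
            elif c == '(':
                pos += 1
                atoms.append(term())
                pos += 1                  # consume ')'
            else:                         # single-digit de Bruijn index
                atoms.append(('var', int(c)))
                pos += 1
        t = atoms[0]
        for a in atoms[1:]:               # juxtaposition is left-associative
            t = ('app', t, a)
        return t
    return term()

def bits(t):
    """Size of a term under the binary lambda calculus encoding."""
    if t[0] == 'var':
        return t[1] + 1                   # '1'*i + '0'
    if t[0] == 'abs':
        return 2 + bits(t[1])             # '00' + body
    return 2 + bits(t[1]) + bits(t[2])    # '01' + left + right

print(bits(parse('λ1(λλ2(λλ3(λ312))(1(λ1)))(λλ1)(λλ211)1')))  # 79
print(bits(parse('(λ11)(λ1(1(λλ12(λλ2(21)))))')))             # 49
```

Both sizes check out: 79 bits for the f_ε0 term and 49 for the Graham exceeder.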
Thanks to OpenAI for voluntarily sharing these important and valuable statistics. I think these ought to be mandatory government statistics, but until they are or it becomes an industry standard, I will not criticize the first company to helpfully share them, on the basis of what they shared. Incentives.
I have tried to tell my legions of fanatic brainwashed adherents exactly this, and they have refused to listen to me because the wrong way is more fun for them.
Translators? Graphic artists? The omission of the most obviously impacted professions immediately identifies this as a cooked study, along with talking about LLMs as "chatbots". I wonder who paid for it.
Are graphic artists actually getting replaced by AI? If so, that would surprise me: as impressive as AI image generation is, very little of what it does seems like it would replace a graphic artist.
Doubling the productivity of 20% of workers, in cases where a lower price doesn't increase demand, can shift prices in the whole system as unemployed artists compete with other artists for wages. AI won't take your job; someone else unemployed by AI will take your job. (NGDPLT, nominal-GDP level targeting, partially solves this, but that's a higher competence level than civilization has.)