
Seems like Anthropic has too much money on their hands and are looking for ways to spend it. It’s surprising to see lean AI startups accumulate fat so quickly. Usually this sort of wheel spinning is reserved for large corporations.

And it’s not just them. To me this trend screams “valuations are too high”, and maybe hints at “progress might start to stagnate soon”.



Anthropic is a Public Benefit Corporation whose governance is very different from a typical company in that it doesn’t put shareholder ROI above all else. A majority of its board seats are reserved for people who hold no equity whatsoever and whose explicit mandate is to look out for humanity.

https://www.anthropic.com/news/the-long-term-benefit-trust

https://time.com/6983420/anthropic-structure-openai-incentiv...


This is why I cancelled my ChatGPT subscription and moved to Claude. It's kind of silly, but I feel like the products are about equivalent for my use case, so I'd rather do business with a company that is acting in good (better?) faith.


Don't think that's silly at all.


Hope not - I haven't purchased a Nestle brand in years for this exact reason.


Even if they don't draw high salaries from this activity, there is another path. The next step in ~10 years could be to offer their services to governments as "automated court decisions".

Then the people who funded / trained this "justice" out of the goodness of their hearts would actually have leverage, in terms of concrete power.

It's a much more subtle way to capture power if you can replace the judges with your software.


Anthropic pays their engineers pretty well. They're doing just fine, at least for as long as people are pouring money into their company. But that's everyone in this space, isn't it?


I guess they can get them to rewrite the US Constitution to remove that pesky "fair trial" bit and, since they would control the narrative, delete 1000+ years of common law.

Brave new world, indeed...


Thanks but no thanks.


That isn't silly, that's one of the only ways to exercise agency under hypercapitalism. I recently cancelled my Amazon Prime membership and got a Costco membership for the same reason. I don't get every product I want, but I'm also okay with that.


This has to be a meme. Costco is peak hypercapitalism lol.


Could you say more?


It's a $500B company that undercuts everyone else with incredible efficiency, just like Amazon. It's an example of how capitalism can be great. If you really want to get out of capitalism, you can just buy directly from farmers or grow your own food.

The whole thing about no ethical consumption under capitalism is just a way to enjoy the conveniences of capitalism from a moral high ground. Opting out is totally doable, you just might not enjoy it haha.


I guess the angle I was coming at it from is that they pay their employees a living wage. I need to buy toilet paper from somewhere, and between Amazon and Costco I would much rather give my money to Costco.


The secret is buying a bidet so you don't need to buy from either ever again!


Hell, just buy from Wallyworld, where you get low, low prices and pseudo-socialism with their employees on food stamps.

The camel's gotta get its nose in the tent somehow.


I'm not sure if you are being sarcastic or not, but the practical upshot of this new "Public Benefit Corporation" thing, with or without a trust or non-profit attached, is that you can tell both the public and your investors to fuck off. The reason all the big AI startups suddenly want to use it is that they can. Normally no sane investor would actually invest in such a structure, but right now the fear that you might be left out of the race for humanity's "last invention" is so acute that they do it anyway. But if Dario Amodei actually cared about humanity any more than Sam Altman does, that would be the surprise of the year to me.


Can you imagine a hypothetical AI company that did care about humanity, and if so, how would it look different from Anthropic?


It wouldn't be doing this: https://investors.palantir.com/news-details/2024/Anthropic-a...

It wouldn't specifically brag about doing it, while leaving out that they were specifically dealing with Palantir, because they know what they're doing is unethical: https://www.anthropic.com/news/expanding-access-to-claude-fo...

Being available for use by militaries is incredibly irresponsible, regardless of what scope is specifically claimed, because of the inherent gravity of the situation when a military is wrong. The US military maintains a good deal of infrastructure in the US; putting into their hands an unreliable, incompetent calculator puts lives at risk.

It would be structured as a non-profit (there are no teeth to a PBC; the structure is entirely to avoid liability, and if you have no trust in the executive body of an organization, it has zero meaningful signal).

It would have a different leadership team.

It would have a leader who could steelman his own position competently. Machines of Loving Grace was less redeeming than Lenat's old stump speeches for his position, despite Amodei starting up in an industry significantly more geared for what he had to say, and despite Lenat having an incredibly flexible sense of morality. Its leader would not have a history of working for Chinese companies and then jingoistically advocating for export controls.

It would have different employees than the people I know who are working there, who have a history of picking the most unethical employers they can find, in a fashion not dissimilar to how Illumination Entertainment's "Minions" select employers.


You seem to misunderstand benefit corporations. They remain committed to profit and are just as subject to their board and officers as any other corporation.

There are sane investors that prefer investing in companies that adopt these corporate structures. Based on data, those investors see public benefit corporations as more profitable and resilient. They are able to attract employees and customers that would otherwise not be interested or might be less interested.


The attempt is commendable, but the agency problem is well understood, and none of these alternative structures have really solved it.


> the agency problem is well understood

What is "the agency problem"?


Very generally, it is (https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble... ) about the conflict of interest between the agent (the people taking action) and the principal (the entity or person on whose behalf the action is taken).

In modern management compensation theory (https://saylordotorg.github.io/text_introduction-to-economic... ) this is key to why executive compensation has increased much faster than workers' in the last 50 years.

Stock-based compensation mixes evolved from this thesis; they're quite common in the Valley, and this is why almost all OpenAI staff wanted Sam Altman back even though the non-profit board did not.

Aligning key talent's compensation to enterprise value is only viable in unrestricted for-profit entities; any structure with limits (capped profit, public benefit corporation, non-profit, trust, 501(c)s, etc.) does not work as well.

Talent will then leave for a for-profit entity that can offer better compensation than a restricted entity can, because it shares a % of its enterprise value, which restricted entities either cannot do or cannot do with the same liquidity/value [1].

---

[1] This is why public companies are more valuable for RSUs/options than private companies, and why cash-flow-positive companies like Stripe still raise private money just to give liquidity to employees.


Put this and 'don't be evil' and 5 dollars in my hand and I'll give you a cup of coffee.


Coffee for $5? That's a steal in this economy!


The coffee is made with the assistance of AI, which means some nonzero portion will be something other than coffee, but at least it means every sip is an adventure.


This is one of the funniest takes on AI I've read; it could've come straight out of a video game like The Outer Worlds, with its absurd takes on crapitalism.

It's not the best choice, it's Spacer's Choice!


Isn't there an SCP where occasionally it spits out liquid magma or strange matter or something?


The exact opposite. Relative to ChatGPT, Anthropic has an enormous "brand problem." What they should be doing is exclusive deals like this, but with large publishers on a recurring basis, while figuring out how to teach consumers who they are and how to use them best. For like 99% of use cases all these products are at parity, and the real business gains are in finding a way into consumers' lives.

Semi-relevant sidenote: ChatGPT spent $8M on a Super Bowl commercial yesterday just to show cool visualizations, instead of any emotional product use case, to an audience the overwhelming majority of which has never had direct experience with the product.

These companies would be best served building a marketing arm away from the main campus in a place like LA or NY to separate the gen pop story from that of the technology.


I disagree. I think Anthropic, like the other big players, is trying to get some of that government money. Releasing policy-adjacent papers seems like a way to alert government officials that Anthropic ought to be in the room when the money starts changing hands.


I am inclined to agree. If you're on the precipice of automating or transforming knowledge work, and the value of being first is nearly infinite (due to "flywheel effects"), why would you dedicate any energy to studying the impact of AI on jobs directly? The thesis is that everything changes.

I think AI in its current iteration is going to settle into being like a slightly worse version of Wikipedia morphed with a slightly better version of stackoverflow.


I think that strongly underestimates the impact LLMs, especially reasoning models, have on how code is written today.


Educate me. I find them useful but they are less so when you try to do something novel. To me, it seems like fancy regurgitation with some novel pattern matching but not quite intuition/reasoning per se.

At the base of LLM reasoning and knowledge is a whole corpus of reasoning and knowledge. I am not quite convinced that LLMs will breach the confines of that corpus and the logical implications of the data there. No “eureka” discovery, just applying what we already have laying around.


Let's say I can't fully disclose the details because it is an area I am actively working on, but I had an algorithmic problem that was already solved in an ancient paper, and after a few hours of research I could find no open implementation of it anywhere. I thus spent quite some time re-implementing the algorithm from scratch, but it kept failing in quite a few edge cases that should have been covered by the original design.

Just to try it out, I uploaded the paper to DeepSeek-R1 and wrote a paragraph on the desired algorithm: that it should code it in Python and that the code should be as simple as possible while still working exactly as described in the paper. About ten minutes later (quite a long reasoning time, but inspecting the chain of thought, it did almost no overthinking and only reasoned about ideas I had or should have considered), it generated a perfect implementation that worked for every single test case. I uploaded my own attempt, and it correctly found two errors in my code that were actually attributable to naming inconsistencies in the original paper, which the model was able to spot and fix on the fly. (The model did not point this out; I had to figure it out myself.) I would never have expected AI to do that in my lifetime just two years ago.

I don't know whether that counts as "novel" to you, but before DeepSeek, I also thought that Copilot-like AI would not be able to really disrupt programming. This one experience completely changed my view. It might be that the model was trained on similar examples, but I find that unlikely, just because the concrete algorithm cannot be found online except in the paper.


This fits my experience. When the information is already encoded somehow, LLMs excel at translating it to another medium.

Combined with the old “nothing new under the Sun” maxim, in that most ideas are re-hashes or new combinations of existing ideas, and you’ve got a changed landscape.


clearly NOT novel as you so clearly explained, "an algorithmical problem that was already solved in an ancient paper"


Well, of course. Realistically, I would not expect AI systems like this to be very useful for novel cutting-edge scientific results, proving mathematical theorems etc. in the next few years.

But this is not the majority of what software developers are doing and working on today. Most have a set of features or goals to implement using code satisfying certain constraints, which is what current reasoning AI models seem to be able to do very well. Of course, this test was not rigorous in any meaningful way, but it really changed my mind on the pace of this technology.


I think the trap people fall into is assuming LLMs need to be novel, or reason as well as a human, to revolutionize society.

Plenty of value is already added just by converting unstructured data to structured data. If that were all LLMs did, they would still be a revolution in programming and human development. So much manual entry and development work has essentially evaporated overnight.

If there were never a chat-based LLM "agent", LLMs just converting arbitrary text to a structured JSON schema would be the biggest advancement in comp sci since the internet. Nothing equivalent existed before, except manual extraction or rule-based hard coding.

Judging LLMs based on some criteria of creativity or intuition from a chat is missing the forest for the trees.
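As a toy sketch of that text-to-JSON step (the model reply, field names, and helper below are all invented purely for illustration; real pipelines would validate against an actual schema):

```python
import json
import re

# Hypothetical free-form model reply -- the kind of unstructured text
# the comment above describes turning into structured data.
raw_reply = """Sure! Here is the extracted record:
{"name": "Jane Doe", "company": "Acme", "role": "engineer"}
Let me know if you need anything else."""

def extract_json(reply: str) -> dict:
    """Pull the first JSON object out of a free-form model reply."""
    # Non-greedy match from the first '{' to the next '}' (no nesting here).
    match = re.search(r"\{.*?\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))

record = extract_json(raw_reply)
print(record["company"])  # -> Acme
```

The interesting part isn't the parsing, which is trivial; it's that the model does the extraction from arbitrary prose, and this wrapper just makes the result machine-consumable.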


> find them useful but they are less so when you try to do something novel.

Well over 90% of work out there is not novel. It just needs someone to do it.



Because that research helps you understand your market and where the value generation is. This can expose where to better invest.


A lot of assumptions there. Why isn't Ford the only motor company?

And if the flywheel is that AI begets AI exponentially in an infinite loop then those share certificates you own probably won't be worth much. The AI won.

Coincidentally, Anthropic's mission is AI safety.


Understanding who is using your product is wheel spinning?


I don't see it. This is just an analysis of how Anthropic customers are using the product and what investment areas seem most promising in the future - why wouldn't they want that?


It's clearly more than an interesting tech blog post written by one of the data guys in their spare time. It's an "initiative".

That said, this doesn't seem like completely superfluous "fat" like what Mozilla does. It seems very much targeted at generating interesting bits of content marketing and headlines, which should contribute to increasing Anthropic's household brand name recognition vs. other players like OpenAI, as well as making them seem like a serious, trustworthy institution rather than a rapacious startup that has no interest in playing nice with the rest of society. That is: it's a good marketing tool.

My guess is that they developed it internally for market research, and realized that the results would make them look good if published. Expect it to be "sunset" if another AI winter approaches.


On the contrary, this is very important information to have, in order to understand your customer base: how sticky you are with them, what features you need to focus on, etc.


We live in a world where there's a lot of talk about how AI might impact societies and economies - but little actual data. To me it seems very worthwhile to try to add 'any' data to that discussion and track how things change over time. Are reports of economic or labour trends pointless? Should companies not track how people use their products? I don't think it costs Anthropic much to do this - it's work for a couple of people to analyze their database.


idk, the models themselves are quickly becoming a commodity. It makes sense to spend money figuring out go-to-market rather than just improving the models themselves.


I would argue this is within their overall objective. It’s not like Stripe creating a publisher (??)


They only have like 500 employees. And you could argue this is part of their stated mission.


only?


And yet they don't have the resources to let job applicants know when their application was unsuccessful. You just get an email after you applied saying: "We may not reach out unless we think you are a strong fit for the role you applied to. In the meantime, we truly appreciate your patience throughout our hiring process." They also tell you not to use AI in the application.



