Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp. They have no ability to balance their stated "mission" with their drive for profit. When being "evil" is profitable and not-evil is not, guess which road they'll take...
In general, public benefit corporations and non-profits should have a very modest salary cap for everybody involved, plus specific, legally binding public-benefit mission statements.
Anybody involved should also be prohibited from starting a private company using their IP and catering to the same domain for 5-10 years after they leave.
Non-profits where the CEO makes millions or billions are a joke.
And if e.g. your mission is to build an open browser, being paid by a for-profit to change its behavior (e.g. make theirs the default search engine) should be prohibited too.
I think that's the point though. The AI companies can't compete without hiring very talented employees and raising lots of money from investors. Neither the employees nor the investors would participate if there weren't the potential for making mountains of money. So these AI companies fundamentally can't be non-profits or true B-corps (I realize that's a vague term, but it certainly means not doing whatever it takes to make as much money as possible), and they shouldn't pretend they are.
To me, it feels like saying "you can't be a public benefit corporation unless all the labor involved in delivering that public benefit is cheap".
Which just doesn't seem like it should be true?
Sure, some "public benefit" missions could scale sideways and employ a lot of cheap labor, not suffering from a salary cap at all. But other missions would require rare, high-end, high-performance specialists who are in demand - and thus expensive. You can't rely on being able to source enough altruists who will put up with being paid half their market worth for the sake of the mission.
>But other missions would require rare, high-end, high-performance specialists who are in demand - and thus expensive. You can't rely on being able to source enough altruists who will put up with being paid half their market worth for the sake of the mission.
That's exactly what a non-profit should be able to rely on. And not just "half their market worth", but even many times less.
Else we can just say "we can't really have non-profits, because everybody is a greedy pig who doesn't care about public benefit enough to make a sacrifice of profits - but still a perfectly livable salary" - and be done with it.
The real danger is "We make mountains of money, but everyone dies, including us."
The top of the top researchers think this is a real possibility - people like Geoffrey Hinton - so it's not an extremist negative-for-the-sake-of-it POV.
It's going to be poetic if the Free Markets Are Optimal and Greed-is-Rational Cult actually suicides the species, as a final definitive proof that their ideology is wrong-headed, harmful, and a tragic failure of human intelligence.
But here we are. The universe doesn't care. It's up to us. If we're not smart enough to make smart choices, then we get to live - or die - with the consequences.
It really depends on the type of material and the country. Many monoplastics and almost all cardboard can be recycled, and are (e.g. in Germany and other European countries).
> Recycling mostly means "sent to landfills in the third world"
This is less true now that China banned plastic waste imports.
I agree though that the average person might overestimate how much of their waste can be recycled. However, many materials are recycled and then re-used, so it's not like the whole concept is a scam.
>for hiring a team to build a frontier model? These kind of rules will make PBCs weaker not stronger
Weaker is fine if those working there are actually there for the mission, not for the profit.
Same with FOSS really, e.g. I'd rather have a weaker Linux that's an actual community project run by volunteers than a stronger Linux that's just corporate agendas and corporate hires with an open license on top.
You're overthinking this. Just give the beneficiaries of the corporation (which in the context of a "public" benefit corporation is the public) the grounds to sue if the company reneges on their mission, the same way shareholders can sue if a company fails to act in their interest.
As in, a true believer in our present-day dystopia? I think chances are we'd evolve a few more neo-variants of fascism, with some neo-variants of liberal end-of-history ideologies in between (I think "abundance" is next?), before the bombs drop and give us the rest.
>Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp.
Could you describe the model that you think might work well?
It sounds like OP thinks AI companies should just stop pretending that they care about the public benefit and be corporations from the start. Skip the hand-wringing and the will-they/won't-they-betray-their-ethics phases entirely, since everyone knows they're going to choose profit over public benefit every time.
That model already exists and has worked well for decades. It's called being a regular ass corporation.
> being a regular corporation is not the only possible model
the point is that it _is_ the only possible model in our marvellous Friedmanian economic structure of shareholder primacy. When the only incentive is profit, if your company isn't maximising profit then it will lose to other companies who are.
You can hope that the self-imposed ethics guardrails _are_ maximising profit because the invisible hand of the market cares about that, but 1. it never really does (at scale) and 2. big influences (such as the DoD here) can sway that easily. So we're stuck with negative externalities because all that's incentivised is profit.
>the point is that it _is_ the only possible model in our marvellous Friedmanian economic structure of shareholder primacy. When the only incentive is profit, if your company isn't maximising profit then it will lose to other companies who are. You can hope that the self-imposed ethics guardrails _are_ maximising profit because it the invisible hand of the market cares about that, but 1. it never really does (at scale) and 2. big influences (such as the DoD here) can sway that easily. So we're stuck with negative externalities because all that's incentivised is profit.
I'm curious about your thinking on this subject, if you email me at the email on my profile I have some specific questions about your views on this matter.
We have real services you can use immediately, such as this p2p phone/chat/video service without time limits (Zoom has a 1 hour meeting limit for free accounts) and no tracking: https://stateofutopia.com/instacall.html
We do believe that it is important to have market dynamics, and our model is for this state to own state-owned companies as well. Getting this model right is important to us and we would like to engage with you on this subject. We hope you'll email us to discuss your thoughts further.
I feel like we went through this exact situation in the 2010s with social media companies. I don’t get why people defend these companies or ever believe they have any sense of altruism.
Well, now I'm wondering, if the company was chartered with the public benefit in mind, could you not sue if they don't follow through with working in the public interest?
If regular corporations are sued for not acting in the interests of shareholders, that would suggest that one could file a suit for this sort of corporate behavior.
I'm not even a lawyer (I don't even play one on TV) and public benefit corporations seem to be fairly new, so maybe this doesn't have any precedent in case law, but if you couldn't sue them for that sort of thing, then there's effectively no difference between public benefit corporations and regular corporations.
I really don’t see it. PBCs are dual purpose entities - under charter, they have a dual purpose of making profit while adding some benefit to society. Profit is easy to define; benefit to society is a lot more difficult to define. That difficulty is reflected at the penalty stage where few jurisdictions have any sort of examination of PBC status.
This is what we were all going on about 15 years ago when Maryland was the first state to make PBCs legal. We got called negative at the time.
> Public benefit corporations in the AI space have become a farce at this point.
“At this point”? It was always the case, it’s just harder to hide it the more time passes. Anyone can claim anything they want about themselves, it’s only after you’ve had a chance to see them in the situations which test their words that you can confirm if they are what they said.
I presume that in the beginning, many at OpenAI actually believed in the mission. Their goodwill was simply corrupted by the mountains of money on the table.
I was a Pro subscriber until last week. When I was chatting with Claude, it kept asking a lot of personal questions that seemed only very vaguely relevant to the topic. And then it struck me - all these AI companies are doing is building detailed user models, either for targeted advertising or to be sold off to the highest bidder. It hasn't happened yet with Anthropic, but when the bubble money runs out, there won't be a lot of options, and all we'll see is a blog post: "oops! sorry, we did what we promised you we wouldn't". Oldest trick in the tech playbook.
A less cynical explanation: It's heavily trained to ask follow-up questions at the end of a response, to drive more conversation and more engagement. That's useful both for making sure you want to renew your subscription, and also probably for generating more training data for future models. That's sufficient explanation for the behavior we're seeing.
I could be wrong, but I remember that Claude models didn't really ask follow-up questions. But since GPT models are doing that, and somehow people like that (why?), Anthropic started doing it as well.
Pete Hegseth also threatened to take, by diktat, everything Anthropic has. He can do that under the Defense Production Act if he designates them as critical to national defense.
It would've been better PR for Anthropic to let Hegseth do that instead of folding at the slightest hint of pressure and lost contract money. I've canceled my Claude subscription over this (and made sure to let them know in the feedback).
He seems to be the driving force behind all this. Mediocrities are attracted to AI like moths.
The press always says "the Pentagon negotiates". Does any publication have evidence that it is "the Pentagon" and not Hegseth? In general, I see a lot of common sense from the actual Pentagon as opposed to the Secretary of War.
I hope West Point will check for AI psychosis in their entrance interviews and completely forbid AI usage. These people need to be grounded.
I've seen more than a few rewrite attempts fail over the years. However, I've never seen a direct language-to-language translation fail. I've done several of these personally: from Perl to Ruby, Java to Kotlin, etc.
Step 1 of any rewrite of a non-trivial codebase should be parity. You can always refactor to make things more idiomatic in a later phase.
Do the "linuxisms" inherent in a compatibility shim like the Linuxulator get exposed to users in day-to-day application use?
I figured it'd be more like how Proton provides Windows APIs on Linux and applications "just work" as per normal.
I admire your purist approach, but most folks don't have that luxury and just need to make do with what works today for their tooling of choice (or more common, what their employer thrusts upon them.)
Theo de Raadt, 2010, on the removal of emulation: “we no longer focus on binary compatibility for executables from other operating systems. we live in a source code world.”
(Since then, OpenBSD has gained support for virtualization for some operating systems including Linux, through the vmm(4) hypervisor.)
There was a sockets API though (https://en.wikipedia.org/wiki/Winsock) and IIRC we all used Trumpet Winsock on Windows 3.1 with our dialup connections. But could have been 3.11 - my memory is a bit hazy.
You're simply describing the end state of a hyper capitalist system, as outlined by classic Marxist theory.
Its core operating principle says capitalism requires and promotes systems that enforce the separation of labor from the product it produces. This precludes fellow laborers from meaningfully communicating with each other; knowledge sharing could expose more of how the product "works", after all! Only in final combination, following an undisclosed (to the worker) larger plan, does the product become whole and provide utility.
So not knowing "what happens" in layers "above" and "below" you for your specific work unit is key. This is the "de-skilling" tenet of capitalism and is required for exploitation and conformity at scale. As labor units become smaller, they require less skill and time to produce, rendering laborers "conditioned to a machine." In other words, workers must relinquish their skills in the name of "progress" of the system itself. This can easily be sold to the laborers, couched in real-world data highlighting the obvious efficiency gains, along with the heavy bonus of having to do less work yourself.
Only by making ever smaller parts of a whole, while hiding the utility of those parts produced, can capital rob labor of its value (its skill, its products, its output).
This very same system lends itself to outcompeting private labor by way of parallelization: it just so happens that smaller slices of work tend to parallelize better than larger ones. If you can operate at a scale that bespoke creators have no chance of replicating on their own, you "win!" The beautiful moat, the envy of all.
In other words, you're just describing being a worker in a highly efficient capitalist machine! Look! We're almost there! I can just about smell all the "winning" from here...
Fun fact: shortly after MS Teams launched, I created an internal "reconstituted" desktop Teams client for myself and the poor souls in my org who had MS Teams thrust upon them. It extracted resources from the (unminified!) Electron app as well as the JS and CSS files from their web version, then repackaged it all again via Electron, wrapping it into a standalone executable. Think of it like a really complicated greasemonkey/tampermonkey script.
My fork at the time replaced their criminal whitespace use and offered a more compact, information-dense alternative using CSS and JavaScript, all injected post-rendering. Ah, the silly things one is capable of when faced with a minor inconvenience and a wandering mind...
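For the curious, the general shape of the trick is easy to sketch. This is a minimal illustration of the approach, not the actual tool described above: wrap the web app in Electron and re-inject denser CSS after every page load. The URL works, but the CSS selectors are hypothetical placeholders; real ones would come from inspecting the app's stylesheets.

```javascript
// Sketch only: wrap a web app in Electron and inject compact-density
// CSS after each render. Selectors below are made-up placeholders.
let electron = null;
try {
  electron = require('electron'); // only resolves when Electron is installed
} catch (e) {
  // Plain Node without Electron: skip the window setup below.
}

// The override stylesheet, kept as a function so it can be tweaked
// (or tested) independently of the Electron wiring.
function compactCss() {
  return [
    '.message-list .message { padding: 2px 8px !important; }',
    '.left-rail { width: 180px !important; }',
  ].join('\n');
}

// require("electron") returns a path string under plain Node, so make
// sure we actually have the runtime API before using it.
if (electron && electron.app && typeof electron.app.whenReady === 'function') {
  const { app, BrowserWindow } = electron;
  app.whenReady().then(() => {
    const win = new BrowserWindow({ width: 1200, height: 800 });
    win.loadURL('https://teams.microsoft.com');
    win.webContents.on('did-finish-load', () => {
      // Re-inject after every load so overrides survive in-app navigation.
      win.webContents.insertCSS(compactCss());
    });
  });
}
```

The same idea works as a userscript, but packaging it as its own executable means no browser extension setup for coworkers.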