Hacker News | new | past | comments | ask | show | jobs | submit | adangert's comments | login

Let me reiterate some points for people here:

Income and revenue sources always, inevitably, and without fail, determine behavior.


I think your theory might be missing an extremely relevant and timely counterexample?

I will repeat here again the same comment I made when they posted their constitution:

The largest predictor of behavior within a company, and of that company's products in the long run, is its funding sources and income streams, which are conveniently left out of their "constitution". Mostly a waste of effort on their part.


Anthropic made Super Bowl ads about not having ads. They cannot be trusted either.


Advertisements can be ironic; I don’t think marketing is the foundation I use to judge a company’s integrity.


The largest predictor of behavior within a company, and of that company's products in the long run, is its funding sources and income streams (Anthropic will probably become ad-supported in no time flat), which are conveniently left out of this "constitution". Mostly a waste of effort on their part.


I'm not sure Anthropic will become ad-supported - the vast bulk of their revenue is B2B. OpenAI have an enormous non-paying consumer userbase who are draining them of cash, so in their case ads make a lot more sense.


While true, irrelevant.

This isn't Anthropic PBC's constitution, it's Claude's constitution: the models themselves, not the company, for the purpose of training the models' behaviours and aligning them with the behaviours that the company wants the models to demonstrate and to avoid.


Conway's law seems apt here. The behavior of Claude will mirror the behavior and structure of Anthropic. If Anthropic deems one revenue source higher than another, Claude's behavior will optimize towards that regardless of what was published here.

What a company or employee "wants" and how a company is funded are usually diametrically opposed, the latter always taking precedence. Don't be evil!


Yes, but that is a different level of issue. To analogise in two different ways, first it's like, sure, Microsoft can be ordered by the US government to spy on people and to backdoor crypto. Absolutely, 100%, and most world governments are probably now asking themselves what to do about that. But what you said was kinda like someone saying of Microsoft:

  In the long run autocratic governments spying on their citizens will backdoor all crypto (Microsoft will probably concede to such an order in no time flat), which is conveniently left out in this "unit test". Mostly a waste of effort on their part.
Or if that doesn't suit you: yes, sure, there's a large flashing sign on the motorway warning of an accident 50 miles ahead of you, and if you do nothing this will absolutely cause you problems, but that doesn't make the lane markings you're currently following a "waste of effort".

Also, as published work, they're showing everyone else, including open-weights providers, things which may benefit us users of those models.

Unfortunately, I say "may" rather than "will", because if you put in a different constitution you could almost certainly get a model whose AI equivalent of a "moral compass" is tuned to support anything from anarchy to totalitarianism, from mafia to self-policing, and similarly for all the other axes people care about. With a separate version of the totalitarian/mafia/etc. variants for each specific group that wants to seek power - cf. how Grok was saying Musk is best at everything no matter how nonsensical the comparison was.

But that's also a different question. The original alignment problem is "at all", which we seem to be making progress with; once we've properly solved "at all" then we have the ability to experience the problem of "aligned with whom?"


Is there so far any official/semi-official info about product placement in the current generation of LLMs? I mean, even for coding agents there are tons of services they can recommend and be proficient in using (thanks to deliberate training).


OpenAI are testing ads in the free tier of ChatGPT, but they state that the actual LLM responses won't include advertising/product placement [0].

[0]: https://openai.com/index/our-approach-to-advertising-and-exp...


Awful, awful, awful. Ads lead to anti-consumer behavior, undermine free-market competition, turn capitalism into a pay-to-win game, resemble a cancer, incentivize the creation of extremely harmful platforms (such as slop-filled TikTok), destroy existing companies such as Facebook, and generally harm society by every measure; the cons outweigh any and all pros. Ads transform your product into a sticky candy-box trap for unassuming visitors while your actual customers become the advertising industry - you become, as the saying goes, the product. Adopting ads should be taken as a step toward harming your consumer base so you can vampirically extract attention from them indefinitely and forever, as your company slowly becomes a skeletal drain on all of humanity. There is no such thing as a "good" ad.


Dutch Mao (mentioned briefly in the wiki) is a variant where everyone comes up with a hidden rule before the game begins, and is one of my favorite games of all time.

Followed closely by Eleusis, the master of inductive-reasoning card games and a brilliant Zendo-like experience: https://en.wikipedia.org/wiki/Eleusis_(card_game)


The main argument I hear against banning all ads is that it would hurt small businesses. A better solution might be to ban all ads for companies making above X amount per year, or even better: create systems where users pay for ads themselves; then the incentives would switch to favor consumers.

In any case, totally agree, ad companies are out of control, I'm hoping more Kagi like services start appearing soon.


Banning companies above a certain size still allows an unhappy medium where only "small businesses" BUY the same horrible ads, and we drop one or two Army or IBM ads from the lineup.


Not everything has to be black and white; there is middle ground for improvement. I'm not sure anyone loves the same MegaCorp™ ad plastered all over buildings, highways, and stoplights.

The size, depth, and reach of the advertising industry is a direct result of the amount of money injected into it. The current ad industry is effective, awful, and anti-competitive, and at this point resembles a cancer more than its intended purpose of providing useful information.


No, because small businesses aren't hiring ad agencies staffed by people who spent years studying psychology in order to manipulate people into doing what the company wants, not what the person wants. This is very much an issue of scale.


That market is created when you ban "large companies" from making ads.


Deshittification is directly related to profit motives, VC dollars, and providing a service or good that overwhelmingly exceeds any hope of making substantial ROI in the future. None was shown in any of the above promotional materials; your company and product are tarnishing and devaluing the term - congrats on the achievement. We'll continue to look for another word that hasn't been captured.


Last year ChatGPT helped save my life from having a stroke. LLMs are incredibly beneficial in providing medical information and advice today.


> LLMs are incredibly beneficial ... today.

LLMs sometimes can be incredibly beneficial ... today

LLMs sometimes can be incredibly harmful ... today

Non-deterministic things aren't just one thing, they're whatever they happen to be in that particular moment.


Non-deterministic doesn't mean random or unpredictable. That's like saying the weather forecast is useless because it's not deterministic or always 100% accurate.


Last time I used GPT-4.5 to analyze blood results, it gave different output when I uploaded them as 2 instead of 3 separate CSV files. It was both an amazing experience - clear, easy-to-understand statements and a list of the most common causes - and a terrifying one: "What about X?" "You are absolutely right, there were X results included; disregard everything I wrote above, here is the new analysis."

So for me, non-deterministic means unpredictable. Yes, there was nothing random or non-deterministic in that case; I could repeat both scenarios multiple times and get the same results again. But the result was affected by something I didn't expect to matter. That damages trust in the tool, no matter what we call it.


LLMs seem best at creative brainstorming: coming up with ideas you hadn't thought of. Their weakness becomes a non-issue because the ideas are just things for you to check the viability of; they could be completely unworkable.


> Non-deterministic doesn't mean random or unpredictable. That's like saying the weather forecast is useless because it's not deterministic or always 100% accurate.

I don't know where you got 'useless' from. LLMs are great, sometimes. They're not, other times. Which remarkably, is just like weather forecasts. The weather forecast is sometimes completely accurate. The weather forecast is sometimes completely inaccurate.

LLMs, like weather forecasting, have gotten better as more time and money has been invested in them.

Neither are perfect. Both are sometimes very good. Both are sometimes not.


Non-deterministic means random - that's the definition of the word. The weather forecast is also random - in fact, weather forecast is (if you simplify it too much) an average of several predictive (generative) models.


> Non-deterministic means random - that's the definition of the word.

That's not really the definition. Non-determinism just means the outcome is not a pure function of the inputs. A PRNG doesn't become truly random just because we don't know the state and seed when calling the function, and the same holds for LLMs. The non-determinism in LLMs comes from accepted race conditions in GPU floating-point math and from the PRNG in the sampler.

That's beside the point, but we could have perfectly deterministic LLMs.
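The PRNG point is easy to demonstrate with a toy sketch (plain Python over a made-up 3-token vocabulary, not a real LLM): greedy decoding is a pure function of the logits, and even sampling becomes reproducible the moment the sampler's RNG is seeded.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, rng):
    # Draw one token index from the softmax distribution using rng.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(softmax(logits)):
        cum += p
        if r < cum:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]  # toy next-token logits

# Greedy decoding: a pure function of the logits, deterministic by construction.
greedy = max(range(len(logits)), key=lambda i: logits[i])

# Seeded sampling: same seed, same token, every run.
a = sample_token(logits, random.Random(42))
b = sample_token(logits, random.Random(42))
```

The unpredictability users actually see comes from the serving stack (unseeded samplers, batching, floating-point reduction order), not from anything irreducibly random in the model itself.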


Inconsistent seems a more accessible word. It gives inconsistent results.


ChatGPT isn’t random though.

If you ask it what a star is, it’s never going to tell you it’s a giant piece of cheese floating in the sky.

If you don’t believe me, try it: write a for loop that asks ChatGPT "What exactly is a star (astronomy)?" 1000 times, then tell me how random it is versus how consistent it is.

The idea that non-deterministic === random is totally deluded. It just means you cannot predict the exact tokens that will be produced; it doesn’t mean the output is random like a random number generator and could be anything.

If you ask what Michael Jackson the entertainer is famous for, it’s going to tell you he’s famous for music and dancing, 1000/1000 times. Is that random?


> If you ask it what a star is, it’s never going to tell you it’s a giant piece of cheese floating in the sky.

Turn the Top-P and the temperature up. Turning up the Top-P will enable the LLM to actually produce such nonsense. Turning up the temperature will increase the chance that such nonsense is actually selected for the prediction (output).
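For the curious, here's roughly what those two knobs do at each decoding step. This is a simplified plain-Python sketch (real implementations work on tensors, but the logic is the same):

```python
import math
import random

def top_p_sample(logits, temperature=1.0, top_p=1.0, rng=random):
    # Temperature scaling: higher T flattens the distribution,
    # giving unlikely tokens ("cheese") more probability mass.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # Top-p (nucleus) filtering: keep only the smallest set of tokens
    # whose cumulative probability reaches top_p; the rest are cut.
    probs.sort(key=lambda t: t[1], reverse=True)
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize over the survivors and sample one token index.
    z = sum(p for _, p in kept)
    r = rng.random() * z
    acc = 0.0
    for i, p in kept:
        acc += p
        if r <= acc:
            return i
    return kept[-1][0]
```

With temperature near 0 and a small top_p, the distribution collapses onto the argmax ("a star is a ball of plasma"); push both up and low-probability continuations survive the cut and occasionally get sampled.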


Sure but nobody is doing that, are they?

I'm talking about the standard settings, and in fact GPT-5 doesn't let you change the temperature anymore.

Also, that's not really the point. Humans can also produce nonsense if you torture them until they're talking nonsense, but that doesn't mean humans are "random."

LLMs are not random, they are non-deterministic, but the two words have different meanings.

Random means you cannot tell what is going to be produced at all, i.e. a random number generator.

But if you ask an LLM, is an Apple a fruit, answer yes or no only, the LLM is going to answer yes, 100% of the time. That isn't random.


I agree with everything that you've stated.


Nearly everything in life is non-deterministic.

Most things that are generally helpful and beneficial are not 100% helpful and beneficial 100% of the time.

I used GPT-4 as a second opinion on my medical tests and doctor's advice, and it suggested an alternate diagnosis and treatment plan that turned out to be correct. That was incredibly helpful and beneficial.

You're replying to a person who had a similar and even more helpful and beneficial experience because they're alive today.

Pedantically pointing out that a beneficial and helpful thing isn't 100% beneficial and helpful 100% of the time doesn't add anything useful to the conversation since everyone here already knows it's not 100%.


>LLMs are incredibly beneficial...

No, they can be. To state that they are, as an absolute, based on your sample size of one, especially with regard to other instances where ChatGPT has failed the user with serious physical results, is fallacious.

I am glad that you are OK, but as another user suggested, it's nowhere near as consistently accurate as it needs to be in order to be anywhere near an adequate substitute for a call to a GP or 911.


Any reason you didn't just call your GP or even 911?


Denial, or rather some form of "there's no way this is a frickin' stroke/heart attack, right?", is common when you're having a medical emergency.


Throwing my two cents in: the problem with all of this is that we're dealing with artificially created scarcity for content that can be easily duplicated. Artificial scarcity is a terrible thing, and if we got rid of it in society as a whole, we might end up with less high-production content, but at least we would own all the digital content that was created. Services where consumers pay for, or perhaps even have sway over, the creation of content rather than its distribution - i.e., Patreon - are much more in line with the nature of digital things.

