
> we haven’t donated to Trump

Another reason is that Sam Altman has been willing to "play ball," such as providing high-profile (though largely hollow) announcements that Trump likes to tout as successes. For example:

> "The Stargate AI data center project worth $500 billion, announced by US President Donald Trump in January 2025, is reportedly running into serious trouble.

> More than a year after the announcement, the joint venture between OpenAI, Oracle, and Softbank hasn't hired any staff and isn't actively developing any data centers, The Information reports, citing three people involved in the "shelved idea."

https://the-decoder.com/stargates-500-billion-ai-infrastruct...


When @sama announced within hours that OAI was replacing Anthropic with the "same conditions", it was clear that either the DoW or OAI (or both) was fudging. The DoW balked at Anthropic's conditions, so OAI's agreement must have made the "conditions" basically unenforceable.

And sure enough, my reading of it left the impression the OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."


I'd put money on OpenAI hiding behind the "all lawful use" phrasing to claim high levels of protection.

He also claimed that they would build rules into the model the DoD would use, preventing misuse. In other words, he claims OpenAI will quickly solve alignment and build it right in... I wouldn't hold my breath.


All lawful use. And then they followed up with “intentionally doing illegal things.” If they happen to accidentally do illegal things, OpenAI is ok with it.

I hate this so much. The NSA's spying on everyone in 2010 was “legal”, and I can only imagine how much worse it is now, with AI to follow your digital footprint around everywhere. Too bad we don’t have any more whistleblowers like Snowden.

"Too bad we don’t have any more whistleblowers like Snowden"

Probably because most don't want to end up in Russia?


I feel so sad about Snowden sometimes. I tried reading the first few pages of his book, about how, when he was growing up, you could be anyone on a forum; there was this sense of anonymity and, at the same time, just freedom. And later on, when he saw just how far the government's overreach went, he did what others couldn't.

It wasn't as if there weren't other contractors like Snowden, but there were no other whistleblowers like Snowden.

And where did that leave him? In a country far from his homeland, worried about his safety, called god knows what by the country back home, while most people don't even care.

Snowden didn't do it for the money, he did it for what he felt was right and that's so rare.

It's so sad that when I searched for Snowden on YouTube, the first thing I found was an ex-CIA agent claiming Snowden wasn't innocent and had to befriend Russia, when that was only because the US would literally have killed him and made an example of him for blowing the whistle on such large-scale mass surveillance.

“What kind of asshole reveals the fact we’re the assholes, then doesn’t let us kill him!” is one heck of a comment I found.

Also: we will threaten the whistleblower with the death penalty, but we will not take any action against the act that was whistleblown in the first place (:


I agree. What people forget is that Snowden didn't intend to end up in Russia. He wanted to go from Hong Kong (where he thought he would be safe, but realised extradition was still an option) to Ecuador. But he feared the US would intercept his plane if he flew over the airspace of the US or its allies. So his plan was to go from HK to Russia, then to Cuba, and finally Ecuador.

Russia stopped him because the US had cancelled his passport.


That fear proved well-grounded. While it probably doesn't seem as big of a deal now — in this era when we just serially assassinate heads of state we don't like without any pretense otherwise — the US indeed did direct its European allies to intercept the plane of Bolivian president Evo Morales, based on the (incorrect, as it turned out) suspicion that Snowden was on board.

https://en.wikipedia.org/wiki/Evo_Morales_grounding_incident


The most likely scenario is that if it does something “unlawful” and is found out, they'll claim “these machines are black boxes and we don't know what went wrong”, then set up an investigative committee to find out.

* spawn 8 investigative agents

When shit hits the fan they are going to blame AI, but then not even use hand sanitizer. They will 100% be using OAI as a scapegoat, although I'd like to see the OAI goat stay and someone else run into the woods.

All Lawful Use is a tautology with fascists because they cannot break laws by definition.


Yeah, here are some examples of all these fascists doing exactly that:

Soviet Union - The show trials of the 1930s were conducted with full legal apparatus: confessions, judges, verdicts. Stalin's purges operated through legally constituted troikas. Entirely "lawful" by Soviet law.

East Germany (DDR) - The Stasi's surveillance and harassment programmes were codified in law. When the wall fell, many Stasi officers genuinely argued their conduct was legal under GDR statute: a defence that West German courts largely rejected.

Castro's Cuba - Mass executions after the revolution were conducted by legally constituted revolutionary tribunals. Castro explicitly defended this on legality grounds when challenged by foreign press in 1959.

Chavez/Maduro's Venezuela - Suppression of opposition media, jailing of political opponents was consistently defended as operating within Venezuelan law, which was progressively rewritten to make it so. Classic self-referential legality.

Mao's Cultural Revolution - The revolutionary committees had legal standing. Persecution of intellectuals and landlords proceeded through formal (if kangaroo) legal processes.


You should ask the language model that output this text the definition of 'whataboutism,' and if the comment you've posted responds meaningfully to the discussion at hand.

I think that, similar to how AI-generated comments are frowned upon, "this comment was generated by AI" comments should also be frowned upon. It's really annoying to see a well-written comment followed by replies that don't address it but just accuse the poster of having used AI to generate it.

You should ask the GP about his use of the word "fascist" for everything he doesn't like.

> if the comment you've posted responds meaningfully to the discussion at hand.

https://mirror.org/


> you should ask the GP about his use of the word fascist on everything he doesn't like.

If mirror dot org actually existed, you might want to look into it, because your long list of examples has one related to 1930s Germany, and the rest has nothing to do with the political definition of "fascism"?

Your point about legality was valid, but you're undermining it with the sarcasm.


Everything I don't like is pretty broad brush. I have only used it with the Trump regime.

https://en.wikipedia.org/wiki/Ur-Fascism

https://www.rollingstone.com/politics/politics-news/trump-su...


Nothing deep going on there. "Fascism" in modern informal parlance is a synonym for authoritarianism. Those who object most loudly to Stalin being called a fascist are usually themselves actually fascists, or Stalinists. Everybody else gets it.

[flagged]


Yes, it has become a general use pejorative. At least in this case it's being used to refer to murderous authoritarians.

you have upped the ante

More like they will feed the machine bullshit, like "WMDs exist in Fiji. My gut says so. My mom always believes me." The machine will call it out. Then they'll want an override. The machine will log it. Then they'll want an "erase log" button, etc. Institutions and rules didn't fall from the sky; they evolved to dampen the damage caused by exactly this behavior.

OpenAI: Is that... legal?

DoD: I will make it legal.


Alignment is to the user of the LLM, not to some fuzzy interpretation of human rights. So solving alignment for the DoW is just "don't refuse to bomb people when I ask you."

That's absolutely not the definition people use for alignment. Safety discussions often circle around alignment because people are worried about AI doing things that are bad for humanity as a whole, not about it going off track from any one user's goal. It would be terrible for safety if alignment meant I could ask to hack the TSA and the LLM would do it.

Ignoring the definition, what would be required for individual alignment is exactly the same as for collective alignment. The only difference is the goals and who writes them; either way, the LLM is somehow forced to follow those rules no matter what.


That's safety, not alignment. Alignment is necessarily to the user.

It's not the department of war. Don't call it that.

> However, only an act of Congress can legally and formally change the department's name and secretary's title, so "Department of Defense" and "secretary of defense" remain legally official.

https://en.wikipedia.org/wiki/United_States_Department_of_De...


For consumer ChatGPT accounts, go to their privacy portal [1] and, first, delete your GPTs, and then, second, delete your account.

[1] https://privacy.openai.com/policies?modal=take-control


How do I cancel my subscription to the DoW?

The bigger picture is that the DoW got what it wanted and it got it by threatening one company while the other did its bidding.


By voting.

Voting changes the name of the department. It doesn't change if the government wants mass surveillance.

See PRISM.


This is my Senator:

https://www.wyden.senate.gov/issues/domestic-surveillance-re...

He may not be perfect on everything, but elect more people like him and it starts moving the needle. Or elect some more that are even more opposed to some of these things. It doesn't happen overnight. Change is difficult.


> Change is difficult.

I agree, though notice that the GOP/MAGA have made, and continue to make, enormous changes. The difference is that they believe they can do it, while others sit around talking about hopelessness and powerlessness. The only difference is belief.


> Voting changes the name of the department.

You're conceding that the name has already changed, without voting.

> It doesn't change if the government wants mass surveillance.

That can be prevented by Congress with enough political will.


Did the nsa's spying on everyone change between democratic and republican governments?

Did you vote in the primaries for a candidate that might change it?

Did democrats offer primaries in the last elections?

Did voting for Bernie Sanders in the last two primaries (especially the ones when Trump won for the first time) amount to anything?

I wonder how long the American public can keep up the self-delusion that the elections are anything but theater for the naive, to maintain the pretense that the public has any say in things that matter.

How much has the current administration asked the public about going to war with Iran?


> How much has the current administration asked the public about going to war with Iran

Here is the 2026 Senate map [1]. Do you suggest any of them will flip over Iran? (I don’t. The folks who regularly vote simply don’t show any sign that this is a priority. Folks who stay at home grumbling don’t matter.)

[1] https://en.wikipedia.org/wiki/2026_United_States_Senate_elec...


> Did voting for Bernie Sanders in the last two primaries (especially the ones when Trump won for the first time) amount to anything?

He didn't win the primaries though. It would have amounted to something if he got enough votes.


1) He did not win the primaries, in significant part because the DNC was heavily against him. So much for the level playing field.

2) If he won the primaries, there is still no guarantee that that would have amounted to anything.

First, he might not have won the elections (mainstream media and the whole ruling elites were heavily against him). And even if he won, he might not have been able to do much against the permanent state.

I still think the main cause of Trump's wins is the deep disillusionment of Democratic voters with Obama's failure (inability/unwillingness) to effect meaningful change.


Everything you're saying here is the exact delusional cynicism that got us here. Stop.

Yes, my stance is cynical.

Sadly, it is also factually correct (i.e. not delusional).

Which of my statements are you contesting?

From my point of view, your stance (play fairly, according to the rules set by your stronger opponent) is delusional. Note that the opponent is not 'republicans', but the whole ruling elites.

And no, I can't help you, I am not USian, just an outside observer. Sadly, due to its weight, whatever USA does, heavily influences everybody else as well.


> it is also factually correct

No, it isn’t. Sanders’ supporters didn’t have the votes. That’s a fact.

If people believe in something, they should call their electeds and vote. The fact that a lot of people with a certain confluence of views (privacy, anti-war, et cetera) are too lazy to do either (regardless of post rationalization), but not self aware enough to not complain about it, is delusional cynicism.


Note that I did not say he won the primaries.

I said the leadership of the democratic party did dirty tricks to prevent him winning.

The mainstream media was also against him.

Not anywhere close to a level playing field.

Note, that I am not against voting or calling your elected officials and all the related stuff. That is necessary. But, sadly, far from sufficient. If you think that that is sufficient, you are delusional.

Your subsequent generalizations are lazy and unsubstantiated, in fact they fit the classical smear patterns established by the mainstream media.


> Not anywhere close to a level playing field

But still, ultimately, turnout was turnout. Media saying mean things about your side isn’t a real excuse, Trump has been saying the same for a decade.

> they fit the classical smear patterns established by the mainstream media

Of course they must. In the meantime, the issues I care about seem decently reflected (outside privacy and war, where I concede most Americans who share my views are lazy, delusional and nihilistic). I’ve even had the opportunity to help write some state and federal legislation. So I guess I should be okay with the lack of political competition.


https://en.wikipedia.org/wiki/2020_Democratic_Party_presiden...

https://en.wikipedia.org/wiki/2024_Democratic_Party_presiden...

Skill issue. Run your candidate. Convince people to vote for them.

> How much has the current administration asked the public about going to war with Iran?

THE ELECTIONS are how the public weighs in.


> THE ELECTIONS are how the public weighs in

That's the second box only. There's also the soapbox (that you also referred to), the jury box and ultimately the ammo box.


Re: Skill issue. It's a money issue. This is not a level playing field; the field is severely tilted and the referee is bought.

But you are saying: You lost fair and square, wait 4 years to have any say in what is going on.

Re: THE ELECTIONS are how the public weighs in.

When the choice is between Tweedledee and Tweedledum, the public's choice is meaningless.

To say nothing about politicians outright shamelessly lying (e.g. Trump campaigning on 'no more wars').


Money issue is also a skill issue, but I have no doubt in the era of free media someone could figure it out.

Sorry I didn't invent the idea that there are federal elections every two years, I'm just telling you that you have to win them. Bonus points: this is also how you can change the election schedule or political system!

If you're saying both candidates were bad when one was Trump, and the other was Hillary, Kamala, or Joe, then you don't have very good judgement. I agree Trump lying about not starting a war was bad. Many of us have said for years that he is a terrible liar. Please help us.


I agree that Clinton/Harris/Biden are not equally bad as Trump.

Trump is monstrously bad (= forces the shit to hit the fan NOW); the Democratic alternatives were just 'normally' bad (= continue the same old crap, driving the shit closer to the fan while ignoring the looming disaster).


> Did democrats offer primaries in the last elections?

Uh, yeah? I voted for Biden/Harris.

And in any case, focusing almost exclusively on one race is part of the problem. Where I live, we also had a Dem primary for the house district, and a more electable candidate won - and then went on to win in the general. It was one of the very few red->blue flips in 2024.

Our former congresswoman, incidentally:

https://newrepublic.com/post/207234/trump-labor-secretary-ch...

Then there are all the races for school boards, city council, county commission and all those things that provide the base and the bench to build off of.


I like that I can't tell whether this is an admonition for not voting centrist enough, or for not voting left enough, in a primary that didn't happen. If you're going to be so bold as to call someone out, you might as well say what for (and why you either picked, or specifically skipped, a primary that did not happen).

No.

... But the government flooding cities with thousands of masked thugs with a license to do whatever they want... has so far been an entirely Republican thing.

There are more colours to the world than pure black and pure white. There are also a million shades of grey in between, and most of us have the ability to distinguish between them.


Here's a simple unsubscribe guide:

https://usa.gov/renounce-lose-citizenship


Unless we move out of the country, though, we are still technically subscribed to the DoW (we still need to pay taxes, etc.).

Why?

If you have so little faith in them that they won’t honour the privacy controls you should also delete your non-consumer account too.


Enforcement is the real issue, not the specific red lines, regardless of what Anthropic claims and news outlets repeat.

Verification requires access to classified logs. These logs would attract the spies of the whole world. Even if these logs are in principle for "past actions", in practice past logs (for war games, for example) would compromise future strategy.

Since these manual audits are too risky, the only alternative is to hard-code limits into the AI. But are we ready to trust an AI to "judge" a mission and refuse to execute during a crisis?

Anthropic wanted technical enforcement, the Pentagon wanted trust.

It’s a choice between two bad options: an unaccountable military and an unreliable AI kill switch. They are both very dangerous, just in different ways.


Agree with this completely.

But besides Sam Altman, this whole episode has made me totally and completely lose all respect for Paul Graham. I used to really idolize pg, and I really used to like his essays, but over the years I've found his essays increasingly displayed a disturbing lack of introspection, like they'd always seem to say that starting a startup is the best thing anyone can do, and if you're not good at startups then you kind of suck.

But his continued support of Altman in this instance (see https://x.com/paulg/status/2027908286146875591, and the comment in that thread where he replies "yes") is just so extra disappointing and baffling. First, his big commendation for Altman is that he's doing an AMA? Give me an f'ing break. When someone is a great spin doctor I'm not going to commend them for doing more spinning. It's like he has total blinders on and is unwilling to see how sama's actions in this instance are so disgusting and duplicitous. Maybe subconsciously he knows he's responsible for really launching sama into the public consciousness, so he's now just incapable of seeing the undeniably shitty things sama has done.

Oh well, I guess it's just another tech leader from the late 90s/early 00s who has just shown me he's kind of a shitty person like a lot of us.


Billions of dollars is a hell of a drug.

Yeah he has some great essays but also some that I find really dumb. Reading “Founder Mode” is when I realized he’s just as susceptible to fallacy as the rest of us.

Never meet your heroes

We know how this story will end for Dario. See Oppenheimer, Turing, Lavoisier, Galileo, Socrates, etc. Power does not reside in the hands of people with knowledge or even wealth. And most technical people have never taken a political philosophy course, or even a philosophy course. The Ring of Gyges story is 4000 years old.

Oppenheimer? Really? Quoting a review of an Oppenheimer biography:

“Oppenheimer was clearly an enormously charming man, but also a manipulative man and one who made enemies he need not have made. The really horrible things Oppenheimer did as a young man – placing a poisoned apple on the desk of his advisor at Cambridge, attempting to strangle his best friend – and yes, he really did those things – Monk passes off as the result of temporary insanity, a profound but passing psychological disturbance. (There’s no real attempt by Monk to explain Oppenheimer’s attempt to get Linus Pauling’s wife Ava to run off to Mexico with him, which ended the possibility of collaboration with one of the greatest scientists of the twentieth, or any, century.) Certainly the youthful Oppenheimer did go through a period of serious mental illness; but the desire to get his own way, and feelings of enormous frustration with people who prevented him from getting his own way, seem to have been part of his character throughout his life.”

Seems more like Sam Altman, who is known to get his way, than Dario.


The source for the poisoned apple story is Oppenheimer himself, and otherwise uncorroborated to be clear. He spent his life clearly racked by feelings of inadequacy, guilt and self-doubt.

When combined with a somewhat paradoxical large ego and occasionally fanciful reshaping of his own life story or exaggeration, it's entirely plausible (if not likely) that this was in reality a brief intrusive thought or a partially realized fantasy blown up into a catchy anecdote that better fit his self-image of being unable to control his typically human qualities of anger and envy.

If it was Sam Altman, we'd have heard the story from the guy he tried to poison, who instead of filing a police report thought it showed Sam was a real go-getter and offered him his first job on the spot as VP at the company he founded (later forced out by Sam replacing him as CEO, but still considers him a friend with no hard feelings).


The idea isn't that Oppenheimer was a saint, but that the government he served well and faithfully -- at the expense of his soul, some would argue -- turned on him viciously as soon as he dared to question their agenda.

As you suggest, it is easy to imagine Altman in the same hot seat. Never mind his sexual orientation, which the Republican theocrats will eventually use against him as surely as the knives came out for Ernst Röhm.


It's a bit simplistic to personify complex organizations of millions of people, like "The Government" or "The Market", as if they were living, breathing persons with a single mind.

There were people working in government who successfully attacked Oppenheimer for personal and/or policy reasons, people who stood by, and people who unsuccessfully supported him, voted to clear him, or condemned the proceedings.

Oppenheimer still paid the price, and arguably the risks to someone like him today are considerably higher, as the current administration isn't exactly Eisenhower's.

Nevertheless it's reductionist, reifying sentimentality to talk about "the government" turning "viciously" on someone who "served them well" because they are defying its agenda. The government isn't a character in Game of Thrones. The responsibility lies with the specific individuals who attacked him, and those who stood by.


> Nevertheless it's reductionist, reifying sentimentality to talk about "the government" turning "viciously" on someone who "served them well" because they are defying its agenda. ... The responsibility lies with the specific individuals who attacked him, and those who stood by.

I'm sure that was of great comfort to Oppenheimer, as it will be to Altman and/or Amodei. "It's not you, it's us."


I think Amodei is widely underestimated. The consensus viewpoint on the deal that OpenAI struck with the Pentagon is that Anthropic got played. I disagree. I'm certain that Amodei and his team gamed this out. In doing so, I think there's at least two conclusions they would have drawn:

1. Some other AI company would cut a deal with the Pentagon. There's no world in which all the labs boycott the Pentagon. So who? Choosing Grok would be bad for the US, which is a bad outcome, but Amodei would have discounted that option, because he knows that despite their moral failures, the Pentagon is not stupid and Grok sucks.

That leaves Gemini or OpenAI, and I bet they predicted it would be OpenAI. Choosing OpenAI does not harm the republic - say what you will about Altman, ChatGPT is not toxic and it is capable - but it does have the potential to harm OpenAI, which is my second point:

2. OpenAI may benefit from this in the short term, and Anthropic may likewise be harmed in the short term, but what about the long game? Here, the strategic benefits to Anthropic in both distancing themselves from the Trump administration and letting OpenAI sully themselves with this association are readily apparent. This is true from a talent retention and attraction standpoint and especially true from a marketing standpoint. Claude has long had much less market share than ChatGPT. In that position, there are plenty of strategic reasons to take a moral/ethical stand like this.

What I did not expect, and I would guess Amodei did not either, is that Claude would now be #1 in the App Store. The benefits of this stance look to be materializing much more quickly than anyone applauding his courage might have hoped.


> Choosing Grok would be bad for the US

They chose Grok and OpenAI. The story was drowned out by the Anthropic controversy, but an xAI deal was signed the same week.


Grok was chosen because Musk spent $250+ million to elect Trump and is expected to underwrite the 2026 elections. Also, a lot of the Trumps and their friends are invested in SpaceX. So they give xAI money too, but actually use OpenAI or Claude. I have a feeling the military likes Claude more.

Didn’t they choose Anthropic first and then all of this happened so they were forced to go with Grok?

Not adding up


We must conclude that they’re wary of Grok. Maybe it’s the incentive for bias and sabotage.

They "chose Grok" for political optics, but they don't seriously intend to use it because it's actually just benchmaxxed garbage - hence why they worked with OpenAI.

The mistake here is thinking they can take on Power without really sitting in any official position of Power.

Wikileaks and Assange got popular too. What happened to them?

The State Dept and CIA do exactly what Assange did. They pick and choose who to target with leaks. They get away with it (mostly even when exposed) because they officially are in power. Assange was not in power. If you take a moral position do it when you have real power.


> If you take a moral position do it when you have real power.

If the condition for getting real power is having no morals, this is hard to accomplish.


Lyft was briefly number one ahead of Uber, too

> Choosing OpenAI does not harm the republic

if we consider AIs as "force multipliers" as we do with coding agents, it's easy to see how any AI company can harm the republic if the government they are serving is unethical and amoral.


Nobody gives a shit about jumping to #1 in the app stores, at this scale.

If US & A really goes full-Huawei on Anthropic, they can't IPO. It's an existential crisis for them. I think they can survive in some form, somehow, because their model is really good, probably the best.

And in other times, I would think the US government had sufficient intellectual horsepower to not cut off its own dick, and the golden goose's head, over some idiotic morning-drinker road-rage type beef. But these are not other times. These are these times.


There is also:

3. Talent migration to Anthropic. No serious researcher working towards AGI will want it to be in the hands of OpenAI anymore. They are all asking themselves: "do I trust Sam or Dario more with AGI/ASI?" and are finding the former lacking.

It is already telling that Anthropic's models outperform OAI's with half the headcount and a fraction of the funding.


I think that's wishful thinking. Just because someone is a "serious" researcher (careful, sounds like a No True Scotsman coming up), it doesn't mean that they care about AI guardrails or safety, or think our current administration is immoral.

I don't think it's wishful thinking: idealistic motives seem to be common among leading AI developers and researchers. It's entirely realistic that Anthropic sticking to principle and taking a hit for it will give it an edge recruiting those idealistic types.

I've hung out with this crowd and they are very idealistic, they care deeply about guardrails and safety, and definitely find the idea of handing the current administration AGI/ASI repulsive.

They still need a lot of money, and what their VCs think is going to be more important than what Amodei does. Nothing is more profitable than war and government.

App Store rankings are meaningless. I have Claude, ChatGPT, and Gemini all in my top five, with an email app at #1 and a postal-tracking app (for a very small provider) at #3.


The value of the hyperscalers' equity in Anthropic alone dwarfs their contracts with the government, not to mention the revenue from hosting Anthropic's models that helps justify the insane capex. Anthropic going to $0 would be a huge haircut to all of their balance sheets.

They've only invested a couple of billion, around $20 billion or so split between them. Not really something that hurts them long or even medium term. Microsoft has multiple multi-billion-dollar government deals; I think Amazon is the only one that doesn't. Google also has a lot of government contracts, especially outside of cloud.

I do not believe the Ring of Gyges preceded Plato making it up for The Republic... Where are you getting 4000 years?

Also maybe not seeing the message or connection here... That myth isn't really about who has power or not, right? It's kind of just a trite little "why you should do good even when no one is watching" thing. It just serves Socrates for his argument with Thrasymachus, and leads us into book 2 where it really gets going with Glaucon and all that. This is from memory so I might be a little off.


I got it from Tamar Gendler's "Philosophy and Human Nature" course on Open Yale Courses. She says it was a popular folk story, passed down orally long before it was written down in a book. Plato used it because people grew up hearing the story.

The story asks: what's the source of morality? Who decides where the lines are? And it's not scientists. Science produces the Ring.


I was wrong; it's in Book II. This is "Socratic irony": it's Glaucon speaking, assuming the position of an argument from earlier. Socrates himself of course doesn't believe this conclusion... we are going to learn later that justice is a form, grounded in the Good! This is all the doxa of one still in the cave.

> According to the tradition, Gyges was a shepherd in the service of the king of Lydia; there was a great storm, and an earthquake made an opening in the earth at the place where he was feeding his flock. Amazed at the sight, he descended into the opening, where, among other marvels, he beheld a hollow brazen horse, having doors, at which he stooping and looking in saw a dead body of stature, as appeared to him, more than human, and having nothing on but a gold ring; this he took from the finger of the dead and reascended. Now the shepherds met together, according to custom, that they might send their monthly report about the flocks to the king; into their assembly he came having the ring on his finger, and as he was sitting among them he chanced to turn the collet of the ring inside his hand, when instantly he became invisible to the rest of the company and they began to speak of him as if he were no longer present. He was astonished at this, and again touching the ring he turned the collet outwards and reappeared; he made several trials of the ring, and always with the same result—when he turned the collet inwards he became invisible, when outwards he reappeared. Whereupon he contrived to be chosen one of the messengers who were sent to the court; where as soon as he arrived he seduced the queen, and with her help conspired against the king and slew him, and took the kingdom. Suppose now that there were two such magic rings, and the just put on one of them and the unjust the other; no man can be imagined to be of such an iron nature that he would stand fast in justice. No man would keep his hands off what was not his own when he could safely take what he liked out of the market, or go into houses and lie with any one at his pleasure, or kill or release from prison whom he would, and in all respects be like a God among men. Then the actions of the just would be as the actions of the unjust; they would both come at last to the same point.
> And this we may truly affirm to be a great proof that a man is just, not willingly or because he thinks that justice is any good to him individually, but of necessity, for wherever any one thinks that he can safely be unjust, there he is unjust.

https://gutenberg.org/cache/epub/1497/pg1497.txt


> it was clear that either the DoW or OAI (or both) were fudging.

This is my first thought as well. It's too obvious. He should have consulted ChatGPT before the announcement.


More likely they assumed (perhaps rightly) that there would be no consequences anyway.

Per other Snowden comments, "all lawful use" means whatever we want it to mean.

Secret FISA court decisions will say the use is lawful, but you’ll never get to read or challenge those decisions.


Greg Brockman donated 25 million dollars, and DoW gives OpenAI 200 million dollar contract.

Just good ol' fashioned grifting mixed with a bit of government corruption.

This country has been boiling the frog of graft, grifting, and corruption for too long.


@sama did say: "[..] will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control". Law is what Trump decides.

Or, as is likely, OpenAI models have no guardrails, Anthropic's did and the DoD was bumping into them.

Does anyone else notice Claude is just plain better at reasoning? It may not just be post-training guardrails. It would not surprise me if it were something Anthropic couldn't simply disable, whether from reinforcement or even training-corpus curation. Of all the models, Claude is the only one that makes me wonder if they have figured out something beyond stochastic language generation and aren't telling anyone.

I have noticed this too: despite the close benchmark results, Claude just works better. It knows when to push back; it has an "agency"... there is something there that I don't see with Gemini or OpenAI's best paid models.

> OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."

I believe this understanding is correct. The issue many people have these days with the Dept. of War, and with most of the Trump admin, is that they have little respect for laws. They only follow the ones they like and openly ignore the ones that are inconvenient.

Dept of "War" should have zero problems agreeing to the two conditions Anthropic outlined, if they were honest brokers. But I think most of us know that they are not. Calling them dishonest brokers seems very charitable.


I don’t care who is in the White House. Snowden revealed the crimes of the NSA in 2013 when Obama was president. They’re all going to want to use AI for mass surveillance

AI doesn't add anything to the ability to do mass surveillance. That genie was already out of the bottle with clouds and big data systems. At best AI might take on some of the gruntwork of drawing conclusions from profiles, but it's doing its usual thing of being a powerful interface built on top of other systems.

> AI doesn't add anything to the ability to do mass surveillance

I recommend reading Yuval Noah Harari's Nexus for a deep discussion around this.

He makes the point that what makes this AI age much more dangerous for mass surveillance isn't just the collection of data, which has indeed been possible for a while, but the new ability to have AI sift through that enormous volume of information, an ability which until recently has not been possible in a meaningful way without a ton of manual work to support it.

Older attempts at mass control of a population already involved mass surveillance, even in a large amount of detail, but even when capturing in detail all citizens' activities, there were just not enough people around to be able to dig through that and analyze it. This has been somewhat true even with the help of computers, though computers have certainly already been making this easier.

But now you can just give all that data to an AI with your instructions, and it'll apply some sort of "judgement" on your behalf, completely autonomously, and even perform actions against those folks it finds, again autonomously, without needing to manually build a whole infrastructure to do that with manual rules. That's a very meaningful upgrade for someone wanting to control a population.


That's still actuating through infrastructure that already existed. I agree that the summarise-and-decide part may sometimes be quicker, but the bottleneck remains the collection, collation, and actioning infrastructure.

crazy take

like saying kids having internet-connected devices with built-in cameras doesn't increase the probability of sexting, they could do the same with film cameras and a fax machine


The difference with your camera metaphor is that AI doesn't increase the amount of data captured or the processing throughput. As said, at best it can sometimes summarise things better.

I haven’t seen them follow a law yet

Mass surveillance is completely legal. It's just stupid to say it's not.

I don't think that's what is being said, mainly? Like that's why Anthropic wants to have it in the contract(s) with the government?

At the same time, it is expressly illegal in some circumstances; that was the whole core of the Snowden revelations. The NSA and CIA are expressly curtailed from doing that by law — there are cases where they may surveil citizens with a court order, but not "mass" surveillance. There are some restrictions on the military along those same lines.

Keywords: Executive Order 12333, FISA, National Security Act, Posse Comitatus Act


I find it confusing in most directions.

Ex: For the above statement, if they're truly dishonest brokers and openly ignore the rules that are inconvenient, they would have zero problems agreeing to Anthropic's terms and then violating them. So what you say may be quite true, but there would still need to be more to the story for it to make sense.

Ex: DoW officials are stating that they were shocked that their vendor checked in on whether signed contractual safety terms were violated: They require a vendor who won't do such a check. But that opens up other confusing oversight questions, eg, instead of a backchannel check, would they have preferred straight to the IG? Or the IG more aggressively checking these things unasked so vendors don't? It's hard to imagine such an important and publicly visible negotiation being driven by internal regulatory politicking.

I wonder if there's a straighter line for all these things. Irrespective of whether folks like or dislike the administration, they love hardball negotiations and to make money. So as with most things in business and government, follow the money...


I have no idea what exactly Anthropic was offering the DoD, but if there were an LLM product, it's possible that the existing guardrails prevented the model from executing on the DoD vision.

"Find all of the terrorists in this photo", "Which targets should I bomb first?"

Even if the DoD wanted to ignore the legal terms, the model itself would not cooperate. DoD required a specially trained product without limitations.


[flagged]


There's a reason it's unpopular.

If your company makes an herbicide that happens to be very good at killing off anyone who drinks it at a high concentration in their water supply, you're saying that there should be no way for your company to resist being used for mass murder (including unavoidable collateral damage)?

Also, the core mission of the military is not "killing its adversaries through any means necessary". It is to defend state interests. Some people have a belief that mass killing is the best mechanism for accomplishing that. I do not agree with, nor do I want to associate with, those people. They are morally and objectively wrong.

Yes, sometimes killing people is the most effective -- or more likely, the quickest -- way. In practice, it doesn't work very well. The threat of violence is much more powerful than actually committing violence. If you have to resort to the latter, you've usually screwed up and lost the chance to achieve the optimal outcome.

It is true that having no restrictions whatsoever on your ability to commit violence is going to be more intimidating, but it also means that you have to maintain that threat constantly for everyone, because nobody has any other reason to give you what you want.

The actual military is not evil. Your conception of it is.


>> Unpopular opinion around here, but no company should have the ability to stop the military from its core mission: killing its adevarsaries through any means necessary.

> The actual military is not evil. Your conception of it is.

You're right, but there's a real question here: should a company have the ability to control or veto the decisions of the democratically-elected government?

To give different hypothetical example: should Microsoft be allowed to put terms in its Windows contracts with the government, stipulating that Windows cannot be used to create or enforce certain tax policy or regulations that Microsoft disagrees with? Windows is all over, and I'm sure pretty much every government process touches Windows at some point, so such a term would have a lot of power.


> You're right, but there's a real question here: should a company have the ability to control or veto the decisions of the democratically-elected government?

I don't think "control or veto" is fair. Anthropic is not trying to prevent the US government from creating full autonomous killbots based on inadequate technology. They are only using contract law to prevent their own stuff from being used in that way.

But that aside, my opinion is that to a first order approximation, yes a company should very much be able to have say in its contract negotiations with any party including the government. It's very similar to the draft. I don't believe a draft is ethical until the situation is extreme, and there ought to be tight controls on what it takes to declare the situation to be that extreme. At any other time, nobody should be forced to join the military and shoot people, and corporations (that are made of people) should not be forced to have their product used for shooting people.

A corporation is a legal fiction to describe a group of people. Some restrictions can be placed on corporations in exchange for the benefits that come from that legal fiction, but nothing that overrides the rights of its constituent people.

Governments are made of people too. Again, a subset of people are given some powers in order to better achieve the will of the people, but with tight controls on those powers to keep the divergence to a minimum. (Of course, people will always find the cracks and loopholes and break out of their constraints, but I'm talking about design not real-world implementation here.)

So to look at your hypothetical, first I'd say it's not very different from the question of whether an individual person should be forced to personally enforce tax policy. Normally, I'd say no. There are many situations where the government needs more say and authority in such things, but that must only be achieved via representatives of the people passing laws to allow such authority. Other than that, yes: I believe a company should be able to negotiate whatever contract terms it wants. In a democracy, we are not subjects of a controlling government; the government is an extension of us.

In practical terms, if Microsoft were to insist on that contract stipulation, the government would not agree to the contract and would award its business to someone else. If the government were especially out of control and/or unethical, it might punish Microsoft with regulations or declarations of supply chain risk or whatever, but that is clearly overstepping its bounds and ought to be considered illegal if it isn't already. The usual fallback would be that the people would throw the people perpetrating that out on their asses. That's the "democratically-elected part".

Obviously, Microsoft would be stupid to insist on such a thing in their contract, and its employees would probably lose all confidence in the corporate leadership. Most likely, they'd leave and start Muckrosaft next door that rapidly develops a similar product and sells it to the government under a reasonable contract.

Basically, I'm always going to start from people first, and use organizations and laws only in order to achieve the will of the people. The fact that the people are stupid does make that harder, but the whole point of democracy is that we'll work out the right balance over time.


My conception is that the world would be a much simpler place if war were total. No one would start one unless they were 200% sure they could win. And we would all go through military training just in case, you know, a neighbor drank too much last night and thinks he can win against you.

> The threat of violence is much more powerful than actually committing violence.

While I agree with this statement, the only way the threat works is if from time to time you apply violence to reinforce your capability and availability to actually do it. And the US is really good at actually being violent so others don't even think about doing something against it, at least the majority of countries anyway.


Re: My conception is that the world would be a much simpler place if war was total. No one would start it unless it would be 200% it could win it

Now apply the same logic to the current Iran war.


I do not see Iran winning this. The current government is also hated by the people, who would very much like to see all of them dead.

Al Jazeera has some very good insights into this, and the gist of it is: the Iranian regime is in a fight for its life with nothing to lose. If they are degraded enough, a revolution will start in Iran and they will be killed by the people. Or by US/IL bombs - whichever comes first. There is no way they get out of this alive. They are trying to prolong the inevitable.


Regarding Iran's future:

You are describing a Libya scenario, not a 'lived prosperously ever after'. There is no credible opposition in Iran to take up the mantle.


No. Almost all of Iran's population belongs to the same ethnic group, which was not true in Libya: there, all the tribes started fighting each other.

It does not have an established opposition because the current regime has the habit of killing anyone it doesn't like or who goes against the official line. Now there is a chance for an opposition to form.


Iran has significant Kurdish, Azeri, Baluchi and Arab minorities; Persians form circa 2/3 of the population.

With the US & Israel supporting the minorities (most likely offering them independence), in the hope of toppling the regime, and bombing mostly Persians, the most likely outcome (assuming they are actually able to force regime change, which is far from guaranteed) is fragmentation and general lawlessness.

Note that whoever inherits the regime would have to deal with wholesale destruction of the country, traumatized population and hate for those who bombed them and killed their relatives and children. Slavishly obeying the new foreign overlords will not be very popular. Have we not learned anything from Iraq and Afghanistan? How can you still believe the fairy tales of welcoming the liberators?


OK, slowly:

The wars are already total for the weaker sides. See Ukraine/Iran. That did not stop the stronger side from attacking.

You are advocating for no constraints (total war) on the stronger side. Taken literally, that means genocide of the losers. Really, that's what you want?

But yes, you are right, the world would be much simpler in such case - there will be no humans left. OK, maybe some hunter-gatherers.


> You are advocating for no constraints (total war) on the stronger side. Taken literally, that means genocide of the losers. Really, that's what you want?

Taken literally, it means genocide of the losers is an option the winning side has. It always has been.

Note that Genghis Khan's explicit plan when he conquered China was to wipe out the Chinese to make room for Mongols. He wasn't stopped from doing that; there was no constraint to block him.

But he was persuaded not to.


This is the same mistake media policy pundits made in Iraq and Syria. Dictatorial regimes collapse pretty quickly only when they lack a base of support significant enough to stop a revolution from happening. They might not have a majority of people supporting them, but it isn't a democracy. Dictatorial regimes will always have one or more of the military, business, or sub-groups of citizens in their pockets as clients.

Whenever we say "the regime is hated by its people, it will collapse", it should be asked: "then why didn't it collapse already?" In Iran, metropolitan areas are where you see opposition. That's also where people have cameras and where media orgs tend to be. We get a warped depiction of opposition in Iran even without our own media's baggage. Meanwhile, the power base of Iran is everywhere but the metropolitan cities, and there are a lot of clients who benefit from the regime.

I think this might be worse than the sectarian violence that came out of the collapse of the Hussein regime, because the Sunni sect his base was built around was still a minority. This time it's the majority, and the people being fought against are the Americans, the Israelis and the Arabs, so their backs are against the wall; this is already a total war from their side.


With the way you've phrased it, the government could nuke the entire world; all of the adversaries would be dead, along with literally everyone else. I don't really see why it's an issue if a company doesn't want to sell them the tools to do that.

On the flipside, housing prices would go down significantly. Lots of room to expand.

If I start a small business that sells apples and the US government comes to me and says "we want to buy your apples and fire them at high speed to" (these are now your words) "kill adversaries through any means necessary."

If I say, no, then am I stopping the military?

I feel like it is reasonable that I can say "no, I don't want to sell you my apples."

I cannot for the life of me figure out why that means I am stopping the military from killing people. The US Military will definitely still be able to kill people for centuries. I'm just saying I don't want to participate in it.


More to the point, if everyone stopped selling anything to the military they would still be able to kill people with their bare hands. People are arguably very good at killing people and it takes civilization to train us not to kill each other.

In the context of the larger discussion, if you already sold apples to the military, you cannot go to them and say you don't like how they're using the apples you sold them.

In the context of the larger discussion, Anthropic thought of that ahead of time and put the restrictions into the contract that the government agreed to. So "already sold" is a non-sequitur; that's not the situation under discussion.

That's not their mission, in any country, ever.

The problem here is that this department claims its adversaries are Americans. Do you think Anthropic should aid in the killing of Americans?

I don’t believe for a second the Pentagon sees Americans as adversaries.

Trump sees many Americans as adversaries (i.e. the 'radical left', like Alex Pretti, an ER nurse, and Renee Nicole Good, a mother). In his first term he asked whether protestors could be shot in the legs.

So in short it doesn't matter what the Pentagon thinks as Trump is the commander in chief and as far as I know the Pentagon has to follow his orders.


Unfortunately, reality is not determined by what you personally don't believe for a second.

Evidence (the Commander in Chief calling the opposition terrorists, and celebrating their government executions, for example) indicates that reality indeed reflects the things you personally don't believe.


Any company is free to choose its business partners and set terms for them. "Don't like our terms, don't partner with us."

If the government can force any private company to work specially for the government, then the US is no better than the PRC.


You might want to read about the War Production Board during World War II. Established by a presidential executive order no less.

Wasn't that for defense during an actual war started by another country?

Legit wartime measures can be a thing (that's why it's fucked if a president can just start a war and then use that as an excuse for any wartime measures they like).


"Legit wartime measures" is not a thing. If Congress declares war on Cuba or Venezuela, for example, people who do not support it will not see the measures as "legit". The US has a lot of precedent of bombing/invading other countries at the whim of presidents without actually calling it a war, for decades.

And for better or worse, it is actually good that it is like this. Otherwise, if Congress declares war on Iran or China or whatever, the whole country will be put on a war footing, companies will be directed to build whatever the Pentagon says it needs, drafts will be enforced and so on. And it would be pretty ugly.


If Congress declared an actual war and if they declared to use war time laws to force a private company to comply with the war effort, we wouldn't be having this conversation.

What happened was different: a private company decided to enforce some terms, as it may do during peacetime, and it has been bullied in a way that is disgraceful precisely because this happened neither during wartime nor through the existing laws around that.

What is the purpose of having laws in the first place if we accept that the government can rule by intimidation?


if you didn't notice, we are talking about WWII

the USA was not the aggressor

fat chance of Congress declaring a war of aggression on a peaceful country


Yes, Musk is guilty of treason for exactly that reason. He directly sabotaged a major US military operation in Ukraine.

However, the military is bound by US and international law. It's clear they're not going to obey either of those with respect to this contract.

On top of that, Anthropic has correctly pointed out that the use cases Trump was pushing for are well beyond the current capabilities of any of Anthropic models. Misusing their stuff in the way Trump has been (in violation of the contract) is a war crime, because it has already made major mistakes, targeted civilians, etc.


> DoW balked at Anthropic's conditions so OAI's agreement must have made the "conditions" basically unenforceable.

I think it’s also possible DoW didn’t care about the conditions but just wanted some pretext to punish Anthropic because Dario isn’t a Trump boot licker like the rest of the SV CEOs.


I think this is supported. Hegseth has said numerous things about curly hair. I read that his reaction to Dario started with his hair.

Except, if there's one defining property of the last 4-5 administrations, it is that they definitely and constantly violate the rules they set for themselves. With every new administration it gets worse and worse.

And while this administration is brazen about this, it's not really a drastic change anywhere.

In fact most EU laws (GDPR, AI regulation, Chat Control) are directly, up front, declaring they themselves won't respect it. They very directly have one set of rules for states, government employees, ... and ANOTHER set of rules for everyone else. And they're incredibly brazen. For private individuals and companies it goes very far: it's essentially impossible to even know what does and does not violate the GDPR, and you can't ask the courts, that's not allowed. You also cannot use the courts to compel government to do anything under these laws.

For governments, when it comes to what's allowed, it goes incredibly far. Governments can declare any action legal under the GDPR, before and after the fact, without parliament involvement. It does not matter if that action was done by the government themselves, or if it's an action by a private company (so the government can use subcontractors for any violation of the GDPR).

This means that, for THE example given for GDPR protection, medical information, the law does the exact opposite of what it appears to do. Medical insurance in the EU is either state-owned or has exceptions, so it makes all your medical information available to medical insurers. And the police (e.g. to find you). And the tax office. And courts. And medical institutions themselves (to deny transplants to smokers). And ... And while doctors (and priests) used to be huge no-no's when it came to information gathering, that's no longer the case. If a doctor uses the state-required medical file, your medical information flows straight into a state database, immediately searchable by everyone the GDPR supposedly protects you against.


Very nice. I didn't have some of those. Thanks for posting!

For those of us who have a user.js file to keep all these about:config customizations over time and across installs... here they are formatted for user.js:

  user_pref("privacy.query_stripping.strip_on_share.enabled", false);
  user_pref("browser.translations.select.enable", false);
  user_pref("screenshots.browser.component.enabled", false);
  user_pref("dom.text-recognition.enabled", false);
  user_pref("browser.search.visualSearch.featureGate", false);
  user_pref("browser.ml.linkPreview.enabled", false);
  user_pref("browser.ml.chat.menu", false);
  user_pref("dom.text_fragments.enabled", false);
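For anyone setting up user.js for the first time, here is a minimal shell sketch of installing the file into a profile. The profile path below is a placeholder, not a real one; open about:profiles in Firefox to find your actual profile directory. Firefox re-applies user.js on every startup.

```shell
# Assumed placeholder path -- substitute your real profile directory
# (shown under "Root Directory" in about:profiles).
PROFILE="$HOME/.mozilla/firefox/example.default-release"
mkdir -p "$PROFILE"

# Write (or overwrite) user.js with the prefs to pin across installs.
cat > "$PROFILE/user.js" <<'EOF'
user_pref("privacy.query_stripping.strip_on_share.enabled", false);
user_pref("browser.translations.select.enable", false);
EOF

# Sanity check: count the pref lines we just wrote.
grep -c '^user_pref' "$PROFILE/user.js"   # prints 2
```

Note that user.js is one-way: it forces these values at startup, so removing a line later does not revert the pref; you'd need to reset it in about:config.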
Also, there's a nice little toolkit for removing items from all the Firefox right-click menus: https://github.com/stonecrusher/simpleMenuWizard.

That toolkit looks great. I have a big ol' userChrome.css removing ~15 menu items, but I may switch to this next time I work on improving things again.

As a customer, I just want the information I need. While I don't want to talk to a chatbot, I also don't want to talk to a human - and for the same reason: they usually don't have the info I need.

That's the aspect I don't understand. The information I want is almost always something other customers have already asked. I'd much prefer to avoid their customer support maze entirely and help myself on a searchable wiki. Unfortunately, most companies' online product support FAQs only contain answers to obvious shit on the order of RTFM and "is it plugged in." Why not just post the doc their advanced tier 3 support people share amongst themselves? It can be under a warning label like 'preliminary advanced info for engineers'.

I realize people like me represent only around 2-3% of the customers seeking support but it's 2-3% that is able to self-serve and takes more time than average because we invariably have to work through front-line support to get escalated to someone with the non-obvious info that's still been asked many times before. So maybe we're only ~2% but we suck up 4% of support bandwidth and we probably take up closer to ~20% of Tier 3 support - the most expensive, scarce type.


I mostly agree (although sometimes it is necessary to talk to someone about it); it would be better to actually have good documentation (so that you do not need to talk to someone about it).

A warning label like you mention is a possibility if that is considered to be necessary, although I think it might be better to have a file that you can download and read (or request by mail or telephone or fax, if this becomes necessary in some circumstances; do not assume the computer always works and is compatible with your file), instead of a searchable wiki.


While I agree with TFA's point that forcing a chatbot isn't a substitute for just having the info available, organized and searchable, the answer to your specific question is that the fully burdened cost of a trained support center human includes a lot more than their gross hourly wage. There's recruiting, interviewing, hiring, training plus space, desk, computer, phone, IT, HR, health care, vacation, sick days, insurance, employer's share of employment taxes.

A rough rule of thumb is the full burdened cost of an hourly office knowledge worker is two to three times the gross hourly wage.
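To make that rule of thumb concrete, here is an illustrative Python sketch of the arithmetic; the 2-3x multipliers and the $25/hr wage are assumptions for the example, not figures from any source.

```python
def burdened_hourly_cost(gross_wage: float, multiplier: float = 2.5) -> float:
    """Rough fully burdened hourly cost: gross wage times an overhead
    multiplier covering benefits, space, IT, HR, employment taxes, etc."""
    return gross_wage * multiplier

# A $25/hr support agent plausibly costs the company $50-$75/hr all-in:
low = burdened_hourly_cost(25, 2.0)
high = burdened_hourly_cost(25, 3.0)
print(low, high)  # 50.0 75.0
```

So even a "cheap" support call is several times more expensive than the agent's wage suggests, which is the economic pressure behind chatbot deflection.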


> "Two such use cases have never been included in our contracts with the Department of War..."

While I agree with Anthropic's position on this regardless, the original contract wording does matter in terms of making either the government look even more unreasonable or Anthropic look a little less reasonable.

The issue is a subtle ambiguity in Dario's statement: "...have never been included in our contracts" because it leaves two possibilities: 1. those two conditions were explicitly mentioned and disallowed in the contract, or 2. they weren't in the contract itself - and are disallowed by Anthropic's Terms of Service and complying with the ToS is a condition in the contract (which would be typical).

If that's the case, then it matters if the ToS disallowed those two uses at the time the original contract was signed, or if the ToS was revised since signing. Anthropic is still 100% in the right if the ToS disallowed these uses at the time of signing and the ToS was an explicit condition of the contract, since contracts often loop in the ToS as a condition while not precluding the ToS being updated.

However, if the ToS was updated after contract signing and Anthropic added or expanded the wording of those two provisions, then the DoD, IMHO, has a tiny shred of justification to complain and stop using Anthropic. Of course, going much further and banning the entire US government (and contractors) from using Anthropic for any use, including all the ones where these two provisions don't matter - is egregiously punitive and shitty.

While the contract wording itself may be subject to NDA, it would be helpful if Anthropic's statements could be a bit more precise. For example, if Dario had said "have always been disallowed in our contracts" this ambiguity wouldn't exist.


It does not matter. If Anthropic had been precise in this narrow way, there would have been some other nitpick to raise.

You're trying desperately to find a way that things can be at least a little normal, and I really do get it. It would be great if such a way existed. But it doesn't. I recommend you take a social media break like I'm about to, take the time you need to mourn the era of normal politics, and come back with a full understanding that the US government is not pursuing normal policy objectives with bad decisions. They hate you and they hate me for not being on their side, and their primary goal is to ensure that we're as miserable as they can make us.


I'm in a weird spot where I do agree with your assessment of the core claim. But putting that aside, in the world where the DoW's claim _is_ correct -- I think you don't have any choice other than to designate them a supply chain risk.

Disregarding who is right or wrong for a moment, if the DoW are right (which I'm not personally inclined to believe, but we're ignoring that for the moment) -- how else can they avoid secondhand Claude poisoning?

Supposing they really want to use their software for things disallowed by Claude's (now or future) ToS, it seems like designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude (either indirectly as a wrapper or tertiarily through use of generated code, etc.)


> designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude

I agree that if the DoW claim is correct (and I doubt it is), then, sure, the DoW dropping Anthropic and precluding the DoW's suppliers from using Anthropic for any DoW work would be expected. However, the "supply chain risk" designation they are deploying goes far beyond that to block Anthropic use by any supplier to any part of the entire U.S. government for anything.

For example, no one at Crayola can use Anthropic for anything because Crayola sells crayons to the Education Dept. The DoW already has much less draconian ways to restrict what their direct suppliers use to build things for military applications. But instead of addressing the actual risk in a normal, measured way, they are choosing to use a nuke against a grenade-sized problem. This "supply chain risk" designation is rarely used and has never before been applied to a U.S. company. It's reserved for Chinese or Russian companies in cases where there's credible risk of sabotage or espionage, which is why the designation blocks all products from an entire company for any application by any part of the U.S. government, its contractors and suppliers.


One positive thing I will say about this administration is that they have really drawn into focus the difference between de jure and de facto law.

My hope is that this gets us some real concern for things that have been defended with de facto arguments (i.e. privacy) going forward.

edit: Anthropic argues that your Crayola analogy is fundamentally incorrect.

> Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.

https://www.anthropic.com/news/statement-comments-secretary-...


> Anthropic argues that your Crayola analogy is fundamentally incorrect.

Yes, I just saw Dario's latest post with that more detailed info. My understanding was informed by news reporting in a couple different outlets, but those reports may have been conflating the "supply chain risk" designation (under 10 USC 3252) with the net effect of statements from the Pentagon and White House, which go substantially further.

Even if it's not in the legal scope of 10 USC 3252, the administration has made clear they intend to ban Anthropic from use across the federal government. AFAICT doing that is probably within the discretionary remit of the executive branch, even though I believe it's unprecedented - to your point about de jure and de facto law.

To me, if there's a silver lining to all this, it's making a strong case for restricting executive branch power.

Edit to add: Per the Wall Street Journal's lead story (updated in the last hour): "The General Services Administration, which oversees federal procurement, said it is removing Anthropic from its product offerings to government agencies... Even absent the supply-chain risk designation, broadening the clash to include all federal agencies takes the Anthropic fight to a much larger scale than its spat with the Pentagon."


How would this risk be mitigated by signing a contract? Seems like “supply chain poisoning as treason” is probably not going to stopped by a piece of paper. You either trust anthropic or you don’t but the deal has nothing to do with it.

Isn't the point that they aren't entering into a contract with them, they are just ensuring that none of their still trusted suppliers repackage Anthropic without their knowledge?

I’m not sure, but I think you’re right. I was thinking about the logical implications of the designation. If they are a supply chain risk without a contract, how does the existence of a contract suddenly make them not a risk? Especially if the DoD strong-arms them into a deal.

Because the act that the SCR designation would “protect” against is treason, so I don’t think people would care too much whether there’s a contract.


> In 2007 it was visionary and essential ... In 2026 it is a symbol of Google's stagnation.

Since around ~2010, Google's culture has gradually transitioned from exploring, discovering and building new businesses to defending and extracting maximum value from existing businesses (i.e., enshittification).

I was vividly reminded of this listening to the Acquired podcast's three-episode Google arc last year. Although the hosts don't explicitly call it out, they do such a good job of exploring all the ways in which pre-2010 Google was incredibly innovative, visionary and exciting that the contrast to today is sobering.

While Google deserves credit for leading the way on early AI research pre-2010, they squandered much of their pole position because LLMs looked like more of a threat than an opportunity to their huge legacy search business (even as the technology was deployed under the hood). Then, only when the external threat became undeniable did they respond, requiring a huge come-from-behind effort to regain most of the lead they'd lost.

https://www.acquired.fm/episodes/google-the-ai-company


> Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man

The things this definition misses: First, 'intelligence' is a poorly defined and overly broad term. Second, machine intelligence is profoundly different from biological intelligence. Third, “surpassing humans” is not a single threshold event because machine and human intelligence are not only shaped differently, they're highly non-linear. LLMs are a particular class of possible machine intelligences which can be much more intelligent than humans on some dimensions and much less intelligent on others. Some of the gaps can be solved by scaling and brilliant engineering but others are fundamental to the nature of LLMs.

> an ultraintelligent machine could design even better machines

There is a huge leap between "surpass all the intellectual activities of any man" and "invent extraordinary breakthroughs and then reliably repeat that feat in a sequential, directed fashion in the exact way required to enable sustained iteration of substantial self-improvement across infinite generations in a runaway positive feedback loop". That's an ability no human or collective has ever come close to demonstrating even once, much less repeatedly. (hint: the hardest parts are "reliably repeat", "extraordinary breakthroughs" and "directed fashion"). A key, yet monumental, subtlety is that the self-improvements must not only be sustained and substantial but also exponentially amplify the self-improvement function itself by discovering novel breakthroughs which build coherently on one another - over and over and over.

The key unknown of the 'Foom Hypothesis' is categorical: what kind of 'difficult feat' is this? There are difficult feats humans haven't demonstrated, like controlled nuclear fusion, but in that example we at least have evidence from stellar fusion that it's possible. Then there are difficult feats like room-temp superconductors, which are not known to be possible but aren't ruled out. The 'Foom Hypothesis' is a third category of 'hard' which is conceptually coherent but could be physically blocked by asymptotic barriers, like faster-than-light travel under relativity.

Assuming Foom is like fusion - just a challenging engineering and scaling problem - is a category error. In reality, Foom requires superlinear, recursively amplifying cognitive returns—and we have no empirical evidence that such returns can exist for artificial or biological intelligences. The only prior we have for open‑ended intelligence improvement is biological evolution which shows extremely slow and unreliable sublinear returns at best. And even if unbounded self‑improvement is physically possible, it may be practically unachievable due to asymptotic barriers in the same way approaching light speed requires exponentially more energy.


> They were rare, and special, and you'd have a few photos per YEAR to look back on.

My generation generally only had photos from birthdays, holidays, vacations, weddings, graduations and reunions. We looked at the three albums which contained every family photo often and I know them all by heart.

My kid was born in 2009 and our family digital album has nearly 1,000 photos per year of her life. And she's seen virtually none of them and seems to have little interest in ever seeing them since she creates so many of her own photos every day which are ephemeral.


I guess some of the appeal of those sparse photos is the element of fantasy and imagination. Wondering what it could have been. Looking at a low quality yellowing wedding photo of your grandma... It allows you to think and wonder. Seeing it in 4K video or a volumetric 4D gaussian splat in VR robs you of all that sentimental mystery.

Nostalgia and idealization of the past is also harder when you have a more representative cross section of past moments.


My strong initial reaction to even the idea of "fully autonomous AI killbots" made me miss a subtle distinction about what the real danger is. We already have a variety of non-AI killbots. Conceptually, any area denial weapon like a proximity-triggered Claymore mine is a non-AI "killbot", and simply tying one or more sensors to trigger a gun or explosive already works today without AI. So what's gained by adding full AI?

Such non-AI automatic triggering and targeting can already be constrained by location, range, time frame, remote control, etc. using fairly sophisticated non-AI heuristics. If non-AI devices can already implement <always pull trigger if X, Y and Z conditions = TRUE>, then adding AI is really about not pulling the trigger based on more complex judgements. That really only enables leaving such systems armed and active in far larger, less constrained contexts where 'friend or foe' judgements exceed basic true/false sensor conditions. That the military feels such urgent need for that capability is much more worrying to me.
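To make the distinction concrete, here's a purely illustrative toy sketch of the kind of non-AI gating described above - fire only when every hard-coded condition is true. All of the names, fields and thresholds here are hypothetical, not from any real system:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    distance_m: float        # proximity sensor
    in_armed_window: bool    # time-of-day / mission window
    inside_geofence: bool    # location constraint
    remote_safety_on: bool   # operator kill switch

def should_trigger(r: SensorReading) -> bool:
    # The <always pull trigger if X, Y and Z conditions = TRUE> case:
    # a simple conjunction of sensor conditions, with no
    # friend-or-foe judgement involved anywhere.
    return (
        r.distance_m < 5.0
        and r.in_armed_window
        and r.inside_geofence
        and not r.remote_safety_on
    )

print(should_trigger(SensorReading(3.0, True, True, False)))  # True
print(should_trigger(SensorReading(3.0, True, True, True)))   # False (safety on)
```

The point of the sketch is that everything here is decidable from raw sensor state; what full AI adds is the complex judgement about when *not* to fire, which is exactly the part that lets such systems be left armed in less constrained contexts.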

