The US no longer uses its army for defense. Nobody in their immediate region dares attack them; they're too powerful ("Godzilla", in the words of John Mearsheimer). All the wars that the US has fought since WWII have had nothing to do with defense. Just look at the Wikipedia article on "power projection": https://en.wikipedia.org/wiki/Power_projection
The leader image is ... a US aircraft carrier (the USS Nimitz). That's what the US uses its military power for, to influence events in lands far, far away from its territory.
But, now, tell me which of the many wars the US has fought since WWII did not end in disaster. Afghanistan? Iraq? Korea?
There was a meme doing the rounds the other day: "Name a character who can defeat Captain America". The answer being "Captain Vietnam". The US has faced humiliating defeat after humiliating defeat while bringing death and destruction and immeasurable misery to millions around the world.
That is what HN users seem to have an "anti" sentiment for. If you watch the news you'll be able to tell that this goes far beyond HN. The whole of US society seems to be extremely tired of those "forever wars", those senseless excursions to faraway lands, which not only fail to secure US interests but turn world opinion more and more against the US. Even the US' closest allies now fear the US: vide Greenland. Anyone with more than a video game or comic book understanding of how the real world works would do well to be concerned.
Edit: also from EU, btw. Greek but living in the UK.
>The whole of US society seems to be extremely tired of those "forever wars",
This is the main thing I would disagree with, as an American who rubs elbows with conservatives quite a bit.
A large number of Republican and conservative Americans want war. They're primed for a war they haven't had this generation. There are a lot of relatively young conservatives who are eager for war. A weird number of Republicans don't think we lost Iraq or Afghanistan, or a few other wars, so they aren't tired of it yet.
Like 15-25% of Americans also believe in some form of the end times prophecy involving Israel. I'm not kidding about this. The number really is that high. A lot might not openly state that they believe in it, but they were raised under a religious teaching that says it will happen. Hegseth, literally, has a crusades tattoo and openly talks about eradicating Muslims on his weekly or monthly sermon.
But yes, a majority of Americans, like 60%, are extremely tired of ongoing wars. But I can also drive to towns in the western US where Trump still has majority support and they will openly say they support the Iran war. America is really polarized and a lot of conservatives only talk about this stuff to family now.
I grew up super rural and have to deal/work with very religious conservative Americans often enough. There are a lot more of them than people think. They've just learned to self-segregate and keep to themselves and say things a certain way.
As an American, I think a better metric for outcomes of Korea, Vietnam, Afghanistan, and Iraq is: were we trading with them before the war and are we trading with them one generation after the war? The same is even true of WWII: a more important marker afterward is that we spent the rest of the 20th century trading prosperously with Japan and Germany.
Korea: the south became an economic powerhouse with whom we now trade for critical computer components and is a generally reliable ally in the region.
Vietnam: we now trade with them happily and enjoy generally productive relations, largely because they fought us for less than two decades but fought China for centuries and centuries.
Iraq: we aren't yet a generation past, but the government they have now is better than what they had under Saddam Hussein, even if it was almost immediately subverted by Iran. And the jury is out on Iran because that hot war just started.
Afghanistan: we aren't yet a generation past, but very likely the most clear failure in this list. I remember thinking in high school (during the active phase of the war): "if we actually want to make a difference, we'd have to stay a century or more, and we don't have the will to do that the way the British or Russians tried to, and even they ultimately failed to make any local changes."
Europeans also need to realize that everyday Americans don't actually care about Europe very much and never truly have. It took the Lusitania to get us into World War I, Pearl Harbor (and Hitler's declaration of war) to get us into World War II, and the credible threat of the Soviet Union to keep us in Europe for decades after the war. The husk of Russia at the center of the Soviet skeleton isn't a credible threat to America, and the American reversion to the mean of isolationism began as the Cold War ended. That reversion completed sometime between 2010 and 2015. There is a new credible threat, but that is China, and even to well informed Americans Europe is slipping from their attention.
Most people in Trump's government probably don't care that much about reopening Hormuz quickly. Gas prices are only truly spiking in U.S. states where local environmental regulations have obstructed access to domestic and regional supply, and the largest of those states (i.e. California, New York) have broken against Republicans in every Presidential election (9 of them in a row) since the end of the Cold War.
> As an American, I think a better metric for outcomes of Korea, Vietnam, Afghanistan, and Iraq is: were we trading with them before the war and are we trading with them one generation after the war?
At least you're honest. Personally I can't believe someone would think it's OK to invade someone else's country and massacre civilians on the scale of Vietnam or Korea in order to establish profitable trading relations.
> Personally I can't believe someone would think it's OK to invade someone else's country and massacre civilians on the scale of Vietnam or Korea in order to establish profitable trading relations.
Strange. I don't remember writing that trading relations afterward justify the initiation of a war. Instead, I only remember writing that it is a better metric to assess the outcomes.
It's stranger still that you read these things between the lines, when my comment specifically includes a recollection of my own disquiet with the Afghanistan War, probably the most justified war of the four enumerated, that I felt while the war was happening.
It’s easy when you worship money and consider people of other races or cultures as less than human. Not that I am advocating for this view of course but a lot of Americans do even if they won’t admit it.
American reaction to the Cuban Revolution was deeply incompetent. The Bay of Pigs is up there with the Iran Hostage Crisis and the withdrawal from Afghanistan (and specifically from Bagram) in the list of stunning foreign policy blunders of the last hundred years.
We still don't trade with Cuba, and that is a clear sign of ongoing foreign policy failure. But who knows, in a year's time we may be trading with Cuba again. We're trading with Venezuela now.
Nominally, stopping the spread of communism in Asia. Actually, stopping the spread of Chinese and Russian influence in Asia.
Our politicians did then and do now frequently miss the trees for the forest when assessing foreign crises (and I'm inverting that saying deliberately). Ho Chi Minh was a nationalist first and a communist second, but all our leaders could see was a monolithic, global communist bloc. In fairness to them, hindsight is 20/20 and the Sino-Soviet split wasn't obvious to outsiders until the late 60s or early 70s.
Consider the cost to local civilians of the Vietnam and Iraq wars (the GWB war likely killed more Iraqi civilians than Hussein did in 24 years). And the literal trillions of dollars these wars cost. And the real possibility that regime change could have occurred anyway by less horrific means. Are you getting at a tiny silver lining or do you actually think these wars were remotely a good idea?
> Are you getting at a tiny silver lining or do you actually think these wars were remotely a good idea?
I'm getting at outcomes, whether or not a war is a good idea in the first place. War is never a good choice, IMO, but can sometimes be a necessary choice or an inevitability.
It's perfectly reasonable to point out that a war initiated for the wrong reasons had good (or some good) outcomes, or that a war initiated for the right reasons had bad (or some bad) outcomes. And that all war is ultimately terrible.
Our own Civil War was initiated for the right reasons and yet it became the bloodiest war in our history. More Americans died during our Civil War than during all our other wars put together, and Britain was able to end slavery across their whole empire without any war at all, though at great national expense (continuing payments until 2015 or so) and with some bloodshed on the seas.
Shahed drones have a maximum range of 2,500 km [bbc_1]. The distance from e.g. Isfahan to Tel Aviv is ~1,592 km [google]. Shaheds can reach Israel from Iran.
As to them all being intercepted, in the 12-day war that seemed to be the plan, i.e. force Israel to waste interceptors on cheap drones [bbc_2]. That seems to have changed in the current conflict.
_______________
[bbc_1] With a maximum range of 2,500km it could fly from Tehran to Athens.
[bbc_2] When Iran attacked Israel with hundreds of drones in 2024, the UK was reported to have used RAF fighter jets to shoot some down with missiles that are estimated to cost around £200,000 each.
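The range claim above is easy to sanity-check with the haversine great-circle formula. The city coordinates below are my own approximate lookup, not from the comments or the BBC articles:

```python
# Back-of-the-envelope check: is Tel Aviv within a Shahed's 2,500 km range
# of Isfahan? Coordinates are approximate (assumed, not sourced above).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))

isfahan = (32.65, 51.67)
tel_aviv = (32.08, 34.78)
dist = haversine_km(*isfahan, *tel_aviv)
print(f"Isfahan -> Tel Aviv: ~{dist:.0f} km (max range: 2,500 km)")
```

This lands around 1,590 km, consistent with the ~1,592 km figure cited above and comfortably inside the 2,500 km maximum range.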
> As to them all being intercepted, in the 12-day war that seemed to be the plan
That's doubtful; those are different interceptors from the ballistic-missile interceptors (AA missiles). It doesn't make sense as a strategy if the drones cannot hit any targets.
During WW2, the British used Spitfires to shoot down V1s. The V1s, pushed by a simple pulse jet, were, I presume, much faster than the drones. So some WW2 aircraft could be re-armed and used to shoot them down cheaply.
The British also employed a belt of radar-guided flak guns to shoot them down.
I don't hear any comparisons with the V1s, so my idea must be stupid, but I'm not seeing the flaw in it.
I think a big difference is that asymmetry has grown a lot: The modern drone is much cheaper than any manned aircraft (while V1/V2 needed comparable or greater industrial input compared to fighter planes).
If you want to scramble manned fighters (even WW2-style ones!) every time cheap drones are launched, then the pure material cost per intercept might be acceptable (no guarantee here: you need more fuel, and your ammunition is potentially more expensive than the drone's payload, too), but the pilot wage/training costs alone ruin your entire balance as soon as there is any risk of losing the interceptors (either from human error/crashes or the drone operator being sneaky).
Big problem with stationary AA is probably coverage (need too many sites) and flak artillery is not gonna work out like in the past because the drones can fly much lower and ruin your range that way.
The V2 was so expensive it was rather catastrophic to the German war budget. V1s, on the other hand, were very cheap to make and deploy.
> you need more fuel
Not much of a problem.
> and your ammunition is potentially more expensive than the drones payload
I'd say it's on par. A slow-moving target flying in a straight line would be easy to hit with a few rounds.
> the pilot wage/training costs alone ruins your entire balance as soon as there is any risk of losing the interceptors (either from human error/crashes or the drone operator being sneaky).
The US somehow managed to train an enormous number of competent pilots in WW2. I doubt there would be any shortage of men eager to fly them and "turkey shoot" the drones down. And there'd be a lot of mechanics falling all over themselves to build those machines!
A lot of people might find the idea fun, but actually sitting around in some remote base, just waiting for the next wave of drones to come? Even if you draft those people "for free", they could be working (or raise a family) instead, so the human cost is always there.
In WW2, the US lost ~15000 airmen just in training accidents to crew the ~300k planes it built. I'm sure we could get that rate down substantially with modern simulators and safety investments (=> also not free), but human lives simply got comparatively more expensive (and competent pilots were not that cheap back then either).
The attacker, meanwhile, is certainly gonna lose fewer men building and controlling the drones, and he can afford at least 10 attack drones for every interceptor you build.
If you did something like this on a larger scale, a big concern would also be that your manned interceptor aircraft simply become targets themselves, so the "low-risk turkey shooting" could quickly degrade.
I do expect (non-suicide?) interceptor drones as countermeasure at some point (specifically against the "cruise missile with props" style of attack drones, less so in the FPV weight class), and those could be conceptually quite similar to old prop fighters.
The marginal cost of a fighter aircraft to shoot down a drone flying slow in a straight line would be minimal, especially compared with the expense of each guided counter-rocket.
As for being targets themselves, the drones would be in enemy airspace so who/what is going to target the fighters?
I don't see how you realistically get airframe cost below $200k; you need basically a cropduster with a bunch of electronic equipment and weapon systems on top. That's worth 10 attack drones at least (realistically, US military would probably pay several times that).
> As for being targets themselves, the drones would be in enemy airspace so who/what is going to target the fighters?
Something like a Sidewinder strapped under some of the attack drones. If you create the incentive (juicy, trained pilots exposed in slow aircraft engaging at low range) your opponent is gonna adapt. Exactly this evolution happened with Ukrainian sea drones (which have already shot down several Russian aircraft).
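The cost asymmetry being argued back and forth here can be made concrete with a toy model. All the figures below are illustrative assumptions (the $200k airframe floor and 10:1 drone ratio from the comments above, plus made-up sortie and loss numbers), not sourced estimates:

```python
# Toy expected-cost model for manned-interceptor-vs-cheap-drone economics.
# All numbers are illustrative assumptions, not real cost data.
def cost_per_intercept(airframe_cost, sortie_cost, p_loss, drones_per_sortie=1):
    """Expected defender cost to down one drone.

    airframe_cost     : interceptor aircraft cost, amortized over an
                        expected lifetime of 1/p_loss sorties
    sortie_cost       : fuel, ammunition, maintenance, crew per sortie
    p_loss            : probability the interceptor is lost on a sortie
    drones_per_sortie : drones downed per sortie
    """
    amortized_airframe = airframe_cost * p_loss  # expected loss per sortie
    return (sortie_cost + amortized_airframe) / drones_per_sortie

drone_cost = 20_000  # assumed cheap attack drone

# Cheap prop interceptor, low attrition, two kills per sortie:
low_risk = cost_per_intercept(500_000, 5_000, p_loss=0.01, drones_per_sortie=2)
# Same aircraft once the attacker starts shooting back:
high_risk = cost_per_intercept(500_000, 5_000, p_loss=0.05, drones_per_sortie=2)
print(f"low risk:  ~${low_risk:,.0f} per drone (drone costs ${drone_cost:,})")
print(f"high risk: ~${high_risk:,.0f} per drone")
```

Under these assumptions the defender is comfortably ahead on pure material at 1% attrition, but even a modest rise in loss probability makes the amortized airframe term dominate, which is the "turkey shoot degrades once the drones shoot back" point.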
Unlikely, but they can be intelligent about their trajectory. That is, avoid known areas of resistance and use natural features for protection.
Being slow moving as they are, they are quite vulnerable to countermeasures after they have been detected. I expected A-10s and helicopter gunships guarding critical infra, but I have not heard of anything like that in the news.
That's a legitimate question and it has no good answer. Not just Sudan. There is an ongoing genocide in Myanmar, against the Rohingya. There is an ongoing genocide against the Uyghurs in China. None of those gets nearly the amount of coverage the genocide in Gaza gets, or, now, the war in Iran and Lebanon.
I have no idea why. I have recently started to grow a bit paranoid and wonder whether I am being manipulated by the media I consume. That would not be a huge surprise, I'm willing to bet most people are influenced by some of the things they read online.
Anyway this is an interesting question that has to be answered: why only Gaza, and not the other genocides?
If you really cared about those other conflicts, I'd expect to see you mention them more often in your comments. Are you sure you actually care about them or you just want people to stop talking about Gaza?
Super easy answer: because only on Gaza does your government openly side with the perpetrators, arm and finance them, while the media justify them, laws are passed to curb criticism and punish boycotts, and people in online discussion forums always bring up the same debunked arguments and rhetorical devices to divert attention [1], blame the victims and justify the perpetrators. It's the disagreement that fuels the discussion, the obvious contrast between the right position and the official statements and public propaganda.
1- of which yours is a classic example: "why talk about this and not about something else?"
>> Written by the creator of the Daleks, Terry Nation, and Dennis Spooner, the serial starred Hartnell and Purves alongside an early appearance by Nicholas Courtney as Bret Vyon, Adrienne Hill as Katarina, and Kevin Stoney as Mavic Chen.
I have to ask, are there really "anti-AI activists"? Like, are there people marching against AI, attacking data centers, spray-painting "AI OUT" on computers, and so on? Or is it just an exaggeration by Carmack?
This is a conversation forum, so it's natural for people to ask questions of each other. Sure, we could, in principle, ask Google, or ChatGPT for everything, but then why have an online conversation at all?
Basically, a holdover from the days of symbolic AI, from back when neural network ML wasn't the dominant AI paradigm.
Some people in the "symbolic AI" camp didn't take the loss well, so they pivoted towards "ML is not real AI and it needs a symbolic component to be real AI", which is the neurosymbolic garbage.
This work isn't exactly that, and I do think it can amount to something useful, but the justification for it reeks of something similar.
Full disclosure: all my published work is on symbolic machine learning (a.k.a. Inductive Logic Programming) :O
I think you're confusing various different things as "neurosymbolic AI". There is a NeSy symposium and I happen to have met many of the people there, and they are not GOFAI ideologues, rather they recognise the obvious limitations of neural nets (i.e. they're crap at deduction, though great at induction) and they look for ways to address them. Most of that crowd also has a predominantly statistical ML/ neural nets background, with symbolic AI as an afterthought.
I don't think I've ever heard anyone say that "ML is not real AI" and I mainly move in symbolic AI circles. I would check my sources, if I were you.
Anyway, honestly, this is 2026, there is no sensible reason to be polarised about symbolic vs. statistical AI (or whatever distinction anyone wants to make). An analogy I like to make is as follows: a jetliner is a flying machine, a helicopter is a flying machine. We can use both for their advantages and disadvantages, but a flying machine is something too useful to give up on any one kind for ideological reasons. The practical benefits overwhelmingly make up for any ideological concerns (e.g. "jets bad" or "propellers bad").
And just to be clear, symbolic AI is still in rude health: automated theorem proving, planning and scheduling, program verification and model checking, constraint satisfaction, discrete optimisation, SAT solving, all those are fields where symbolic approaches are dominant, and where neural nets have not made significant inroads in many decades; nor are they likely to, not any more than symbolic approaches are likely to make any inroads in e.g. machine vision, or speech recognition. And that's just fine: lots of tools, lots of problems solved.
I don't think symbolic approaches are completely useless. It's just that they're solving yesterday's problems 1.12% better. While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
One is near the end of its potential while another is only picking up steam.
In many ways, the space ML dominates now is the space of "all the things symbolic approaches suck ass at". Which is a very wide space with many desirable things in it.
Well, neural nets do what neural nets do best (not ML in general, which is a broader field), so if a lot of funding is going to neural nets then we'll see a lot of progress on the stuff neural nets are best suited for. No surprise. If Google et al were spending billions on symbolic AI maybe we'd see equally spectacular results there too. Maybe not. But we won't know because they don't.
There's no sense in which symbolic AI is at the end of its life and if you pay close attention you'll see that LLMs are trying to do all the things that symbolic AI is good at: major examples being reasoning, and planning from world models.
And as nextos says in the sibling comment most of the recent successes of LLMs in tasks that go beyond language generation, e.g. solving math olympiad problems, are the result of combining LLMs with symbolic verifiers.
>> While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
I don't agree. Everything that neural nets do today, speech recognition, object identification in images, machine translation, language generation, program synthesis, game playing, protein folding, research automation, I mean every single thing really, is a task that comes from the depths of AI history. There's a big discussion to be had about why those tasks are "AI" tasks in the first place and what they have to do with "intelligence" in the broader sense (e.g. cats are intelligent but they can't generate any sort of text) but this discussion is constantly postponed as we all breathlessly run up the hill that neural nets are climbing. When we get to the top and find it was the wrong hill to climb, maybe we'll have that discussion at last, or maybe the entire industry, academia in tow, will run after the Next Big Thing in AI™ all over again. But- cracking open new fields? Nah. Not really.
AGI is not going to happen any time soon though. We have no idea what we're doing in terms of reproducing intelligence, that much is clear.
The whole notion of "we need to know what intelligence is exactly to reproduce it" is completely and utterly wrong.
It's also the kind of thinking that results in "neurosymbolic garbage is good actually".
What neural nets do today is basically "everything humans do". There is no longer a list of "things computers can't do" - just a list of things computers do worse than the top 1% of humans. Ever shrinking.
Well, for example a computer can't make me an omelette. There's tons of examples like that, pretty much everything humans "can do" with our bodies, that computers can't- not just because they don't have bodies, but because even when we give them bodies we can't program them to do the things we want them to. LLMs don't help at all here. They can easily fake knowing what to do but the -not few- attempts people have made to connect LLMs to a robot to get the LLM to drive the robot like a little AI brain have ... not really worked out? I guess? Not even self-driving cars use LLMs.
Speaking of self-driving cars' AIs, while they have plenty of machine learning components, e.g. for vision, SLAM, and so on, they are largely hand-coded, rule-based systems. Just like the good old days of GOFAI.
>> The whole notion of "we need to know what intelligence is exactly to reproduce it" is completely and utterly wrong.
I can't see anything about "training a transformer". I'm trying to understand if e.g. the Sudoku solver was learned from examples (in which case, what examples?) or whether it was manually coded and then "compiled" into weights.
There is no training in the usual sense of the term, i.e. no gradient descent, no differentiable loss function. They use deceptive language early on to make it sound this way, but near the end they make it clear that their model, as is, isn't actually differentiable, and that in theory it might still work if made differentiable. But they don't actually know.
But IMO this is BS because I don't know how one would get or generate training data, or how one would define a continuous loss function that scores partially-correct / plausible outputs (e.g. is a "partially correct" program / algorithm / code even coherent, conceptually).
Yeah, a "100% correct" Sudoku solver fully trained by gradient descent from examples? That sure would be something entirely new.
To answer dwa3592, it's always possible to set the weights of a neural net by hand, albeit extremely fiddly and normally only done "on paper". This is e.g. how the Turing-completeness of RNNs was shown back in the '90s:
So, what I'm trying to understand, and I can't find any clear information about that in the article, is how they "compiled" e.g. the Sudoku solver into a Transformer's weights. Did they do it manually? Say, they took the source of a hand-coded Sudoku solver and put it through their code-to-weight compiler, and thus compiled the code to the Transformer weights? Or did they go the Good, Old-Fashioned, Deep Learning way and train their Transformer to learn a ("100% correct"!) Sudoku solver from examples? And, if the latter, where's the details of the training? What did they train with? What did they train on? How did they train? etc etc.
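Setting a network's weights by hand, as mentioned a couple of comments up, is easy to illustrate on a toy scale. This is purely my own sketch of "weights chosen rather than learned" (a hand-built XOR network), not the article's actual compilation scheme:

```python
# A two-layer network whose weights are set by hand (not trained) to
# compute XOR exactly. Illustrates "program logic in weights" in miniature.
import numpy as np

def step(x):
    return (x > 0).astype(float)  # hard threshold activation

# Hidden layer: one neuron computes OR, the other AND, via thresholds.
W1 = np.array([[1.0, 1.0],    # OR neuron:  fires if a + b > 0.5
               [1.0, 1.0]])   # AND neuron: fires if a + b > 1.5
b1 = np.array([-0.5, -1.5])

# Output neuron: OR(a, b) AND NOT AND(a, b)  ==  XOR(a, b).
W2 = np.array([1.0, -1.0])
b2 = -0.5

def xor_net(x):
    h = step(W1 @ x + b1)
    return step(W2 @ h + b2)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    out = int(xor_net(np.array([a, b], dtype=float)))
    print(f"xor({a}, {b}) = {out}")
```

The point is only that weight matrices can encode an exact, discrete function with no training data and no loss function involved; scaling that idea up to an interpreter inside a transformer is what the article seems to claim.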
My interpretation is that they built a simple virtual machine directly into the weights, then compiled a WASM runtime for that machine, then compiled the solver to that runtime.
Nope, they encoded or compiled a simple VM / WASM interpreter into the transformer weights; there is no training. You'd be forgiven for this misreading, as they misleadingly suggest early on that their model is (in principle) trainable, but later admit that their actual model is not actually differentiable, though a differentiable approximation "should" still work (despite no info about what loss function or training data could allow scoring partially correct / incomplete program outputs).
Thanks, but where do they say that? I can only find this instance of "different" (as in "differentiable") in their article:
Because the execution trace is part of the forward pass, the whole process remains differentiable: we can even propagate gradients through the computation itself. That makes this fundamentally different from an external tool. It becomes a trainable computational substrate that can be integrated directly into a larger model.
In the section "Programs into weights & training beyond gradient descent", near the end, they say:
[...] **the compilation machinery we built for generating those weights** can go further. In principle, arbitrary programs can be compiled directly into the transformer weights, bypassing the need to represent them as token sequences at all. [...] [my emphasis]
In the same section, they also continue:
Weights become a deployment target: instead of learning software-like behavior, models contain compiled program logic.
If logic can be compiled into weights, then gradient descent is no longer the only way to modify a model. Weight compilation provides another route for inserting structure, algorithms, and guarantees directly into a network.
So they (almost invisibly) admit that they compile the weights directly, and later sentences make it clearer that this was the intention all along.
>> So then, when you see this picture (and remember, it might only be showing half of the whole setup), do you think "wow, cool, they got to wrangle all of that", or do you think "OMG they had to wrangle all of that"? It's an important distinction to make, and I think someone's gut reaction to this amount of hardware in one place might influence how they approach building new systems.
It's a very good point. Like, I have some stuff that eats up RAM like a, a very hungry thing, so I went online to see if I could buy some old server blade with a couple TB of RAM from ebay. I found a few, refurbished, not in a horrible condition, not prohibitively expensive (I'm not currently funded, as such) and I remember this distinct feeling, like a frisson of excitement at the thought of having access to ~20 times more POWER than I usually have...
... and then I cooled down, didn't buy a server, and instead rented one with "only" 256 GB RAM until I could fix my stuff so that it now runs with up to 8GB on my laptop. Still expensive, but we're getting there.
Moral of the story: don't know. I prefer to find ways to make software go faster than rely on hardware? I get the feeling I'm very alone in this, seeing as everyone's talking about putting nuclear-powered server farms in space and whatnot.