Do you see nothing wrong with the same company that makes YouTube Kids making killer AI? I think creating weapons is often evil. I think companies with consumer brands should never make weapons; at the very least it's whitewashing what's really going on. At worst, they can leverage their media properties for propaganda purposes, spy on your Gmail and Maps usage, and act as a vector for the most nefarious cyber terrorism imaginable.
The same company that brings you cute cartoons for kids might also develop technologies with military applications, but that doesn't make them inherently "evil." It just makes them a microcosm of humanity's duality: the same species that created the Mona Lisa also invented napalm.
Should companies with consumer brands never make weapons? Sure, and while we're at it, let's ban knives because they can be used for both chopping vegetables and stabbing people. The issue isn't the technology itself. It's how it's regulated, controlled, and used. And as for cyber terrorism? That's a problem with bad actors, not with the tools themselves.
So, by all means, keep pointing out the hypocrisy of a company that makes YouTube Kids and killer AI. Just don't pretend like you're not benefiting from the same duality every time you use a smartphone or the internet, which, don't forget, is a technology born, ironically, from military research.
It sounds like they're distracted, tbh. It's hard to imagine how a company that specializes in getting children addicted to unboxing videos can possibly be good at killing people... oh, wait, maybe not after all...
There is a wide range of moral and practical opinions between the statement “all weapons are evil” and “global corporations ought not to develop autonomous weapons”.
Who should develop biological weapons? Chemical weapons? Nuclear weapons?
Ideally no one, and if the cost and expertise are so niche that only a handful of sophisticated actors could actually do it, then in fact (by way of enforceable treaty) no one.
> Who should develop biological weapons? Chemical weapons? Nuclear weapons?
Anyone who wants to establish deterrence against superiors or peers, and open up options for handling weaker opponents.
> enforceable treaty
Such a thing does not exist. International affairs are and will always be in a state of anarchy. If at some point they aren't, then there is no "international" anymore.
> in other words, cede military superiority to your enemies?
We're talking about making war slightly more expensive for yourself to preserve the things that matter, which is a trade-off that we make all the time. Even in war you don't have to race to the bottom for every marginal fraction-of-a-percent edge. We've managed to e.g. ban antipersonnel landmines, and this is an extremely similar case.
> How would you enforce it after you get nuked?
And yet we've somehow managed to avoid getting into nuclear wars.
Refusal to make or use AI-enabled weapons is not "making war slightly more expensive for yourself", it's like giving up on the Manhattan Project because the product is dangerous.
Feels good but will lead to disaster in the long run.
If we hadn't developed nuclear weapons we would still be burning coal and probably even closer to death from global warming. The answer here is that government contractors should be developing the various types of weapons, as they are; people just do not think of Google as a government contractor for some reason.
Palantir exists, this would just be competition. It's not like Google is the only company capable of creating autonomous weapons so if they abstain the world is saved. They just want a piece of the pie. The problem is the pie comes with dead babies, but if you forget that part it's alright.
Palantir provides a combat management system in Ukraine. That system collects and analyzes intelligence, including drone video streams, and identifies targets. Right now people are still in the loop, though I think that will naturally go away in the near future.
With or without autonomous weapons, war is always a sordid business with 'dead babies', this is not in itself a fact that tells us what weapons systems to develop.
Indeed. Usually weapons are banned if the damage is high and indiscriminate while the military usefulness is low.
There is at this moment little evidence that autonomous weapons will cause more collateral damage than artillery shells and regular air strikes. The military usefulness, on the other hand, seems to be very high and increasing.
Not all is bad; it's preferable to have autonomous systems killing each other than killing humans. If it gets very prevalent you could even get to a point where war is just simulated war games. Why have an AI-piloted F-35 fight an AI-piloted J-36? Just do it on the computer. That's at least one or two fewer pilots who die.
Those lines are mostly drawn based on how difficult it is to manage their effects: chemical weapons are hard to target, nukes are too (unless one dials the yield down enough that there's little point) and make land unusable for years, and biological weapons can't really be contained to military targets.
We have, of course, developed all three. They have gone a long way towards keeping us safe over the past century.
Propping up an evil figure/regime/ideology (Bolsheviks/Communists) to justify remorseless evilness (concentration camps/the nuclear bomb) is neither new nor unique, but it is particularly predictable.
We have Putin at home, he spent the past weekend making populist noises about annexing his neighbours over bullshit pretenses.
I'm sure this sounds like a big nothingburger from the perspective of, you know, people he isn't threatening.
How can you excuse that behaviour? How can you think someone like that can be trusted with any weapons? How naive and morally bankrupt do you have to be to build a gun for that kind of person, and think that it won't be used irresponsibly?
The better logical conclusion of that argument is that the US needs to remove him, and replace him with someone who isn't threatening innocent people.
That it won't is a mixture of cowardice, cynical opportunism, and complicity with unprovoked aggression.
In which case, I posit that yes, if you're fine with threatening or inflicting violence on innocent people, you don't have a moral right to 'self-defense'. It makes you a predator, and arming a predator is a mistake.
You lose any moral ground you have when you are an unprovoked aggressor.
I'm not a fan of Trump, but I also feel he has not been so bad that surrendering the world order to Russia and China is a rational action that minimizes suffering. That seems to be an argument more about signalling that you really dislike Trump than about a rational consideration of all options available to us.
It's not a shallow, dismissable, just-your-opinion-maaan 'dislike' to observe that he is being an aggressor. Just like it's not a 'dislike' to observe that Putin is being one.
There are more options than arming an aggressor and capitulating to foreign powers. It's a false dichotomy to suggest it.
TBF, vkou's post disagrees with mine, but I don't disagree with it. If pressed to offer a forecast, I think the moral dilemmas we're about to face as Americans will be both disturbing and intimidating, with a 50% chance of horrifying.
It's not a luxury belief for a multinational tech company that intends to remain in business in countries that are not allied to the US. Being seen as independent of the military has a dollar value, but that may be smaller than value of defense contracts Google hopes to get.
Whatever your feelings on that are, it's hardly unreasonable to have misgivings about your searches and YouTube views going to fund sloppy AI weapons programmes that probably won't even kill the right people.
It's definitely an opinion Google employees had in the last decade.
Actually I think a lot of people have it - just yesterday I saw someone on reddit claim Google was evil because it was secretly founded by the US military. And they were American. That's their military!
They have no problem heavily censoring law-abiding gun YouTubers, even changing the rules and giving them strikes retroactively. I guess it's "weapons for me, but not for thee".
And these same organizations fuel conflicts that actively make the USA less safe. These organizations can both do great things (hostage rescues) and terrible things (initiating coups), and it’s upon the citizenry to ensure that these forces are put to use only where justified. That is to say almost never.
Weapons inherently aren’t evil, which is why everyone has kitchen knives. People use weapons to do evil.
The problem with building AI weapons is that eventually it will be in the hands of people who are morally bankrupt and therefore will use them to do evil.
The concern with AI weapons specifically is that if something goes wrong, they might not even be in the hands of the people at all, but pursue their own objective.
Who is to say a wielder of a kitchen knife is not "morally bankrupt" — whatever that means?
In my garage, I have some pretty nasty "weapons" - notably a couple of chainsaws, some drills, chisels, lump/sledge/etc hammers and a fencing maul! The rest are merely: mildly malevolent.
You don't need an AI (whatever that means) to get medieval on someone. On the bright side the current state of AI (whatever that means) is largely bollocks.
Sadly, LLMs have and will be wired up to drones and the results will be unpredictable.
Every kind of nefarious way to keep the truth at bay in authoritarian regimes is always on the table. From cracking iPhones to track journalists covering these regimes, to snooping on email, to using AI to do the same — it's all the same thing, just with updated and improved tools.
Just like Kevin Mitnick selling zero day exploits to the highest bidder, I have a hard time seeing how these get developed and somehow stay out of reach of the regimes you speak of.
> That said, I do not think AI weapons are a reasonable thing to build for any war, for any country, for any reason - even if the enemy has them.
So you're in favor of losing a war and becoming a subject of the enemy? While it's certainly tempting to think that unilateralism can work, I can hardly see how.
>So you're in favor of losing a war and becoming a subject of the enemy?
I never said that. Please don't reply to comments you made up in your head.
Using AI doesn't automagically equate to winning a war. Using AI could mean the AI kills all your own soldiers by mistake. AI is stupid, it just is. It "hallucinates" and often leads to wrong outcomes. And it has never won a war, and there's no guarantee that it would help to win any war.
Use of weapons is only benign to you if you're not on the receiving end. Imagine your family being blown up by a rocket because an AI system hallucinated that they're part of a dangerous terror cell.
My point though is that this is the only use case for such systems. The common comparisons to things like knives are invalid for this reason.
The US is not under any kind of credible threat and is in fact the aggressor across the globe and a perpetrator of crimes against humanity at scale. This is not a recent phenomenon and has been going on as long as this country has existed.
You're either misdirecting the discussion, or have missed the point. The statement isn't about weapons, but the means of _control_ of weapons.
It's legitimate to worry about scaled, automated control of weapons, since it could allow a very small number of people to harm a much larger number of people. That removes one of our best checks we have against the misuse of weaponry. If you have to muster a whole army to go kill a bunch of people, they can collectively revolt. (It's not always _easy_ but it's possible.)
Automated weapons are a lot like nuclear weapons in some ways. Once the hard parts are done (refining raw ore), the ability for a small number of people to harm a vast number of others is serious. People are right to worry about it.
You don't have to have a "total belief in the purity of weapons" to recognize that military tech is a regrettable but necessary thing for a nation to pursue.