Weaponizing AI on the face of it seems completely insane. The problem is that if we don’t, then we would presumably be at a tremendous strategic disadvantage to an adversary which did (more or less the logic that led to the development and proliferation of nuclear weapons).
I can’t see how militarized forms of AI don’t emerge as a consequence of significant progress in non-military AI, so perhaps all roads do eventually lead to Skynet.
Great. Another reference to Skynet as the inevitable outcome of advancing automation. Despite all the facts to the contrary.
Will technology, including AI, inevitably be used to improve weaponry? Yes. Will AI inevitably lead to Skynet and Terminator robots? Hardly.
Today we can't build a self-driving car with more than Level 2 autonomy, nor do honest experts expect one anytime soon. Today's autonomous mobile robots are incapable of even the most rudimentary human motions, and human-level robots are nowhere in sight on any 50-year horizon, commercially or militarily. No AI-based tech has shown even the faintest sign of the AGI-level capabilities needed to control a robot army. Nor has any AI shown the potential for an emergent executive function or a desire to KILL ALL HUMANS.
To assume that present-day AI will likely self-assemble into a rebel robot army intent on destroying humanity… Why does anybody take this crap seriously? Or soberly reference it while hoping to be taken seriously?
It's time for all adults everywhere to stop imagining that pop scifi movies are a sensible foundation for discussing how new tech can best serve its intended purpose. Scifi is meant to entertain, not inform. Given what we know today about AI, Skynet doesn't have a hope in hell of happening — not in terms of platform mobility, nor cognition, nor self-assembly. So PLEASE give all references to Skynet a rest.
Terminator style robots would be a pretty inefficient way to wipe out humanity.
Wrt motive, I would assume that if we are someday killed off by our own machines, it will most likely be something akin to an accidental nuclear holocaust.
>more or less the logic that led to the development and proliferation of nuclear weapons
Sure. One country threw a ton of resources into a program, and as a result the technology spread to numerous other countries, leaving something like seven nations capable of destroying the world.
Maybe the end result is inevitable. Racing towards it so we can kill others before they can kill us isn't smart.
Obviously, it’s a terrible idea! Unfortunately, every nation or deeply resourced actor exercising complete restraint forever is probably not a Nash Equilibrium.