Operational security is not the biggest problem with autonomous driving. It is a secondary problem at best, and one that can be addressed (though there is certainly room for improvement) with standard security techniques.
Correctness is the biggest problem with AI safety. Note that "adversarial ML attacks" fall under correctness.
Is the assertion that adversarial ML is a subset of correctness widely considered canonical?
That appears counterintuitive, because so many ML techniques seem (for lack of a technically defined term) tautological. For example, a big hairy random forest classifier can perhaps be gamed in certain cases, but in what sense is it not "correct"? After all, it is its own definition.
Yes. I'm not sure what point you're trying to make about tautology. Adversarial ML examples would clearly be errors if they were part of the test set, and part of "correctness" is reducing the error rate (correct model, correct parameters, and so on).
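To make that concrete, here is a minimal sketch (a hypothetical toy linear classifier, not any particular system under discussion) showing that an adversarial example is just another misclassified input: once you include it in the evaluation set with its true label, it shows up in the ordinary error rate, which is exactly the "correctness" framing.

```python
def predict(w, x):
    """Toy linear classifier: label 1 if w·x > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def error_rate(w, dataset):
    """Fraction of (input, true_label) pairs the model gets wrong."""
    wrong = sum(1 for x, y in dataset if predict(w, x) != y)
    return wrong / len(dataset)

w = [1.0, 1.0]
x = [0.6, 0.6]                    # clean input, true label 1 (model is right)
eps = 0.7
x_adv = [xi - eps for xi in x]    # perturbed input; the true label stays 1,
                                  # but the perturbation crosses the boundary

clean_set = [(x, 1)]
adv_set = [(x, 1), (x_adv, 1)]

print(error_rate(w, clean_set))   # 0.0
print(error_rate(w, adv_set))     # 0.5 — the adversarial input counts as an error
```

Nothing about the adversarial input requires special treatment by the metric: it degrades the measured error rate like any other mistake, which is why it falls under correctness rather than a separate security category.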