These ruminations miss an obvious point: we are well past the stage where the AIs we depend on can simply be turned off. Were these people not paying attention to the US headlines since 2016? Or even in the last month?
Huge corporations, whose primary legal responsibility is to make money ("fiduciary responsibility to our shareholders"), cannot turn off a large number of AIs because their financial existence (which is to say, their existence) depends on them. Those overlooked US politics headlines are an existence proof of this point.
Go see the movie Brazil. The plot involves someone mistakenly victimized by a dystopian state when some information processing goes slightly wrong (a typo). We now have AI that is highly error-prone (e.g. facial recognition errors), treated as infallible (by bureaucrats and politicians who want clear-cut answers to questions like "who committed this offense?"), and addictive (see above).
That is our problem with AI, and it exists right now.