The grand-grand-whatever-parent post to this entire discussion thread is "I don't believe no one has noticed at Google yet. There has to be something else at this point." That's what I'm agreeing is going on. It's not just some normal automated process that a human hasn't looked at. I think you're confusing "ban-worthy confirmed ToS violation examined and confirmed by people" (what we're speculating) with "automated ban no one has even looked at".
Perhaps I wasn't clear enough about my belief: even if people have noticed by now, they can't see clearly enough into the machines. You can't scry "motive" from algorithms, no matter how hard you try. You've got opaque numbers and scores and rules around them, and somehow you expect a person to form an opinion about how the machine arrived at those actions?
We know from centuries of human behavior that rules systems and governments entrench themselves until the average person just trusts them, assumes their outputs are "fair" when they can't see well enough inside the black box. I am under the impression that such a thing has happened at Google. Too much belief in the "impartiality of algorithms" (which has never been true) keeps a group from questioning what the machines have already done, because the machines wouldn't have done it if they didn't have a "good reason". Of course we all know that it is unlikely any of those machines actually reason at all ("have motive"); instead they reflect the biases of their programming, the lack of general oversight of their outliers and false positives/false negatives, the absence of the ethical questions that would be asked of a human judge (and jury), and so on.
I assume, because Google has given me no evidence to assume otherwise, that if people examined and confirmed the result, they did so with a rubber stamp and a presumption that the machine is right unless proven otherwise. That's the nefarious thing: there's no motive here. The people aren't given a motive to question the machine's decisions, because the machine's decisions are "fair", and the machines have no motive of their own to suggest the sorts of "fair" that include ethics and judicial process, just a bunch of dice rolled that landed with enough snake eyes to "convict". (It's a classic sci-fi dystopian tale, one Google has made an unfortunate part of so many lives.)
Google isn't a company run by some AI algorithm, though, at least not yet. At the end of the day it's run by people. When something gets this much public notice (or even far less), people step in to take control and make the decisions. If this high-profile, PR-damaging ban was confirmed at fairly high levels of the company (which it must have been by now), then there's a clear reason for it that's a lot stronger than "well, this black box algorithm doesn't like him and we don't know why".
To return to the original post that started this conversation subthread:
> I don't believe no one has noticed at Google yet. There has to be something else at this point.
I think that's true; it's just that those of us on the outside will never find out the real reason, because the decision-makers within Google have chosen, for whatever reason, to stay silent about it.
I'm saying that in a company like Google, steeped in the Silicon Valley mindset, they've invested millions into the system, and they start from an idealized image of the system as unbiased, "fair", and good.
Even if it goes to the highest levels of the company, they aren't playing the game "prove why the system was wrong"; they are playing the game "prove why the millions of dollars we poured into this system were right". Especially if they are at fairly high levels of the company, removed from the day-to-day maintenance of the system, they have more personally invested in assuming the system works right, in believing the system to be an unbiased and "fair" black box (because their personal reputations get tied to the company's reputation, which gets tied to the system's reputation).
I think the "something else" is just general mistrust in broken "guilty until proven innocent" systems. I believe people have noticed, but the system still wins because the system is usually "right". It's the "benign" AI bureaucracy "utopia" Google (and a lot of Silicon Valley) keeps saying they all want and keeps trying to build doing exactly what it was built to do, absolve humans of human decisions and blame it on the "fair" black box. But maybe I'm just a huge cynic about human nature.