They did, and many games actually used to do that.
But it's a simple issue of economics:
- you can't have cosmetic microtransactions if players can self-host and modify their own servers
- developers hosting servers is costly
- using a p2p architecture with the "server" running on a random gamer's computer is much more profitable
- but that requires trusting the client, which means
- client-side anti-cheat
Without the live-service lootbox gambling microtransaction bullshit that has infested the gaming industry, none of this would have ever been necessary.
You don't need client-side anticheat if your clan/guild is self-hosting your dedicated servers, you can just ban the obvious cheaters.
The question you should be asking is: If you can't tell the difference between a cheater and a regular human player, does it even matter?
This question reframes the task: now we only have to measure whether a player is doing something a fair human player couldn't have done.
Since we already have mouse movement and timing data, and we can calculate what a player could see and/or know at a given moment, we can calculate what they should have been able to do, how fast, and when.
And that's an issue common in many other industries outside of gaming, with everyone else having to come up with server-side solutions to it.
Reacting to a piece of information you shouldn't have had access to at a given moment (whether it's insider trading, or fog visibility improved by an ESP or a custom monitor LUT) is an easy-to-measure tell.
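A minimal sketch of such a server-side tell, assuming the server keeps authoritative timestamps for when a piece of information (e.g. an enemy entering line of sight) became available to the player. The function name and threshold here are illustrative, not from any real anti-cheat:

```python
# Below roughly the fastest documented human visual reaction times (~150-200 ms
# is typical; anything under this floor is at least suspicious).
HUMAN_REACTION_FLOOR_MS = 120

def suspicious_reaction(enemy_visible_at_ms: int, fired_at_ms: int) -> bool:
    """Flag a shot fired impossibly soon after (or even before) the target
    became visible according to the server's authoritative game state.

    Both timestamps are server-side milliseconds, so the client cannot
    lie about either one.
    """
    delta = fired_at_ms - enemy_visible_at_ms
    return delta < HUMAN_REACTION_FLOOR_MS

# A shot 40 ms after the enemy entered view gets flagged; 250 ms does not.
print(suspicious_reaction(1000, 1040))  # True
print(suspicious_reaction(1000, 1250))  # False
```

In practice a single flagged reaction proves nothing (lucky pre-aims happen); the signal comes from accumulating flags per player over many engagements and thresholding on the rate.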
Does it matter if people cheat in chess, sports, etc? The point of these kinds of games is that your own skill matters and the fun is in improving yourself over time, against people that compete under the (almost) same conditions. Why would I want to compete against people that artificially improve their own abilities? It does not matter if they exceed human capabilities or not.
Mouse movements server side are not precise, they're sampled at a low-ish rate. There are ML solutions available, but it's unclear if they're effective; given how many cheaters there are, they don't seem to be, yet. Then there's the question of false positives; it seems like it cannot be a perfect system. Detecting the humanness of players is unlikely to be precise enough, which is very different from detecting cheating software, where you essentially catch the cheater red-handed. That still has false positives, but those are typically reversed when the banned group reports it. How can you do that when the AI/software decides that you're not human?
And to reiterate, it does not matter whether they exceed human ability or not; the artificial increase of your ability is against the competitive spirit of these games. Might as well play against bots if everyone is cheating.