It's not silly; for many types of games, having access to privileged information on the client is simply a necessity. Any multiplayer FPS client has to know the positions of other players before they come into view, for latency reasons. The client has to know the exact origin points of any sounds other players might make. Player models fully occluded by transparency effects still have to be rendered, and cheaters could just forgo the transparency pass altogether. Same story with things like overlay effects and postprocessing (flashbangs, blurry vision, b&w image, ...). Texture changes can give a visibility advantage. The list goes on and on. Developers rely on client-side AC out of necessity, not out of ignorance.
The reason client-side AC is used is not technical, but economic.
It's possible to calculate what information should be available to a client at a given time (within the motion range implied by the client's latency), send only that data, and then measure the time delta between when a player could have known something and when they reacted to it.
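A minimal sketch of the first half, the interest-management part (everything here is invented for illustration: a plain distance check stands in for a real occlusion/PVS query, and all names and numbers are made up):

    import math
    from dataclasses import dataclass

    @dataclass
    class Player:
        x: float
        y: float
        speed: float    # max movement speed, units/s
        latency: float  # round-trip time to the server, seconds

    VIEW_DISTANCE = 50.0  # invented; a real engine would use occlusion/PVS data

    def replicate_to(client: Player, others: list[Player]) -> list[Player]:
        """Send only the entities the client could legitimately perceive,
        padded by how far either party could move while packets were in
        flight, so legitimate peeks never pop in late."""
        visible = []
        for other in others:
            slack = (client.speed + other.speed) * client.latency
            if math.hypot(other.x - client.x, other.y - client.y) <= VIEW_DISTANCE + slack:
                visible.append(other)
        return visible

The tighter you can bound that padding (real occlusion tests, per-map visibility sets), the less a wallhack has to work with, but every check costs server CPU per client per tick.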
A lot of games used to do that in the past, and some still do.
But it requires powerful servers. Gamers could self-host dedicated servers, but publishers put an end to that, as it's not compatible with microtransactions.
Developers could host servers themselves, but that's not as profitable as p2p gaming where a random client is used as the server.
It's a self-made problem really, none of this was necessary.
The way I read their statements is simply that it's the binary classification problem (is the user cheating or not) that should be done server-side. There's still data collection going on client-side. Given this, sending more data to the client could actually be helpful: cheaters would act on extra data that a legitimate player would never have access to, and that reaction is itself detectable.
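A toy sketch of that honeypot idea (everything here is invented for illustration): the server streams a decoy entity that a legitimate client is instructed never to render, then flags players whose aim tracks it.

    import random
    from dataclasses import dataclass

    @dataclass
    class Decoy:
        x: float
        y: float

    def spawn_decoy(width: float, height: float) -> Decoy:
        # Placed where no legitimate client will ever draw it
        # (e.g. fully occluded, or marked render=False in the entity data).
        return Decoy(random.uniform(0, width), random.uniform(0, height))

    def tracks_decoy(aim_points: list[tuple[float, float]], decoy: Decoy,
                     tol: float = 2.0, hits_needed: int = 5) -> bool:
        """A fair player can't aim at something they can't see; sustained
        crosshair proximity to the decoy is a strong cheating signal."""
        hits = sum(1 for ax, ay in aim_points
                   if abs(ax - decoy.x) < tol and abs(ay - decoy.y) < tol)
        return hits >= hits_needed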
Well, the end-goal is to stop cheating, not to detect it - detecting it is simply a means to an end. In that sense, sending more data is not ideal because even though you might be able to detect cheating more easily that way, cheaters would still have some period of time where they are able to cheat (and cheat more easily), thereby worsening the experience for legitimate players.
I don't understand. Because someone wrote fiction about Linux and debuggers being somehow made illegal we should allow cheaters to ruin multiplayer gaming?
fTPMs and attestation/endorsement, Boot Guard, Secure Boot etc. have existed for years. They are just now beginning to be discussed/used for anti-cheat. I still think the ideal solution is server-side data analysis, however.
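For reference, the server side of an attestation check has roughly this shape (a simplified sketch assuming an RSA attestation key; real TPM 2.0 quotes use the TPMS_ATTEST structure and an AK certificate chain, both omitted here, and the function names are mine):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def make_nonce() -> bytes:
        return os.urandom(32)  # freshness: stops replay of an old quote

    def verify_quote(ak_public_key, nonce: bytes, pcr_digest: bytes,
                     signature: bytes, known_good_digest: bytes) -> bool:
        try:
            # The signature binds the reported PCR state to our nonce.
            ak_public_key.verify(signature, nonce + pcr_digest,
                                 padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            return False
        # PCRs must match the measurement of a known-good Secure Boot chain.
        return pcr_digest == known_good_digest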
The APIs and such exist, but due to the PC being an open ecosystem, proper security that cheaters can't bypass will take a while to become standard, since random OEMs will not care about security unless you force them.
They did, and many games actually used to do that.
But it's a simple issue of economics:
- you can't have cosmetic microtransactions if players can self-host and modify their own servers
- developers hosting servers is costly
- using a p2p architecture with the "server" running on a random gamer's computer is much more profitable
- but that requires trusting the client, which means
- client-side anti-cheat
Without the live-service lootbox gambling microtransaction bullshit that has infested the gaming industry, none of this would have ever been necessary.
You don't need client-side anticheat if your clan/guild is self-hosting your dedicated servers, you can just ban the obvious cheaters.
The question you should be asking is: If you can't tell the difference between a cheater and a regular human player, does it even matter?
This question reframes our task: now we only have to measure whether a player is doing something a fair human player couldn't have done.
As we already have precise mouse movement and timing data, and we can calculate what a player would be able to see and/or know at a given moment, we can calculate what they should have been able to do, how fast, and when.
And that's an issue common in many other industries outside of gaming, with everyone else having to come up with server-side solutions.
Reacting to a piece of information that you shouldn't have had access to at a given moment (whether it's insider trading, or fog visibility improved by an ESP or a custom monitor LUT) is an easy-to-measure tell.
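A rough sketch of that timing check (the reaction-time floor and the latency handling are invented for illustration):

    HUMAN_MIN_REACTION = 0.120  # seconds; a conservative lower bound

    def reaction_delta(first_knowable_at: float, reacted_at: float,
                       latency: float) -> float:
        """Seconds between when the server determined the information became
        knowable to this client (adjusted for one-way latency) and when the
        client acted on it."""
        earliest_knowable = first_knowable_at + latency / 2.0
        return reacted_at - earliest_knowable

    def is_suspicious(delta: float) -> bool:
        # Acting before the information could have arrived, or faster than
        # any human could, points to ESP/wallhacks or an aimbot.
        return delta < HUMAN_MIN_REACTION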
Does it matter if people cheat in chess, sports, etc? The point of these kinds of games is that your own skill matters and the fun is in improving yourself over time, against people that compete under the (almost) same conditions. Why would I want to compete against people that artificially improve their own abilities? It does not matter if they exceed human capabilities or not.
Mouse movements server-side are not precise; they're sampled at a lowish rate. There are ML solutions available, but it's unclear whether they're effective, given how many cheaters there still seem to be. Then there's the question of false positives; it seems like it cannot be a perfect system. Detecting the humanness of players is unlikely to be precise enough, which is very different from detecting cheating software, where you essentially catch the cheater red-handed. That approach still has false positives, but those are typically reversed when the banned players report them; how can you do that when the AI/software decides you're not human?
And to reiterate, it does not matter if they exceed human ability or not, the artificial increase of your ability is against the competitive spirit of these games. Might as well play against bots if everyone is cheating.
There are a number of Microsoft-signed drivers with vulnerabilities that can be exploited to gain kernel-level access (memory read/write primitives, etc.). They would load fine under Secure Boot, and, indeed, malware has already exploited this before.
This does make cheating harder, and does make it a cat-and-mouse game where signatures are revoked and they move on to a new driver, but the fact of the matter is - there are a ton of drivers out there and some of them will always be vulnerable in some way. To this end, I think focusing on client-side anti-cheat at all is a lost cause.
> Importantly, this work also highlights the defensive implications of such techniques. While Secure Boot and firmware integrity mechanisms would prevent this attack chain when correctly enforced, the explicit requirement for users to disable Secure Boot demonstrates how social and usability tradeoffs continue to undermine otherwise effective platform defenses.
Valorant and Battlefield 6 do require Secure Boot, and they do not sell their cheat for those games. There are still cheats available for those games, though, in particular ones using DMA hardware.
The DMA PCIe card sits in the game PC and connects to a second laptop/PC over USB; the laptop can then read any memory on the game PC and display a radar on its screen. They even sell mouse-input hardware and HDMI/DP mergers, which let the laptop draw an ESP overlay over your game and aimbot by sending mouse inputs.
No. That's too soft. We should go one step further and make computers immutable appliances the moment any game is installed, or maybe out of the box.
macOS, Windows and Linux have the technology. Why wait? Kill general purpose comp^H^H^H^H communism right now! Protect the children, save the capit^H^H^H^H^H nation!
That's such a hilarious quote, as it explains exactly why client-side anti-cheat is silly in the first place.