This is a VERY controlled environment - they used 20 passes of each person walking, with known identities, to train the identity model. They did no tests with multiple people walking at the same time, or with any other external moving distortions (doors opening, etc.). This is very far from actual 'identification' of people in real public settings - and no doubt the cell phone everyone is carrying offers many orders of magnitude better opportunity. In a real crowded environment this would be nearly worthless.
The devices that reported BFI information were also stationary, and there were no extra devices transmitting information that would be conflicting.
Yes, but things could be refined. With more resources and research thrown at it, it could become more versatile; that's why the title of the post says "could". And chances are, there are private and government entities already doing this. Research like this has been coming out for at least a decade now.
Even Xfinity has motion detection in homes using this technique now:
Yes, and you typically won't be able to do this on normal WiFi traffic either: you need to send specific packets at a high enough rate (in between normal internet traffic) in order to sense with any accuracy, as I also remarked earlier: https://news.ycombinator.com/item?id=46976849
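To make that concrete, here's a minimal sketch of the kind of traffic generation I mean: just keep the channel busy with small packets at ~100 Hz so a passive receiver sees a dense, regularly spaced stream of frames to measure. The target address and rate are arbitrary placeholders, not anything from the paper.

    import socket
    import time

    # Minimal traffic generator: send small UDP packets at ~100 Hz so that a
    # passive receiver sees a dense, regular stream of frames whose channel
    # measurements (RSSI/CSI/BFI) it can track over time.
    # "192.168.1.1" and port 9 (discard) are placeholders for any host on the LAN.
    TARGET = ("192.168.1.1", 9)
    RATE_HZ = 100

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * 32          # tiny dummy payload; content doesn't matter
    interval = 1.0 / RATE_HZ

    try:
        while True:
            sock.sendto(payload, TARGET)
            time.sleep(interval)    # coarse pacing; good enough for ~100 Hz
    except KeyboardInterrupt:
        sock.close()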
Yea, that makes sense as you would need quite a bit of information across a reasonable temporal range if the identifying qualities are movement related. Very interesting.
WiFi APs already do a lot of tracking and measurement just to improve signal fidelity and effective throughput. Why wouldn't those same techniques be useful for more general object tracking? Of course, using a single AP to attempt to track movement in real time is unlikely to have great results, but with several APs and enough compute, triangulation should improve results.
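For the multi-AP idea, a rough sketch of what I have in mind (not anything from the paper): convert each AP's RSSI to a crude distance with a log-distance path-loss model, then solve for position by least squares. The path-loss parameters and AP coordinates are made-up values that would need calibration.

    import numpy as np

    def rssi_to_distance(rssi_dbm, tx_power_at_1m=-40.0, path_loss_exp=2.5):
        """Log-distance path-loss model; both parameters are rough guesses
        that would need calibration in a real deployment."""
        return 10 ** ((tx_power_at_1m - rssi_dbm) / (10 * path_loss_exp))

    def trilaterate(ap_positions, distances):
        """Linearized least-squares position estimate from >= 3 APs.
        ap_positions: (N, 2) array of AP (x, y) coords in meters.
        distances:    (N,)  array of estimated distances in meters."""
        p = np.asarray(ap_positions, dtype=float)
        d = np.asarray(distances, dtype=float)
        x1, y1, d1 = p[0, 0], p[0, 1], d[0]
        # Subtract the first circle equation from the rest to get a linear system.
        A = 2 * (p[1:] - p[0])
        b = (d1**2 - d[1:]**2) + (p[1:, 0]**2 - x1**2) + (p[1:, 1]**2 - y1**2)
        est, *_ = np.linalg.lstsq(A, b, rcond=None)
        return est  # estimated (x, y)

    # Example: three APs at known positions, RSSI readings for one target.
    aps = [(0, 0), (10, 0), (0, 10)]
    rssi = [-55, -62, -60]
    print(trilaterate(aps, [rssi_to_distance(r) for r in rssi]))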
Yeah, this is one of those "cool demo" research results that is completely impractical in the real world but gets sold (probably by university PR departments) as an actual viable technique that might have real-world implications.
We've seen it before with things like taking photos around corners.
And no, it isn't like the Wright flyer and a bit crap now but in 40 years we have jet planes. This will never get significantly better.
Well, nowadays you individually track people by using MAC addresses and other network information from the devices within range. Cisco has some creepy real-time maps showing each person's location as they walk around, along with all their previous visits, etc.
Modern phones connect with a randomized MAC address. So yes, you can track a person around, but you will need another system (like the WiFi login page) to match MAC to identity.
Not sure how many people are aware that the newer Alexa devices have "presence detection" that uses ultrasound so they can detect when people are nearby. [0]
Heck, even Ecobee remote temperature sensors can do this.
Reminds me of the story about how the Google Nest smoke detector had a microphone in it. [1]
This isn't even the biggest privacy issue with Alexa devices. I think them listening to you 24/7 is a bigger potential issue.
Not sure if Alexa has this, but cheap mm-wave wideband multi-GHz sensors (or, more accurately, radars) now enable more finely grained human presence detection and also human fall detection[1] with the right algos, so you can, for example, detect if grandma in the nursing home fell down and didn't get back up, but in a privacy-focused way that doesn't resort to microphones or cameras. Neat.
>Reminds me of the story about how the Google Nest smoke detector had a microphone in it.
Vapes have microphone arrays in them to detect when you're sucking and light up the heating element. Cheap electronics have enabled a new world of crazy.
The Nest smoke detector microphone was never really secret. It was part of the monthly self test to determine if the alarm was working. It would send you a notification telling you it was going to sound the alarm and that it would be listening for the sound to confirm it was working.
> The method takes advantage of normal network communication between connected devices and the router. These devices regularly send feedback signals within the network, known as beamforming feedback information (BFI), which are transmitted without encryption and can be read by anyone within range.
> By collecting this data, images of people can be generated from multiple perspectives, allowing individuals to be identified. Once the machine learning model has been trained, the identification process takes only a few seconds.
> In a study with 197 participants, the team could infer the identity of persons with almost 100% accuracy – independently of the perspective or their gait.
So what's the resolution of these images, and what's visible/invisible to them? Does it pick up your clothes? Your flesh? Or mostly your bones?
What happens is that a large body of water (pun intended) absorbs and reflects WiFi signals as it moves through the room. For this you need to generate traffic and measure, for instance, the RSSI or CSI of the packets (signal strength, and in the case of CSI, per-subcarrier amplitude and phase). If you increase the frequency, you can detect smaller movements such as arms moving vs. a whole body, or exclude pets if you reduce sensitivity. It works well for detecting presence and movement in a defined space, but ideally requires you to cross the path between two mains-powered devices, such as light bulbs or WiFi mesh points. Passing a cafe doesn't seem too likely.
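The presence/movement part is really that simple in principle. A toy sketch (mine, not from any product): watch the short-term variance of per-packet RSSI and call it "motion" when it exceeds a threshold. Window size and threshold are arbitrary and would need tuning.

    from collections import deque
    import statistics

    class MotionDetector:
        """Toy presence/motion detector: a moving body perturbs the channel,
        which shows up as increased short-term variance in per-packet RSSI.
        window and threshold_db are arbitrary values that need tuning."""

        def __init__(self, window=50, threshold_db=2.0):
            self.samples = deque(maxlen=window)
            self.threshold_db = threshold_db

        def update(self, rssi_dbm):
            self.samples.append(rssi_dbm)
            if len(self.samples) < self.samples.maxlen:
                return False  # not enough data yet
            return statistics.stdev(self.samples) > self.threshold_db

    # Usage: feed it RSSI readings from whatever capture tool you have.
    det = MotionDetector()
    for rssi in [-50, -51, -50, -49, -50] * 20:   # quiet room -> no motion
        moving = det.update(rssi)
    print("motion detected:", moving)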
If you want to do advanced sensing and try to identify a person, I would postulate that you need to saturate a space with high-frequency WiFi traffic and ideally placed mesh points, and let the algo first train on identifying people by a certain signature (a combination of size/weight, movement/gait, and breath/chest movements); a rough sketch of that idea follows after this comment.
Source: I worked on such technologies while at Signify (variants of this power Philips/Wiz "SpaceSense" feature).
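As promised, a rough sketch of the "train on a signature" idea. This is purely illustrative and not the paper's pipeline or anything I shipped: it assumes you already have labeled windows of CSI amplitudes per person, pulls a few crude statistical/spectral features from each window, and trains an off-the-shelf classifier. The synthetic data below just stands in for real walk-throughs.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def features(csi_window):
        """Crude per-window features from a (time, subcarriers) CSI amplitude
        array: per-subcarrier mean/std plus dominant motion-frequency bins."""
        amp = np.abs(csi_window)
        spectrum = np.abs(np.fft.rfft(amp - amp.mean(axis=0), axis=0))
        return np.concatenate([
            amp.mean(axis=0),
            amp.std(axis=0),
            spectrum.argmax(axis=0),   # dominant "Doppler-ish" bin per subcarrier
        ])

    # Fake data standing in for real labeled walk-throughs:
    # 10 "people", 20 windows each, 200 time steps x 30 subcarriers.
    rng = np.random.default_rng(0)
    X = np.array([features(rng.normal(p, 1.0, size=(200, 30)))
                  for p in range(10) for _ in range(20)])
    y = np.repeat(np.arange(10), 20)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-val accuracy:", cross_val_score(clf, X, y, cv=5).mean())

In practice the hard part is the feature engineering (or letting a neural network learn it) and getting enough clean, labeled passes per person, which is exactly what the paper's controlled setup provides.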
Given the number of gait analysis publications over several decades using varying techniques, can you recommend a good review article disproving all of them?
> The results for CSI can also be found in Figure 3. We find that we can identify individuals based on their normal walking style using CSI with high accuracy, here 82.4% ± 0.62.
If you're a person of interest, you could be monitored: your walking pattern internalized by the model, then followed through buildings. That's my intuition about the practical applications and the level of detail.
They tested correlation between different perspectives (same scene and AP even) later in the paper and achieved an accuracy of 0%. Not to discount other methods being able to achieve that.
> So what's the resolution of these images, and what's visible/invisible to them?
The researchers never claimed to generate "images," that's editorializing by this publication. The pipeline just generates a confidence value for correlating one capture from the same sensor setup with another.
[Sidenote: did ACM really go "Open Access" but gate PDF download behind the paid tier? Or is the download link just very well hidden in their crappy PDF viewer?]
Android devices already know exactly where they are even with GPS disabled, because they sniff the nearby WiFi networks and then ask Google where they are (the public geolocation API works the same way; rough sketch below). So Google already knows; taken together, this is mass metadata surveillance, already available to those who tap into it.
Any sub-meter precision or presence detection doesn't really matter when these companies already have all your other questions, queries, messages, calendars, browse history, app usage, and streaming behaviour as well.
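The sketch I mentioned: as far as I know, Google exposes the same principle through its public Geolocation API, which takes a list of nearby BSSIDs and signal strengths and returns an estimated lat/lng. The API key and MAC addresses below are placeholders; a phone's internal location service does something similar under the hood.

    import requests

    # Illustrative call to Google's Geolocation API: hand it the BSSIDs
    # (MAC addresses) and signal strengths of nearby access points and it
    # returns an estimated latitude/longitude. Key and MACs are placeholders.
    API_KEY = "YOUR_API_KEY"
    url = f"https://www.googleapis.com/geolocation/v1/geolocate?key={API_KEY}"

    body = {
        "considerIp": False,
        "wifiAccessPoints": [
            {"macAddress": "00:25:9c:cf:1c:ac", "signalStrength": -43},
            {"macAddress": "00:25:9c:cf:1c:ad", "signalStrength": -55},
        ],
    }

    resp = requests.post(url, json=body, timeout=10)
    print(resp.json())   # e.g. {"location": {"lat": ..., "lng": ...}, "accuracy": ...}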
First, this is not just Android. Apple does the same thing. You can buy an iPad that physically does not have any GPS hardware, and it can still reasonably tell you where you are. Personally, I first learned of this feature when I bought a second-generation iPad, so it's been around for a while.
Second, it is a logical leap to assume Google already knows everything. They could, for example, build this nearby-WiFi-based location querying API with privacy in mind: by purposefully making it anonymous without associating it with your account, routing it through relays (such as Oblivious HTTP), or using various private set intersection techniques instead. It is tired and lazy to argue that just because some Big Tech company has the capability of doing something bad, therefore they must already be doing it.
The approach described in the article is much different and more interesting, as it's passive and doesn't require any electronics on the individual being identified.
WiFi Sensing is part of Wi-Fi 7 and present in most recent laptops and smartphones. Local NPU machine learning can be combined with WiFi radar. Malware can attack phone and radio basebands and exploit this capability. It can uniquely fingerprint human biometrics, measure breathing rate, record keystrokes and more. Thousands of academic papers have been published in the last 15 years on "device-free wireless sensing", before the capability was ratified by IEEE as 802.11bf. It's being rolled out commercially. Mitigations include drywall or insulation with a layer of RF shielding.
> Researchers.. developed.. a biometric identifier for people based on the way the human body interferes with Wi-Fi signal propagation.. CSI in the context of Wi-Fi devices refers to information about the amplitude and phase of electromagnetic transmissions.. interact with the human body in a way that results in person-specific distortions.. processed by a deep neural network, the result is a unique data signature.. [for] signal-based Re-ID systems
Various cheating to get their conclusions (from the paper):
> To allow for an unobstructed gait recording, participants were instructed not to wear any baggy clothes, skirts, dresses or heeled shoes.
> Due to technical unreliabiltities, not all recordings resulted in usable data. For our experiments, we use 170 and 161 participants for CSI and BFI, respectively. [out of 197]
I wish they had explained what the technical unreliabilities were.
Caveat: indoors. However, since indoors is typically a private space, the degree of surveillance depends on the owner of the space. Civilians can only compel government agencies to make sure that government buildings do not enable tracking. We won't be able to stop Walmart; they can always play the security card, which trumps privacy every time.
You can do it to yourself[1]. I am using Tommy for presence detection in Home Assistant and it works great (my house is small, so two ESP32s work fine; I am sure having 3-4 would let it see my cat breathing).
I don't feel that this article is a fair summary of the paper. And the title is just clickbait.
The paper says, in a somewhat contrived scenario, with dozens of labelled walkthroughs per person, they can identify that person from their gait based on CSI and other WiFi information.
This is a long way from identifying one person in thousands or tens of thousands, being able to transfer identifying patterns among stations (the inference model is not usable with any other setup), etc.
All the talk of "images" and "perspectives" is journalistic fluffery. 2.4GHz and 5GHz wavelengths (12cm and 6cm) are too long to make anything a layperson would call an "image" of a person.
What creepy thing could you actually do with this? Well, your neighbor could probably record this information and tell how many and which people are in your home, assuming that there is enough walking to do a gait analysis. They might be able to say with some certainty if someone new comes home.
That same neighbor could hide a camera and photograph your door, or sniff your WiFi and see what devices are active, or run an IMSI catcher and surveil the entire neighborhood, or join a corporate surveillance outfit like Ring. Using the CSI on your WiFi and a trained ML model is mostly cryptonerd imagination.
Indeed. I'm confused by this line from the article:
> a study with 197 participants, the team could infer the identity of persons with almost 100% accuracy – independently of the perspective or their gait.
The paper seems to make it clear that the technique still depends on gait analysis, but claims it's more robust against gait variations.
It feels rather more than a little bit creepy to realize that Comcast et al, and thus the US government (if you live there), laundered through 3rd party data brokers, knows if you're sleeping and knows if you're awake. Knows if you've been bad or good, for ICE/ATF/DEA/SEC's sake.
Comcast is late to the party, then. AT&T has been selling your information for decades. And your mobile provider can track you anyplace that there's a cell-signal, potentially even outside the country.
I was really impressed that an ESP32 antenna array can essentially make a WiFi camera - it uses both time and phase differences to localize devices based on their MAC addresses (which are sent in plaintext): https://www.youtube.com/watch?v=sXwDrcd1t-E
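The core trick behind that kind of array, as I understand it, is angle-of-arrival from phase differences: two antennas a known distance apart see the same wavefront with a phase offset that depends on the direction it came from. A small sketch of just that calculation (numbers are illustrative):

    import numpy as np

    C = 3e8  # speed of light, m/s

    def angle_of_arrival(phase_diff_rad, antenna_spacing_m, freq_hz=2.412e9):
        """Estimate angle of arrival (radians from broadside) from the phase
        difference measured between two antennas:
            delta_phi = 2*pi*d*sin(theta)/lambda
        => theta = asin(delta_phi * lambda / (2*pi*d)).
        Unambiguous only if the spacing is at most half a wavelength."""
        lam = C / freq_hz
        s = phase_diff_rad * lam / (2 * np.pi * antenna_spacing_m)
        return np.arcsin(np.clip(s, -1.0, 1.0))

    # Example: half-wavelength spacing at 2.412 GHz (~6.2 cm), 45 deg phase lag.
    lam = C / 2.412e9
    theta = angle_of_arrival(np.deg2rad(45), antenna_spacing_m=lam / 2)
    print(np.rad2deg(theta))   # ~14.5 degrees off broadside

Combine bearings from a few such arrays (or add time-of-flight) and you get the "camera"-like localization shown in the video.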
I don't see how this is categorically any different from hidden networked cameras. Perhaps that's the real issue we should be focusing on in terms of privacy and mass surveillance.
How about personal canisters of chaff that get fired off whenever I enter a room? Before long, folks will get so annoyed with all of the metal fibers I leave behind, that I simply won’t be invited anywhere and my anonymity will have been protected.
Beamforming information is what's utilized for this kind of surveillance. There is also a lack of configuration options in common routers to turn off BFI. The BFI data is available to anyone snooping on the WiFi and can easily be used to detect presence: you just need to read the BFI data (it's plaintext), and if it changes, you can track wherever the smartphone the beam is pointing towards has moved. Detecting exactly who it is is another matter, but in general, WiFi technologies are insecure and easily repurposed as surveillance devices.
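A very rough sketch of what "just read the BFI and watch for changes" might look like with scapy on a monitor-mode interface. This is my own sketch, not a working tool: it doesn't parse the beamforming report at all, it just hashes 802.11 action-frame payloads per transmitter and flags when they change; the interface name and the "first payload byte 21 == VHT action category" check are assumptions.

    # Rough sketch only: watch 802.11 action frames (which carry compressed
    # beamforming feedback) and flag when the feedback from a given station
    # changes, i.e. something in the environment likely moved.
    from scapy.all import sniff, Dot11, Raw
    import hashlib

    last_seen = {}   # transmitter MAC -> hash of last action-frame payload

    def handle(pkt):
        if not pkt.haslayer(Dot11):
            return
        d = pkt[Dot11]
        # Management frame (type 0), Action subtype (13)
        if d.type != 0 or d.subtype != 13 or not pkt.haslayer(Raw):
            return
        payload = bytes(pkt[Raw])
        if not payload or payload[0] != 21:   # 21 = VHT action category (assumed)
            return
        digest = hashlib.sha1(payload).hexdigest()
        if last_seen.get(d.addr2) not in (None, digest):
            print(f"beamforming feedback from {d.addr2} changed -> movement?")
        last_seen[d.addr2] = digest

    sniff(iface="wlan0mon", prn=handle, store=False)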
I’m not understanding this. You still have to deploy a piece of hardware to read the Wi-Fi waves. Why wouldn’t you just deploy some other piece of hardware that’s better at surveilling the surroundings?
Also, if the Wi-Fi devices in the area aren’t busy, your "camera" is effectively off; that doesn’t seem good. Also, I imagine you have to tune it for every environment and geometry, which doesn’t sound easy. And then, after all that work, I move my Wi-Fi router 4 inches to the left.
Microwave frequencies like 2.4 or 5 GHz just passively allow you to do this. You'd have to adopt frequencies that are useless for radar.
I mean, you could even jam a microwave oven door open, turn it on, and then measure how much energy was lost along certain paths. That's essentially all beamforming in WiFi requires -- a really sophisticated way of measuring which paths cause energy loss, and a really sophisticated antenna design that lets you direct the signal through paths that don't. The first part is what facilitates surveillance: humans cause signal loss because our bodies are mostly water, and 2.4 GHz radio waves happen to get absorbed really well by water. This causes measurable signal loss on those paths, and the beamforming antennae use that information to route around your body. But they could also just log that information and know where you are relative to the WAP.
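A toy version of "just log which paths lose energy", to make the idea concrete (all numbers and link names below are made up): compare each transmitter-to-receiver link's current signal against a quiet-room baseline, and flag links whose extra loss suggests a body in the path.

    # Toy occupancy inference from per-link attenuation. Links whose RSSI
    # drops noticeably below the empty-room baseline are the paths a body
    # is probably standing in. All values are illustrative.
    BASELINE_DBM = {          # quiet-room averages per link
        ("ap", "tv"): -48.0,
        ("ap", "thermostat"): -61.0,
        ("ap", "speaker"): -55.0,
    }
    THRESHOLD_DB = 6.0        # extra loss we attribute to a body in the path

    def occupied_paths(current_dbm):
        """Return links whose signal dropped noticeably below baseline."""
        return [link for link, base in BASELINE_DBM.items()
                if link in current_dbm and base - current_dbm[link] > THRESHOLD_DB]

    now = {("ap", "tv"): -49.0, ("ap", "thermostat"): -70.0, ("ap", "speaker"): -56.0}
    print(occupied_paths(now))   # -> [('ap', 'thermostat')]: someone near that path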
A single camera would be much more effective.