Hacker News
Safe Crime Prediction: Encrypted Deep Learning for Less Intrusive Surveillance (iamtrask.github.io)
49 points by williamtrask on June 6, 2017 | 49 comments


The number of people killed by terrorism in developed countries per year is, what, a hundredth of those killed in car accidents? A tenth of those shot in Chicago alone?

http://www.businessinsider.com/death-risk-statistics-terrori...

Terrorists would have to kill literally hundreds of times more people per year to justify even the most meager response. One may argue about the economic impacts, but those are just a reflection of this underlying irrationality. If the media focused on "Keep Calm and Carry On", as Israel does now and Britain did in WWII, terrorism would have a much smaller economic impact - and terrorists would stop doing it, since it would be less effective at achieving the subgoal of "terror".

So this post is really begging the question IMO.


Counterpoint: As Taleb points out, car accident deaths are 'thin-tailed'; that is, the rate of car fatalities is essentially fixed (and predictable by looking at past data), while terrorism is 'fat-tailed', and the number of deaths that would be caused by a dirty-bomb in Manhattan does not appear in historical data, and so it can't be 'priced' in the same way.
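The thin- vs fat-tailed distinction is easy to see in a quick simulation. This is just a sketch with made-up distributions: a Gaussian stands in for the stable "car accident" process, and a Pareto with tail index below 1 (so its mean is infinite) stands in for the fat-tailed one.

```python
import random

random.seed(0)

def running_means(draws):
    """Cumulative average after each draw."""
    total, means = 0.0, []
    for i, x in enumerate(draws, 1):
        total += x
        means.append(total / i)
    return means

n = 100_000
# Thin-tailed: draws cluster around a stable mean, so the past predicts the future.
thin = running_means(random.gauss(10, 2) for _ in range(n))
# Fat-tailed: Pareto with alpha < 1 has infinite mean; a single extreme draw
# can dominate everything seen before, so past averages don't "price" the risk.
fat = running_means(random.paretovariate(0.9) for _ in range(n))

print(f"thin-tailed running mean after {n} draws: {thin[-1]:.2f}")  # settles near 10
print(f"fat-tailed running mean after {n} draws:  {fat[-1]:.2f}")   # never settles
```

The thin-tailed running mean converges quickly; the fat-tailed one keeps jumping whenever an extreme value lands, which is the formal version of "the dirty bomb doesn't appear in historical data".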


In the example of the dirty bomb, I'm not sure that the number of casualties/cancer incidences is especially high, based on some brief search. Sept 11 seems like it was a more damaging attack, for example, and that seems roughly as difficult to replicate as a dirty bomb. Especially given the trade-off between payload and shielding your substances enough to prevent detection by the myriad of radiation detection systems I'm sure DHS has installed in major cities.

But you are certainly correct that statistically these events are more difficult to interpret, and raw averages are not a fair comparison.


I have made this exact argument (quoting Taleb) but HN users are not receptive to anything other than a do-nothing approach.

https://news.ycombinator.com/item?id=14262011


This was a reply to the grandparent comment, mistakenly posted to the parent comment.


Unfalsifiable arguments are not especially strong.


If the media didn't cover terrorism, it would probably go away by itself. Israel is probably the only country dealing with it effectively.


Not just the media. A lot of current U.S. politicians apparently believe their best path to power is endless fear mongering. And it works pretty well.


That's interesting. While studying the Reichstag Fire in grade-school history class, I ran across a quote from someone in the Nazi propaganda machine, who asserted something to the effect that exerting control over a democracy was simply a matter of manufacturing a large enough exterior threat for them to fear. Sadly, I can remember neither the quote nor the author.


It was a well-discussed idea at the time, so I'm not sure who you're referring to. But Hermann Goering famously discussed it with Gustave Gilbert:

"[...] the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country."


That was the one! Clearly I didn't remember the details of the wording well enough to use the right search term.


That's like telling a bullied kid to ignore a bully (or stop giving him/her attention) and that it will make the bully go away, eventually, sometime in the future (maybe). It may work in the long-term, but it doesn't fix the problem of people hurting and being scared in the now.

I'm a simple person. We have a government, and we pay taxes so we can get services and have safety (that is what they tell us). So terrorism is something we expect the government to fix, just like all the other things listed above that kill more people per year than terrorism. We can't just say "let's ignore it" or "we have to deal with X acts of terrorism per year for N years before the terrorists lose interest". That is what some of these "anti-anti-terrorism" arguments logically lead to, and rightfully it's a tough sell to the public.


The bully won't go away, but terrorism's purpose is to terrorize as many people as possible. By spreading the word about it, the media is essentially doing it a service - popularizing terror.

Did it ever occur to you that terrorism could be aided by your own or another government, using the taxes you and others pay? The US is arming rebel factions in Syria now; we'll see how that turns out. It armed the Afghan mujahideen with Stinger missiles during the Soviet war in Afghanistan; the Taliban later took over the country and harboured Osama Bin Laden and Al Qaeda. US ally Saudi Arabia is sponsoring ISIS and Sunni rebel groups in Yemen. Turkey also bought oil from ISIS. There's a huge scandal in the Middle East now over Qatar allegedly supporting the Muslim Brotherhood. Have you ever heard the phrase "state-sponsored terrorism"? All of this is done with taxpayer money.


Author here. These are great points and I tend to agree. While the tech is interesting to me as a way to reconcile privacy and security, changing the incentive structure is a more straightforward initial response.

Perhaps this post is most useful as a counter-argument to any "privacy vs safety" arguments that might be used to justify unrestricted access to consumer data.


So if the media weren't flagrant whores for eyeballs, things might be better. I'm not holding my breath. The fable of the farmer and the snake comes to mind.


Author here. So the techniques in this post aren't actually limited to terrorism, although it's probably the easiest to talk about. They're broad enough to include homicides (as mentioned in the post), and hundreds of thousands of people are murdered each year because they aren't able to predict their own danger soon enough for law enforcement to be informed (and to intervene).


I should say - that's not to say that the technical content isn't very interesting. I just think that the solutions to violence are better considered from the perspective of preventing people from becoming violent people, rather than preventing violence from occurring. I think there's significant evidence to suggest that there are very powerful yet unused paths forward in that direction.

There are other applications that such a system could be interesting for, though. What about determining interest rates on loans for people and small businesses? Small businesses could benefit greatly if lower-rate small loans enabled them to grow. Maybe insurance rates too? Etc.


> and terrorists would stop doing it, since it would be less effective at achieving the subgoal of "terror".

And that's why there are no terrorist attacks in Israel.


There are multiple elements to unpack here: the obvious pre-crime problem other commenters raise, and the fact that terrorism is such a rare risk that it arguably isn't worthy of our attention.

But I'd like to raise another objection: homomorphic encryption does not provide integrity over the ciphertext, which could open the door to active attacks that undermine the system's privacy goals.

https://news.ycombinator.com/item?id=14443191

https://paragonie.com/blog/2016/08/crypto-misnomers-zero-kno...

If you really need to build such a dangerous and needless system, would you want it to be built with such an error-prone cryptographic design? I'd say "No".
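The integrity point can be demonstrated with textbook RSA, the simplest multiplicatively homomorphic scheme (toy parameters below, purely illustrative): an attacker who never sees the private key can still transform a valid ciphertext into a different valid ciphertext. That malleability is the flip side of the homomorphism itself.

```python
# Textbook RSA is multiplicatively homomorphic and provides no integrity.
p, q = 1_000_003, 1_000_033          # toy primes, illustration only
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

m = 1234
c = pow(m, e, n)                     # honest ciphertext Enc(m)

# Without ever seeing d, an attacker multiplies in Enc(2) and obtains a
# valid encryption of 2*m: the homomorphism doubles as a tampering tool.
c_forged = (c * pow(2, e, n)) % n
assert pow(c_forged, d, n) == 2 * m  # decrypts to 2468, not 1234
```

Real deployments pair homomorphic computation with integrity mechanisms (MACs over ciphertexts, zero-knowledge proofs of well-formedness); without those, an active attacker can steer the computation.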


Interesting, I didn't know that homomorphic encryption was already advanced enough to be feasible for actual computation. Years ago I just read about the proof-of-concept systems that worked but were too slow.

I'm still doubtful that the specificity of such pre-crime systems will be high enough that only a negligible number of people are wrongfully investigated. After all, the prevalence of terrorism is extremely low in any population. I guess if you trust law enforcement to escalate the investigation slowly and carefully (instead of putting a suspect on a no-fly list immediately), it can work.
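The specificity worry is the classic base-rate problem, and it's easy to quantify. All of the numbers below are hypothetical, but the shape of the result holds for any rare condition:

```python
# Base-rate sketch: with a rare condition, even a very accurate classifier
# flags mostly innocent people. All numbers are hypothetical.
population = 100_000_000
prevalence = 1e-6            # 1 in a million actually planning an attack
sensitivity = 0.99           # P(flagged | planner)
specificity = 0.9999         # P(not flagged | innocent)

planners = population * prevalence                       # 100 people
true_pos = planners * sensitivity                        # ~99 correct flags
false_pos = (population - planners) * (1 - specificity)  # ~10,000 false flags

ppv = true_pos / (true_pos + false_pos)
print(f"P(planner | flagged) = {ppv:.2%}")   # under 1% despite 99.99% specificity
```

So even a detector that is wrong about one innocent person in ten thousand would bury the ~100 true positives under ~10,000 false ones, which is why careful escalation matters so much.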

Also, with regard to the audits by NGOs or government watchdogs: I suppose you would also need the auditors to cryptographically sign the version of the software they audited so that users can check that a trusted surveillance system was deployed.


Re: cryptographically signing the software version. I think that's a brilliant idea. I'll look into it.


Ok, I'm probably missing something.

How is public-key homomorphic encryption possible? Suppose I know the public key and have some ciphertext X. Can't I simply encrypt 0 and try, for all plausible values V, whether encrypt(V) - X = encrypt(0)? Or is the encryption function not reversible, i.e. there are multiple 0s?

If not, it seems like you'd need quite a bit of true entropy in the plausible values. I don't see how you'd add something like a nonce to artificially add entropy.

Heck, if you have nice integer under/overflow on division, that would give a crude implementation of comparison, bringing the search down by a logarithm.

It seems like the only reasonable situation is one where indeed multiple different values decrypt to 0.


> Or is the encryption function not reversible, i.e. there are multiple 0s?

Yep, encryption is randomized: each ciphertext mixes in fresh randomness (loosely, "padding"), so every plaintext has many possible encryptions.


What's to prevent me from finding 'almost all' zeros by trying enough (say 10 000 000) versions of enc(v) - enc(v)?

That, combined with comparison based on integer division and underflow would still make decryption quite easy.


replace 10 000 000 with 2^256


"A positive prediction should launch an investigation, not put someone behind bars directly."

That is the scariest thing I've read. All we need is a black box to investigate people at any time.


There are already thousands of black boxes. Consider satellite photography, malware detectors, fire alarms, sniffing dogs, credit fraud detectors. All of these are tools ("black boxes") used to launch investigations. The future of crime fighting is impossible without tools like these as criminals become more sophisticated.

The real question is who owns them and whether they are auditable. This blog post is about making neural networks used for these purposes auditable by a third party without making them vulnerable to evasion by criminals.


Agreed. Stop, assess and arrest is now auditable thanks to body cameras, and we're already seeing the results in greater accountability. However, with greater visual accountability has come de-policing, a serious problem in some cities.


Yup, this sounds like software driven Minority Report.


Several problems with your points:

- Data

For the SPAM example you provided, you used a publicly available data set with no consequences. But where will the training data for predicting homicides come from? Will it be accurate? Will it unfairly target minorities?

The predictor uses features in the data to predict the future. What features will the machine learning algorithm use to predict homicide? Location? Age? Gender? Ethnicity? Mannerisms? Income? I can see big problems with any of these features being used to flag people and send law enforcement in for "investigations".

- Detectors

Capturing and sending users' data to a warehouse is a privacy/security risk, but what is worse is installing a detector in their house or on their computer. What happens when a citizen removes the SPAM detector from their computer? Well, the next logical step is to pass a law requiring all citizens to run a specific process on their computers and forbidding them to reverse engineer it, or else...

For me, this is even worse than having my privacy violated. It would mean users don't have root access to their own computers, many applications become illegal, and developers can't write certain applications that let people send email. While not impossible, this would be the worst outcome for everyone.

- Globalization

The SPAM detector, the fire detector and the sniffer dog are great localized examples. But today's problems are globalized. Attacks might be planned and coordinated from a different country with a different set of rules, and not all countries are considerate when it comes to privacy. How will a global surveillance deployment work when we can't even agree on matters of climate change?


For those unfamiliar with homomorphic encryption, I found this to be a good algebraic treatment. https://web.wpi.edu/Pubs/E-project/Available/E-project-04261...


I like this, but I would argue that predicting crime isn't the core problem that law enforcement has today. The real issue is the incentive structure that disincentivizes crime prevention.

The sad truth is that police bear no cost when they fail to prevent crime and, in fact, get more funding and power if crime goes up.


>> The sad truth is that teachers bear no cost when they fail to teach a student.


In public schools, this is often the case. In private schools, bad education leads to a loss of profit. If you think this doesn't apply to teachers, you must've never been in a bad public school.


The machine should be able to make economic policy decisions as well.


I think you mean "Pre-crime"


I think you're right.


The dog argument was cute. But the dog also doesn't make a searchable, indexable list of all your personal information.

Someone will inevitably make this though, and it will inevitably be abused.

Plus, what if I start switching search parameters from, say, 'planning a terrorist attack' to 'likely to vote one way', 'believes one thing', or 'is of a certain religion'?

We will trade all our privacy and the nefarious people will switch to a new method of comms...like they always do.


So part of what makes this work is that it's not an index. As opposed to doing general storage of people's data, this restricts a surveillance operation to only be able to identify specific concepts.


The software itself would have access to the index. So it would still exist.

Keep in mind the lengths governments and private enterprises would go to in order to get their hands on this.

You will have created a clean super-weapon.

Or 'the dark mark' for those biblical folk out there.

Can you imagine what that would do to your life as well? Even if you couldn't access the information.

Half the governments of the world would torture you just to double-check.


"the dark mark"

I think that's from Harry Potter, not the bible. Maybe you meant the mark of the beast?


That's strange, I would have assumed that they would be averse to a tool that doesn't give them unrestricted access to the data. Right now, the dialogue is "privacy vs security... take your pick", but this breaks that rhetoric by showing that it's a false choice.


Far from it: this tool removes the need for us to give up our privacy. (Not sure why HN isn't giving me a "reply" button on your posts.)


This tool assumes we've already given up our privacy.

It's just a smart way to comb through all the data you've extracted.


I'm concerned with a tool such as this that can be directed to target anything of interest to those administering the system. We've already seen mass surveillance shift from being used "exclusively" for terrorism to also assisting in the war on drugs. Where else would the lens be trained? As the author suggests, murder. What about organized crime? Gangs? Illegal downloads ... the sharing of Netflix passwords?


Does anyone know what minimum level of "homomorphism" is necessary to insulate the entity doing surveillance or analytics from legal action?

I ask this about the following contexts:

- government surveillance: does using a homomorphism remove the need for a search warrant?

- in-app analytics: does using homomorphisms allow a firm to consider data not disclaimed by the firm's privacy policy?

- research: when does a homomorphism eliminate the need for IRB approval?


Great question. I sincerely doubt there is legal precedent in this area, though I'd love to be wrong :)


There was a great talk at USENIX on how WhatsApp reduced spam while maintaining end-to-end encryption.

https://youtu.be/LBTOKlrhKXk


I have nothing useful to add to this discussion. I just like that this idea is a core part of the TV show Person of Interest.



