Hacker News
Shoring up Tor (newsoffice.mit.edu)
81 points by user_235711 on July 29, 2015 | hide | past | favorite | 12 comments


> The researchers’ attack requires that the adversary’s computer serve as the guard on a Tor circuit.

This is a great reason to run your own Tor relay, even if it's just a private bridge for you and your friends. You can even use a pluggable transport of your choosing -- I picked obfs4 to make otherwise identifiable Tor connections look like random noise to my service provider.
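For reference, a private obfs4 bridge comes down to a few torrc lines. This is only a sketch: it assumes obfs4proxy is installed at /usr/bin/obfs4proxy, and the addresses, ports, fingerprint, and cert value shown are placeholders you'd fill in yourself.

```
# --- bridge side (server torrc), sketch only ---
BridgeRelay 1
ORPort 9001
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 0.0.0.0:443
ExtORPort auto
# keep it private: don't publish to the bridge authority
PublishServerDescriptor 0

# --- client side (torrc), sketch only ---
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
# the fingerprint and cert come from the bridge's obfs4_bridgeline.txt
Bridge obfs4 192.0.2.1:443 FINGERPRINT cert=... iat-mode=0
```

With PublishServerDescriptor 0 the bridge never appears in any public list, so only people you hand the bridge line to can find it.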


Does that really help? "Random noise" over TCP/IP doesn't really exist in the wild.

Now, if you made it look like well-encrypted HTTPS traffic, that might be borderline non-suspicious.


Yes, this is the pattern of obfs4 usage.

It makes Tor traffic to a bridge node look like HTTPS traffic.


I didn't readily find details of this particular cloaking, but I'd be skeptical of how well it can work if your adversary is more determined to snoop on you than your bridge is to camouflage itself[1]. Is obfs4 sending an SNI header for the fake SSL session? If so, an observer can connect to the named server himself and see things like:

a) the SNI header is faked, e.g. there is no DNS entry for that name,

b) it's not really an HTTPS server,

c) it doesn't have enough discoverable content to justify you spending hours using it every day,

d) nothing else on the web links to this server.

If it's not sending SNI, that's probably unusual these days too, so they know your client is non-standard.
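The active probe described above is simple to sketch. Here's a hypothetical checker (function name, ports, and parameters are illustrative, not from any real censorship system) that attempts a TLS handshake against a suspected bridge with a chosen SNI name and reports whether anything resembling a real HTTPS server answers:

```python
import socket
import ssl

def looks_like_https(host, port=443, sni=None, timeout=5.0):
    """Attempt a TLS handshake against host:port, optionally offering an SNI
    name, and report whether a certificate comes back. A censor could run a
    probe like this against a suspected bridge to test whether it behaves
    like a genuine HTTPS server. Illustrative sketch only."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # we only care whether *any* cert is served
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=sni) as tls:
                return tls.getpeercert(binary_form=True) is not None
    except (ssl.SSLError, OSError):
        # Handshake failed, connection refused, or timed out: not plausible HTTPS.
        return False
```

A censor would combine probes like this with the passive checks above (DNS lookups, crawling for links to the server) before deciding to block.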

I don't know, maybe I'm overthinking and/or considering things you don't care about as part of your threat model. I suppose some of this could be overcome by using a client certificate whitelist as a plausible reason to not serve anything to snoopers, but then you and/or the bridge are somewhat identifiable as users of an obscure SSL feature. And since the client certificate is in plain text[2], you're either trackable by a static certificate, or identifiable as something unusual if using many rotating certificates all coming from the same IP.

[1] Maybe this is weird logic, but in other words, if you are paranoid enough to be trying to obfuscate your tor usage, I don't see why you should believe this can be secure enough (unless obfs4 has addressed these issues somehow.) But maybe the point is more about web filters just smart enough to block standard tor, than about constantly evolving spies bent on unraveling your communications.

[2] https://security.stackexchange.com/questions/24577/client-ce...


I'm not an expert, but I've seen people claim that if you run a number of relays, you can control and inspect the traffic going through them, uncloaking users and their IP addresses.

Is this actually possible?


I believe you may be referring to a Sybil attack, e.g. https://blog.torproject.org/blog/tor-security-advisory-relay...

Tor users should know that Tor's design doesn't try to protect against an attacker who can see or measure both the traffic going into the Tor network and the traffic coming out of it (e.g. a three-letter agency with a near-infinite budget running many nodes).
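To illustrate why seeing both ends is so powerful, here's a toy sketch (not any real attack, and far simpler than actual correlation techniques) that buckets packet timestamps observed at the entry and at the exit and scores how similar the two traffic-volume profiles are:

```python
import math
from collections import Counter

def correlation_score(entry_times, exit_times, bin_size=0.5):
    """Toy end-to-end confirmation sketch: bucket packet timestamps (seconds)
    from a flow seen entering the network and one seen leaving it, then
    return the cosine similarity of the two volume profiles.
    Real correlation attacks are far more sophisticated."""
    def profile(times):
        return Counter(int(t // bin_size) for t in times)

    a, b = profile(entry_times), profile(exit_times)
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

A flow and a slightly delayed copy of itself score near 1.0, while unrelated flows score near 0 — which is all an attacker watching both ends needs to confirm who is talking to whom.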


“Anonymity is considered a big part of freedom of speech now...” That's a great way to express the importance of Tor.


People don't get that, though. Too vague. If anyone asks, show them the value of anonymity with real examples on this page:

https://www.torproject.org/about/torusers.html.en

There are also plenty of situations on that page that ordinary people, police, and military might relate to. It's hard to argue with what you might do yourself. ;)



"Researchers mount successful attacks against popular anonymity network — and show how to prevent them."

This could've been the title of many, many articles about Tor, and you'll see plenty more. It has suffered so many attacks, due to the difficulty of its goal, that it should only be used as one step in a series of anonymity-enhancing methods.


Journalists need to start making a stronger distinction between Tor end-use and hidden services. End-use is solid; hidden services are experimental.

That said, the fact that this research can identify what hidden service a user accesses skirts that line. On the other hand, that doesn't sound too different from a typical timing attack (something that Tor doesn't try to prevent).


So, the claim is that any guard can identify, 88% of the time, which sites the Tor users passing through it connect to? That's very bad if true, because the guard also knows the user's IP address.

Edit: glancing through the paper, it may also require monitoring the web page, in which case it's less severe. I'll wait until there's a good writeup from the Tor Project itself.

From the paper:

>Indeed, we show that we can correctly determine which of the 50 monitored pages the client is visiting with 88% true positive rate and false positive rate as low as 2.9%
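For readers unfamiliar with how those two figures are computed in website-fingerprinting evaluations, here's a hypothetical helper (not from the paper; names are illustrative) that derives true- and false-positive rates given a set of monitored pages:

```python
def rates(predictions, labels, monitored):
    """Toy TPR/FPR computation for a monitored-page classifier.
    `monitored` is the set of page labels the attacker cares about;
    visits to anything else count as background traffic."""
    tp = fn = fp = tn = 0
    for pred, true in zip(predictions, labels):
        if true in monitored:
            # A monitored visit counts as a true positive only if the
            # classifier names the right page.
            if pred == true:
                tp += 1
            else:
                fn += 1
        else:
            # Background traffic flagged as any monitored page is a false positive.
            if pred in monitored:
                fp += 1
            else:
                tn += 1
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr
```

A low false-positive rate matters as much as the headline 88%: with only 50 monitored pages amid all of a user's browsing, even 2.9% false positives can swamp the true hits.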



