
I don't think we've quite worked out the language here yet.

I use "webmail" to refer to a remote hosted web interface. GMail, Yahoo Mail, Hotmail, riseup.net, etc. This is the dominant way that people access email, and it's not possible to secure well because of the "webapp crypto problem."

Mailpile, on the other hand, is a locally hosted MUA that happens to use your web browser as the UI. I think it's a great idea, leveraging the UI properties of a web browser, but with everything running locally.

All development of a new secure email protocol has been stymied for the past 13 years by webmail. It is not possible to provide end-to-end encryption if you don't perform that encryption on the client side, and in the webmail world there is no "client."

I'm excited about Mailpile because it could be what gives us a usable local MUA, which is the precondition to deploying a nice, modern, usable, end-to-end encryption protocol.



Could someone please, please, please explain what this "webapp crypto problem" thing is. I think of a browser as a client. Isn't the javascript done client-side? Why is everybody saying that the browser (javascript) is a broken platform for crypto? Have we even characterised what the issue is correctly? I don't even know.

I figure that if this is explained to me then surely the solution should present itself at the same time :)


Let's say Bob wants to communicate with Alice, and he doesn't want Sergey to be able to read his message, but Sergey writes the software that they want to use to communicate.

In a peer to peer setup, Bob and Alice only need to trust a discrete piece of software they download from Sergey at a given point in time. Maybe that software is open source so they can audit it and thereafter have confidence in it. But if Sergey is instead releasing his software as a javascript program to be run in a web browser, then they need to trust Sergey each and every time they run the program, because they are downloading the program anew each and every time they want to use it.

Even if Sergey is a stand up guy, this setup means that the government can force him to break his promise at any time, whereas if he had put out a discrete set of software versions, particularly if they were open source with a public source control, he could plausibly tell the government that what they were asking was impossible.


So the question becomes one of modifying the browser's Javascript delivery mechanism to allow a more secure channel of discrete versions, one that would require user intervention. Or could there be another possible way?


There's already a mechanism in place for that in the form of javascript plugins/extensions/scripts which have to be explicitly installed. There are still problems, because downloaded javascript can run in the same context and subvert the installed javascript. I believe tptacek has written a blog post elaborating on the problems with that.

In a larger sense, I would push back against the web browser as a general purpose OS. We already have a few battle-tested OSes whose designers have put great thought and effort into these problems. If you insist on javascript, there are even node.js programs, albeit generally installed outside of the usual OS-specific mechanisms. If you insist on HTML/CSS for layout and javascript for programming, I believe there are toolkits based on WebKit, Chromium and IE that let you create a standalone program embedding the respective browser engines. You can even do what Mailpile is doing and embed a web server in your client application and use your web browser as a client to the client acting as a server (though this last seems a little Rube Goldberg-esque to me). But in any event, the result is downloaded and installed like a full-fledged program rather than treating the exercise as though you were going to a special website.


Web mail is one of those things that really, really makes sense though. I don't use it on my computer, but I'm glad it's there when I need to check my mail from any other device.


And if I understand it right, with the Mailpile kind of 'webmail' you can't actually do that; it's not webmail at all, it's just running on your local computer, and the mail is stored on your local computer.



I'm just gonna pick it up...


It seems like what we need is code signing for web pages and their assets. If you sign the app, you still have to trust the people making it, but you don't have to trust the server it comes from. This puts you in no worse a position than an update to your non-web mail client.

And if a web site suddenly switches to a new public key, the browser should do the same kind of thing as it does for expired SSL.

It should be relatively easy to create a browser extension that does this in the meantime.

Then what I'd like to see is a mail service that sends its source unminified, and then publishes the same code (with signature) on its server. That way you could easily verify that you were getting the canonical version of the code (and not a special compromised version that an attacker inserted for users on a special list), and anyone could look and see if it was doing something fishy (or broken).


If your browser gets JavaScript crypto from webmail.example.com every time you visit webmail.example.com then there's nothing stopping webmail.example.com from serving malicious JavaScript crypto that steals your keys or unencrypted data. Even though the JavaScript runs locally, the code is supplied by webmail.example.com. There's a discussion of this and a few other issues here: http://www.matasano.com/articles/javascript-cryptography/

JavaScript in web browsers also has a few other issues, such as side-channel timing attacks and the lack of control of memory.


Ahhh. I see. But of course, how dim of me.

In that case, why do we trust e-commerce? Are we stupid to trust e-commerce?

Am I right in saying, though, that if the javascript has been signed, the browser could trust it, assuming the browser could trust webmail.example.com?

I mean, we all get our software from somewhere. Why should I trust a security update from Apple, Microsoft, or Canonical for instance ...


E-commerce doesn't rely on Javascript cryptography.

You generally don't trust code updates, which is one reason you do them infrequently; every time you update code there's an opportunity for someone who has corrupted the update process to take over your machine.

A Javascript application might need to update itself several times per second across a single execution of itself.


> You generally don't trust code updates, _which is one reason you do them infrequently_ [emphasis added]

Is this true anymore? So much stuff auto-updates I barely know what goes on these days, and it seems pretty frequent. Between Firefox auto-updates, OS X updates, MS Word critical updates, etc., I would be surprised if a week goes by without something important being updated.


Would there be a way of hooking important Javascript blobs into the OS update/store/packaging mechanism or am I being completely dense?

Say I don't trust code updates, which is why I choose to run Ubuntu because I like its central package management system. Is it entirely infeasible to leverage that update mechanism to enable end-to-end crypto communication in the browser, or are these entirely separate issues? Is it your contention that the browser is not the correct platform for end-to-end crypto communication?

edit: it's ok - you needn't reply, I've read some of your other posts and I get that you'd tell me that there are DOM considerations as well.


Are you noticing how hard it is to reason through the security model of Javascript crypto code? How many different interactions there are you'd need to account for? That's a big part of the problem, and it's a problem that simply doesn't exist in the same way for native code.


Dang, fell asleep there mid-conversation :/

I am noticing that it is unexpectedly difficult to reason through the security model of Javascript crypto code. And you sure are patient, and I thank you for bringing about that realisation. It is beginning to dawn on me that it is amazing how _happily_ we allow any random site to go ahead and use our CPUs to do _God knows what_ as soon as we visit their site. That's rather trusting of us when you think about it.

But we gotta. Because why? Because dynamic content supposedly; it was easier to have Turing-complete Javascript than figure out how to make HTML/CSS dynamic. Never mind that a generic VM approach should have been taken if that's what you're gonna do, and let random site-designer Jo(sephin)e choose the language they like hacking with rather than create yet another language that we're all going to bitch and moan about. And you can tell that the assembler for the Web / VM approach should have been taken because that's what Javascript is becoming. Exhibit A: ASM.js

And at the time we should have figured out that in addition to sandboxing we also needed a security model that would cater for end-to-end secure (anonymous?) communication. Pity we couldn't see 20 years down the road. Now we're stuck with Javascript (which I actually like, don't get me wrong) and GMail (which I'm regretting using nowadays). Sigh.


"It is beginning to dawn on me that it is amazing how _happily_ we allow any random site to go ahead and use our CPUs to do _God knows what_ as soon as we visit their site"

That's a very different issue from JavaScript cryptography though. Allowing random sites to use your CPU is the whole purpose of the world wide web - it takes CPU cycles to render static HTML, after all. The issue here is trusting that the browser sandbox is good enough to prevent that code doing anything malicious outside of the context of the browser. Browsers are pretty good at that these days.


"I mean, we all get our software from somewhere. Why should I trust a security update from Apple, Microsoft, or Canonical for instance"

The difference is that it is very hard to specifically target someone via an OS update. It is very easy to specifically target a web app user:

http://www.wired.com/threatlevel/2007/11/encrypted-e-mai/

Now, if you were forced to log in or to otherwise uniquely identify yourself before you received OS updates, this would be different.


> Why should I trust a security update from Apple, Microsoft, or Canonical for instance

Because it's your operating system and you can't realistically read and compile the patches each time (if you even have the sources). If your operating system is against you, you've utterly lost, so your best bet is to trust them while relying on 100,000 eyes to find bogus patches (an open source OS).


> why do we trust e-commerce? Are we stupid to trust e-commerce?

well, many don't trust it, with good reason, and use temporary credit cards (sorry, can't remember the correct name for that but I hope it's clear enough)



And here's the thread taking that post apart:

https://news.ycombinator.com/item?id=6637915


(A bit off-topic)

Locally hosted web apps are on the rise (Mailpile, Camlistore, etc.) and remembering which app runs on which port is neither user-friendly nor scalable. More so if you start to consider multiple users on the same machine.

Maybe there's a need for a usable reverse proxy just for local web apps?

It would also be neat if browsers could speak HTTP over some IPC that isn't TCP on some random port. Maybe UNIX sockets in ~/.run? This would delegate read/write permissions to the OS.


One time I hacked chromium to skip the socket and send the HTTP directly to the embedded python wsgi app. In the end we couldn't use it because we needed some 64 bit only code running behind the wsgi and chromium only builds 32 bit on windows. Not that it would really be ideal either. Your app is too easily confused for a legit chromium window.


From the article: "Despite what anyone tells you, end to end encrypted email is not possible in a webmail world."

From above: "it's not possible to secure well because of the 'webapp crypto problem.'"

I REALLY hate these sorts of platitudes, because they sound authoritative with no real basis. "Not possible" is a very strong statement. One, as a matter of fact, that I am working on a solution to.

The so-called "webapp crypto problem" that you refer to is the fact that you cannot trust the provider not to change the source on you at will to initiate an attack. This can be dealt with by having hashes to identify the piece of code that has been received. This hash is then looked up against multiple verifying nodes, which will confirm the signature. These nodes can confirm the signature by looking at the source and matching it with the hash. This way you move the authority from the single issuer to the set of verifiers. Now, if the code is open source, any individual can verify the verifiers.

This is a general overview of the system that can solve the "webapp crypto problem." Yes, there are details missing, but this should be enough to show you that it is indeed possible.


Surely the hashing solution you propose can only be implemented as an enhancement to browsers? If you have decentralised "verifiers" how can you be sure that the version they most recently verified is the same code as your browser just downloaded?

I'm not convinced the "webapp crypto problem" can be solved without changes to browsers.


Why not a plugin?

Imagine this scenario. You get a plugin from your distro's repository; you have encrypted, sig-checking, hash-checking mechanism in apt or rpm or whatever. It is open source/Libre, maintained and audited by competent crypto people, uses well-vetted mechanisms in the code, etc..

And what this does is run native code to encrypt your message, after prompting for a passphrase to unlock your private key. It provides an editing window so plaintext won't go into the browser. Then after editing, you encrypt, and the plugin pastes the encrypted text, in, say, ASCII form, into the text field in the webmail application.

The correspondent of course has the same plugin and uses it for decryption. You exchange public keys with your correspondents by a side channel.

(Edit: Obviously, you can do this today, minus the GUI; it's easy enough to run a GPG command, use a text editor, paste manually)

This would be a non-starter on vendor-captive smartphones and tablets, of course, and proprietary OS, as such systems are fundamentally unsecurable. But it might be viable for laptops, desktops and anywhere you can have root with Linux or BSD.

The metadata problem is much harder.


web hosted MUA < curses client over SSH [1]

[1] like pine


I agree for geeks, but not for "normal" people.


As I understand it, Ladar's plan for Lavabit is to open source it (and implement the Dark Mail protocol) so you can run your own instance of it... exactly like Mailpile.


There's a huge difference between running your own mail server and running a mail client. Most people are not well positioned to run their own mail server.

Mailpile is an MUA, not an MTA.


People weren't well positioned to run their own web browser either.

We can make it happen. If this new thing is attractive enough, people will do it, just like they have put up with Windows for several decades now.

It could be a simple Raspberry Pi box that has all the stuff ready-made: just enter wifi credentials and whatever, and run. The box has to be in white and silver, because then who wouldn't want one? To show it off like a status symbol. A sleek little box in a corner; "my own email", people could say.


Thanks for clearing that up for me.



