As ridiculous as it sounds now, eBay used to give an option: use a non-secure form (by default), or click a link to proceed to a secure login form using SSL over https.
Vanilla browser-based HTTPS cannot stop states like the US or China, which have access to all the major pipes. All it takes to man-in-the-middle basic HTTPS is the key for any one certificate that your browser trusts. Any organization that has one (including the many employers that install certs) can easily watch your traffic.
HTTPS is fundamentally insecure against MITM attacks by governments.
I think the spirit of your comment is pure tinfoil-hat, but the letter of it is actually valid.
It is vanishingly unlikely --- on the level of "major diplomatic incident" --- that China (or any other government, including ours) has the key associated with Verisign's root CA certificate.
It is somewhat more likely (but still, I think, pretty unlikely) that one of the following could have happened:
* Your computer is controlled by an employer that installed a deliberately insecure certificate.
* Your computer is infected with malware that poisons the CA store in your browser.
The first scenario is just very unlikely. We work extensively on-site for Fortune 500 enterprises, in some of the highest-security environments in the world (and in some of the most idiosyncratic and crazy), and none of them require us to install bespoke certificates to access the Internet or their websites. Now you'd have to find the one employer (and there are zero ISPs) that not only installed a custom root, but also gave the keys to China.
The second point is irrelevant. If your computer is infected with malware, there are worse things to be done to you than injecting bogus certificates.
It's almost worth pruning untrustworthy CAs' certs out of your browser; unfortunately, the market for certificates has devolved to "who will give us a cert the cheapest", so I wouldn't know which ones to kill. A year or so ago, pruning out untrustworthy CAs would have protected you from the MD5 problem.
If you suggest "now is the time to get paranoid about your certificate store", I agree.
If you imply that there's a problem with the SSL/TLS/HTTPS model, I'm pushing back hard. There is a weird SSL stigma among web developers, and it's time to start beating it back. The alternatives are worse.
It's more a problem with the certificate-chain trust model. You have to trust the security of every podunk CA in your browser's root store, since any of them can sign a cert for anyone in the world. Should the government CA of Taiwan be able to sign a cert for bofa.com? I would say no. Let them have *.tw.
The funny thing is that X.509 has the ability to limit signing to various CNs, so it's possible to restrict signing powers, but no one seems to do it.
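For the curious, the mechanism is the X.509 Name Constraints extension (RFC 5280, section 4.2.1.10). A sketch of what a constrained country-level CA could look like in OpenSSL's x509v3 config syntax (the section name is made up; the extension keywords are standard):

```
# Hypothetical extension section for a CA allowed to sign only under .tw
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage        = critical, keyCertSign, cRLSign
# Restrict this CA's signing power to .tw domains
nameConstraints = critical, permitted;DNS:.tw
```

Of course, this only helps if clients actually enforce the constraint when validating the chain, which historically has been spotty.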
> The alternatives are worse.
Hardly. Some alternatives might be worse (plaintext everywhere, for instance). What I would like to see is a *.example.com signing cert given to every domain owner when they buy a domain, signed by a single *.com authority (or .net, or whatever the TLD is). This would prove to the HTTPS client that the domain they are connecting to really belongs to the domain owner. It's a basic level of trust that would be available to everyone who owns a domain (without any stupid yearly charge for the certs).
Banks and other important sites could go above and beyond and buy certs that verify that their business is what they say it is, which is another trust level entirely (this already exists: it's the green location bar you get in Firefox when you visit paypal.com).
I think that would be a much better alternative than our current system.
From another perspective: I worked on the content filter for Parental Controls at Apple. My understanding is that every filtering proxy product with SSL support requires the installation of an insecure certificate. That, or they do a fairly rudimentary IP or reverse-DNS based filtering (which is the route we chose for Parental Controls) for secure connections.
These vendors certainly claim to have a lot of Fortune 500 clients.
You're thinking of, e.g. Bluecoat. Certainly everyone runs something like a Bluecoat. But I can't name a client that uses the feature where your end users install the Bluecoat cert so they can scrub SSL traffic. They just use IP and DNS blacklists, like you.
A MITM attack of this nature would be easy for a website owner to discover. Just compare the public keys with what a random browser is receiving.
Edit: the CA system operates on trust. All it takes is a single website demonstrating that this is going on for it to quickly become public knowledge. Now, if you were to target a specific user it might be possible, but hacking their system is a much simpler approach to the same problem, with more benefits.
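The detection idea is simple enough to sketch: the site operator knows their own certificate, so any client can fetch whatever cert the network actually presents and compare fingerprints against a known pin. A minimal sketch in Python (the host and pin would be supplied by the site operator; function names here are made up):

```python
import hashlib
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(host: str, known_fingerprint: str, port: int = 443) -> bool:
    """Fetch the cert the network serves us and compare it to the pin.

    If a MITM is substituting a cert signed by some other trusted CA,
    the fingerprint will not match the one the site operator published.
    """
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return fingerprint(der) == known_fingerprint
```

Run from enough vantage points (a "random browser", as the comment says), a mismatch is strong evidence of interception.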
Hm. I wonder how often this happens. We should be able to detect it easily if they do it all the time. Today they don't have to.
For the long term, I wonder if it's possible to close this hole by putting certs into DNS and using a single key to sign all of those DNS records. Is this part of the DNSSEC spec? (Not at a computer, and I can't remember for sure whether DNSSEC is the name of the next-gen DNS.)
The original intent of DNSSEC was to provide an IETF-endorsed public key infrastructure, not to defend the DNS from attacks. The idea was, "we have this great DNS protocol, it works well at Internet-scale, let's leverage it for something other than hostname resolution".
The reality of DNSSEC is that it's creaky, somewhat broken, solves neither the PKI nor the DNS protection problem well, and --- because it is likely to pervasively screw up applications across the Internet --- unlikely ever to be deployed.
Either way, there's no magic bullet. If you concede root keys to China, you have to concede DNSSEC keys --- which are, if anything, more loosely held and more widely distributed than CA keys, which have never had a known breach.
While I was in China, I remoted back to my home computer through a remote desktop session to check email. I also tunneled my RDP session through an SSH proxy. Hopefully that was secure enough. I also changed my passwords once I returned home.
Once you have the host key for the site you are connecting to on your laptop, SSH is not vulnerable to a MITM attack in the way that browsers are with their huge CA lists. Of course, pay attention to the warnings about changed host keys.
Patently false. If users submit to the login URL through a link that was sent in the clear, a MITM can just run sslstrip (http://www.thoughtcrime.org/software/sslstrip/) and presto, you're visiting the HTTP version of the page instead of HTTPS. The login URL needs to be unavailable over HTTP, as does the login form.
Yup, the future is clearly client-side hashing. I wonder if we could get the latest and greatest from that field built into the next jQuery etc., so people become more likely to use the routines.
That said states like Iran might be more likely to garner the most passwords through sniffing, which is a passive activity, than hacking the sites that store passwords insecurely.
Hm, I guess I always assumed we would end up with scrypt or something similar running on the client side, with a nonce etc., so the server never even sees the plaintext password.
Even if that means a new protocol that uses SSL to send the javascript, etc., and we're 15 years away.
Do you think designing a system where the server never sees (or has the opportunity to store) the plaintext password is the way of the future?
(Edit: more emphasis on future, not short term, ambitions.)
There is a secure auth mode for SSL called TLS-SRP, RFC 5054. It is quite clean, deriving a session key and authenticating the session in the same number of round trips as a regular TLS handshake. It is secure against dictionary attacks.
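The core trick in SRP is that the server stores only a salted "verifier" derived from the password, and the password itself never crosses the wire. A toy sketch of the verifier computation from RFC 5054 (the modulus here is a small Mersenne prime for demonstration only; real SRP uses the large safe-prime groups from RFC 5054 Appendix A):

```python
import hashlib

N = 2**61 - 1  # toy prime modulus, NOT a real SRP group
g = 2          # generator

def srp_verifier(salt: bytes, username: str, password: str) -> int:
    """Compute v = g^x mod N, where x = H(salt | H(username ":" password)).

    The server stores (salt, v) at registration time and never sees
    the password again; the handshake proves knowledge of it.
    """
    inner = hashlib.sha256(f"{username}:{password}".encode()).digest()
    x = int.from_bytes(hashlib.sha256(salt + inner).digest(), "big")
    return pow(g, x, N)
```

During login, both sides derive a shared session key from v and ephemeral values, so a passive (or active) observer never sees anything replayable.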
What scheme would allow for that, though? Except for public key cryptography?
Seems to me storing hashes and nonces does not work, because without the original password the server cannot create new hashes and nonces. So the client would always send the same thing, which would be equivalent to a password.
Yeah, I know what you mean. I do think it's solvable, though.
One time during undergrad I was talking with Hal Abelson, because there was a big stink about the new research building (the Stata Center) requiring RFIDs to get around. Richard Stallman et al. were making a huge ruckus about the privacy implications of not having physical keys.
I mentioned some desired attributes of a protocol to let people in without logging it, and started worrying about the implementation details and feasibility. He said, "Oh, don't worry about implementation. This building has people in it who can implement anything." He had this smirk about him.
So give it some experts and time and they'll figure it out.
I'm sorry. I thought you meant Javascript challenge-response, which is a very common and very evil suggestion as a way to get SSL-like security without having to do SSL.
You mean things like 1password. That's fine. I use Keychain and head /dev/random | md5 for any password I don't have to remember.
A few years ago I stopped someone from implementing that. The problem is that if you intercept the hash, you can use it like a password, because the system at the other end never sees the original password.
You can prevent this circumvention by only allowing the client side generated hash to be good for one login attempt. This is done by giving the client a unique challenge string for each login request and hashing the password with the challenge string.
For example, the user browses to http://www.example.org/login. The generated login form on the page contains a hidden field called "challenge" with a randomly generated string from the server. When the user submits the login form, have the client compute hash(challenge + password) and send this to the server. Upon receiving the hash and checking credentials, the server invalidates the challenge string regardless of whether the login attempt was successful.
This obviously does not work if the user does not have Javascript enabled and is not preferable to SSL.
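A minimal sketch of that one-time challenge scheme (names and the in-memory stores are made up for illustration; a real deployment would still want SSL, and a slow salted hash on the server):

```python
import hashlib
import secrets

server_password_db = {"alice": "hunter2"}  # toy: the server knows the password
outstanding_challenges = {}                # per-user one-time challenges

def issue_challenge(username: str) -> str:
    """Server side: generate a fresh challenge for the login form."""
    challenge = secrets.token_hex(16)
    outstanding_challenges[username] = challenge
    return challenge

def client_response(challenge: str, password: str) -> str:
    """Client side: what the page's Javascript would compute and submit."""
    return hashlib.sha256((challenge + password).encode()).hexdigest()

def verify_login(username: str, response: str) -> bool:
    """Server side: check the response, consuming the challenge either way."""
    challenge = outstanding_challenges.pop(username, None)
    if challenge is None:
        return False  # no outstanding challenge: replay or stale attempt
    expected = client_response(challenge, server_password_db[username])
    return secrets.compare_digest(expected, response)
```

Because the challenge is consumed on the first attempt, an intercepted response can't be replayed, which addresses the objection above. Note the server must still know the plaintext password (or a fast hash of it) to recompute the response.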
Security is a hard problem. Unless the server knows the password they can't validate Hash(challenge + password). If you intercept both the challenge and the resulting hash you can brute force the password. If you have a method of safely sending the password in the first place you should probably use that all the time instead of trying to build a JavaScript system to avoid sending the password in the clear.
PS: You could implement all the security features of SSL in Javascript, but that would be pointless and slow. If you really wanted to build a custom security system, I would recommend using a Java applet, but even with a team of experts it takes years to build something as secure as SSL.
The future is hopefully something more secure than client-side hashing, probably SRP, or it’ll be NTLM all over again. It would be nice if web browsers implemented it directly, but until then there are JavaScript libraries like http://server.denksoft.com/wordpress/?page_id=27 .
Do not implement crypto algorithms in Javascript. Very dangerous. Also, there's no way to serve them securely unless the whole login process is SSL/TLS anyways.
I think more people should be worried about the janitor with night time server access and a big Vegas debt to pay than the government of Iran, to be honest.
That being said, point taken. While both are important variables to address, one is just plain bad practice.
Client-side hashing (with JS) is a cute hack that's completely useless for improving security. If there's a man in the middle, they can strip the protection code before you fill in the form.
It's not completely useless: it's slightly helpful against an attacker that can snoop but not modify.
Of course, it's pretty silly to go to the trouble of writing javascript code that can not only be modified, but may have bugs, when you can just use HTTPS.
In practical terms, there is no such attacker. The ability to watch packets in real time is the ability to silently redirect them, or to insert fake packets into the stream.
The traffic manipulation part of the attack is also extremely easy, and not particularly noisy.
For most applications, Javascript crypto is completely useless.
Consider the case of someone getting their information by triggering some sort of debug mode on the router, or using some sort of side channel. More plausibly, consider the case where someone can break your over-the-air encryption (but not in real-time). It may be extremely rare, but strictly speaking it's not impossible.
That said, if option A may help in some unusual case, and option B will be effective in all non-catastrophic cases, I know which one I'd choose.
Why must it be stored in clear? Why not encrypted rather than hashed? That still allows password retrieval. Of course encrypted data is only as secure as the DB and keys/decryption code ...
Never mind that if any of the authenticated traffic goes over the wire in the clear (post-login), the Great Firewall can snarf session IDs and replay them. This is the concept behind Errata Security's Hamster/Ferret tools. You don't need a username or a password to read someone's Gmail, update their Facebook status, or anything else. If any part of the session happens without HTTPS, the whole session is at risk.
Good point. Also, Moxie Marlinspike's tool, SSLStrip, can break some of this stuff as well. I haven't tested it against the relatively few sites that go out of their way to set the Secure flag.
They also email your password to you in plain text after you register for an account. They do this after enforcing a password strength requirement when you create the password.
It's not like they're storing critical data or anything, but lots of people use the same password for a bunch of different things. It wouldn't take much to glean a password from one insecure site and try it out on other, more critical, services.
Not only do many Twitter apps send your password in the clear, many of them automatically give your Twitter username & password to any Twitter-specific photo/video hosting service you use with them.
Also fun: if I have an iPhone client app interacting with Twitter and also with a web service that in turn interacts with Twitter, how does OAuth fit into that scheme? Do I make the user do the OAuth dance for the iPhone app? My web site? Both?
I'm a lot less concerned about the threat model where one side is your iPhone's persistent store and the other is Twitter. So, no, I think OAuth is probably unnecessary for actual iPhone apps.
Not really fair to include Wikipedia on this list. The Wikimedia Foundation runs a completely separate secure server that forces you to use HTTPS for authentication and article viewing. As far as I know, it works for every single Wikimedia project and language. Wikipedia also has a bolded section called "Secure your account" below the login form, with six security tips.
I guess you could send out a unique session key with the page, have the JS run AES or similar.
Wait, stop, crypto time.
I appear to be trying to design a protocol for secret exchange. I should just go and do some reading instead.
Do you know if there's any value in HTTP Digest auth these days? It's deployed in HTTP clients (but only with the MD5 algorithm?), and I think it requires the server to store a single hash of (username, realm, password), which you've eloquently pointed out in the past is a bad idea (a single fast hash of the password is vulnerable to brute force, since hashes are fast).
My current thinking is that HTTP Basic auth over SSL is preferable to Digest (as an auth method widely and natively supported by HTTP clients), since the password never crosses the wire in plaintext and the server can store a stronger hash (e.g. stretched SHA-256, bcrypt).
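For reference, this is what Digest auth actually computes (the qop-less RFC 2069-style form, for brevity). The point about server-side storage is visible in the first line of the function: the server can keep HA1 instead of the password, but HA1 is a single fast unsalted MD5, which is exactly the weakness mentioned above.

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username: str, realm: str, password: str,
                    method: str, uri: str, nonce: str) -> str:
    """RFC 2617 Digest response (without qop/cnonce, for brevity)."""
    ha1 = md5_hex(f"{username}:{realm}:{password}")  # what the server may store
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")
```

Anyone who dumps the credential database gets HA1 values that are directly usable for authentication against that realm, so storing HA1 is only marginally better than storing the password.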
The problem isn't crypto. The problem is that Javascript in the DOM model runs crypto code in an environment where you can't be sure what any symbol means, because any of a dizzying variety of page components or updates sourced on-site and off, under SSL and (most likely) not, could have modified them.
With current technology, default encryption massively slows down the web. This is Dan Bernstein's key research topic (I believe he's currently working on "High Speed Cryptography", his book, but that's in support of both his DARPA grant and his mission statement, which is "default encryption for everything"). That's a cryptographer, inventing and defending entire new algorithms, because that's what he believes is necessary to achieve default encryption.
In the meantime, what we really need to do is get rid of the stigma about SSL, which credibly solves the problems we're talking about on this thread.
So that's what that was... I couldn't read the Chinese, ran into that form, and then managed to switch over to a "normal" login box through random clicking. The normal one was insecure, but I'm impressed by the ActiveX control.
Well, an ActiveX login is now the de facto standard for online trading sites in China. The problem is that every site has its own ActiveX control; I myself must have over seven installed.
And they are IE- and Win32-only. That's why every Mac and Linux user has to run a VirtualBox/VMware VM for online trading.
I was hoping Google Gears could provide some sort of similar mechanism, together with CAPICOM exposure to Javascript across platform and browsers :)
The curious case was Zynga. They don't have a login form. (!)
Facebook, YouTube, Yahoo, Windows Live, Blogger, Baidu, MSN, QQ.com, Google, Twitter, MySpace, Amazon, Wordpress.com, Ebay, Rapidshare, yandex.ru, mail.ru, imeem, Flickr, IMDB, Craigslist, LinkedIn, AOL, Blogspot, deviantArt, PayPal, Alibaba, Ning, Hulu