HTTPS Everywhere will sunset in January 2023 (eff.org)
274 points by gslin on May 25, 2022 | 143 comments



This is the kind of sunset you love to see. Retirement because they succeeded and therefore became redundant, rather than due to failure. If there's one organization I love to see succeed, it's the EFF.


Eh, kind of. It's successful as far as the web goes, which is great. But public web pages aren't everything.

There's still a real issue with infrastructure that has web configuration. Everything from home routers to video cameras and so on. Not being able to ship with a certificate that passes browser security checks is a problem that essentially nobody has addressed.

When people connect to an IoT device, they need to be able to connect with a web browser and not jump through hoops to say, "No, really, I know that this is a secure connection." Because we can't keep teaching people to bypass secure-connection warnings when they should care about having secure connections. As much as I dislike IoT, this is an issue that needs to be easier than "understand cryptography configuration, generate your own keys, and install them wherever you need them".

That's all besides the vendor "solution" of "install this phone app that will maybe barely work except for the parts that track your data forever LOL thx sucker".


Right, but that's not HTTPS Everywhere.

HTTPS Everywhere was "this site already has HTTPS, and really should only use that, but doesn't, so we'll redirect you to the HTTPS version". Now sites that have HTTPS default to it, and browsers have options to basically try https first and see if it works.


Should have been called "HTTPS as long as it's already there".

"HTTPS Everywhere" is pretty suggestive. And the English meaning of the words might be a worthwhile goal too.


Was there any confusion about the plugin creating an HTTPS capability where one doesn't exist? I always understood the outcome to be very clear.


What more would you expect of a browser plugin?


Couldn't browsers just designate the .local TLD as exempt from SSL cert checks and enforce that it resolves to an IP on the current network? Seems like a simple solution for this.


Lots of local devices already do such a lookup and there's a whole class of vulnerabilities based on it (DNS rebinding and friends).

I'd personally classify 192.168/16, 172.16/12, 10/8, 127/8, fe80/10, and ::1 as local networks, but that's simply not always the case. There are tons of universities and even businesses out there that use publicly routable addresses for clients, and that approach is even the default for IPv6. You could be tempted to treat the current network's own range as "local", but there are plenty of networks out there where that would mark foreign hosts as "local". Then there are those who use 1.0.0.0/8 for local addresses because that subnet was previously unused and the 10 range already had a separate meaning.

Just verifying that .something (.local is already reserved; you shouldn't use it for internal device names even though it'll probably work) matches an internal IP doesn't add any security. You might as well mark HTTP to local IP addresses as a secure origin and not mess with certificates at all. I don't think that's a very good idea.

With IPv6, there's a solution to this problem. You can provision certificates to globally unique IP addresses and possibly their hostname. I don't think there's a solution for IPv4 on most local networks, though.


But .local is reserved for mDNS. The correct suffix for hostnames on a home network with nothing better configured is actually ".home.arpa".

However, precisely because .home.arpa domains are non-unique, it is forbidden by the relevant RFC to treat them specially for security. With a roaming device out on, say, a public attacker-controlled wifi, the name might resolve to something malicious under attacker control, and automatically trusting self-signed certs would make it more likely that attacks using such names could succeed.

If a mechanism for securely identifying exactly which home network you are connected to is eventually discovered, then this limitation can be lifted, with the user specifically whitelisting trusted home networks.


How do we define 'current network'?


That's a thing for local software to decide.

It's also something that can be done with a level of reliability that will impress the incredulous. But there will always be somebody complaining that it doesn't follow the standard recommendations.


In that case you don't need a certificate. Just check that the .local FQDN's IP is on a local network.

  1. Look up all local interfaces' networks (IP & subnet mask)
  2. Determine if .local IP is within subnet range of one of the interfaces
  3. If .local IP is not on a subnet of a local interface, drop the connection
This is still "host-based networking" and there's always the possibility a rogue network could be attached to your host, such as a spoofed open access point, or an ISP with lax network security and a customer on the same subnet being attacker-controlled.
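To make the check concrete, here's a minimal Python sketch of steps 1-3, assuming you've already gathered the interfaces' addresses and prefix lengths from the OS (the helper name and example values are made up for illustration):

    import ipaddress

    def is_on_local_subnet(resolved_ip, interfaces):
        # interfaces: list of (address, prefix_length) pairs from step 1
        addr = ipaddress.ip_address(resolved_ip)
        for if_addr, prefix in interfaces:
            net = ipaddress.ip_network(f"{if_addr}/{prefix}", strict=False)
            if addr in net:
                return True   # step 2: resolved IP sits on a local subnet
        return False          # step 3: caller should drop the connection

    # Host with eth0 at 192.168.1.17/24:
    print(is_on_local_subnet("192.168.1.50", [("192.168.1.17", 24)]))  # True
    print(is_on_local_subnet("203.0.113.9", [("192.168.1.17", 24)]))   # False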

What certs prove is "this host currently has a private key and cert, and at one time this private key was used to generate a CSR for this cert, and at one time it was validated that an IP resolvable by the domain name in the CSR was also controlled by whoever issued the CSR". It's a very awkward thing that doesn't really match up to local consumer devices.

We need a vendor key registry, the way MAC address prefixes are mapped to vendors, so at least we could say "the key on this host came from vendor XYZ". The browser would need to pop up a warning saying "WARNING: You are now connected to a local device from Vendor XYZ! If you did not intend to connect to a local device, close this window!" Updates to the vendor registry could also invalidate previous entries if old keys got compromised. But this would be in combination with the aforementioned "is the site on a local network?" logic. Anyone using a .local would have to both compromise a local network and steal a key from a vendor, or be registered as a vendor, and then the user would have to be dumb enough to click through a big warning about a local device.


It's not practical to hide vendor keys. Unless the key is embedded in a security chip at the factory, it's usually possible to extract it from the firmware. Even if the vendor CA is not compromised, stealing keys from a single device should be sufficient.


I was thinking more along the lines of signature verification, but I forgot that any spoofed device could just supply a signature it got out of a device, so I think you're probably right.


I wonder if the router or switch could be set to provide that information? Most modern routers have LAN info and could reliably inform a smart enough appliance that it is accessing something inside of the gateway address.


Indeed that's much better.

It's probably better if we terminate the trust at a TLS certificate or something similar to DANE. So there isn't any specialized registry to maintain.


It's security theater, it's not meant to solve real problems for real people.


Since everyone is using HTTPS, the problem has been gone for so long that you've simply forgotten it.


The year 2015 wants its flamewar back.


Were you thinking of this classic article [0] that gets linked whenever people talk about HTTPS?

If so, that's a 2017 flamewar.

[0] http://n-gate.com/software/2017/07/12/0/ (NOTE: this site tracks if you come from HN and sets a cookie. You'll probably need to open it in incognito)


Not entirely.

They don't provide an alternative to "HTTPS Everywhere User Rules", nor to adding exceptions for HTTPS-only sites (under "HTTPS Everywhere Sites Disabled"), both of which can be found in HTTPS Everywhere's options.

In Chromium I can only turn on HTTPS for all sites, without exceptions. It also doesn't let me allow mixed content (on certain sites).


Well, Firefox also has exceptions.


Plus:

* Long sunset period

* Instructions to make sure you can enjoy the same security after they sunset


That’s how I used to feel until I found out they accepted millions in donations from companies like Google and Facebook/their executives/their executives’ charities.

I’m just some guy on the Internet but IMHO their being OK with massive conflicts of interest means I no longer trust them.


Would you turn down millions of dollars in no-strings-attached donations if you were running a charity? What was the negative result of those donations that you can point to?

I'm not a fan of Google or Facebook, but I also don't outright boycott everything that they've ever touched. They donate money and developer time to tons of projects that I use regularly (including Linux), and it would be hypocritical of me to look down on the EFF for benefiting from the companies that I also benefit from.

I don't like the bad behavior from Google or Facebook, but I don't think donating millions to the EFF is bad behavior, nor do I think getting tens of thousands of pull requests merged into public FOSS projects from Google and Facebook employees is bad behavior. Black and white guilt-by-association doesn't work when you have companies of this size, with hundreds of thousands of employees, and tons of varying internal cultures.


It’s not bad from Google or Facebook’s perspective, it’s bad for the EFF. Accepting money from the organizations you’re supposed to be monitoring is a conflict of interest. You shouldn’t have to question whether that money affected their behavior towards those organizations and now you do. They should not have accepted it, arguably they didn’t need to.


What would you have them do? Not accept those donations, and be unable to work on as wide a range of issues that they do?

EFF works on a wide range of issues: anti-censorship, software patent reform, and online free speech, just to name a few. Many of those are areas where their values align with those of the Facebook/Google executives who donate to them. That they are able to receive those donations in spite of the areas where they disagree speaks to EFF's track record of being an effective and powerful force for good in tech.


Serious question: is there a popular charity that by rule doesn't accept donations from really rich people?


Then nut up and donate the millions yourself.


That doesn't follow. Do you really want to argue that the only way someone is allowed to disagree with millionaires is by being one themselves?


What's the EFF to do?

1. Achieve its mission with no money, or

2. Find money from sources deemed acceptable, or

3. Take money from sources deemed unacceptable

I'm OK with option 3, so long as it's transparent, because I agree with the agenda the EFF is pushing. If they start pushing an agenda I disagree with, I'll stop supporting them, whether or not they receive money from Google et al.


It doesn't seem to be mentioned by the EFF, but coincidentally, January 2023 is when Manifest v2 extensions stop working in Google Chrome: https://developer.chrome.com/blog/mv2-transition/


And it's going to be a very big moment, since V3 effectively bans ad blockers and website-redirect extensions.

It might increase Firefox adoption if it actually happens.


- bans adblockers that use their own matching engine

- bans website redirect extensions that can't use declarativeNetRequest action.redirect https://stackoverflow.com/a/66394857/3878893


The API provided with V3 is not sufficient to create an ad blocker with the same privacy protections as is currently possible with V2: https://github.com/uBlockOrigin/uBlock-issues/issues/338


You do realize Firefox is going to V3 as well?


Firefox is adding V3 support for compatibility with Chrome but not removing V2 support: https://blog.mozilla.org/addons/2022/05/18/manifest-v3-in-fi...


That's what they say now; they don't exactly have a good extension track record themselves.

I operate some popular extensions and now maintain both a v2 and a v3 build (via a makefile). I'd love to nuke the v2 build tasks and operate only in v3. Most users are on Chrome anyway.

If you're lucky, v2 will be kept around for legacy extensions that are rarely if ever updated.


V3 and banning request modification are two different things. Chrome still supports request modification in V3 but disallows extensions from using it (it can only be enabled via enterprise policy or the like), while Firefox has no plans like this (so far).


There's a piece of animation software that I use in my game development called Spine, and it's truly fantastic and the developers and staff are great... but the phpBB forums don't have https enabled. I've brought it up on those same forums[0], but I don't think they get why https is an important thing to turn on, even in 2022.

Turning on https mode in my browser brings up, as it should, a large error message saying that the site is insecure. I can't imagine that's a terribly good first impression, even though, again, Spine is one of the best animation packages out there.

[0] - http://esotericsoftware.com/forum/HTTPS-for-EsotericSoftware...


Not only do the forums have HTTPS disabled, but they expect you to download executables to run on your computer over HTTP. And, the kicker: they already have a legit HTTPS cert for the entire site: visiting on HTTPS redirects you to HTTP _facepalm_.

Never heard of Spine before your comment, but if I found this in the wild I'd assume it was amateur hour and turn back immediately.


A slight correction here, the download of the exe does take place over https. As does buying the software, and signing up to the forums. But everything else doesn't?

The software itself is some of the smoothest and most stable I've ever used. And when there's an update within the software, that, as far as I'm aware, takes place over https too.

But the rest of the site and forums, even when signed in, is http, and I don't really know why.


The download itself is over https, but the page where you click the download link is http.

If someone were going to MITM the executable, they can just MITM the download page instead and point the download button to their own server with the bad executable.


Exactly; this is why mixed content is problematic, and the raison d'être for HTTPS Everywhere.


> Turning on https mode in my browser brings up, as it should, a large error message saying that the site is insecure. I can't imagine that's a terribly good first impression, even though, again, Spine is one of the best animation packages out there.

A site being on HTTP isn't necessarily insecure. That warning is inaccurate. It's more about creating censors and gatekeepers in the form of certificate authorities.

(Debian packages are still served over HTTP and are secure with no certificate authority. Try to figure that out!)


Debian packages are verified via a separate mechanism after download. The only verification method included in your web browser is HTTPS.


Debian ships with its own signing keys to authenticate the packages that it downloads. They are acting as their own CA.

This isn't scalable to the web.
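For the curious, this is roughly what apt-secure does under the hood when it checks a mirror's signed index; a hedged sketch using the stock Debian keyring path (which varies by distro):

    # Verify a downloaded InRelease file against Debian's archive keyring
    gpg --no-default-keyring \
        --keyring /usr/share/keyrings/debian-archive-keyring.gpg \
        --verify InRelease

If the signature doesn't check out against the locally shipped keys, apt refuses the repository metadata, no CA involved.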


Whether or not it's scalable is orthogonal to the question. A browser would call Debian's repos insecure despite the fact that they are secure by other means.

vv: whether it's authenticated by TLS or PGP is literally isomorphic, except one is centralized in CAs and one is decentralized with a web of trust. That's the only difference.


Because they are. Your web browser has no way to validate the authenticity of any content served by a Debian mirror. This is very much done that way because anyone can run a Debian mirror (or indeed a mirror for almost any distribution, which all authenticate their packages in a similar manner).

Nothing stops an admin running a repository mirror from choosing to make it serve malicious content, so the downloads need to be authenticated out of band. This is the very definition of insecure.


A web browser would be correct. It is insecure, because the browser could not secure it. Therefore it could be showing data that has been compromised. Just because it is secure when apt pulls the package doesn't magically make the web browser's view of the data secure.

TLS and PGP may be isomorphic, but the browser only has access to TLS. Therefore things secured by PGP are not secure in a browser; not because there is anything wrong with PGP, but because the browser is incapable of checking it.


> Whether or not it's scalable is orthogonal to the question

If it's not scalable to the web then it's not a replacement for the current solution.

> whether it's authenticated by TLS or PGP is literally isomorphic

PGP is a viable solution in this scenario because there is a single organization signing a known set of packages. It doesn't provide confidentiality and can't authenticate arbitrary responses. A narrow use case that doesn't require a CA does not make a strong argument that CAs are unnecessary in general.


Just because you can separately verify that something is secure later does not mean that it's secure at the time of the download. Browsers show that a website is securely encrypted once they've been able to verify it. It's entirely proper for them to show that a download is insecure before they've been able to verify it.

If you get a phone call from an unknown caller, do you get mad at your phone for listing that as an unknown caller just because you could theoretically add that number to your contact list later? Or to talk directly about PGP, do you get mad when you see an untrusted/unknown key warning because theoretically you could add that key to your keyring later after verifying it yourself? Of course not.

The browser is telling you that at that moment, the code/download you are getting is insecure. If you take that code/download out of the browser and verify it separately, then great! But that doesn't mean the browser was wrong. If anything, insecure connection prompts should be a welcome reminder to you that you need to externally validate the data you're receiving.

> That's the only difference.

The difference is that one of them happens inside the browser, and one of them doesn't.


> A site being on HTTP isn't necessarily insecure. That warning is inaccurate.

The actual semantics of HTTP are very surprising to humans and this is a problem.

We have a whole bunch of systems, including some that are key to making HTTPS work, such as OCSP, which rely on plain HTTP. But those systems know about its semantics and account for them in how they work, while ordinary users do not and shouldn't be expected to learn them.

HTTPS delivers much closer to the semantics people actually expect, with the remaining exception being that people are often surprised that McDonalds.phishing.example isn't necessarily anything to do with McDonalds.


That was a good extension for a specific time. I stopped using it about 5 years ago when everyone had been pushing HTTPS hard, and Let's Encrypt had become popular. I didn't notice any websites not using HTTPS, so I didn't look back.


http://neverssl.com remains for those badly-setup wifi networks.


Or alternatively http://httpforever.com/


Similar to HTTPS-only being built into web browsers, isn't captive portal detection built into all modern OSes? What's the badly-setup wifi network that requires you to open your browser but doesn't get detected properly by the OS as a captive portal?


It isn't so much "badly" set up as maliciously set up. Captive portal vendors like to find ways of defeating the operating system's captive portal detection, because it hinders their ability to show ads after the captive portal login process.


Many don't because the captive portal works differently than the regular browser, I'm guessing. If you exit the captive portal on iOS (say, to look up a password), you can't just go back to it; you have to rejoin the network and start over.


It's funny that if I navigate to this site with HTTPS-only mode, it redirects to an HTTPS page.



Beat me to it. Beats the hell out of port-scanning the subnet at the airport.


Thanks! I'll stop using captive.apple.com now that I know about this!


You can also use example.com. I find that easier to remember.


The example names are just examples; it so happens that today they have a web site and it doesn't use HSTS. Tomorrow maybe they have HSTS. The week after, maybe they get rid of the web site. The names are merely examples, so it's probably a bad idea to use them for any other purpose.


Cool, they even generate subdomains dynamically to avoid caching


There are still places that don't auto-redirect to https for some reason. Maybe a temporary misconfiguration. I just ran into http://www.mbsonline.gov.au today and was surprised it didn't redirect and has broken https, even though that org really shouldn't do that.


A redirect itself is considered bad by some because it encourages relying on http in the first place (imagine always hitting a web site through http and expecting a redirect instead of going to https directly).

As soon as you do that, you are similarly prone to MITM attacks: on an insecure network, you hit an http site and it replaces the redirect with one to a https site that looks like the same domain but really isn't (with a valid cert for mbsonline.gov.au.foo.io).

For my personal sites, I prefer to go with no redirects, and if a site really processes data, https-only.

For users, the right solution is for browsers to always attempt https first, even though a site could technically serve entirely different content over each protocol (that would be one hell of an anti-pattern though, so I wouldn't worry about it).


A permanent redirect, along with setting the correct headers, is the best you can do on the server side.

Yes, the user is using insecure practices, and no, you can't correct it on your server.
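As a sketch of that setup in nginx (hostname and cert paths are illustrative, not anyone's real config):

    server {
        listen 80;
        server_name example.com;
        # 301 = permanent: browsers cache it and skip port 80 next time
        return 301 https://example.com$request_uri;
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        # The "correct headers": HSTS tells returning browsers to go
        # straight to https without ever touching http
        add_header Strict-Transport-Security "max-age=31536000" always;
    }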


I am arguing that the best you can do is not even have HTTP enabled.

A permanent redirect is the second best ;) And yeah, most practical for commercial entities (which is why I highlighted what I do for my personal sites).


No, you're actually less secure without HTTP. Scenario: someone figures out how to MITM your http traffic. You visit the page:

For the first time - you're affected either way.

For the second time - if you still have the permanent redirect cached, you avoid the mitm.

Disabling your own http only helps the attacker in specific situations.


My point is that you want to "educate" visitors to never use HTTP (e.g. for their bookmarks and typed-in URLs), which I hope is effectively achieved by not having an HTTP web site even on a connection you trust (e.g. your local network). If you succeed in "educating" them, at least for your web site, they are similarly protected when they come back, including from another device. If they only rely on their browser caching the permanent redirect, they are protected for a short time on the same browser.

But ultimately, browsers have to default to HTTPS for a guaranteed "protection" from the start (whether that's through HSTS preload lists or simply by defaulting to HTTPS, does not matter much).


The best you can do is set HSTS and redirect from http to https.

You not serving http won't prevent a MITM from serving http to the victim.


There's a corner case where users who have never visited your domain (either ever or just on that browser) do so for the first time over a malicious connection. In this case, Mal sends them a page that looks reasonable, they create an account or login or whatever, Mal gets the password and probably proxies the requests to the origin domain (properly over https so even if the site operator redirects all pages to https, this still works) so that the user gets the email confirmation or whatever.

A tiny bit better is to submit your domain to https://hstspreload.org/ so that (major) browsers force https on the first connection. You should still set the HSTS header of course, it's a requirement for inclusion in the preload list and it should also catch people who have a browser from before your inclusion in the list or browsers that don't support the preload list at all.
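For reference, the header the preload list requires looks like this (a max-age of at least one year, plus both tokens):

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload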


You could also choose names whose parent domain is already HSTS pre-loaded. The .dev TLD is pre-loaded for example.


What happens if your browser (eg. new device, another browser, first visit) does not have a record of HSTS data for a domain, and you visit it by going to HTTP on an insecure network?

Can't they similarly serve HTTP data without any HSTS headers? Or do browsers also check HTTPS for any HSTS headers on the same IP and cross-compare?


I've been using HTTPS-only mode on Firefox for many months now. The place where I see HTTP links most often is email tracking links. I'm commonly automatically upgraded to HTTPS and it works but very often there is no HTTPS support at all. Even for sensitive things like password resets that have secret tokens in the URL.


A few sites I run into once in a while have the following bad setup, which HTTPS-only flags (because it's actually unsafe) but looks normal to most people:

1. http://www.example.com/ exists and redirects to http://example.com/

2. http://example.com/ also exists and redirects to https://example.com/

3a https://example.com/ works fine but

3b https://www.example.com/ does not exist

4. External links go to http://www.example.com/stuff/goes/here

You will also see the mirror image mistake (www.example.com is canonical, but the redirects go from example.com only on HTTP) at similar rates.
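An easy way to spot either variant is to walk the redirect chains yourself, e.g. with curl (example.com standing in for the real domain):

    # Print each hop in the redirect chain
    curl -sIL http://www.example.com/ | grep -i '^location'
    curl -sIL http://example.com/ | grep -i '^location'
    # Confirm whether the https www variant exists at all
    curl -sI https://www.example.com/ | head -n 1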

This is all because Tim chose not to rely on SRV records to make his toy hypermedia system work, and decades later we're still paying for this mistake (among others).


One website not supporting HTTPS which caught my eye is http://paulgraham.com/


I'm surprised by the number of German websites (.de) I've encountered this year that don't support HTTPS. I think they're all small businesses or personal sites, but still unexpected.


Whew, for a sec I confused HTTPS Everywhere with Let’s Encrypt! I’m glad Let’s Encrypt isn’t going anywhere :)


Noob question: if I have a personal HTTP website running out of GCP cloud storage (without the load balancer, bells & whistles), is it possible to upgrade it to HTTPS so that visitors don't get warnings?


Looks like you can either do this by referencing static pages with HTTPS directly or by putting a load balancer in front of your site:

https://cloud.google.com/storage/docs/troubleshooting#https

My shared hosting provider (Cooini) gives an HTTPS option out of the box. It used to be a paid option that required upgrading to dedicated hosting, but they changed it years ago to a simple cost-free toggle in their dashboard


Interesting. Just a couple hours ago, I was reading their post (2016) on how they were sunsetting their canary watch program, saying that it had achieved the goals they had set out on for it (internally, my knee-jerk reaction was "...what?").

Of course, I'm not as incredulous this time around over HTTPS Everywhere.

https://www.eff.org/deeplinks/2016/05/canary-watch-one-year-...


Great news. Now the only required extensions are an ad blocker, Tampermonkey, NoScript, Multi-Account Containers, a container proxy, Tree Style Tab and Auto Tab Discard.


Great job on this. The sunset in this case means it worked. Everyone benefits from the work done here and will continue to do so. Love to see it!


Is https enough or do we also need hsts? And how does QUIC fit into all of this?


Pretty much all browsers will try HTTPS first if you type a URL without a protocol. HSTS only practically helps in the case where you have URLs that have the wrong protocol.

QUIC and H2 are both always encrypted. But since QUIC is purely supplemental at this point, it really doesn't factor into anything relevant here.


The reason to use https is because we assume there could be a malicious party between you and the site you're visiting.

If we assume that, we should further assume that they will inspect SNI in your https request, see that you're visiting some domain they are interested in, and just block that request (or creatively fail to make a good https connection, to arouse less suspicion), causing your browser to helpfully fall back to plain http. But this is exactly the behavior we don't want.

HSTS (or firefox's force https setting, probably similar settings exist for other browsers, or HTTPS Everywhere's strict setting) makes your browser insist on using https for connections to that domain, and showing an error page if it's not possible.


What I don't get is what is the purpose of HSTS if HTTPS is enforced in the first place. Several (most?) security checkers will warn if HSTS is not enabled for a site. From the tests I have done it seems impossible to get QUIC and HSTS to even work on the same site.


> From the tests I have done it seems impossible to get QUIC and HSTS to even work on the same site.

I don't understand, maybe you need to explain "the tests I have done" here ?


Enabling HTTP3 and HSTS on major cloud platforms like Cloudflare (Enterprise) and so on only seems to enable QUIC/HTTP3, and the HSTS headers are nowhere to be seen. Is HSTS even a thing?


Now, if only we could become better at securely managing keys.


Can we also sunset animated GIFs, while we’re at it?


I think it still has value. A lot of sites have https, but users or resource fetches reaching them over http still aren't redirected to https.

If the internet is navigable while blocking plain http at your host firewall, then we don't need this. But I do get that it takes time and resources to maintain their list.


The article talks about how to get this HTTPS by default functionality in all major browsers. I don't see how your complaint about not being redirected to HTTPS is relevant with those settings enabled.


There's still plenty of useful websites that are HTTP-only or that have HTTPS misconfigured. The HTTPS-only mode will break them, whereas HTTPS Everywhere just works.


Exactly this. In Firefox, the suggested replacement is to use the force https setting, but that is for the whole browser instance instead of the domain of the current tab. If I disable force https to make some site work, I have to remember to turn it back on when I'm done, and if I'm going back and forth between sites on different tabs/windows where some support https and some don't, it's a huge pain.

I don't even know how disabling it affects already-loaded pages in other tabs. If each page/tab caches the setting on page load and still respects it when it gets changed, maybe that's fine... but if js-driven background requests on all my other tabs suddenly don't force https when I disable the setting to visit paulgraham.com, then that's not a good replacement for https everywhere.


+1. I was having this sort of issue setting up an Android tablet as a wifi CNC pendant. The controller is based on an ESP32 (which has limited storage for a full cert chain) and I run it without a gateway address on its own VLAN. I'm certain it's "secure enough".

Turns out Android + Firefox mobile keeps trying to turn "http" to "https" and at the same time specifying the IP + port explicitly in the URL is considered malicious by Firefox.

"This address uses a network port which is normally used for purposes other than Web browsing. Firefox has canceled the request for your protection."

I had to run a development version of Firefox just to be able to get to "about:config" to disable the behavior.


But why pick a port that's never traditionally used for HTTP?

Firefox isn't just assuming all ports are prohibited; it has a relatively short list of ports where we know it's crazy to use HTTP, and where people sometimes do so to exploit protocol parsing issues, so it forbids those.


There is still a "use HTTP" button that shows up on these pages. So it just makes me aware of the insecure connection; it doesn't actually prevent me from using any HTTP sites that I choose to use.


Safari's HTTPS upgrade option actually addresses that issue, fwiw


HTTPS Everywhere being available in browsers as an option is great. HTTPS Everywhere being promoted as something you should have on by default is bad. HTTPS, like much else, relies on incorporated entities as certificate authorities. And that would be fine if browsers were only for commercial interactions with businesses.

But through a combination of centralization in a few CAs (everyone uses LetsEncrypt now) and browsers shipping HTTPS-only, we are now entering an age when you can only host a visitable website at the continued whim of some external corporation. LE may be a benign dictator for now, just like dot-org was, but the more people use it and the more centralized it becomes, the greater the pressures on it, both corruption from within and political attack from without, to allow some but not others.

HTTP only is okay. HTTP+HTTPS is great. HTTPS-only is the end of the web for human persons and the beginning of the commercial only web.


HTTP is not ok. Anyone can read / modify what is being sent. This privacy intrusion will definitely happen, whereas the risk of being banned by "some external corporation" is low. And you always have the option of self-signing your own certificate, which is at least as secure as using HTTP, and much more secure if you can verify the certificate via a side channel.


It's like wearing a bulky level 3 bullet proof vest while you're at home cooking dinner. Yeah, it's keeping you safer. There's no doubt about that.

The real dangers on the web come from the insane behavior of running arbitrary code sent to the browser from anywhere, like opening every email attachment you get sent. NoScript with temporary whitelisting only provides a lot more safety than HTTPS Everywhere and doesn't give all power to a few corporations.


You need HTTPS to even begin trusting remote code. For instance, you download uMatrix to set up a whitelist. Where did uMatrix come from? If you downloaded it over HTTP, then you could be running anything. Even if you have a checksum for uMatrix, you can't trust it if you got the checksum over HTTP.

Now let's say you installed uMatrix and you want to trust a script. Well, how do you know that the script you downloaded came from the URL you've allowed? If you've requested this script before, then you can use a content hash, but if not, then you're basically blindly trusting that no one has tampered with the data.
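To illustrate the checksum point with a quick Python sketch (filename and hash are placeholders): the verification itself is trivial; the problem is that the expected hash is only as trustworthy as the channel it arrived over.

    import hashlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    # If this string came over plain HTTP, an attacker who swapped the
    # download could have swapped the hash to match, too.
    expected = "0000...placeholder..."
    if sha256_of("uMatrix.xpi") != expected:
        raise SystemExit("download does not match the published hash")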


> Where did uMatrix come from?

But is uMatrix to be trusted?

Can you trust uMatrix developers?

I have bought a pair of shoes from an HTTPS-only web site; the shoes never arrived. HTTPS apparently can't fix everything.

Trusting trust has been a problem since computing was invented. [1]

[1] WARNING! PDF! https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...


Without some kind of baseline for secure connections you can't even start approaching the problem of trusting trust.

Yes, it's hard to figure out whether or not to trust uMatrix. But I'd rather not make that even harder by allowing basically anyone to intercept and modify the code that uMatrix is sending at any time.


I send you a file over HTTP and then I send you an SMS with the sha256 hash that you can verify

Even if SMS is "easy" to intercept, even if I send you an email with the SHA256, using a side channel greatly improves security, unless you're being targeted on multiple channels, which is quite uncommon.

For the same reason, even an MD5 hash is good enough most of the time. Even though it was proven insecure years ago, it's secure enough to trust that in the general case the odds of generating the same hash while also creating something that is not simply broken but actually malicious are very, very low, and doing so requires effort.

The point is: HTTPS still relies on trusting things we can't control and will probably never control.

We are forced to trust HTTPS because there is nothing more we can do.

> But I'd rather not make that even harder by allowing basically anyone to intercept and modify the code that uMatrix is sending at any time.

But the reality is that it is hardly "everyone"

A MITM must be, by definition, in the middle.

Debian survived for decades delivering their packages over HTTP


> and then I send you an SMS with the sha256 hash that you can verify

Your blog isn't doing that, and none of your readers are verifying the hashes. If signature checks aren't being done automatically within the software, then for all practical purposes they might as well not exist for the vast majority of people.

Sending an SMS with a sha256 is bad security because for the average person it is equivalent to just sending them an SMS message. They will not verify the sha256 and it is impossible to train them to. And sure, I use SMS for some stuff, but I don't pretend that it's secure or that encrypted messaging doesn't matter because trusting trust isn't a solved problem yet.

> Debian survived for decades delivering their packages over HTTP

No, Debian used what is essentially their own certificate authority (PGP and Web of Trust). Your blog does not have the same security guarantees as Debian's package manager, and the fact that Debian relies on package signing using PGP and Web of Trust is strong evidence that they do believe that in-software automatic verification of message integrity is important for delivering code.

Everyone acts like Debian is proof that you don't need security, but package managers throw error messages when package signatures are wrong, just like your browser does when it can't verify an SSL cert. Because in both cases, verifying message integrity matters, even though it doesn't get rid of the entire problem of trust on its own.

> But the reality is that it is hardly "everyone" [...] a MITM must be, by definition, in the middle.

It's enough people. It doesn't have to literally be everyone on the entire planet. You should not be multiplying your attack vectors unnecessarily when there are trivially implemented, free, widely available methods for removing those attack vectors.

----

What a lot of this boils down to:

> For the same reason, even an MD5 hash is good enough most of the times.

There is a difference between leaving your shed unlocked because you're hoping your neighborhood is nice and no one will steal your lawnmower, and going onto a forum and arguing that locks are unnecessary because you leave your shed unlocked.

What you are essentially arguing here is not that HTTP is secure, you are arguing that bad security is OK. Which, fine, make that argument if you want, but "I don't need to be secure" is very different from "HTTPS doesn't matter for security."


How did we exchange phone numbers? I need to know your phone number to verify the hash I receive via text message.


> How did we exchange phone numbers?

I used HTTPS so our common friend at NSA knew what it was and sent a pigeon to your house.

How do you know that once you verified the hash, the software is safe?


Ok, so the NSA could have replaced the phone numbers we sent each other, which means we can't be sure the NSA isn't intercepting our file transfer later, which is the exact same issue we would face had we used HTTPS to transfer the file, except we did a lot more work (e.g. exchanged numbers, texted file hashes, compared checksums, etc.).

btw, citation needed that the NSA is able to decrypt https traffic.


> Ok, so the NSA could have replaced the phone numbers we sent each other

What if we're living in a black hole and our universe is a white hole?

> except we did a lot more work

Or we trust the sender to not be malevolent and/or compromised, like Debian did for more than 20 years.

I don't understand why people live like they are surrounded by enemies in enemy territory, given it's not usually the case.

HTTP is perfectly fine, unless you have reason to not use HTTP.

Such reasons exist, but they're not always present.

Also: 99% of corporate networks install self-signed certs and override the certificate authority, so traffic is inspectable.

Some mobile network operators do the same things when they sell to customers "secure network" upgrades.

HTTPS is not a panacea.

You simply trust a different set of "authorities" that none of us know or control.


> Or we trust the sender to not be malevolent and/or compromised, like Debian did for more than 20 years.

I mentioned this above, but when was Debian ever doing this? I don't think I've ever used a package manager that wasn't at least making some attempt to secure against MITM attacks and compromised CDNs.

> Or we trust the sender to not be malevolent and/or compromised

That's not what HTTPS protects against, HTTPS is designed to protect against MITM attacks. It doesn't inherently have anything to do with verifying identity, and we've actively moved away from identity verification in the SSL world, because the companies trying to do identity verification to determine which certificates were "verified" added very little security to the process and were mostly a waste of money.

LetsEncrypt pretty much only cares whether or not you control the domain you say you control. It doesn't verify your identity past that point, because that's not its job. It solves a specific problem.

> Also: 99% of corporate networks install self signed certs and override the certification authority, so traffic is inspectable.

I don't personally like when companies do this, but it's not breaking HTTPS. If the user or device owner imports a certificate authority, then the browser should trust it. Isn't that the whole criticism people have with HTTPS, that they don't like gatekeepers? They should be happy that device owners can swap out certificate authorities.

> HTTPS is not a panacea.

Carrots and spinach are not a health panacea, but that doesn't mean I'm obligated to eat dirt instead.

There are definitely alternative schemes that could be used in the browser other than HTTPS. It's OK if you don't like HTTPS in specific. But that doesn't mean HTTP is secure.


> I mentioned this above, but when was Debian ever doing this?

They are still doing it!

On my system (this one's not Debian, but KDE Neon)

    $ cat /etc/apt/sources.list
    deb http://archive.ubuntu.com/ubuntu/ focal main restricted universe multiverse
    deb http://security.ubuntu.com/ubuntu/ focal-security main restricted universe multiverse
> That's not what HTTPS protects against, HTTPS is designed to protect against MITM attacks

That needs a MITM.

Which is a specific type of attack.

If you are running a network where that's not possible, you're fine.

> LetsEncrypt pretty much only cares whether or not you control the domain you say you control

LetsEncrypt is pretty much an advanced tool, for advanced users

> I don't personally like when companies do this, but it's not breaking HTTPS.

It's breaking confidentiality.

Which is one of the features of HTTPS

In a corporate network MITM attacks are not that easy to pull off.

So basically HTTPS main feature is encryption of content in that context.

> Carrots and spinach are not a health panacea, but that doesn't mean I'm obligated to eat dirt instead.

first of all, carrots and spinach are a panacea.

Secondly, HTTP is not dirt.

Like SMTP is not dirt and IRC is not dirt and FTP is not dirt and TFTP is not dirt

> But that doesn't mean HTTP is secure.

Nobody said HTTP is secure, but it's not inherently insecure; like every plain-text protocol, it's plain text.

It's the network that is insecure; MITM is not exclusive to HTTP.

If the network is secure, HTTP is secure.

In my kubernetes clusters, TLS is terminated at the load balancer and pods talk to each other using plain simple HTTP.

Wasting energy on useless cryptography "just because" is not very smart IMO.


> They are still doing it!

No, they're not! Debian uses PGP and signature verification to protect against MITM attacks because it is critically important for software downloads to be protected against MITM attacks.

Now, Debian does not use HTTPS in specific to protect against MITM attacks because they have another method baked into the package managers that people are using. It does not follow that HTTP is secure, it's not. It follows that you can use an insecure protocol to deliver software if you bolt the same security features on top of it.

You're looking at someone who has come up with an alternative way to protect users from MITM attacks and thus doesn't use HTTPS, and the conclusion you're drawing is, "I don't need to be concerned about MITM attacks." That's not the right lesson to draw from this.

Is your publicly facing site being accessed by software that automatically checks data integrity using a set of pre-shared keys? No, and no normal person is going to manually do that check when they visit your site, so you need HTTPS.

> If you are running a network where that's not possible,

If your website is accessible to normal browsers on the public Internet, then you're not on a network where that's not possible.

> LetsEncrypt is pretty much an advanced tool, for advanced users

Most free hosts I've looked at recently from Github pages to Netlify have automatic background SSL management for free with zero configuration from the user. If you're running your own server, then you're an advanced user, but even in that situation LetsEncrypt is one of the easiest SSL setups I've ever used.

Any situation where you're using a host who can't manage SSL for you is also probably a situation where you're using a host who can't manage things like DNS or hosting for you, at which point, yeah, I expect you to be able to run a command line tool. If you can install Wordpress on a server, you can use LetsEncrypt. If you can't install Wordpress on a server, you should be using a managed host, and they should install LetsEncrypt.
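For a sense of the effort involved on a self-managed box, a typical run looks something like this (domain hypothetical; certbot also sets up automatic renewal):

    sudo certbot --nginx -d example.com -d www.example.com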

> It's breaking confidentiality. Which is one of the features of HTTPS

No, HTTPS encrypts data according to the certificate authority. While I don't like networks effectively doing MITM attacks on their own users, it is not the fault of HTTPS that it trusts the authorities that the user's device tells it to trust, any more than it's the fault of Debian if you import an untrustworthy key/repo into your package manager.

I was kind of leaving privacy off the table here since you were championing Debian and Debian has substantial privacy issues with software downloads. But if you want to go down that route, then sure, another weakness of HTTP is that it allows tons of network snooping that really shouldn't be possible, even for "innocuous" sites that claim they don't have private data on them.

> first of all, carrots and spinach are a panacea.

This is a complete sidenote, but either you don't understand what "panacea" means or you're at risk of Vitamin B12 deficiency. Carrots and spinach are not cure-alls for health, no.

> If the network is secure

It's not. Not if it's accessible from a browser on the public Internet.

> Wasting energy on useless cryptography "just because" is not very smart IMO.

Rejecting basically free cryptography that substantially improves security just because people are determined to be the last holdouts on a move that pretty much every security professional recommends is cutting off one's nose to spite one's face.

People being so concerned about centralization that they start sending messages in plain-text and start arguing with people online that sending messages in plain-text is somehow better for decentralization is at best misguided.


> Now, Debian does not use HTTPS

so YES, they are still doing it.

thanks for confirming.

Anyway, how do you download those PGP keys, in your opinion?

http://subkeys.pgp.net/keys/ http://pgp.mit.edu/ http://keyserver.ubuntu.org

Why do they do it?

Because this way the heavy cryptographic computation is done on the client, saving compute cycles on the package mirrors, which more often than not are kept alive by volunteers using their own money and time.

> If your website is accessible to normal browsers on the public Internet, than you're not on a network where that's not possible.

*If you are running a network where that's not possible,*

again, thanks for confirming it.

Anyway, I might not care about MITM attacks if my website is my personal blog, running on a Pi in my living room.

It's the network's job to ensure security from MITM, not the website owner's.

> Most free hosts I've looked at recently from Github pages to Netlify have automatic background SSL management for free with zero configuration from the user

And now you have to trust them too, it's not "your personal website" anymore.

Talking about trusting trust... your solution is more delegation to unknown entities.

Doesn't sound so convincing to me.

> No, HTTPS encrypts data according to the certificate authority

No, HTTPS works with self-signed certs too; it's browsers that don't want you to, and they show a scary popup like this:

https://help.univention.com/uploads/default/original/2X/5/5e...

Which is not even true; the connection is private, it is simply privately private and not "trusted" by any known CA.

Which need not be a problem if I trust the website and whoever runs it.

But, again, the scary warning page will scare people away, while the connection is working as intended.

> I was kind of leaving privacy off the table

You are confusing privacy with secrecy.

HTTPS is not private, the connections being made are still visible.

> you don't understand what "panacea" means

I'm Italian with Greek roots, we literally invented the word.

Its original meaning is "plants that cure everything".

Hence my remarks.

It's still a plant name! (https://www.google.com/search?q=p%C3%A0nace)

> Rejecting basically free cryptography that substantially improves security

Yeah, I imagine you lock yourself in your house and set the alarm every time you exit from a room and then disable it every time you enter, after unlocking the door.

I'm sure you do it, it's free, it improves security!

> People being so concerned about centralization that they start sending messages in plain-text and start arguing with people online that sending messages in plain-text is somehow better for decentralization is at best misguided.

I know how you feel in your paranoia-fueled world and I feel for you, but I tell you, dear friend, that you're wrong.

Nobody said what you're saying they did.

Using the easiest solution available given the circumstances is simply a sign of intellect.

You're not stupid, I'm sure of it, so why are you acting like you are?


> Anyway, how do you donwnload those PGP keys in your opinion?

You are completely missing the point. Debian does key verification. It does not naively trust that nobody has MITMed the connection. It's not just sending you code and keys and then shrugging and saying, "well, the network should have made sure they were good, so we're just going to run the package."

Debian is not an argument for disregarding MITM attacks. Debian cares about MITM attacks. Debian is also not an argument for ridiculous claims like "it's the network's job to protect me." Debian does not trust the network.

----

> Which is not even true, the connection is private, it is simply privately private and not "trusted" by any know CA. But, again, the scary warning page will scare people away, while the connection is working as intended.

The browser can not guarantee that connection is private. The reason why a browser shows a warning for a self-signed certificate is because there is no guarantee that the certificate itself is not a MITM attack.

This is entirely sensible and it would be grossly irresponsible for the browser not to show a warning. The way you get around that warning is you manually verify the certificate in a way the browser can see, and once you've done that, the warning goes away.

Note that Debian does the exact same thing, if you download a package that isn't linked to a key that your package manager knows to trust, it gives you a warning. Because of course it does, it would be ridiculous for it not to.

----

> Talking about trusting trust... your solution is more delegation to unknown entities.

What are you talking about? If you are using a managed host then you are already trusting that website to send files and host files. It's ridiculous to act like them also holding a certificate somehow makes you more dependent on them. Having a web host manage your SSL adds no additional dependencies or vulnerabilities to a managed site.

It's like using a managed host to serve HTML files and then getting upset that they also serve CSS files. It doesn't make any sense, you are trusting that website to send files. Not everything is a conspiracy to take away control. It's still "your website" to the extent that you trust "your website" to be hosted on somebody else's computer.

And yes, I am proposing delegation, because if you don't know how to do security then the responsible thing to do is to delegate to someone who does know how to do security.

On the other hand, if you are self-hosting off a Raspberry Pi in your living room, then no, I completely disagree that LetsEncrypt is too complicated for you to set up, or that it adds any substantial increase in complexity or difficulty to that self-hosted website, and I consider it to be pure FUD to try and claim otherwise. If you know how to set up DNS for a self-hosted website, then you are capable of setting up LetsEncrypt.

Now, if your website is only accessible within your NAT, then do whatever the heck you want, I couldn't care less. Send your data over unencrypted Bluetooth for all I care. But it's still not fantastic security and also why the heck are you coming onto public forums and arguing about what public websites do? If you're on a public network, you have to care about MITM attacks.

----

> Its original meaning is "plants that cure everything".

Spinach and carrots do not cure everything. They aren't even a nutritionally complete meal on their own, let alone cure every disease.

----

> Yeah, I imagine you lock yourself in your house and set the alarm every time you exit from a room and then disable it every time you enter, after unlocking the door.

Hey, at least I didn't remove the locks from my car doors because I was scared that the Toyota was going to use them to take control of my car.

The argument here is not that anything that increases security is good, the argument here is that when you have an effectively zero-cost security improvement for most people, and that security improvement protects people against real attacks that have been regularly exploited by both criminals and governments -- maybe it makes sense to turn those security measures on.

Metadata privacy as well as general browsing privacy is increasingly being shown to matter more and more online, not less and less. When we're in a situation where increasingly it's becoming easier and easier to use metadata in nefarious ways, it actually makes a lot of sense to just universally protect metadata and browsing privacy.

On that note:

> if my website is my personal blog

Imagine being so confident that your blog will never give important enough advice or say anything controversial or impactful enough to warrant encrypting it. Imagine being so confident that your blog will always be so mainstream that readers will never have a reason to hide that they're looking at it.

And then imagine thinking that a blog that innocuous and uncontroversial would ever need to care about how it's hosted or who's doing the hosting. If you're so convinced that nobody will ever care enough about what you write to warrant encryption, then stick your blog on Github pages and save yourself the extra electricity costs, because nobody is going to care about censoring something that isn't important enough to warrant encrypting.

----

> Use the easier solution available given the circumstances is simply sign of intellect.

There is no good reason not to have encryption on a publicly facing website: it costs nothing, it has substantial upsides, and the only reason not to do it is stubbornness.

The only arguments I've heard against encryption are centralization and complexity, neither of which are good arguments against encryption. Relying on a network for security makes your website less portable and increases your reliance on 3rd-party infrastructure far more than HTTPS does. Out-of-band verification is far more complicated and computationally expensive than in-band message verification, both in computing terms and for your users. Neither are good arguments.

It's just stubbornness, paranoia that LetsEncrypt secretly has some plan to control the world, and outdated attitudes about the computational costs of TLS. I'm not going to pretend it's a respectable security position; the entire security industry has rejected these arguments against HTTPS.

If you're that concerned about Internet centralization or about companies taking away control of "your website", then honestly, go complain about DNS in general; current DNS systems have way more impact on centralization than encryption does. Being anti-HTTPS is such an arbitrary, misguided hill to die on.

----

> It's the network job to ensure security from MITM, not the website owners'.

This is a bad security policy and should be discouraged.

If you can guarantee security within a network, great. But effective security is not about playing a blame game; if you are on a public network transmitting data then it is your job to care about MITM attacks. Security at the network level rather than at the message level has been consistently shown to be bad policy pretty much across the board. It's why we've started to move towards E2EE in many situations.


Funnily enough, LetsEncrypt knows this, and in fact they use HTTP [1]:

This is the most common challenge type today. Let’s Encrypt gives a token to your ACME client, and your ACME client puts a file on your web server at http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN>. That file contains the token, plus a thumbprint of your account key. Once your ACME client tells Let’s Encrypt that the file is ready, Let’s Encrypt tries retrieving it (potentially multiple times from multiple vantage points). If our validation checks get the right responses from your web server, the validation is considered successful and you can go on to issue your certificate. If the validation checks fail, you’ll have to try again with a new certificate.

Our implementation of the HTTP-01 challenge follows redirects, up to 10 redirects deep. It only accepts redirects to “http:” or “https:”, and only to ports 80 or 443. It does not accept redirects to IP addresses. **When redirected to an HTTPS URL, it does not validate certificates** (since this challenge is intended to bootstrap valid certificates, it may encounter self-signed or expired certificates along the way).

[1] https://letsencrypt.org/docs/challenge-types/#http-01-challe...
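
To make that concrete, here's a minimal Python sketch of what an ACME client effectively stands up during an HTTP-01 challenge. The token and thumbprint values are hypothetical placeholders; a real client such as certbot generates and serves these for you:

  import http.server

  # Hypothetical values: the real token comes from Let's Encrypt, and the
  # thumbprint is derived from your ACME account key.
  TOKEN = "example-token"
  KEY_AUTHORIZATION = TOKEN + "." + "example-account-key-thumbprint"

  class ChallengeHandler(http.server.BaseHTTPRequestHandler):
      def do_GET(self):
          if self.path == "/.well-known/acme-challenge/" + TOKEN:
              body = KEY_AUTHORIZATION.encode()
              self.send_response(200)
              self.send_header("Content-Type", "text/plain")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)
          else:
              self.send_error(404)

  # The validation request arrives over plain HTTP on port 80.
  http.server.HTTPServer(("", 80), ChallengeHandler).serve_forever()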


Almost correct, but missing the broader point once again:

1. LetsEncrypt has additional methods to help mitigate the risk of MITM attacks for HTTP-01 challenges (https://community.letsencrypt.org/t/validating-challenges-fr...)

2. LetsEncrypt has restrictions on HTTP-01 validation (no wildcards) that are designed to mitigate the fallout from exploits of this vulnerability.

3. There are active proposals to allow website operators to disable HTTP-01 validation entirely using DNS. It's not inconceivable to me that HTTP-01 could end up being retired eventually, at least for websites that have public DNS records.

And most importantly:

4. This is treated as a vulnerability.

LetsEncrypt is not saying, "MITM attacks don't matter and HTTP is good enough". They're working on a fundamentally difficult problem (how to do identity verification for the first time when all communication methods are subject to attack), and they are rolling out the best solutions they can come up with at this time; solutions that are tangibly better than the status quo of HTTP.

That is very, very different than saying, "oh, HTTP is fine for this, we can ignore working mitigations that would be easy to deploy." If there was an easy way to completely mitigate HTTP-01 attacks, LetsEncrypt would be doing it.


> Debian does not trust the network.

Who ever said they do?

Why are you side tracking the discussion?

> The browser can not guarantee that connection is private. The reason why a browser shows a warning for a self-signed certificate is because there is no guarantee that the certificate itself is not a MITM attack.

I can check the certificate, if I have a copy from a trusted source.

The point of the scary warning (that I cannot disable) is that nowadays tech treats people like children to be protected from themselves, instead of educating them to understand the risks.
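
For what it's worth, that check is easy to script. A rough Python sketch, where the host and the expected fingerprint (which you'd get from a trusted out-of-band source, like the device's own display or documentation) are hypothetical placeholders:

  import hashlib
  import ssl

  # Hypothetical placeholders: the expected digest comes from a trusted
  # out-of-band source.
  HOST, PORT = "printer.internal.example", 443
  EXPECTED_SHA256 = "hex-digest-from-trusted-source"

  # Fetch the certificate without validating it (it may be self-signed).
  pem = ssl.get_server_certificate((HOST, PORT))
  der = ssl.PEM_cert_to_DER_cert(pem)
  fingerprint = hashlib.sha256(der).hexdigest()

  print("matches" if fingerprint == EXPECTED_SHA256 else "MISMATCH")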

> There is no good reason not to have encryption on a publicly facing website

There are, actually.

None of them are good reasons for you.

An example from another post of mine [1]

Imagine you built an app for weather reporting; the app downloads static JSON from a server using a hardcoded IP address.

What is the added value of putting HTTPS in front of the stupid web server serving stupid static JSON?

What would anyone gain by MITMing it, assuming someone could or would even want to?

There are many other ways to break an app like that...

If you can put yourself in the middle, you can simply break the trust chain, and you've broken the app, even over HTTPS.

There's no need to replace the content.

Assuming there's value in doing it.

> neither of which are good arguments against encryption

Who ever said no to encryption or HTTPS BTW?

The original post talked about HTTPS everywhere; that's what we're talking about. Is it so hard to stay on topic?

Anyway, I could send encrypted data over HTTP if I wanted to.

> It's just stubbornness, paranoia that LetsEncrypt secretly has some plan to control the world

None of that corresponds to anything that's been said here, not by me anyway.

It's just that HTTPS doesn't protect anybody from a stubborn malevolent actor with enough resources and motivation.

It should be said, instead of pretending that HTTPS is completely fine.

HTTPS is fine like OpenSSL, "audited by millions of developers' eyes in the world thanks to open source", was fine.

> This is a bad security policy and should be discouraged.

Stating the fact that it's their job doesn't mean encouraging it as a policy.

It's just a fact.

If you don't agree, OK; but if you agree, why twist my words?

> if you are on a public network transmitting data then it is your job to care about MITM attacks.

I'm not transmitting, I'm delivering, which is different.

Delivery happens on demand.

Transmitting is more similar to what a broadcaster does.

And even though broadcast radio and TV frequencies have been open and public for decades, they have rarely been hijacked.

The main reasons are a lack of resources and the fact that it's prohibited by laws that are actually enforced.

> It's why we've started to move towards E2EE in many situations.

Which is, in fact, better than HTTPS as it is today in my opinion.

[1] https://news.ycombinator.com/item?id=31517326


This is going in circles.

Every example you're bringing up for ignoring HTTPS in the real world is by services that have alternative methods for protecting against MITM attacks -- methods that are built directly into clients that don't require external validation. You are trying to use that as justification for why you don't need to care about MITM attacks; but setups like Debian, LetsEncrypt, etc... are fundamentally different from the types of situations you're talking about where you want to ignore HTTPS. You are arguing that you should be able to ignore security measures, and that users shouldn't be warned that you are doing so.

----

Fortunately, I did some thinking about this, and I do feel I've been approaching this discussion in the wrong way. And I think I've managed to come up with a solution that can make both of us happy:

- You don't need to worry about MITM attacks; you can leave that all up to the network, and it'll handle keeping the users secure. That's its job, not yours. Basically, you can do whatever you want, and the network will protect its users.

- The way the network will protect its users in this particular instance will be by putting big scary warnings in front of sites that it can't guarantee are delivering their content securely.

So everybody wins. I think this is probably the best solution; you get to deliver your content however you want, and the network takes whatever measures it deems necessary to keep its users secure.


Also, adding on to this: Debian has built-in MITM protection. But in order to trust that MITM protection, you need to trust your Debian install, which you may have downloaded from the internet. You can only trust your Debian install if you can trust your download, and you can only trust your download if you used HTTPS (or some other MITM-proof scheme).
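
To illustrate that first link in the chain, a hedged Python sketch of verifying a downloaded image against a published checksum; the file name and digest here are placeholders, and the expected digest must itself come over a channel you already trust (an HTTPS page or a PGP-signed SHA256SUMS file):

  import hashlib

  # Placeholders: the expected digest must come from a trusted channel.
  EXPECTED_SHA256 = "published-digest-goes-here"
  IMAGE_PATH = "debian-netinst.iso"

  h = hashlib.sha256()
  with open(IMAGE_PATH, "rb") as f:
      for chunk in iter(lambda: f.read(1 << 20), b""):
          h.update(chunk)

  print("OK" if h.hexdigest() == EXPECTED_SHA256 else "MISMATCH: do not trust it")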

But I agree this conversation is going nowhere. TBH, I think the person you're replying to just isn't very smart. They can follow a single logical step, e.g. "I use this thing [Debian] and it doesn't use HTTPS." But they're incapable of making the N logical steps required to get from their example to the root of trust. In the example above, the root of trust is the download source of Debian.


> HTTP is not ok. Anyone can read / modify what is being sent.

How do you plan to demonstrate that on my local network, on the connection between my computer and my printer's web-based interface? Generally, we had HTTP as the main protocol for several decades, and that worked out.


The underlying assumption is that we're talking about the internet, not a private network, but even your private network would benefit from encryption. What is the benefit of having anyone with access to your network potentially read / modify your network traffic?


Browsers are pretty weak at understanding the difference between local networks and the internet. Lots of times I have seen hassle caused by HTTPS, be it with a printer or a server's baseboard management controller.


An issue I don't think is addressed is how you get a valid certificate for a server on a local network. When setting up a new device or router, you often type in the IP address (or maybe mDNS name), and then you either have to use HTTP, or for HTTPS you get a warning and have to add an exception for an invalid certificate. How would one even solve this on a local network? I had an idea that I thought would make a cool RFC: have the router run a CA, then pass a DHCP (or RA) option with a local CA certificate for the end-user device to trust. Services could then request server certs from it (via the ACME protocol). The issue, though, is that this gives too much power to the network operator. Imagine connecting to wifi at a coffee shop and they decide to MITM your google connections...


Citation needed. Firefox HTTPS only mode does not upgrade local IP addresses or reserved local "TLDs" like .local. If machines on your "local network" are squatting on a public IP or a potentially public domain name, how is the browser supposed to know the difference?
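
About the best a client can do is apply heuristics over the address itself. A Python sketch of that kind of test (my assumption about roughly what the private-range/.local exemption looks like, not the actual browser code):

  import ipaddress

  def looks_local(host: str) -> bool:
      # .local is the reserved mDNS suffix.
      if host.endswith(".local"):
          return True
      try:
          addr = ipaddress.ip_address(host)
      except ValueError:
          # A domain name: nothing in the name itself says "local".
          return False
      # Covers 10/8, 172.16/12, 192.168/16, 127/8, fe80::/10, ::1, etc.
      return addr.is_private or addr.is_loopback or addr.is_link_local

  print(looks_local("192.168.1.1"))    # True
  print(looks_local("printer.local"))  # True
  print(looks_local("8.8.8.8"))        # False, even if such an address sits on your LAN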


> Firefox HTTPS only mode does not upgrade local IP addresses or reserved local "TLDs" like .local. If machines on your "local network" are squatting on a public IP…

It could be one of your public IP addresses—more likely with IPv6, but still possible with IPv4—and not simply "squatting" on someone else's assigned public IP address. The browser may not be aware that these are local.

With that said, the devices should use public domain names and obtain proper certificates for them via the ACME DNS challenge, which avoids the issue altogether.
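
For reference, a Python sketch of what the DNS-01 challenge record looks like per RFC 8555; the token, thumbprint, and domain are hypothetical placeholders (a real ACME client on the device would compute and publish this for you):

  import base64
  import hashlib

  # Hypothetical values: the token comes from the CA, the thumbprint from
  # your ACME account key (RFC 8555, section 8.4).
  token = "example-token"
  thumbprint = "example-account-key-thumbprint"

  key_authorization = token + "." + thumbprint
  digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
  txt_value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

  # Publish this record, then ask the CA to validate:
  print('_acme-challenge.device.example.com. 300 IN TXT "%s"' % txt_value)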


If I've got a powerful enough wifi emitter, I could get close to your home and impersonate your AP using the same SSID (with open access). If you accidentally connect to it without paying attention, all unencrypted traffic is mine to record and modify. HTTPS solves that too.


We've had more security alerts from OpenSSL and other cert-related software than from man-in-the-middle attacks.


To be fair: if there were successful MITM attacks by the black hats -- how would we know?


To be fair ... that's true of most attacks.


> HTTP is not ok

Actually, it is.

HTTP is perfectly fine. [1]

> Anyone can read / modify what is being sent

Anyone can break a window and enter my house.

But I haven't hired a private army to patrol the windows.

NSA can break HTTPS, TGF exists, and China Trusted SSL Certificates are a thing.

False sense of security is often more dangerous than a real sense of insecurity.

Edit: [1] how many of you don't terminate SSL at load balancer?


So because the government can potentially decrypt your traffic, you don't care if anyone can? Do you use online banking? Do you care if you transmit your password to your bank account in plaintext? What if you need to call your bank? Would you really trust a phone number delivered over HTTP? That just seems crazy to me.


> you don't care if anyone can?

That's a very bold assumption, my dear friend.

But in practice, yes, it is safe not to care about the possibility that someone is going to inject a script into your blog header, because I am no police officer; I don't work overtime fighting crime. [1]

Same way I'm not worried that someone is going to steal my car and use it to rob a bank or worse.

> Do you use online banking?

Banks also have guards at the doors.

They handle other people's money, of course they care about it and about the safety of their employees.

Are you a bank?

> Do you care if you transmit your password to your bank account in plaintext?

Not really.

99% of my passwords are passw0rd on websites I really don't care about.

It is much harder, if not impossible, to guess my username.

I bet I am not the only one.

Besides, my bank asks me to confirm any operation with MFA.

If they notice something strange, they call me on my phone; a human calls me.

It's their job.

> Would you really trust a phone number delivered over HTTP?

For the majority of my life I've trusted phone numbers sent unencrypted through wires that anybody could wiretap, and later by email...

Nothing bad ever happened.

Besides, what can happen if you call the wrong number?

I do not believe that the Grudge is a real story.

The point is: no, I am not paranoid.

Common sense is enough 99% of the times.

[1] https://www.youtube.com/watch?v=o2Z1yLO9C-Q


HTTP has been used as an attack vector in the past and there’s no reason to think it won’t be again in the future. HTTPS on your site protects the rest of us from it being used by things like China’s Great Cannon.


> HTTP has been used as an attack vector in the past and there’s no reason to think it won’t be again in the future

So has HTTPS.

> HTTPS on your site protects the rest of us from it being used by things like China’s Great Cannon.

I'm not scared of China; I'm scared by the fact that the NSA already controls some of the certs in the so-called "trusted" authorities, because, you know, "matter of national security" or "Patriot Act".


> how many of you don't terminate SSL at load balancer?

Oof, this just reminds me of the whole PRISM thing [0], where the NSA was tapping inter-DC fiber links and Google wasn't encrypting (some of) the traffic between DCs.

[0] https://slate.com/technology/2013/10/nsa-smiley-face-muscula...


Yep, if you are Google you should put in place every layer of security you can think of and then double them.

But there are only a few Googles around.

Besides, the NSA can (and probably already does) collect encrypted streams and then try to break them offline.

Real time is not a hard requirement for them.

If what you are doing doesn't require secrecy, HTTP could suffice.

Imagine you built an app for weather reporting; the app downloads static JSON from a server using a hardcoded IP address.

HTTPS would add no benefit; the worst that can happen is that someone hijacks the IP address (for some of the users; it's not possible to do it worldwide) and points the app to the wrong JSON, which might or might not be valid for the app.

Which is a lot of effort to break a weather app...


Again and again, even supposedly smart people are fine with what a dictator claims when it aligns with what they think is good, while handwaving away the long-term issues of giving up control and power to them.

Especially around security, where techies have a tendency to shut off their brains whenever it's brought up, as if in the name of security everything else should be compromised.


Completely agreed.

Also, greetings from the Rome.ro forums!



