
I'm not sure it's credible that enough credence is being given to these anonymous claims in a private group to be impacting this person's life. Seems more akin to a SLAPP.


How many careers and lives were ruined by the anonymous "Shitty Men in Media" list?

https://www.thedailybeast.com/ugly-battle-over-shtty-media-m...

Some cases were probably well deserved, but anonymity allows easy score-settling and revenge.

There's no recourse against an anonymous accusation, there's no way to defend yourself, and there's no way to prove you are innocent.


Money is often a compelling argument.


Of course, the VPN could be the one doing the hijacking.


Well, let's assume that he has, for example, friends in several countries to whose VPN servers he has direct, unimpeded access (by IP, not hostname).

Would a trusted VPN help? Or would hijacking the traffic at the point of departure (his phone / the air) still manage to corrupt the payloads?


"So do you trust some shady foreign VPN provider more than your government approved ISP?" might return true from journalists in autocratic states that have a record of censorship, surveillance and imprisoning journalists.


E2EE and open source: the two things people assume automatically make things super-crazy-secure.

The implementation of E2EE must be robust, and there must be somebody who is actually checking the source code (plus verifiable builds).
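
For illustration, a minimal sketch of the "verifiable builds" part, assuming a reproducible build process and two hypothetical artifact files (a locally compiled binary and the official release). If the build really is reproducible, the two digests must match bit for bit:

  import hashlib

  def sha256(path):
      # Hash the file in chunks so large binaries don't exhaust memory.
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):
              h.update(chunk)
      return h.hexdigest()

  local = sha256("app-local-build.apk")         # hypothetical file names
  official = sha256("app-official-release.apk")
  print("MATCH" if local == official else "MISMATCH")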


Don't forget the human element: users still have to actually do the verifying (e.g. checking the public key fingerprints of recipients) that the source code enables!
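
Roughly what that verification boils down to; the exact fingerprint format differs per app (Signal's safety numbers, OMEMO fingerprints, ...), so the key file and the SHA-256 encoding here are assumptions for the sketch:

  import hashlib

  # Read the recipient's raw public key bytes (hypothetical file).
  with open("recipient_pubkey.bin", "rb") as f:
      key_bytes = f.read()

  # Derive a human-comparable fingerprint from the key material.
  fp = hashlib.sha256(key_bytes).hexdigest()
  grouped = " ".join(fp[i:i + 4] for i in range(0, len(fp), 4))

  # The crucial, human step: compare this against a value obtained
  # out of band (in person, over a call, via a QR code scan, ...).
  print("Compare with your contact:", grouped)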


If you go down that road, you can make this argument infinitely. Even if you verify your builds, you cannot know if the software you are using to check the build isn't compromised. Or if you check the software you use to check the build, you have to check the software doing that check and so on.

Nothing makes software automatically super-crazy-secure. Absolute security doesn't exist.


You'd get close by doing everything you mentioned, plus compiling and hosting the code and infrastructure yourself. That's not often feasible.


You'd still be trusting the compiler. However many layers of checks you do, there's always something you need to trust.


It doesn’t automatically make everything secure, but it’s still a prerequisite for a trustworthy, secure system.


and a safe OS, computer, room...


People who wish to mask their crimes have a greater incentive to use E2EE, so they will probably gravitate towards platforms that offer it. I would therefore suggest that those not committing crimes are disproportionately affected by E2EE not being made the default where possible. Once one service in a particular category offers E2EE, the benefit to the other services in that category of not offering it is significantly reduced.


This is only true if you assume that the world is populated with an equal number of criminals and non-criminals.


www.example.com and example.com don't necessarily resolve to the same place.
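
Easy to check with a few lines of Python (example.com stands in for whatever domain you're curious about):

  import socket

  def addresses(host):
      # gethostbyname_ex returns (canonical_name, alias_list, ip_list).
      return set(socket.gethostbyname_ex(host)[2])

  root = addresses("example.com")
  www = addresses("www.example.com")
  print("root:", root)
  print("www :", www)
  print("same endpoints" if root == www else "different endpoints")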


A common convention among system administrators is to have the canonical name at www.* and redirect www-less requests to it. If you argue that a browser implementation should fix uncommon configurations, I would argue that administrators should fix their configurations in the first place.

You don’t have this issue at all for domains that don’t have a www subdomain.

Furthermore, it would be extremely confusing to have different content for www.example.com & example.com.


Yeah, I don't like the current trend of redirecting www to the root. You can easily do simple DNS-based load balancing by having multiple IP addresses on the www subdomain. You can't do that on the root domain; you'll have to use a dedicated load balancer even if all you want is simple load balancing among a small set of servers. It only benefits cloud vendors and hurts hobbyist/small website operators if this trend continues to the point that visitors expect all websites to be served from the root instead of www.
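
For what it's worth, roughly what that simple balancing looks like from the client's side; the shop domain below is made up, assumed to have several A records on www:

  import socket

  # All A records come back at once; resolvers typically rotate the
  # order, which spreads new connections across the listed servers.
  ips = socket.gethostbyname_ex("www.example-shop.test")[2]

  # A naive client connects to the first address and falls back to
  # the next on failure: crude failover on top of the balancing.
  for ip in ips:
      try:
          sock = socket.create_connection((ip, 80), timeout=3)
          print("connected to", ip)
          sock.close()
          break
      except OSError:
          print(ip, "unreachable, trying next")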


If we could ever be bothered to implement SRV records for HTTP, then load balancing and failover could be significantly more straightforward and robust, without worrying about root vs. www at all.
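
A sketch of what a client could do if HTTP ever honored SRV records; the _http._tcp name is hypothetical, since no browser consults SRV for HTTP today (uses the third-party dnspython package):

  import dns.resolver  # pip install dnspython

  # Each SRV record carries priority, weight, port and target, so
  # failover order and weighted balancing live entirely in DNS.
  answers = dns.resolver.resolve("_http._tcp.example.com", "SRV")
  for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
      print("try %s:%d (priority=%d, weight=%d)"
            % (rr.target, rr.port, rr.priority, rr.weight))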


I also dislike the trend to redirect the 'www' prefix to root.

My company uses DNS load balancing for a root domain, though, so either I'm misunderstanding you or you're mistaken about what's possible here.

We use Constellix's DNS management to have round-robin DNS via multiple A records for 'nxtbook.com'.

If you're thinking of some other form of DNS load balancing, would you please clarify?


Multiple A records are fine on the root domain, but the root can't use a CNAME, which is what some people use to implement their DNS load balancing (I use CNAMEs, so I forgot that you can still do it with A records). By using the root domain instead of www, your options for load balancing are diminished.

Edit: another common use case is hosting your static website on S3 or GitHub Pages. Typically it's done by adding a CNAME entry pointing to S3 or github.io (it's been a while, so hopefully I remember it right). You can't do this on the root, unless you're using another service as a reverse proxy (e.g. Cloudflare's CNAME flattening). Again, it benefits cloud vendors (Cloudflare gets more potential customers by offering this service for free) but ultimately hurts people who want to host their small websites.
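
To make the contrast concrete, a quick dnspython sketch; the CNAME-to-github.io pattern is the documented GitHub Pages setup, but any user name here would be made up:

  import dns.resolver  # pip install dnspython

  # www may be a CNAME to the host's infrastructure
  # (e.g. a hypothetical someuser.github.io)...
  ans = dns.resolver.resolve("www.example.com", "CNAME",
                             raise_on_no_answer=False)
  if ans.rrset is not None:
      for rr in ans:
          print("www points at", rr.target)

  # ...but the zone apex must hold A/AAAA records directly: a CNAME
  # cannot coexist with the SOA and NS records required at the apex.
  for rr in dns.resolver.resolve("example.com", "A"):
      print("apex answers with", rr.address)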


Ah, gotcha. We CNAME a lot of things precisely so we don't have to mess with multiple A records.

Definitely simpler, and a good argument against using the root domain.

Thanks for clarifying!


Redirecting a subdomain to the root should be a choice, not something forced.


Hey, this looks great.

One thing you might want to consider is having one domain for a website about collectednotes (collectednotes.com) and another for hosting users' blogs (collectednotes.blog, for example), because it is currently not clear which URLs are official and made by you and which are just blogs made by anyone.

For example, https://collectednotes.com/accounts/ is genuine and made by you, whilst https://collectednotes.com/account/ is a blog I just created. To me there seems to be a very real risk of users being misled.


On second thought, using a subdomain would make way more sense.


In that case: make sure you trust your registrar.

There was a story here a couple of weeks ago with a post-mortem from someone who got their websites taken offline and access to their mail suspended because some automated system at a giant corporation decided so.

It stayed offline for hours because there were no competent humans available with the authority to override the (AFAIK) clearly stupid decision made by the system.

Personally, a key takeaway from that (besides not depending on said company) was to make sure one has at least two domains, preferably with two registrars, and to make sure user-generated content is not available on the one you depend on for email etc.


The story in question: https://blog.gitbook.com/tech/post-mortems/06-20-gitbook-dom... (https://news.ycombinator.com/item?id=23417046).

Note that in that case the user content was on a separate domain—the registrar blocked the entire account.


Oh. Possibly even worse.

(Although I think there might have been some comments about user generated content on the main domain.)


Great tip - got a link for the story, by any chance? Or do you remember who the company was, so I can search for it myself?

EDIT: I couldn't find anything directly on point, but I did find this related horror story about a company whose ICANN email contact information no longer worked because the company's contact person had left the company. An ICANN email to the contact person bounced back. ICANN took the entire domain offline. The company's Web person had a lot of trouble getting through to a human at ICANN to straighten things out — and even then: "To make a long story short, my client was required to submit articles of incorporation, bank statements and other documents in order to get the domain working again." [0]

EDIT 2: Thanks to Chris Morgan, who provided a link to a GitBook horror story. [1]

I added both stories to my "Startup Law 101" page [2], with a hat tip to both Erik and Chris.

[0] https://www.skyhoundinternet.com/2017/05/17/website-email-ge...

[1] https://news.ycombinator.com/item?id=23516675

[2] https://www.oncontracts.com/startup-law/#Use_a_separate_Web_...


ah good catch thank you


+1


Exactly. Having a bunch of keywords reserved like Twitter does would be handy.


Is the extent of the 'unlimited' important? Lending out more copies of a single book than they own is an impossibility if they are respecting IP; it's no different from a physical library photocopying entire books so it can lend more.


Oh, let me see, perhaps it has something to do with China being a one-party state with a long history of human rights atrocities and pervasive censorship.

Just look at SARS for an example of where trusting the official information from China was a terrible idea.


And the US isn't a two-party state with a long history (and present) of human rights atrocities and pervasive censorship?

Your view of China is almost entirely defined by US media, owned by capitalists. Perhaps consider they have an incentive to lie.


There is a discontinuity there. A two-party system is many orders of magnitude more reliable than a single-party system, even if not ideal.

The US also has far less of a history of hiding facts and persecuting the press.


The two US parties are almost completely identical in behaviour. They have both continued brutal imperialism. They have both demonised countries that opposed said imperialism and persecuted the few in the US (press or otherwise) that were also opposed.

There is also no democratic control of the people over the US press, it's privately owned by only a few. The press has an extremely long history of hiding and fabricating facts, particularly against anti-capitalists.

More importantly, there is no democratic control over production and distribution. It is almost all privately owned.

The US's history of brutality and manipulation is unmatched.


That may well be, but the predominant concern is larger droplets from a cough or sneeze. I don't recall seeing anything about airborne transmission.

