Intent to unship: HTTP/2 Push (groups.google.com)
179 points by todsacerdoti 4 months ago | 102 comments



It is extremely annoying that all of this sudden discovery that HTTP/2 push didn't work hasn't come with some kind of apology to everyone out here who had tried to explain why this wouldn't work and why it would be a dangerous waste of time, only to be shouted down for years by people insisting it was going to be epic because the much-smarter people at Google knew what they were doing and really needed this, so we should just let them ram it into the spec. We should be extremely conservative about what we put in the spec and stop throwing in speculative stretch goals just because some people at Google think they're a good idea.


Trying to improve HTTP and Web performance strikes me as pretty pointless when I have enough bandwidth to stream 1080p video, low enough latency to play real-time games, and a CPU 1000x more powerful than the one I first went online with.

The only thing slowing browsers down is the mind-boggling amount of ads, trackers, and junk weighing down most sites. And the tech I use to fix that issue is uBlock Origin, not anything that Google invents.

Nothing Google forces into Web standards these days is for ordinary developers like us. It's so they can cram more ads in our faces or save 0.1% on their datacenter budget.


Slightly tangential rant — I use an application firewall to actively allow/deny network connection attempts. It takes a bunch of tuning for the first month or so, but it’s then far less intrusive. It’s worth a week of click-ops’ing all of your network usage.

Paired with DNS adblocking and enabling JavaScript on a per-domain basis, I am continually astonished at how garbage ridden the Internet has become anytime I’m outside of my network moat.

I don’t think there’s any one answer, but I think a lot of it is laziness, greed, and a lack of care.

Seriously, why is Stripe on (figuratively) _every_ page? Maybe that’s just lazy development. But why does Azure’s US marketing/pricing site load Google Brazil includes? Why does the 1Password web vault need to connect to marketing clouds?

I was visiting a site yesterday where the mobile experience was basically top to bottom ads, and then when you start scrolling one of those annoying “Login or sign up with your Google Account” popups took up the whole bottom half. The site was basically unusable.


> why is Stripe on (figuratively) _every_ page?

Stripe recommends this for fraud prevention. I assume they check for reasonable browsing behavior before purchasing?


As more new browsers are coming along, maybe it's time for a new spec: exclude the big corps and start from scratch with all the good stuff. The sad reality is that the best thing that could happen is for Mozilla to die; then Google would run into monopoly issues and we'd end up somewhere similar. But right now the people who really care have the chance to lead it, so Mozilla mustn't die.

FORK THE SYSTEM!


That low latency gaming is from putting servers very close to you. This is not feasible for most of the web.


20 years ago we were playing with lower latencies than a typical modern AAA online game today.

People simply played on servers that were actually local, in their country or even neighborhood. You could easily have a constant 15ms ping in Quake on a public server, or sub-5ms if it was friends from the same uni or a similar network.

Modern AAA just puts servers in a "Europe West" cloud region and is done with it, which realistically means 50ms+ for everyone (35ms being a great result) and constant raging because of the huge peeker's advantage and the mechanisms compensating for it.

Servers were truly where people were, partly because the server software was available for players to self-host. The large centralized modern approach was supposed to solve many things like cheating on amateur servers, but that was a failed promise.


CDNs are a thing. Edge hosting (like Fly.io) is another, although I’m not convinced by the benefits.


... that is the web; websites have the novelty of being able to leverage eventual consistency, while games are more real-time.


"... just to be shouted down for years by people insisting it was going to be epic as the much-smarter people at Google knew what they were doing and really needed this so we should just let them ram it into the spec."

In the past when I have expressed satisfaction over decades in using HTTP/1.1 pipelining for non-graphical web use, e.g., fast, non-interactive information retrieval, instead of HTTP/2 which is slower and poorly suited for the task, I have been downvoted and received snarky replies on HN.

HTTP/2 is a fine example of how this one company acting as one would expect in its own self-interest is continually seeking to exercise control over public access to a public resource (the www). HTTP/2 may as well be "HTTP/Chrome". If someone uses Chrome then they will be attempting to use HTTP/2. The same company controls the gateway to content (search engine), the hardware, the software (browser) and now the network protocol, to mention only a sample of all the pieces they control.

Let's see if this comment draws more downvotes and snark from the HTTP/2 promoters.


I'll bite. I like HTTP/2. It may be too complicated for what it was designed to do, i.e. serving web resources, but it's really good at doing things that it wasn't designed to do.

If you need something more complicated than what a browser does with HTTP, with a lot of bidirectional communication, then HTTP/2 makes for a great alternative to WebSockets. HTTP isn’t just used in browsers but also for server-server, mobile-server, IPC, etc. HTTP/1.1 isn’t always good enough for that.
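
For a concrete picture, here's a rough sketch in Go (names and ports are hypothetical) of bidirectional streaming over a single HTTP/2 request with plain net/http, no WebSockets: the handler echoes request-body lines back as they arrive, assuming an HTTP/2 connection and a client that streams its request body.

    package main

    import (
        "bufio"
        "fmt"
        "net/http"
    )

    // echo streams responses while the request body is still being written.
    // Over HTTP/2 a handler can read and write interleaved on one stream;
    // over HTTP/1.1 this generally degrades to request-then-response.
    func echo(w http.ResponseWriter, r *http.Request) {
        flusher, ok := w.(http.Flusher)
        if !ok {
            http.Error(w, "streaming unsupported", http.StatusInternalServerError)
            return
        }
        scanner := bufio.NewScanner(r.Body)
        for scanner.Scan() {
            fmt.Fprintf(w, "echo: %s\n", scanner.Text())
            flusher.Flush() // flush each line to the client immediately
        }
    }

    func main() {
        http.HandleFunc("/echo", echo)
        // TLS is what lets clients negotiate HTTP/2 (h2); cert/key are placeholders.
        http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil)
    }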


I agree with your sentiment. We were excited by HTTP/2 at the time for use in embedded/IoT devices (e.g. out-of-order responses to relax buffering, compression to save bandwidth) until we understood it is close to being unimplementable there. The realisation hit hard that webtech is all about CDNs and large platforms being accessed with Chrome, not about enabling HATEOAS for everyone.


Please understand, those people really needed this for their annual promotion case.


Being early is the same as being wrong.


You mean implementing or criticizing? A day late and a dollar short seems like the better idiom for most HTTP features implemented at this stage.


It's a financial aphorism, but I think it holds (descriptively, not prescriptively) more generally.


This is a very obvious example of where it doesn't.

Everyone: HTTP2 is the bee's knees

GP: No it isn't that great.

time passes

Everyone: Oh yeah, HTTP2 isn't that great

GP: You see? I told you that first

Everyone: Well being early is the same as being wrong.

You see how that's not an argument here? GP was right. They were early and right. In financial markets the saying holds because if you invest too early you just lose money while the market's understanding catches up with you, but here we're talking about implementing a standard (which comes with a cost). Everyone who jumped on the HTTP/2 bandwagon incurred that cost. People who stuck with HTTP/1.1 did not. Now everyone admitting they got it wrong has to back the HTTP/2 stuff out and incur a second transition cost. The people who were early and right don't pay that either.


I never found the explanations given for why HTTP/2 Push failed compelling. Google usually just refers to a blog post from Jake Archibald [1], but that post seems to call out all the ways browsers poorly implemented it rather than reasons the protocol itself wouldn't work.

Browsers already support preload links, which should function effectively the same as a Push header. Why couldn't that same code have been used for Push, including all the existing handling for things like caching and authorization headers that Jake called out as challenges with the 2017 implementations?

[1] https://jakearchibald.com/2017/h2-push-tougher-than-i-though...


Preload headers DO NOT function "effectively the same" as HTTP/2 Push. Yes, they have similar outcomes; no, that does not mean they function the same. They are related, but are not actually that similar except in the most trivial ways, like using the word "preload".

- HTTP/2 Push: When the server thinks a client needs the resource, it pushes it into the client proactively, avoiding one round trip for the resource. The goal is to improve latency by sending the resource actively, independently of any DOM->render pipeline that would trigger the fetch.

- Preload header: When the client requests a resource, the server delivers the page body but also tells the client what it can load in parallel, before parsing or JS eval. The goal is to improve latency by moving the "fetch" stage up, before the parse->DOM->render pipeline that would trigger the fetch.

And beyond those two, over the past few years a new kid has appeared on the block:

- 301 Early Hints: When the client requests a resource, the server sends a Preload hint telling it what to load before it even finishes generating the response. A 301 is delivered before the body is fully delivered, before even the first byte. The goal is to improve latency even further by moving the fetch stage up even earlier in the pipeline.

Yes, they attack the same soft spot, but mechanically there is a VERY big difference between server-initialization and client-initialization, and between along-with-body and before-first-byte. (Obviously, 301s and Preloads are closer than anything.)

Ignore the security stuff. The simplest reason HTTP/2 Push isn't so great is because it's not very good at accomplishing its goal. This is because browsers employ an ancient technique, originally written in hieroglyphs, called "caching", which means that the browser just won't request a resource if it already has it. A cache hit is the ultimate reduction in latency. But only the client browser knows what is in its cache, the server has no idea. With Push, you run a very real possibility of sending resources that aren't needed, overall losing efficiency. With 301s/preloads, you at worst receive a few wasted bytes from the useless headers. It's both much easier to get right, and much less costly if you get it wrong.

It's also complicated to implement on top of all the other stuff. Preloads and 301s are vastly simpler and much more targeted additions to the HTTP stack, and HTTP is only useful insofar as implementations are compatible. So 301s/Preload are significantly easier to support and achieve much of the same purported benefits.
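
To make that mechanical difference concrete, here's a minimal Go sketch (paths hypothetical), assuming net/http's Pusher interface, which Go's HTTP/2 server exposes. The push is server-initiated and happens regardless of the client's cache; the Link header is only a hint that a cache-aware client is free to ignore.

    package main

    import "net/http"

    func page(w http.ResponseWriter, r *http.Request) {
        // Server-initiated: proactively send /css/main.css on its own stream,
        // whether or not the client already has it cached.
        if pusher, ok := w.(http.Pusher); ok {
            _ = pusher.Push("/css/main.css", nil) // errors ignored in this sketch
        }

        // Client-decided: just a hint; the browser checks its cache first.
        w.Header().Add("Link", "</css/main.css>; rel=preload; as=style")

        w.Write([]byte("<html>... page referencing /css/main.css ...</html>"))
    }

    func main() {
        http.Handle("/css/", http.FileServer(http.Dir(".")))
        http.HandleFunc("/", page)
        // Push only exists on HTTP/2, which needs TLS here; files are placeholders.
        http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil)
    }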


"Early Hints" is 103. 301 is "moved permanently".


I suspect parent was thinking of 304 Not Modified, which is the status code usually associated with caching.


No, I was in fact thinking of 103 early hints, which are relatively new.


Ah, I see, apologies. :)


Yes, thanks! Obvious "accounting fingers" typo.


Well, my understanding of the Push spec seems to have been as rusty as the spec itself. I forgot it was actually pushing the full content that the server expects the client will need; I had it in my head that it would just include a header hinting at dependencies, similar to preload links in the <head>.


For security reasons, pushed resources have to be treated as a special case until they're claimed by the page; amongst other things, this means they need a separate cache until then.


That's an interesting challenge. Is the Push header being included on the document request not enough to consider it a secure resource?

I could see this being a concern with secondary requests that haven't been claimed by the document, say a stylesheet with Push headers to preload font files. I'm not quite sure how it would get into that state with the stylesheet being requested without the page having claimed the request, but is that the security concern?


Jake mentions one of the reasons in his post (linked elsewhere).

Push is on a connection (not request) basis, so if the connection is authoritative for multiple hosts, i.e. they share a cert in the way low-cost Cloudflare plans used to (perhaps still do?), then you can push resources for another site.

Even without that issue, you wouldn't want every resource that gets pushed to end up in the browser cache by default, as it leaves open all sorts of malicious behavior, e.g. just keep pushing until the browser cache is full of crap.


The PUSH_PROMISE frame includes an optional stream identifier (as well as the promised stream identifier). https://httpwg.org/specs/rfc9113.html#PUSH_PROMISE

Not only is it possible to associate a push with a request, but this is crucial to the Web Push Protocol (which powers the Push API in browsers). https://datatracker.ietf.org/doc/html/draft-ietf-webpush-pro...

I don't place a ton of credence in the malicious behavior assertions. A web page can already request a colossal number of large resources and flood the cache. Yes, the browser gets to dispatch requests as it pleases. But having push limit its cache size also seems not absurdly hard, and like something that wouldn't impede healthy usage much.


I wonder if you can use this to implement cross-site cookies. (A pushes cookie data to B's cache with a well-known name, then B fetches A's data from the local cache.)


Explain why security demands a separate cache?


Cache poisoning attacks.


Oh I see. Push was not limited to the same host.

Which would have been an easy fix.


Connections aren't limited to the same (virtual) host.

Additionally, to prevent cross page tracking, browsers use a separate cache for subresources loaded from different top level domains.

Anyways, it is trickier than it sounds at first glance.


> Connections aren't limited to the same (virtual) host.

Huh? Doesn't HTTPS and SNI effectively require that?


It's a minor point, since the rules are quite strict so it's probably not a security issue, but no: due to connection coalescing, one connection can serve multiple virtual hosts.

See https://daniel.haxx.se/blog/2016/08/18/http2-connection-coal...


I find the excuse for unshipping especially ridiculous because developers never had a good chance to make use of Push! We can't see Pushes coming at us, can't be responsive to them!

https://github.com/whatwg/fetch/issues/51

It also feels like the use case everyone was focusing on/acknowledging, content delivery, had plenty of opportunities left. There weren't very many visible public attempts! Very few people had access to libraries or starting places to jump in and figure out what to Push!


You can get some of this speed back with HTTP/3 0-RTT start, and use 103 Early Hints to make browsers preload assets early. This combo has the advantage of being semantically backwards-compatible with HTTP/1 (reverse proxies, load balancers).
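
As a sketch of the Early Hints half (Go 1.19+ net/http can write informational 1xx responses from a handler; paths are hypothetical):

    package main

    import (
        "net/http"
        "time"
    )

    func page(w http.ResponseWriter, r *http.Request) {
        // Send 103 Early Hints with preload hints before the final response
        // is ready, so the browser can start fetching while we do real work.
        w.Header().Add("Link", "</css/main.css>; rel=preload; as=style")
        w.Header().Add("Link", "</js/app.js>; rel=preload; as=script")
        w.WriteHeader(http.StatusEarlyHints) // 103

        time.Sleep(200 * time.Millisecond) // stand-in for slow backend work

        w.WriteHeader(http.StatusOK)
        w.Write([]byte("<html>...</html>"))
    }

    func main() {
        http.HandleFunc("/", page)
        http.ListenAndServe(":8080", nil)
    }

Intermediaries do need to pass the 103 through for this to reach the browser.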


I'm a bit sad to finally see the death of HTTP/2 Push. It was a neat idea, with poor framework uptake that never really got the attention I think it could've benefitted from. The .NET folks never bothered shipping it; there was a basic implementation in nginx. It was always hamstrung by lack of cache digests too. Chrome ultimately got rid of it with a justification partially along the lines of "nobody would notice anyway" and because re-architecting everything to use UDP (because that works great with NAT, doesn't it?) was apparently more important.


I never had problems with UDP and NAT, but if that’s the thing that gets people annoyed enough to migrate to IPv6, I’ll shut my mouth and smile.


I'd imagine that an IPv6 host behind a stateful firewall would have very similar issues with UDP connection tracking.


It might require a firmware update, but I see a cleartext connection ID here in the QUIC example: https://quic.xargs.org/

Assuming that the IP/port 4-tuple is not enough.

I never have trouble on my networks with QUIC or other popular UDP protocols like WebRTC or DNS.


I also haven't had any issues with UDP behind an IPv4 NAT, nor behind an IPv6 firewall. I'm just suggesting that at whatever rate people do have issues with UDP behind an IPv4 NAT, people would similarly have issues behind an IPv6 firewall.


Perhaps the main difference is that the average IPv6-configured firewall device is newer than the average NAT device, having become common much later.


I remember talking to Mike Belshe (SPDY and H2 spec co-author) before H2 was standardised, and even then they were trying to get it dropped from H2, as the benefits were hard to realise and there were plenty of issues and foot-guns too.

One of the largest issues was 'over-pushing': either sending something that was already in the browser cache, or just trying to push way too much non-critical content (I've seen some truly bad cases of push actually making the page slower to load).

There were some proposals to allow the browser to communicate to the server what was already in the local cache, but these ran into issues.

HTTP Early Hints and Resource Hints have largely removed the need for H2 Push in a way that's much easier to implement, avoids unnecessary fetches, and is easier to reason about too.


> This means that if webservers and websites use push and don't test in Firefox, this feature can potentially cause websites to stop working only in Firefox

That is one reading of the situation.

The other would be:

All other browsers reject HTTP/2 Push & gracefully ignore it, but Firefox majorly trips up in edge cases.

(quote: «It seems that Firefox is resetting the connection because it encounters an uppercase character in the header names»).


Your reading doesn't really make sense. Other browsers don't implement the specification. Firefox does. The specification is very explicit about how this violation should be treated. Those webservers and websites have a broken implementation. Had they tested with any spec compliant client they would have noticed it being broken.


They make it clear in their post:

> Firefox has continued supporting HTTP/2 push as this wasn't too large of an effort until recently.

> However in the past few months we've encountered some webcompat bugs only affecting Firefox through HTTP/2 push

They are clearly saying that there have recently been bug reports, and this post is about doing exactly what you mentioned other browsers do: reject H2 push and gracefully ignore it. There does not appear to be any hiding here.


It could just as easily have gone the other way: a new spec implemented only in one browser (typically Chrome, as it most frequently ships things first, often incomplete) and, badly implemented by a server, causing such a problem.


I mean, that's pretty much how we got HTTP/2 Push as a standard in the first place, so it's somewhat fitting we lose it the same way.


Why wasn't push adopted more widely? It was one of the big reasons HTTP2 was meant to be a game changer (despite the massive flaws)

Was it a lack of browser support, or that it didn't really work with CDNs?


Server push was never massively deployed because it may be harmful: the server pushes assets that may be needed by the client, but may also already be cached by the client. In the latter case, this is a harmful burden, especially on mobile.

The server is never able to truly know whether the asset is actually required by the client.

Early Hints are probably better: the server quickly tells the client it will need those assets, and the client can then decide whether they must be fetched or are already available.


I looked into using this, but changed my mind. Say you get a hit on index.html. You probably want to push script.js and style.css, so now you've pushed a few hundred kilobits.

Now the user comes back and asks for index.html again: do you send everything again? Surely the resources are still in the user's cache. If you don't, how do you know what to send? Only the user's browser knows what's in its cache and what's not.

So you face a choice: speed up first visits but waste bandwidth on subsequent visits, or slow down first visits but speed up subsequent visits.


> It was one of the big reasons HTTP2 was meant to be a game changer

Probably because it's a game changer only if you're at google/amazon/meta (or slightly smaller) scale.

For everybody else it's largely a huge complexity addition, and it's essentially ignored.


We enabled HTTP/2 in 2015 on the $big_financial_news_site. We had some influential JS people who were very keen to roll it out (they mostly ended up at Fastly afterwards). I did warn that it would degrade all but the high-bandwidth experience.

It took a long time for them to "understand" the data, and to see that HTTP/2 was not very useful, especially compared to the impact of changing CDN or optimising for CDN caching.


I suspect the issue is it wasn't really a game changer at google scale either.


Yeah, I kinda don't trust Google anymore to do good engineering work. Nowadays they look like cushy job providers, like banks in the '80s (I guess).


You might like the more detailed breakdown published when Chrome decided not to support push:

https://developer.chrome.com/blog/removing-push/


All that says (that is relevant to the posed question) is:

> it was problematic as Jake Archibald wrote about previously, and the performance benefits were often difficult to realize

Edit:

Clicking through to the linked post by Jake, <https://jakearchibald.com/2017/h2-push-tougher-than-i-though...>, problems mentioned are

- browsers had buggy implementations which wouldn't (always) use the pushed data. In that case, you've (as a web server) kept the connection busy and sent bytes for nothing

- browsers ignore max-age when using push cache. Given that the time between pushing and using is usually <1 second, that seems fine to me? Don't get this one. Further down is another caching header caveat. Just don't push data to the cache that you don't want it to cache?

- pushed data is dropped when the connection closes. Seems like a simple fix to me: keep data around until you navigate away from the page or the tab gets unloaded for any other reason. Or keep the connection open while the page renders because there might be more requests anyway, like for an image or commenter's avatar

- different pages using the same connection can use each other's cache, so if you "open in a new tab" two links and one of them already pushed the /css/main.css then the other doesn't need to anymore. This sounds like a very useful feature, not a reason to remove push at all. Further down in the article, under a separate point, it says "Once the browser uses something in the push cache, it's removed", which sounds like this mechanism can't work at all and they'll steal each other's resources instead?! Surely that's not the case, since it makes the sharing entirely useless?

- if you use authentication, make sure to send it along when requesting e.g. a font via JavaScript because otherwise it can't use the cache because you're not the same user. Okay, sounds reasonable and like a simple thing to do

- not all browsers implemented cross-origin pushing

Finally, there's a concluding paragraph starting with

> There are some pretty gnarly bugs around HTTP/2 push right now, but once those are fixed I think it becomes ideal for the kinds of assets we currently inline, especially render-critical CSS.

This doesn't sound like a reason to remove push altogether. Quite the opposite?


My reasoning for never pursuing push is that a lot of the semantics become implicit in the business requirements of the application itself. How does the server know whether it should push data based on the client state, e.g. should it be pushing resources which could potentially already be in the cache?

That leads you to one of two paths:

1. The web content needed should be communicated to the server by the client; either requests are made with Fetch with additional parameters, or via a service worker which can then populate the cache itself. However, once you are supplying logic in the page to handle these resources, there are other approaches you can take, such as having a service worker populate the cache by downloading a single archive and unpacking it.

2. The other option would be content which is specifically never meant to be cached, such as live updates of a sporting event. There is, however, no API to support this, and we already have things like WebSockets for it.


In retrospect it seems the semantics got overcomplicated, which led to partial implementations (and of course many bugs and confusing, hard technical questions), which eventually led to implementation projects getting abandoned.

And in situations where the benefit of decreasing "time to first render" is big enough, it probably makes sense to have a server that is coupled to the various low-level bits (load balancer connection tracking info, TLS session info, the User-Agent header, and of course the Cookie) and can produce an optimized response payload for new sessions.


Well there is a link with more info about what Jake wrote.

The tl;dr: it's hard to predict ahead of time what resources browsers really need, so you often send the wrong thing or a non-ideal thing. The end result is that the practical performance benefit was much smaller than the theoretical one.


The practical benefit is virtually nil because HTTP already has a proven mechanism for caching and preloading.

By the time you visit the second page on a multi-page site, you are likely to have almost all stylesheets, scripts, and common images already cached. This means you will reject almost all of the resources the server tries to push to you. Even on the first page, modern browsers are very quick to identify resources they need to load, both from headers and the early parts of the document. They load those resources concurrently while they are still parsing the document, so they are rarely blocked for a noticeable time.

Meanwhile, single-page sites have evolved to a point where the entire application is contained in one or two heavily optimized, minified scripts. You just load that one script and you're done. Most small images these days are SVG embedded right in the source code, so again, there's little need to push other resources.

The only time a modern website makes you load a lot of unanticipated resources is when those resources are used for tracking and advertising. They can't be pushed because they live on different origins.


> Meanwhile, single-page sites have evolved to a point where the entire application is contained in one or two heavily optimized, minified scripts. You just load that one script and you're done. Most small images these days are SVG embedded right in the source code, so again, there's little need to push other resources.

This isn't true of most websites (look at the developer tools traces) because it's terrible for performance to optimize for the IE6-era browser design. With those kinds of bundles, you have to ship a ton of content which didn't change any time one byte of any resource changes. Since HTTP/2 substantially reduced the cost of multiple requests, any site with repeat visitors will benefit enormously from letting the bulk of the content which hasn't changed stay cached and only refetching the few responses which actually need to be updated.


Not saying it's good design, but a huge bundle is the default for a lot of frontend frameworks these days. :(

On the other hand, sites with lots of separate static assets are often split into multiple origins. Images from a CDN, fonts and libraries from another CDN, and API endpoints on a "serverless" platform somewhere else. These sites won't benefit much from having Push enabled, either.

Of course with HTTP/2 it's better to serve as many resources as possible from a single origin. But that doesn't sit well with the recent trend of sprinkling your stuff across buckets and lambdas, duct-taped together with CORS. Good ol' Cache-Control still does most of the heavy lifting there.


Did you have a particular framework in mind? None of the ones I’ve worked with do that by default and I very, very rarely see even close to a single bundle on a website any more. A decade ago it was more common since HTTP/2 hadn’t shipped and IE6 was still a concern but the cache wins are compelling and well known by now.


I don't have much experience fine-tuning the build options, but last time I tried a new project with default settings, React produced 2 large chunks, and Svelte gave me a single .js file.


Same thing with Vue.js


> Even on the first page, modern browsers are very quick to identify resources they need to load, both from headers and the early parts of the document.

The premise of HTTP/2 push is that those resources could be sent before the first round trip even occurred, before any Link headers or the body are available.

Which is kind of two things: the round trip of network latency and also the backend latency of how long it takes to make the resource.

The solution was to give up on the round-trip network latency but concentrate on backend latency. Which you can do by sending Link headers before the resource is finalized, or, if you can't even do that, by sending a 103 response.

The new solutions are still leaving some latency on the table. Price to be paid for simplicity.

The other side of that coin is that if you really care about the latency of those resources for clients who don't have them cached, you can just embed them in the HTML document.


Backend latency should be negligible next to RTT latency though. If RTT is on the order of 100 ms, shaving off 5-10 ms for request processing doesn't help much.

I don't know the history here, but nginx seemed to have a solution (which was removed) to be able to conditionally attach pushes to a location directive based on a cookie, and then set that cookie to know if it was a first visit. Seems simple enough? Did it actually never work or something?
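
For illustration, the same gating is easy to sketch at the application layer (Go, hypothetical cookie name, assuming net/http's Pusher):

    package main

    import "net/http"

    func page(w http.ResponseWriter, r *http.Request) {
        // Heuristic: only push on what looks like a first visit. If the
        // "assets_pushed" cookie is absent, push and set it; otherwise assume
        // the assets are already in the browser cache.
        if _, err := r.Cookie("assets_pushed"); err != nil {
            if pusher, ok := w.(http.Pusher); ok {
                _ = pusher.Push("/css/main.css", nil)
                _ = pusher.Push("/js/app.js", nil)
            }
            http.SetCookie(w, &http.Cookie{Name: "assets_pushed", Value: "1", Path: "/"})
        }
        w.Write([]byte("<html>...</html>"))
    }

    func main() {
        http.HandleFunc("/", page)
        http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil) // h2 needs TLS
    }

The obvious flaw is the same one as with the nginx trick: the cookie only approximates the cache, so it breaks when the user clears the cache but keeps cookies, or vice versa.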


> Backend latency should be negligible next to RTT latency though

"Should" is doing a lot of work there. For dynamic resources this is often very not true. On the other hand i suppose the landing page is often highly optimized.

The cookie approach does seem like the most common approach to that problem. I have no idea how well it worked.


A modern SPA is often just a static HTML page that loads a whole bunch of minified JS. Backend latency is indeed negligible in that case, compared to the RTT for the multiple MB of scripts that you need to load, parse and execute.

A more traditional multi-page website, on the other hand, can benefit a lot from putting a CDN in front of it. Once the main dynamic resource has been generated, the rest of your assets can be loaded from edge servers closer to your users. When RTT is on the order of 10ms, it doesn't really matter whether you push or pull.


Sure there are scenarios where it doesn't really matter, but that is not everyone.

Edge caching is great if resources can be shared. Sometimes resources are per-user.

SPAs that involve dynamic per-user content still have to load it to show the user anything interesting.

Edge caching is definitely not 10ms for everyone. Depending on who you are targeting that could be a reasonable assumption, but it is definitely not true of the world at large.


I was indeed reading that while taking notes. The comment is now updated with those.

I didn't see in the post what you said its tl;dr is supposed to be; maybe I read over it.


> Why wasn't push adopted more widely?

It wasn't given enough time, would be my guess. The first Apache release that included HTTP/2 support was in late 2015, the same year the HTTP/2 RFC finished going through the process. In 2017 there was still browser inconsistency in how server push was handled (one of the reference articles in the sibling comment link). nginx implemented support for push somewhere around 2018 (based on Wikipedia).

All those changes need to trickle down: versions available in stable distros, config defaults, etc. The first time I enabled HTTP/2 on my Apache servers was in 2020 (and there are likely application servers out there that still don't support it directly in 2024).


It’s really hard to architect around http push. Say I have a website in Python/Django, where I know that a page uses certain images. In order to use http push, my app would need to stream those images itself, rather than let a CDN or even Nginx do it.

I suppose that some solution could have been implemented so that my Python app can tell nginx to stream certain static resources over a given connection… but the tools for that never surfaced. Most HTTP frameworks don’t support push.


Can you explain what problem it solves better than Link headers, WebSockets, or SSE?


WebSockets and SSE are completely unrelated. Link headers can be used for preload, but until 103 Early Hints they could not be sent before the body content was processed on the server (unless your body content did not affect your headers, which is quite rare).


… in Firefox.


It seems to be removed everywhere else already.


yeah, but the submission title could be clearer.


Related (I think). Others?

Removing HTTP/2 Server Push from Chrome - https://news.ycombinator.com/item?id=32522926 - Aug 2022 (201 comments)

A Study of HTTP/2’s Server Push Performance Potential - https://news.ycombinator.com/item?id=32097013 - July 2022 (2 comments)

HTTP/2 Push is dead - https://news.ycombinator.com/item?id=25283971 - Dec 2020 (168 comments)

Blink: Intent to Remove: HTTP/2 and gQUIC server push - https://news.ycombinator.com/item?id=25064855 - Nov 2020 (133 comments)

We're considering removing HTTP/2 Server Push support - https://news.ycombinator.com/item?id=24591815 - Sept 2020 (2 comments)

Performance Testing HTTP/1.1 vs. HTTP/2 vs. HTTP/2 and Server Push for REST APIs - https://news.ycombinator.com/item?id=21937799 - Jan 2020 (71 comments)

How HTTP/2 Pushes the Web - https://news.ycombinator.com/item?id=18216495 - Oct 2018 (16 comments)

Nginx HTTP/2 server push support - https://news.ycombinator.com/item?id=16365413 - Feb 2018 (63 comments)

HTTP/2 Server Push on Netlify - https://news.ycombinator.com/item?id=14798271 - July 2017 (23 comments)

The browser bugs and edge cases of HTTP/2 push - https://news.ycombinator.com/item?id=14445728 - May 2017 (20 comments)

A Guide to HTTP/2 Server Push - https://news.ycombinator.com/item?id=14077955 - April 2017 (59 comments)

HTTP/2 Server Push - https://news.ycombinator.com/item?id=13990074 - March 2017 (2 comments)

HTTP/2 Server Push and ASP.NET MVC – Cache Digest - https://news.ycombinator.com/item?id=13659962 - Feb 2017 (9 comments)

Accelerating Node.js Applications with HTTP/2 Server Push - https://news.ycombinator.com/item?id=12296922 - Aug 2016 (6 comments)

Rules of Thumb for HTTP/2 Push - https://news.ycombinator.com/item?id=12224258 - Aug 2016 (25 comments)

Google's Rules of Thumb for HTTP/2 Push - https://news.ycombinator.com/item?id=12223352 - Aug 2016 (2 comments)

HTTP/2 Protocol for iOS Push Notifications - https://news.ycombinator.com/item?id=11175980 - Feb 2016 (13 comments)


Well I'm not surprised. This is usually what happens when you take an existing protocol meant for a specific purpose and try to tack on additional functionality on top which has little to do with the original design goals of the protocol; you're almost always better off starting out with a low level protocol and going straight for your use case. In that respect, I think WebSockets already solved the problem of bidirectional communication elegantly.

In terms of reducing latency when loading deep file/script hierarchies, the <link> tag with rel="preload" or rel="modulepreload" is an excellent, simple construct.


HTTP/2 Push would have fit the exact same use case as preload links with reduced latency.

The whole point was for the server to send the content that should be preloaded as a header so it could be preloaded before the link tag was ever sent or parsed. This is less useful, in my opinion, for pages that support streaming, but any request that hangs until the entire HTML page has been rendered server-side could see noticeable gains.


> HTTP/2 Push would have fit the exact same use case as preload links with reduced latency.

I’m no expert, but it sounds like this still breaks a lot of assumptions about the 1:1 request-response based protocol that http is. If there’s no request, headers are… inferred? And the determination of what needs to be sent requires the server to model relationships between resources, as well as infer user-agent behaviors.

Anyway, I never looked into it deeply. But I always felt this was a massive complexity burden spanning browsers and servers, for essentially bootstrapping cache performance only. Given that websites often don’t even cache properly, don’t bundle their bloated JS, etc, there’s so much low hanging fruit that needs addressing first anyway.

So I'm happy it's dying. Bang for the buck is crucial, and complexity creep is suffocation.


HTTP/2 was a new low-level protocol, and HTTP push was never really about bidirectional communication.


Is HTTP not bidirectional communication?


I was just thinking about it yesterday! It would be a nice way to implement REST APIs for related resources.

For example, if we have a /books collection with related authors for each book, the client could request books with authors. The server would then fetch the books and authors in a single DB request, return just the books, and push each author as an additional response.

Of course, this can be implemented with other mechanisms (JSON:API [0] comes to mind), but this seemed like a nice idea to explore. Guess we’ll never know!

[0]: https://jsonapi.org/
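
A rough sketch of what that could have looked like (Go, hypothetical endpoints and data, assuming net/http's Pusher):

    package main

    import (
        "encoding/json"
        "net/http"
        "strings"
    )

    type Book struct {
        ID       string `json:"id"`
        Title    string `json:"title"`
        AuthorID string `json:"author_id"`
    }

    func booksHandler(w http.ResponseWriter, r *http.Request) {
        // Stand-in for the single DB query fetching books plus their authors.
        books := []Book{{ID: "1", Title: "Example", AuthorID: "42"}}

        // Push each related author so a follow-up GET /authors/42 is served
        // from the push cache instead of costing another round trip.
        if pusher, ok := w.(http.Pusher); ok {
            for _, b := range books {
                _ = pusher.Push("/authors/"+b.AuthorID, nil)
            }
        }

        w.Header().Set("Content-Type", "application/json")
        _ = json.NewEncoder(w).Encode(books)
    }

    func authorHandler(w http.ResponseWriter, r *http.Request) {
        id := strings.TrimPrefix(r.URL.Path, "/authors/")
        w.Header().Set("Content-Type", "application/json")
        _ = json.NewEncoder(w).Encode(map[string]string{"id": id, "name": "Example Author"})
    }

    func main() {
        http.HandleFunc("/books", booksHandler)
        http.HandleFunc("/authors/", authorHandler)
        http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil) // h2 needs TLS
    }

One caveat: at least in Go, Push re-dispatches the pushed path through the normal handlers, so actually keeping it to one DB query would need the books handler to share its already-fetched author data with the authors handler somehow.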


HTTP/2 Push was really useful when we wanted to avoid WebSockets, Ajax polling, and long polling.

The problem with adoption was that WebSockets kept getting more popular and devs stopped caring about HTTP/2 altogether. But there are many cases where WebSockets are overkill; even a text chat application doesn't need WebSockets.

Similar functionality can already be achieved with SSE/EventSource, although HTTP/2 Push was more powerful.


Are you talking about Server-sent Events (https://developer.mozilla.org/en-US/docs/Web/API/Server-sent...)? Which can be implemented under HTTP/1 too.

HTTP/2 Server Push is another thing. It allows the server to send additional related resources to a client without the client explicitly requesting them.
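
For contrast, a minimal SSE endpoint is just a long-lived streaming response; a Go sketch (hypothetical path, works over HTTP/1.1 too):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func events(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/event-stream")
        w.Header().Set("Cache-Control", "no-cache")
        flusher, ok := w.(http.Flusher)
        if !ok {
            http.Error(w, "streaming unsupported", http.StatusInternalServerError)
            return
        }
        for i := 0; ; i++ {
            // Each "data:" block is delivered to the browser's EventSource.
            fmt.Fprintf(w, "data: tick %d\n\n", i)
            flusher.Flush()
            select {
            case <-r.Context().Done(): // client disconnected
                return
            case <-time.After(time.Second):
            }
        }
    }

    func main() {
        http.HandleFunc("/events", events)
        http.ListenAndServe(":8080", nil)
    }

On the browser side that's consumed with new EventSource("/events"), nothing more.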


Not wrong, but for context: WebSockets are HTTP/1.1 only. They cannot use HTTP/2 at all.


I think it's more of a problem on the implementation side.

IETF RFC 8441 (https://www.rfc-editor.org/rfc/rfc8441.html) described a way to bootstrap WebSocket under HTTP/2.

That said, I don't know whether or not people are actually doing it in the wild.


As far as web frameworks go, WebSockets over HTTP/2 are supported by default in, for example, ASP.NET (https://learn.microsoft.com/en-us/aspnet/core/fundamentals/w...). There is, however, still a surprising number of languages/libraries/frameworks/servers where the support is missing (e.g. Go: https://github.com/golang/go/issues/49918).

On the client side, both Chrome (https://chromestatus.com/feature/6251293127475200) and Firefox will use it when available, but I have encountered some issues with cross-origin support (https://issues.chromium.org/issues/363015174)


GP here. This was completely new to me despite working with it day to day. Thanks for sharing. This gives me some hope about having websockets be more in line with the rest of the stack.


In what practical sense are websockets overkill? As far as I know, the resource usage isn't really all that different. It also isn't as if websockets are that difficult to use these days.

I am also skeptical of the "doesn't need WebSockets" statement. It is technically true that you can use other technologies. But WebSockets provide bidirectional communication, while Ajax polling and long polling are used for periodic updates. HTTP/2 Push is more suited for pushing resources or data from the server to the client; as far as my understanding goes, the use case there is specifically pushing resources the server "knows" the client will need, without the client requesting them. I am sure that with some creative code you could also use this to implement a chat of sorts, but that is a bit beside the point.


Is there any reason to use websockets over webtransport now?



Probably the biggest one is cross-browser support; AFAIK, WebTransport still isn't available in Safari.


Perhaps I misunderstand, but I feel like HTTP/2 push was trying to achieve something very different from WebSockets/Ajax polling/long polling/SSE. They don't seem comparable to me at all.


That's an interesting point; would you mind elaborating on how you think the goals of HTTP/2 Push differed?

Even if it is a misunderstanding, it would still be useful to consider.


HTTP/2 push is for hiding connection latency by preloading.

The other things are for bidirectional communication with a server.

I would consider them very much apples and oranges. It's not just that they are designed for different purposes; I don't think it is even possible to use server push for the purpose WebSockets are used for in the browser.


I don't know how to ask this without being quite direct. But why would you ask someone that when the differences are quite obvious? Certainly between web sockets (bidirectional, client initiated) and HTTP2 push (server initiated, one directional).

Are you asking because you basically want to know more about the technologies? In that case why not frame it like that?



