We did something similar at MSN in the 90s - once a dialup ISP was spending enough on transit to us, we’d install a dedicated circuit directly from MSN to that mom & pop ISP. It was nice for everyone.
It's a net win for the ISP due to cost savings from not having to haul that traffic across their network/to the Internet.
You're right that powering racks isn't free either though: if the cache operator doesn't utilize the rack well and isn't offloading enough traffic to it, the ISP will give them the boot (as in, "we wheeled your rack out to our docking bay, come pick up your piece of junk if you want it back"). It's uncommon since _most_ of the orgs running these racks are competent, but does happen from time to time.
Also, even though it's a net win for the ISPs, there are still cases where the operator ends up paying them a fee. This has less to do with the economics of edge caching within an ISP network and more to do with the bargaining power certain ISPs have. The 2014 Netflix/Comcast peering agreement is a good example of how those things sometimes pan out.
MSN paid. We also carried all traffic for Microsoft properties; if you’re an ISP, whatever you were paying for Microsoft data went to zero and your other links had more capacity.
A neat secret about these cache devices involves ISPs that charge for usage: the data the customers pull comes straight from the ISP's central office, not from "the internet" at all.
My parents have a (crappy) wireless internet provider. During most evenings and weekends, they're lucky to get even 0.5Mbps. At the same time, Netflix works fine, and the speed on fast.com (Netflix-run) will be a few Mbps (I don't remember exactly). In the daytime on weekdays, they usually get a few Mbps from any speed test and generally everything feels fast.
This lets me deduce the ISP is way over-subscribed on their upstream internet connection, but also has a Netflix appliance.
What's frustrating is if they complain enough, the company will send a tech out who will "adjust" their antenna, or suggest it's a line-of-sight problem and they need a bigger tower, should cut down some trees, etc.
> "My parents have a (crappy) wireless internet provider. During most evenings and weekend, they're lucky to get even 0.5Mbps."
Sounds like my experience on Three UK here in London. Just awful! Vodafone and O2 are at least 20X faster at peak times despite Three actually having a significantly stronger 4G signal at my flat. Absolute joke of a network. I'm so angry that I was basically tricked into a 12 month contract with them.
The problem with Three is they don't have enough spectrum, especially in densely populated areas; they have definitely invested a lot in backhaul capacity. 99% of cellular data problems have nothing to do with backhaul or peering; it's spectrum usage.
Some good news: they are aggressively rolling out SDL (supplementary downlink), which will really improve things. They also announced today that their 5G network (which has by far the most spectrum of all UK carriers) has gone live. I would expect insanely good speeds on that network.
But yeah, turns out selling unlimited data/tethering/roaming/text/minutes for as low as £11/month isn't a good business strategy.
Generally though: EE is good everywhere. Vodafone is good in North London, O2 better in South London. Three generally awful everywhere.
Three’s 5G has actually been running since last summer: I should know, I was in one of the pilot areas in London and signed up for their home broadband offering on day one.
No complaints, 100-300Mbps on average.
They seem to have done great at the 5G spectrum auctions; they’re the only UK carrier with 100+ MHz.
I’m also using them for mobile 4G and yes that’s often awful, as others are reporting. I should probably switch to giffgaff (safer to use different providers anyway in case Three has a catastrophic outage), but Three’s "Feel at home" free roaming in many countries beyond the EU (including the US) is super handy and has no equivalent that I know of.
> "Three’s 5G has actually been running since last summer : I should know, I was in one of the pilot areas in London and signed up for their home broadband offering on day one."
But this is not really the Three network. It's a separate network with a different network ID and cannot be accessed with ordinary Three devices/SIM cards.
It's based on the network formerly known as "Relish", which was bought by Three in 2017. The areas covered by Three Broadband's 5G are basically exactly the same areas which were always covered by Relish. They just upgraded the equipment to use 5G radio.
Relish held a lot of spectrum in the 3.5 GHz (n78) band. Three, by arrangement with Ofcom, added Relish's holding to their own 3.5 GHz spectrum won at auction to give them the contiguous 100 MHz.
There is an additional 200 MHz of 5G spectrum being auctioned this year, which should bring the other operators' 5G holdings up to comparable levels with Three's.
> But this is not really the Three network. It's a separate network with a different network ID and cannot be accessed with ordinary Three devices/SIM cards.
I'm not in the UK and ask purely out of curiosity, but what SIM/configuration do you use then?
I've found 5G speeds on Vodafone to be quite variable. I have a weak 5G (Vodafone) signal at my flat, and at times I've seen it as high as 130 Mbit, but at other times it drops to 1-2 Mbit or vanishes completely. Whereas on 4G I get a consistent 20-30 Mbit all day long.
I suppose this will improve when they build out the network more, but even in locations with strong 5G signal the speed seems to vary a lot.
Seems like 3's 5G network has still not actually launched, but they're now promising it "by the end of February".
Is 3's SDL in addition to their carrier aggregation ("4G+") rollout? Because I already have that and it doesn't seem to have helped much!
Keep in mind that current 5G deployments in the UK (and I think the world) bond 4G and 5G together. This often causes problems if the 4G is heavily congested. Vodafone's London 5G deployment is still pretty spotty; it will get a lot better. Plus it uses a very high frequency band (3.5 GHz). When the 700 MHz band gets added in a couple of years, it will be a lot more consistent.
Yes, SDL is on LTE1500, LTE Band 32. It is another carrier to aggregate with the other ones. Both 3 and Vodafone have some spectrum in that band, IIRC 3 has a lot more. It will really help with overloaded cells in the short term.
My router (Huawei E6878-870) apparently supports LTE band 32, but I've never seen it used on either Three or Vodafone. At my flat, Three seems to always use band 3 (1800 MHz).
Vodafone floats between bands 7 (20 MHz channel!), 20, and 1, but gets good speeds on any of them.
Three is always slow at peak times, sometimes unusably so, despite having by far the strongest signal.
This sounds a lot like Freedom mobile/Wind here in Canada. Although it was the opposite, the cities had the good coverage but if you drove an hour out of town it was either offline or really slow.
The network sucked at first, but they are slowly improving things and offer far better customer service and contracts than the existing 2-3 monopolies.
But sadly if you really want the cutting edge of speed and spectrum you need to use one of the monopolies.
That's interesting. In Australia, if you happen to use Telstra 4G (the only provider that is reliable for most of the country), I've found that speed in remote towns can be a lot faster than in any of the capital cities. It's surprising, as the remote towns can be either on a satellite link or at the end of 1000 km of fibre.
> The problem with Three is they don't have enough spectrum especially in densely populated areas
Isn’t this argument basically debunked? Places like South Korea and Japan have no issues whatsoever with extremely fast wireless internet at a fraction of the price.
It's pretty clear that Three's spectrum is woefully lacking compared to the others, except on 5G and the LTE SDL band 32 (both not widely deployed yet).
Especially problematic when you consider that Three are the ones who have been selling super cheap unlimited data packages for the last couple of years!
On one hand you have people saying 4G is good enough and that 5G is only hype (it is overhyped, but surely not mere hype); on the other hand you have people complaining about 4G capacity.
It seems most people discussing 4G or 5G have absolutely no idea what capacity and bandwidth are about. They only care about absolute speed.
I've heard that in the olden days European ISPs would sometimes charge different rates for transatlantic traffic and other traffic, and similarly in Australia for local versus overseas traffic, but generally nobody breaks out traffic charges by destination: it's difficult to manage, it's confusing for customers, and it's hard for customers to find out which IPs will get preferred rates or how to direct their traffic there.
Charging different rates for different traffic happens here in New Zealand. I generally see variation on a theme of ‘bonus free social media traffic’. It’s all very depressing. Eg: https://www.spark.co.nz/shop/mobile-plans/socialiser/
Well, as a subscriber everything beyond your cable modem is "the internet" and you don't care how the ISP's network is set up internally. CDNs, netflix, etc have all been embedding devices in carrier networks for years.
Some of your ISP's connectivity probably comes via settlement-free peering, yet we pay for that too. None of this is a secret, it's just how the internet's plumbing works.
Qwilt and similar products are really neat, but I don't think they've entirely panned out. The idea of doing transparent caching at the network edge was hurt by the move to TLS everywhere, and there's a lot less cost incentive when things like Open Connect exist (e.g. if Netflix will give me a rack that offloads XX% of my total traffic, how much additional traffic does this transparent cache need to offload before it's cost-effective for me?). Without transparent caching the economics get trickier.
It seems like their niche is as an easy-to-deploy, more traditional-looking CDN that can run out in the RAN, which does still have value.
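A back-of-envelope version of that break-even question, with entirely made-up numbers (a Python sketch; none of these figures come from Qwilt, Netflix, or any real ISP):

    # Hypothetical figures for illustration only.
    transit_cost_per_gbps = 500.0    # $/month per Gbps of transit
    total_traffic_gbps = 100.0       # ISP's peak traffic
    openconnect_offload = 0.30       # fraction already served by e.g. Open Connect
    cache_cost_per_month = 8000.0    # power/space/ops for the transparent cache

    remaining_gbps = total_traffic_gbps * (1 - openconnect_offload)
    # The transparent cache only pays off if it offloads enough of what's left:
    breakeven_gbps = cache_cost_per_month / transit_cost_per_gbps
    print(f"cache must offload {breakeven_gbps:.0f} Gbps, i.e. "
          f"{breakeven_gbps / remaining_gbps:.0%} of the remaining {remaining_gbps:.0f} Gbps")

With these numbers the cache has to capture 16 Gbps, nearly a quarter of what Open Connect left behind, before it breaks even.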
For now, I think the majority of bandwidth would be the zillions of iPhone users doing backups and app updates all the time. However, this is definitely a way to pave the path for Apple TV+.
That’s a sure-fire way of getting data loss. Wouldn’t it be much easier to just upload the backups from users’ devices at, idk, night? iCloud requires your device to be locked, connected to WiFi, and charging, so I imagine that already shifts the traffic to off-peak hours.
Not sure if that is still the case. macOS Server had this function: if you set it up on your network, it would cache all your backups, app updates, iCloud files, and photos on your Mac.
Yes, but the peering/traffic teams are the ones that generally manage these embedded cache deployments, which is why the related info is on the same pages as the peering details or in PeeringDB. All the ones I linked do actually have edge cache racks, not just peering arrangements.
Do you have a rough overview of how the requirements differ across the similar edge programs linked above? If there's one thing Apple does better, it's providing a clean, straightforward page with all the relevant info and little fluff.
There are some facility and org requirements depending on the various companies' rack setups, but most of that's standard.
The main requirement is how much traffic you need to be doing with the peer before they give you a rack - Apple states 25Gbps; more mature programs like Netflix are 5Gbps. Sometimes that's negotiable and they'll hand them out for even smaller traffic amounts (I've heard less than 1Gbps for some of those listed, though won't mention which ones). Like most things in this area of networking, there's a lot of variability and personal contacts involved.
That's not really surprising. I imagine setting up a new deployment isn't terribly expensive (now that the R&D is done), but given you are going to be spending a few weeks working with the ISP to get it up and running, you probably want it to be with someone you get on well with.
It's not only that. The community is comparatively small and tight-knit, one's technical abilities can be judged relatively comfortably from afar, and social currency matters a lot. Everyone with a stake in The Network already has to trust and be trusted by their peers. In that situation, informal agreements seem to work out well.
The website isn't really targeted at the people deploying these. If you are pushing more than 10 Gbps into another ASN you'll start getting emails from their peering coordinators.
Simpler tricks along the same lines have been described on, IIRC, The Old New Thing (Raymond Chen's blog for Microsoft), although it's possible it was Michael Kaplan, in which case you'd have to dredge the Internet Archive or something because Microsoft deleted Kaplan's blog years ago, then fired him, and then he died (he had a degenerative disease, although getting fired basically as an exercise in plumping the numbers for the stock market probably isn't good for you either).
Popular app and OS updates can be cached on a local Mac, with the recommended machine being on the network using a wired connection and always connected (desktop/Mac mini)
If you're a sysadmin, you need to take an old Mac mini and put this on your Wi-Fi or guest Wi-Fi network. It alleviates so many support tickets on iOS update release days.
It caches more than updates. I can see it caching music, video, and even iCloud data like my photos or iCloud Drive content. For example, I can set up a new Mac and have it sync the photos library from iCloud and see it all being served from the cache. Pretty cool!
This seems smart. Presumably it initially gives everyone a better experience with App Store apps, and over time becomes critical infrastructure for Apple TV+ in the same way Netflix use caches in ISPs.
Definitely critical for Apple TV+ if they want to compete with Netflix and YouTube. Those two together account for a quarter of the total bandwidth on the internet; I'm not sure Apple TV+ is anywhere close to that yet, but this is long-term planning.
Unless there is some kind of limitation on the 1-year free Apple TV+ offer (when buying a new device), it is a terrible service and not even a competitor to Netflix. The prices for movies are abnormal: Predator (a 23-year-old movie) is 4.99 EUR to rent. For comparison, a small-market (<2 million people) local streaming service offers the same movie for rent for 1.90 EUR.
It's a very good question! Figuring out security for these remote racks is one of the hardest parts. For some CDNs their larger POPs will be locked cages with security cameras and all, but these small deployments are significantly more exposed.
Exactly how SSL is handled (or not) varies by provider, but one thing I'll mention is that these will typically never have a cert so important that it can't be easily revoked, and the most important data will likely not be flowing across them - exposure is very limited. Using video streaming as an example, one option is to only do TCP termination for example.com (or to not even terminate that domain on the local cache, but back at your main datacenter), then use subdomains with individual certs for the local cache (eg. isp1.cache.example.com). In that case, service calls like login, retrieving the manifest, etc. are secured by the certs you're keeping in your primary dc, then the manifest has a set of https://isp1.cache.example.com URLs pointing to the local cache only for video segments.
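To make that concrete, here's a toy sketch in Python of how such a manifest could be assembled (domains, paths, and field names are all hypothetical, not any provider's actual scheme):

    # Service calls stay on the origin, whose certs never leave the main DC;
    # only bulk video segments point at the cache inside the user's ISP.
    def build_manifest(user_isp: str, video_id: str) -> dict:
        origin = "https://api.example.com"
        edge = f"https://{user_isp}.cache.example.com"  # easily-revoked per-ISP cert
        return {
            "license_url": f"{origin}/v1/license/{video_id}",
            "segments": [f"{edge}/v1/{video_id}/seg{n}.m4s" for n in range(1, 4)],
        }

    print(build_manifest("isp1", "show42"))

If isp1's cert is ever compromised, only that one subdomain needs to be burned and rolled; login and manifest traffic never touched the rack.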
Another tricky aspect is making sure that your main network treats them as untrusted so someone with local access can't use it to get a foothold into the rest of your infra.
>these will typically never have a cert so important that it can't be easily revoked
I think this is specifically addressed with the introduction of TLS Delegated Credentials[1]. This allows the CDN edge to use a very short-lived credential in place of the certificate's private key.
It's already supported in evergreen browsers and in certificate profiles from commercial CAs like Digicert.
Yup! Some of the names on that draft were people who have previously worked on building these sorts of edge racks, so their experience with this infra helped shape the proposal. It'll be great once it's broadly supported, but that's going to take a while (or, depending on your client mix, an eternity).
Correct me if I'm wrong, but my understanding is that revoking a certificate is usually not a very effective mitigation for a stolen cert, since many clients don't check for revocation.
That's correct - 'revocation' in this case would likely involve rolling the DNS name to something different. Since these racks tend to have precise targeting (i.e. not DNS GSLB) and non-user-facing names, there's more flexibility.
The delegated creds draft that regecks mentioned is also relevant. That will make issuing lighter weight, so this sort of 'burn the cert and roll the DNS name' procedure becomes significantly cheaper operationally.
As long as it's dumb content, it doesn't need SSL signed by Apple.
Apple owns the intelligent layer, the one that holds the API. Once you query it, it answers with the location and the hash, which allows you to download the content from the distributed box and safely verify it.
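A minimal sketch of that flow in Python, assuming a trusted API response (fetched over TLS) carrying hypothetical edge_url and sha256 fields; the edge box itself never has to be trusted:

    import hashlib
    import urllib.request

    def fetch_verified(api_response: dict) -> bytes:
        # Bulk download from the ISP-hosted box (URL came from the trusted API).
        with urllib.request.urlopen(api_response["edge_url"]) as resp:
            blob = resp.read()
        # Verify against the hash the trusted API handed us over TLS.
        if hashlib.sha256(blob).hexdigest() != api_response["sha256"]:
            raise ValueError("edge cache returned tampered or corrupt content")
        return blob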
- TLS is still important to stop tampering of video content or images, as well as user privacy over what content was specifically viewed.
- Some ISPs have (and still do) intercept plaintext video content -> transcode to a much lower bitrate -> cache that for their users. That hurts the content provider, as they lose visibility (logs/metrics), and the user who may suffer a reduced experience that the content provider can’t easily fix. End-to-end TLS solves that.
My whole point was about using a hash to verify the integrity of the data, so no, they wouldn't be able to tamper with anything; that's the whole point of it.
You bring up a good point about user privacy, and for sure a key can help with that, but at the end of the day there's not much you can do about this once the physical server is somewhere else. TLS or not, you'll need to trust whoever physically holds that server not to violate user privacy.
I would still suggest a TLS connection for that server, but it would most probably be self-signed with a different root certificate, to prevent someone else from being able to pass themselves off as Apple for content that isn't verified against hashes coming from platforms owned by Apple.
> That still breaks the experience. Now you have tampered content AND broken clients.
You won't have tampered content if the data is rejected; that's absurd. You can do the same with TLS, by the way; that's the whole point of TLS: being able to verify (and thus accept or reject) data.
Sure, it breaks the client, but you can do that in any situation; you just have to unplug a wire ;).
Lots of people answering the technical question, but I have to wonder: why do you think ISPs would break into Apple's equipment just to extract private keys securing media files and software downloads? Do you think ISPs are typically in the business of violating the contracts and security of their partners? What would they gain by doing so?
Note that I understand the importance of securing private keys in general. I'm just asking about ISPs because you specifically mentioned them.
It's not the ISPs themselves, necessarily. You have a box that you own and that sits in someone else's network, where their security policies and practices apply. Those policies and practices are not going to be the same as yours. Perhaps more importantly, other people have physical access to the hardware. That includes employees of the ISP, but perhaps also 3rd parties (think multi-tenant datacenter, for example). That's why you should treat that box as being in a hostile environment.
Imagine you're a state-level intelligence agency: you can often force ISPs located/headquartered in your country to obey, and it's probably easier to infiltrate some small-ish out-of-country ISP than an Apple/Google/Amazon data center.
It's not necessarily that the ISPs will want to break into equipment, just that they are the weakest link bad actors can use to target Apple/Apple Customers.
> why do you think ISPs would break into Apple's equipment just to extract private keys securing media files and software downloads
Being able to sign some things as Apple / MITM some Apple URLs strikes me as interesting enough to get state-security level interest, at which point you fall back to ISPs being made of people, and people being coercible and bribeable.
I can't be sure, but:
- It's possible to deliver SSL without putting the keys on the boxes; of course there's a need for a network connection to the servers holding the keys. It's a tech pioneered by Cloudflare; can't remember the name.
- or, there's no SSL for the content, which is signed otherwise (like Ubuntu or Debian repos).
It (most likely) works like this: the device requests the content directly from the edge server (several strategies are available to accomplish this: you can ask a master server for the content, which issues a redirect to the edge server; Apple could use a single hostname and route via DNS; they could use a single name and IP and route via BGP; etc.). The edge server, which is the terminating end of the TLS connection, is aware of the content being requested. Now the edge server can determine whether that content is available locally and serve it back to the user, or request it from the origin server first and then serve it back.
The problem isn't with terminating SSL, it's with keeping your keys safe on exposed infrastructure.
A single domain name with DNS to route is uncommon because it doesn't give you fine-grained control of load - you need to be mindful of the rack's capacity, and you also need to make sure that most of that ISP's customers go to the rack while people who aren't that ISP's customers don't.
Anycasting isn't going to be great for traffic management or long-lived TCP conns, and if you can avoid the complexity of each rack needing a bgp session into the ISP's network you're going to be much better off.
Typically this is going to be directly routed to the rack via a unique DNS name after some form of service call.
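A sketch of what that "service call returning a rack-specific DNS name" might look like, in Python; the prefixes, hostnames, and capacity flags are all hypothetical:

    import ipaddress

    RACKS = {
        # ISP prefix -> (rack hostname, rack currently has spare capacity?)
        ipaddress.ip_network("203.0.113.0/24"): ("isp1.cache.example.com", True),
        ipaddress.ip_network("198.51.100.0/24"): ("isp2.cache.example.com", False),
    }

    def pick_host(client_ip: str) -> str:
        addr = ipaddress.ip_address(client_ip)
        for prefix, (host, has_capacity) in RACKS.items():
            # Only that ISP's own customers get steered to its in-network rack.
            if addr in prefix and has_capacity:
                return host
        return "cdn.example.com"  # everyone else goes to the regular CDN

    print(pick_host("203.0.113.7"))   # isp1.cache.example.com
    print(pick_host("198.51.100.9"))  # cdn.example.com (rack full or foreign user)

Keeping this decision in a service call (rather than in DNS or anycast) is what gives the fine-grained load control mentioned above.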
This appears to be additional to CDNs. Apple is saying they will put their own servers directly into ISPs data centers as a cache for content requests that would normally go to a CDN. So the path for downloading data would go from your computer to your ISP and if the data you requested is cached on your ISPs Apple Edge Cache server then it wouldn't even go to a CDN. This is an attempt to address your complaint which may well be caused by slowness of traffic between the CDN and your ISP.
CDNs already use servers in your local ISP. That’s how they speed things up. This is no different than how Akamai or Netflix or Cloudflare builds their CDN. Most likely these Apple edge servers sit in cabinets and network segments adjacent to those other CDNs.
Sure, that is the best-case scenario. Apple using its own servers would have several advantages, including that they would presumably be able to tune/seed the cached content based on superior knowledge of consumption. They also wouldn't be competing for the same limited CDN space as others.
Not to be flippant, but I recognized your name and when I visited your bio I see you now work at Netflix. You could probably find an excellent answer as to why Apple would choose to start its own program rather than rely on existing CDN colocation by asking your internal team why Netflix does the same. My guess, without inside knowledge, is it is a combination of price and performance.
I know why Netflix does what it does, I helped build it. :)
What I'm saying is that Apple is probably already doing the same thing (using their own servers in colocations). They're just opening it up to the public now.
Well, edge cache doesn't appear to be brand new (I found https://www.c8k3.com/blog/caching-in-apples-new-cache-progra... which dates from Dec 2019) but it does appear to be an analogue to programs like the one you helped build at Netflix. So while they probably have been doing this in some limited fashion (e.g. through direct outreach), it is entirely possible your own ISP wasn't included. Hopefully your download speeds will increase if/when your ISP signs up for the program.
I'm not sure what you mean that this is being "opened up to the public now". This seems to be limited to ISPs as far as I can tell. I'm not sure who, other than an ISP, would be doing 25Gb/s of Apple-content related traffic.
Really? Where are you located? I get about 70MB/s (with a big B) from that Apple CDN. I’m on a 500Mb/s down/15Mb/s up residential cable connection. Yes, I’m aware that’s faster than the network speed I’m paying for.
You have to imagine that the design work done for that was based on an assumption that Apple would stick with the trashcan form factor long term, and that the first gen unit would at least get regular bumps to newer CPU and GPU options.
Must have been a heartbreaker sitting on all that infrastructure in July 2019 when the cheese grater was announced and the newest trashcans were still selling for full price with a six-year-old processor.
Apple gathered a few people from the press to tell them the trashcan wasn't working out in April 2017, and that a redesign was coming. Everyone who cared about the Mac Pro knew then.
Regular server racks have not, though. 1U has been 1U for a very long time... Didn't matter if it was an IBM server, Dell, HP, or custom built. This is why we have standards.
These poor saps bet BIG on a non-standard, and seem to have discovered why we like standards after all.
Because some of us have hope that Apple might be using high-performance commodity hardware with OS X, and maybe they’ll let us do that one day too, so we don’t have to spend $6000 for a Mac tower.
With a 25Gbps minimum peak, it seems like this is targeted at game apps (which load heavy assets) or short-video apps (which load 10-20MB video assets). If they have streaming capability, then probably more apps can take advantage of it.
For all other apps (which load static image assets and much smaller dynamic response payloads), meeting the 25Gbps minimum peak is going to be a challenge.
Let's do some rough math. Say your app needs to load 10MB of assets in every user session, and your users' network speed is not a constraint. Then the session arrival rate needed to drive 25Gbps of traffic is 25Gbps/10MB ≈ 313 new user sessions per second. If you want to sustain this for 5 minutes or so to register as a peak, that's 313 sessions/second * 5 minutes ≈ 94,000 user sessions in that window. And if your users realistically have 10Mbps of speed, each 10MB download takes 8 seconds, so you'd also have roughly 313 * 8 ≈ 2,500 downloads in flight at any moment!
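The same arithmetic, spelled out (a quick Python check using the figures assumed above):

    GBPS = 25e9            # required peak, bits/second
    SESSION_BYTES = 10e6   # assets per user session, bytes
    USER_BPS = 10e6        # per-user download speed, bits/second

    sessions_per_sec = GBPS / (SESSION_BYTES * 8)            # ~313 sessions/s
    sessions_per_peak = sessions_per_sec * 5 * 60            # ~94,000 over 5 minutes
    seconds_per_download = SESSION_BYTES * 8 / USER_BPS      # 8 s at 10 Mbps
    concurrent = sessions_per_sec * seconds_per_download     # ~2,500 in flight

    print(f"{sessions_per_sec:.1f}/s, {sessions_per_peak:.0f} per peak, {concurrent:.0f} concurrent")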
I think you got the use case of such edge caches wrong. You don't get one of these for a single app; most companies who participate in such edge cache programs don't even have apps. These are boxes that internet service providers plug into their networks, usually close to their customers, hence the name "edge cache". The cache owner then routes user requests for big, static content (video streams, large apps, OS updates, stuff like that) coming from networks equipped with edge caches to the caches located within those networks, instead of to a third-party CDN or the cache owner's own server farms, effectively reducing the traffic that particular ISP has to haul to the edge cache owner. It is this aggregate traffic that has to meet a 25Gbps peak minimum to qualify for getting an Apple edge cache.
You apply to ICANN. The registration paperwork and associated work will be fairly expensive; figure on spending $1M for a relatively uncontentious proposal, like maybe you want .pier25, which seems plausible unless there's some big community out there which feels they own it (see .amazon).
There are two routes; the key differentiator is whether the resulting name hierarchy is public (like .com) or yours alone, jealously guarded from all others (like .mil).
If the former, ICANN will also require you to do a bunch of legal work to ensure that when you fail (because realistically you will), any names can be scooped up and preserved by a new operator of the TLD.
If the latter, you're likely to have a tougher time defending why you should own this, unless you're a huge global brand.
Then you need to either spend a lot of money (again, estimate $1M a year, at least at first) yourself on infrastructure to serve your TLD, or pay somebody else with relevant experience to do it for you.
A surprising number of companies bought vanity TLDs which they then don't use at all because of course they're much less convenient than a short name in an existing TLD. For example the KerryProperties TLD isn't used at all, kerryprops.com is much easier.
$ whois goog
% IANA WHOIS server
% for more information on IANA, visit http://www.iana.org
% This query returned 1 object
domain: GOOG
organisation: Charleston Road Registry Inc.
address: 1600 Amphitheatre Parkway Mountain View, CA 94043 US
address: United States
contact: administrative
name: Domains Policy and Compliance
organisation: Google Inc
address: 601 N. 34th Street
address: Seattle, WA 98103
address: United States
phone: 1 202 642 2325
fax-no: 1 650 492 5631
e-mail: iana-contact@google.com
Google also own .new which seems like the sort of ridiculously expensive experiment only one of the big startups with cash to burn would do. It allows you to type docs.new or sheets.new (etc) into your address bar and instantly get an empty document.
Sadly it always defaults to the first logged in user, which makes it almost entirely useless to me.
> If your business meets these requirements, request an invitation.
Doesn't that defeat the purpose of being "invitation only"? To me at least, that implies the other party knows who they want to invite - that is, invitation only implies hand-picked, or pre-chosen by some prior criteria. If it's exclusive to select ISPs that meet the criteria and they have to apply, why not just say that instead of using wording that requires additional explanation to get past people's likely initial interpretation?
It's a "Hey can I come to your party?" and "I'll think about it and get back to you if you can" versus a "I'm coming to your party despite not being asked"
I know what they're trying to do; it just feels amateurish and jarring because it's jamming two concepts that are somewhat opposed together.
The whole point of saying the party is invitation only is so random people don't ask you to come.
I think the way this is normally done is to say "we're being very selective about who we partner with at this point. Please apply if you think you qualify and we'll get back to you."
That they've chosen not to do the obvious seems purposeful, and that it was done so jarringly on purpose is odd.
Why now? Apple has had hundreds of millions of iOS users for years and is fast approaching a billion. Why didn't they do this earlier? Or did they, and it just wasn't public?
What are the chances this is a Mac Pro rack, even though it is highly unlikely to be running macOS?
Do they cache iCloud backups, photos, and uploads with these edge appliances, the same as macOS Server?
This is for large consumer ISPs, obviously, but Mac OS X Server used to have an optional cache feature for iOS apps and OS X software updates that was relevant for enterprise and campus networks (or even a home network like mine with a 3:1 iDevice to human ratio...)
I am absolutely not informed on the dynamics of these technologies and business agreements so pardon my ignorance.
Q: are you getting paid for allocating such a cache? Or should you feel honoured that Apple thinks you are eligible to freely distribute their content?
This is not related to net neutrality. Typically the way things work is that your end-user ISP is connected to a "Tier 1" or "Tier 2" ISP. Then the website you want to visit is also connected to an ISP in a similar manner. To get packets from the end user to the website, the traffic has to transit over the Tier 1 and Tier 2 ISPs. Those ISPs do not offer this service for free. To lower costs and potentially improve latency, many websites are happy to connect directly to the end-user's ISPs.
Imagine that your Internet service is just an Ethernet cable that goes to a router in a datacenter. This is Apple offering to plug their servers into that router. Now you can get to their servers without going out to "the Internet" via a Tier 2 or Tier 1 ISP. That is where the word "internet" comes from -- interconnected networks. More connections is more internetting.
This is all super common. Many big companies are happy to peer with small ISPs if they're already in the same building.
Edit to add: The edge cache thing that this article is about is similar, but not quite the same as what I'm describing. Instead of connecting you to their network, they just put some of their servers in the same datacenter as your network. Even less latency!
Step X of competing with AWS, GCP, Azure:
ARM CPUs
Server-side Swift
FoundationDB
They’re spending way too much on AWS for iCloud, and probably i* distribution.
It has benefits for the ISP, too. I used to work at a small-medium regional ISP. Something like 50-70% of ALL traffic was streaming video and specifically Netflix. We weren't big enough to get into their version of this program at the time and have boxes dropped at our head ends. But it would have been very helpful. We literally had to upgrade gear and buy bigger pipes from our upstream ISP's to handle the volume of traffic from Netflix. Having the bulk of that content come out of a box already on net would have meant significant savings.
ISPs used to do that more commonly before HTTPS killed it, but it's an expensive service to operate: very high traffic, and if anything goes wrong your customers have a bad experience on the entire internet. The only way to do it is by intercepting TCP connections to port 80, so that system has to be as close to 100% uptime as you can manage.
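In miniature, that interception looks something like this (a Python sketch, not how production systems like Squid in intercept mode actually work; it assumes the ISP's router NATs customers' port-80 traffic to this box, and ignores Cache-Control, errors, and everything else real deployments must handle):

    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
    from urllib.request import urlopen

    cache = {}  # (host, path) -> (content_type, body)

    class InterceptingCache(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "")
            key = (host, self.path)
            if key not in cache:  # miss: fetch from the real origin once
                with urlopen(f"http://{host}{self.path}") as upstream:
                    cache[key] = (upstream.headers.get("Content-Type", ""),
                                  upstream.read())
            ctype, body = cache[key]
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # The router's NAT rule would point intercepted port-80 flows here.
    ThreadingHTTPServer(("", 8080), InterceptingCache).serve_forever()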
Site owners generally hated it, too, since tampering proxies were a perennial source of compatibility bugs and protocol violations even before you had things like the ones which tried to “optimize” images by recompressing them, giving everyone on that ISP a bad experience which you don't know about. Stack Exchange has a number of threads where someone was trying to figure out why only some customers complained of months-stale content (Hi, Telemundo!), low-quality images (Hi again, Telemundo!), mismatched languages, or truncated/corrupted contents, etc.
That makes me wonder... is there a process by which providers could issue certs to individual ISPs to let them intercept low/medium-security content like streaming?
Like, if Netflix is going to serve streams over TLS for philosophical/privacy-from-government/privacy-from-wifi reasons, but wants to let ISPs cache data, they could create a certificate for each ISP/organization and provide the keys to that org.
Then, if NF can identify you are coming from a particular ISP, they can have your content served from the ISP's netflix subdomain, and the ISP could intercept/cache/re-encrypt the data.
Could be, but it would be terrible if it went beyond impersonal data like Netflix content. One of the main benefits of SSL is that you don't have to trust your ISP or anyone else in-between with your data. I'm not a crypto expert, but your proposal sounds like a backdoor that could be abused.
The first kind is the ones that Apple wants - the likes of Comcast/Optimum/etc. These are the ISPs that have lots of eyeballs and a captive audience. If Apple has shitty connectivity to them, it is going to be bad for Apple (because the consumer cannot replace such an ISP - there's no real competition in those markets). These are also the ISPs that happen to peg their transit PNIs or free peering PNIs as much as possible, forcing others to buy paid peering to have non-shitty connectivity to them. These ISPs are going to charge Apple for colo, power, and bandwidth, and Apple is going to pay, and it is going to pay through the nose (just like Netflix does and just like Akamai does).
The other set of ISPs are the ones that want Apple a lot more than Apple wants them. There's no way Apple will pay for access to their customers. Those ISPs are going to give Apple space, power and even bandwidth for free. Hell, they may even have to pay Apple.
Source: did that for both ISP side and content provider side (at different times)
Because they will not have to pay to haul all that repetitive data from Apple's network. It's cheaper to pay the electricity for the free Apple-provided servers than to pay for all that data transit, I guess.
Because it saves them money on transit costs. The Internet Peering Playbook is a good resource if you want to learn about some of the economics behind programs like this one (http://drpeering.net/core/bookOutline.html).
Edit: the flip side of this relationship is also interesting - if Apple or whoever doesn't offload enough of their traffic to the rack, then it isn't cost effective and can really annoy the ISP. I've known some ISPs to boot these caches out of their network when the related company wasn't utilizing it effectively.
The question here is why you would host such a device for free instead of just having SFI peering with Apple (which is apparently required in this program anyway). I suspect that the incentive there mostly has nothing to do with transit costs and is mainly about capacity planning inside the ISP's network, and thus with fixed hardware and infrastructure costs (i.e. SFI is free, but the 100Gbps port on your router is not).
It may be possible to place caches deeper inside an ISP's network than the peering points. For example, it looks like Apple peers in Dallas but not Austin or Houston, so putting a cache in Austin would save bandwidth up to Dallas.
In my experience it will invariably be placed deeper into the network, that is at least into network core of the ISP, which typically isn't anywhere near the edge router placed in some wonderfully expensive colo space associated with some IXP.
Just to give perspective: Netflix and YouTube together constitute almost 25% of total internet traffic. If you could lower your cost of operation by 25% for free, why would you not?
These devices would be in addition to Apple's (likely large) colo footprint. Sending iOS updates to 1 billion+ devices is dissected in a paper here: https://arxiv.org/abs/1810.02978
The cost (monetary and environmental) of the electricity associated with transporting data is high (as a recent HN article illustrated). Anything to lower costs is good for everyone. ISPs are probably only in it for the monetary savings, but that's still good.
ISPs usually don't do it "for free." Placing caches on operator networks is typically part of a larger business agreement that may not involve exchange of cash, but will exchange something of value.
So... per the upthread comment, "something of value" is indeed being exchanged?
Don't be silly. Obviously this is a business arrangement from which both parties expect to benefit, it's governed by a binding contract, etc... It probably even does have a set of billing rates for stuff at the margin, even. But no, it probably doesn't involve much real money flowing.
None of this is "for free" in any kind of economic sense.
Can you give definitions for "a benefit" and "value" such that your statement is not self-contradicting? Those look like synonyms to me. And to the law, for that matter: there's nothing that requires a business contract be settled with money alone.
Do we really need to be this pedantic? There is no money exchanging hands for this transaction. Apple / YouTube / Netflix puts a rack of stuff in the ISP for no charge. With the right traffic patterns everybody wins:
- End users get a better experience because there are fewer hops between the content and their computer.
- ISP gets to pay less on transit
- The content provider also spends less on transit while providing a better end-user experience
That's the point though. Value isn't "money" -- that's not pedantry, it's a comparatively profound truth. Asserting that this business arrangement constitutes getting something "for free" is missing the point.
This shouldn't be downvoted. The relative benefits to the ISP and Apple are part of a negotiation. Pinning the price to 0 means they're only interested in ISPs who will not be too demanding.
Fascinating. A hyphen and a dash are two completely different things. A dash isn't ambiguous at all; it's literally the wrong word. Those who live in glass houses...
Sure, that’s the important question to ask about this announcement. Apple could literally cure cancer and someone would complain about the kerning on the announcement.
When literally all of your value comes from "being the most aesthetic tech company," and you intentionally target perfection, people are going to nitpick. Apple deserves this.
The product they're presenting isn't significant, and other people had already made fine points on it, so I pointed out something that I care about that was on-topic instead.
"Apple Edge Cache (AEC) is Apple supplied and managed hardware for deployment within our ISP partners networks to deliver certain Apple content directly to our shared customers."
If you're interested in similar edge cache programs:
https://openconnect.netflix.com/en/
https://www.facebook.com/peering/ (though I don't see FNA specifically mentioned there)
https://peering.google.com/#/options/google-global-cache
https://www.akamai.com/us/en/products/network-operator/akama...
https://peering.azurewebsites.net/peering/Caching
https://www.cloudflare.com/partners/peering-portal/