I categorize this as another reason why "just trust us" isn't acceptable when it comes to data privacy and ownership. Companies simply cannot be trusted to treat their users' data with respect when given the choice between profit and privacy.
People love to hate on Apple, but the fact is, they continue to release features that expose or restrict developers who abuse your privacy. The "walled garden" also ensures they apply a ton of checks to apps to better restrict abuses. Sometimes it's overly sensitive and bad things happen, but in general it's awesome that over time it becomes harder and harder for apps to get away with blatantly spying on you.
I am certainly happy about the steady pro-privacy progress. I personally consider Apple full of shit until two features are released:
1. Contact sharing needs a complete overhaul. Some apps need to have access to my contacts. I get this. But they only need the name and the phone number. They don't need the addresses, birthdays, and additional notes I put in my contacts.
Sure, I could keep a separate contacts app with the "metadata", but this would break the integration of Contacts in other Apple products.
2. Photos. It is either full access or no access. For example, I don't trust WhatsApp. I share photos through WhatsApp by opening the Photos app, tapping share, and sharing via WhatsApp. This works okay.
But generally speaking: why can't Contacts and Photos have the same sophisticated access control system as Health? Heck, make it optional for iPhone users, but at least offer it.
Agreed, tighter control over contact sharing would be nice, but I don't think it's malicious on Apple's part that this isn't possible. They've quite clearly shown they are on the side of user privacy, but they also tend to move at a fairly slow pace.
This seems to increase the amount of work a user has to do in practice. I suspect most users will end up sharing the entire library. From the link above:
> There's also the entirely new option Select Photos..., which leads the user through to the Camera Roll to pick one or more images to share. It is specifically images that users can opt to share, rather than albums.
> Which then means there is an issue that the next time a user wants to post an image, they find their selection confined to solely the ones they specified before. To change that and allow all or just different images, the user has to go to Settings on their iPhone.
My wishlist for fixing photo privacy on iOS:
1) Applications don't need to ask for permission to write photos to iOS folders. These get written to a separate album ($appName or $appDeveloperName by default), e.g. if you save a photo from Twitter it gets saved to your Twitter folder.
2) Photos taken by the iPhone Camera (presumably your personal photos) get stored in a special 'Camera' folder. Apps can ask for read/write permissions specifically here. E.g. a photo editing app like VSCO or Darkroom may only need read permission to begin with, but if it also wants to replace your photos in place with its edited versions, it'll need read+write permission as well.
3) What about apps that occasionally need access to photos (e.g. social media apps) but you don't want them to have access to everything? The solution is to implement an OS-level photo picker in iOS with a UI that can't be overridden and which makes clear you're sharing your selected photos with $appName. And ensure apps which want access to photos have to make the user go through that OS-level photo picker. (See the sketch below.)
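As it happens, iOS 14's new PHPickerViewController is pretty close to point 3: it runs out of process, needs no Photos permission, and only hands the app the items the user explicitly picks. A minimal sketch of wiring it up (the class and method names here are mine, not anything from the wishlist):

```swift
import PhotosUI
import UIKit

// Out-of-process system photo picker (iOS 14+). The picker UI is rendered by
// the OS, so the app can't override it, and no Photos permission is required.
final class ShareSheetViewController: UIViewController, PHPickerViewControllerDelegate {

    func presentSystemPhotoPicker() {
        var config = PHPickerConfiguration()
        config.filter = .images       // photos only, no videos
        config.selectionLimit = 3     // arbitrary example limit
        let picker = PHPickerViewController(configuration: config)
        picker.delegate = self
        present(picker, animated: true)
    }

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true)
        for result in results where result.itemProvider.canLoadObject(ofClass: UIImage.self) {
            result.itemProvider.loadObject(ofClass: UIImage.self) { image, _ in
                guard let image = image as? UIImage else { return }
                // Only the images the user explicitly picked ever reach the app.
                print("received image of size \(image.size)")
            }
        }
    }
}
```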
> The solution is to implement an OS-level photo picker in iOS with a UI that can't be overridden and which makes clear you're sharing your selected photos with $appName.
This has existed forever - in fact, for far longer than applications have had the option of requesting full access to your camera roll. Unfortunately most applications have decided they prefer to take over the experience, and provide absolutely no fallback option if you reject giving them access.
Apple really just needs to make it mandatory to present a UIImagePickerController instead of whatever "integrated experience" an app provides when permissions to the photo library are denied. That would have been a much saner solution than this abomination - I don't want Teams to have the ability to wander around my photo library just so I can share a quick snap of a whiteboard. But I don't get a choice, because denying permission just makes it throw an error message up saying it doesn't have access.
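For anyone who hasn't used it, the fallback being asked for here is tiny. Something like this is all an app needs to receive a single user-chosen image with zero Photos permission (a rough sketch; the surrounding view controller is hypothetical):

```swift
import UIKit

// Minimal sketch of the long-standing permission-free flow: UIImagePickerController
// needs no Photos permission and hands the app exactly one user-chosen image.
final class WhiteboardSnapViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func pickSnap() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        // The app never sees the rest of the library, just this image.
        print("sharing whiteboard snap \(image.size)")
    }
}
```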
From using my app on iOS 14 it appears the way it works is based on you accessing the photos library. So it goes like this:
Initial launch - user chooses a few photos.
User switches apps, and returns - the same photos are selected.
User force quits app (or doesn't use app for a few days and it gets killed off)
User opens app, and is then prompted whether they want to "Keep Current Selection" or "Select More Photos" the first time the app accesses the photo library in some way (I think this is based on when you do a photo permissions check, but not positive.)
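For other devs poking at the same beta: the calls involved look roughly like this, assuming the new .readWrite access level (a sketch, not the exact sequence iOS uses internally):

```swift
import Photos
import PhotosUI
import UIKit

// Sketch of reacting to the iOS 14 "limited" state. If the user granted access
// to only a subset of photos, the app can re-present the selection UI in-app
// instead of sending the user to Settings.
func refreshPhotoAccess(from viewController: UIViewController) {
    switch PHPhotoLibrary.authorizationStatus(for: .readWrite) {
    case .limited:
        // Lets the user change which photos the app can see, without leaving the app.
        PHPhotoLibrary.shared().presentLimitedLibraryPicker(from: viewController)
    case .notDetermined:
        PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
            print("user chose: \(status.rawValue)")
        }
    default:
        break   // .authorized, .denied, .restricted: nothing to re-prompt
    }
}
```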
#3 has existed since the iPhone added apps - UIImagePickerController - if you use it, you don't need photo permissions and you only get access to the photo the user selected. Most social media apps probably just skip using it because they want photos permission everywhere to do things like "post latest photo" or to show their own photo picker UI.
Select photos seems reasonable and if it's implemented right then it should be no more work than I was about to do anyway (select specific photos to share).
That's a very good start. I hope the feedback during the beta causes those controls to evolve a little bit so that it's more straightforward to change which photos an app can access.
The choice of only allowing access to specific actual photos seems an unusual one.
I would have thought there was a big debate in Product Mgmt over this vs. the more obvious option of allowing an app access to a given album.
One presumes the sticking point came when someone took a photo out of an album. Does that mean they are explicitly removing access? I don't see it as a huge issue... maybe there is some kind of technical hurdle involved as well; otherwise the choice seems unusual.
Nailed it. Using albums is the engineer's answer to what is technically best. In the real world it doesn't work because nobody knows how to use albums, much less actually does. And even if you do, what are the chances you have an album with exactly the photos you want to share? So you've got to select the pics you want anyway, but now you've also got to create an album first to put them in. It just adds to the work and confuses and irritates people.
Yes they do. People who value the curation of their photos will take the time to do it. I create an album for any event etc. which I expect I'll want to photograph. It's easier to share and re-share the same set of photos, and it acts like a log of cool stuff.
Also, I don't have to scroll through months of memes to get to that one good photo I took in July 2017... or was it August..... maybe it was 2016......... shit.
> People who value the curation of their photos will take the time to do it.
Sounds like something only people who aren't stressed from their underpaid jobs can do? Most people are kept busy and don't have time to fit into this dark (corporate app) pattern.
The argument is that many corporate apps upload things without the user's consent or prior knowledge (revealed here by iOS 14 [1]).
Your post was in my eyes saying this issue was up to individual users to tackle. I disagree with that. I think it is instead the governments' role to regulate and reel in predatory and parasitic corporations.
My wife is not in a tech related industry, so I consider her "normal," and yes, she does use albums.
She has albums for work stuff. She has albums for home decorating ideas. She has albums for the various screenshots she collects of things she wants to remember. She has albums for different places she's been.
I know that the people she's friends with use albums because I've heard it mentioned.
I think normal people use albums. Tech people don't. Which explains why a company like Apple, that tries hard to court normal people, not tech people, has them.
I've had an iPhone for years. I didn't even know albums existed. I've just noticed the tab in Photos when I went looking for it after reading your comment. And it turns out, I do have one other album already, with a couple of photos in it, though I have no idea why or whether I created the album or how the photos got there or what purpose it serves.
When it comes to security features, simple and obvious behaviour is good, pretty much always. The same is true of user interface design, and the lack of both documentation and natural discoverability on iOS has always been a pretty glaring weakness of the platform. Complexity creates edge cases, and edge cases create vulnerabilities, including due to misunderstandings and resulting human error.
Judging by the other replies to the parent comment, apparently I'm not alone here, so I'm guessing if Apple did any user research about this, that "big debate" probably lasted a few seconds...
I take a lot of photos and I use the album feature a lot. But when I am going to, for example, post a photo to Instagram, I don't at all want to have to put the photo in a dedicated Instagram album just so I can post it. That is to say, one man's "obvious solution" can be another man's annoyance.
Interesting that you "personally consider Apple full of shit until..." and then demand _they_ be more granular (It is either full access or no access.) Couldn't you consider Apple partially filled with shit?
Ok, if you take it literally: Apple is partially filled with shit, because they try to tackle privacy but seem to miss some very obvious design choices where users would benefit a lot if they were implemented properly.
As I said, I'm glad they tackled the Photos problem. But of course, I could ask why it took them YEARS to do so. They even have a private album in Photos but didn't think that some apps shouldn't get access to these pictures?
There's totally a middle ground between 'full access and no access'. Apps can show UIImagePickerControllers and CNContactPickerViewControllers whenever they want, without any permissions. They then get the photo[s]/contact info the user picks.
Which is exactly what most apps actually need.
WhatsApp has no good reason to look at any image you aren't explicitly choosing to share right now. The only user-facing WhatsApp feature that requires Photo library access is the scrolling list of recent photos on top of the in-app camera.
WhatsApp has a better case for asking to continually scan your contacts to show you people with accounts. But instead of just falling back to asking for a phone number when you don't give permission, it could show the contact picker, and check the accounts you pick.
Unfortunately, in both cases, WhatsApp takes the all-or-nothing approach - it asks for the blanket permission, and has no fall-back if it is denied.
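To make the contacts half of that concrete, the picker flow is only a few lines (a sketch; the class name is made up):

```swift
import Contacts
import ContactsUI
import UIKit

// Sketch of the permission-free contacts middle ground: the picker runs
// out-of-process, and the app receives only the contact the user taps.
final class NewChatViewController: UIViewController, CNContactPickerDelegate {

    func pickRecipient() {
        let picker = CNContactPickerViewController()
        picker.delegate = self
        // Limit the detail card to phone numbers; no addresses or birthdays shown.
        picker.displayedPropertyKeys = [CNContactPhoneNumbersKey]
        present(picker, animated: true)
    }

    func contactPicker(_ picker: CNContactPickerViewController, didSelect contact: CNContact) {
        let numbers = contact.phoneNumbers.map { $0.value.stringValue }
        // One contact's numbers, not the whole address book.
        print("start chat with \(contact.givenName): \(numbers)")
    }
}
```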
> There's totally a middle ground between 'full access and no access'. Apps can show UIImagePickerControllers and CNContactPickerViewControllers whenever they want, without any permissions. They then get the photo[s]/contact info the user picks.
If they don't use this control you can also inject whatever photos you want into most apps using the share sheet. It does mean you have to exit the app and go to photos, but as you point out, it's the app maker's fault for not supporting the extremely privacy friendly `UIImagePickerController`.
Roughly speaking, current OSs "stop" at tools for interacting with data, and the hardware behind it.
In this world where we expect internet access, I'm beginning to think OSs need to manage certain types of data more proactively. I'm trying to wrap a general point around your concerns about contacts. Contacts seem one of the data types that need something approaching OS level tooling. For me, another is "tags". I want to use the same set of tags I apply to "files" to apply to "contacts" too.
I keep hoping someone will make a rival OS that tackles this head-on. Start at Haiku, sprinkle some of Apple's "the UI isn't a virtualised office any more" UI paradigm, model a small handful of human-centric data types (like places, people, maybe individual health, too) and the access and interaction rules that support them safely and really run with it.
Windows Phone had a People hub that collated all your contacts and had reasonable sharing mechanisms for the data. HERE was essentially that places concept. I'm sure if Windows Phone had kept traction, it would be integrating your smart device health data into live tiles and a hub interface for all the metrics.
My son and I were both long time Windows Phone users. It never gained the public acceptance required to survive, but I don't know anyone that used it for any length of time, that does not miss it. The UI was very intuitive and it just worked for me. My brother is still running it on his Nokia phone, that seems to be lasting forever. I am not sure which model it is, but the camera on it is fabulous. I wish I had picked one of those up.
Windows Phone's hub concept was marvelous. As a user I don't care if I'm messaging you through MSN Messenger*, Skype or XMPP; I just want to IM. The gaming hub integrating with Xbox Live was a nice touch; it felt like MS finally got the concept of an ecosystem.
Except when you realize all of those implementations needed to be coded by Microsoft. There was no way for a third party to plug in. I heard from some MS people that the clients for IM services were driven server-side, which would have made it hard and inelegant to add additional protocols.
Nokia's Maemo had this with better execution. The SMS app had a plug-in for XMPP and I used it for Google Talk. I think I used a third-party one for Google Voice. There was a Skype one that supported calling through the normal phone app, but it didn't work very well. The clients ran on the phone and not in the cloud.
IIRC you had to do server-side push, as Windows Phone 7 didn't support local notifications or always-on Internet connections. Some IM clients used tricks to run in the background, such as masquerading as a streaming audio player (which had always-on capabilities), but you lost the music player capabilities of your smartphone when running those apps.
WP8 relaxed some of those restrictions, but it wasn't enough to truly develop an IM client.
It's true that only Microsoft could create such integrations, but that was a business decision. In the Windows Phone 7 era, regular developers couldn't deploy native code and you couldn't call native APIs directly from the managed .NET/Silverlight runtime. A native SDK wasn't available at all, but it was regular Windows CE at its core.
Maemo's was way superior to Windows Phone's. It's a shame that Microsoft trojan-horsed Nokia.
Add Background App Refresh to this, please - considering that apps regularly exfiltrate 4G/WiFi connectivity info (helpful for triangulating your current location) to tracker/analytics scum APIs with this feature, it should be exposed as a Privacy setting, not buried in Settings. I don't understand why Apple is eerily silent on this.
Background App Refresh also has a significant battery life impact. I keep mine disabled and get a significant battery life increase with little downside.
The feature should be a per-app opt-in instead of being enabled by default and buried in settings.
1) A way for apps to display a view that shows the contact name for a phone number, with specified styling/sizing/etc., but without being able to determine what that contact name is.
2) An App Store rule that forbids apps from requiring contact access unless they can't function without it. WhatsApp forces you to provide contact access, giving Facebook your place in the social graph even if you don't use Facebook, even though WhatsApp should be usable (using phone numbers) without it.
WhatsApp works so well because it is tied into the same contacts that you already have on your phone. Without access to your phone’s contacts you would need to set up and manage an entirely separate set of contacts. Right now, Grandma could download WhatsApp and instantly start chatting with her granddaughter without having to remember what her phone number is because it’s already there. That’s a major selling point of WhatsApp.
I revoked Contacts access in WhatsApp a year ago. It works just fine. Problem is: WhatsApp only shows the phone numbers in the list and NOT the usernames of people. This is rather annoying, because I don't know any phone numbers by heart. Profile pics help a bit, but people change them and often don't have pictures of themselves.
Signal shows the nicknames next to the numbers which is a really nice feature and makes it pretty much perfectly usable without contacts permission, except that it constantly nags to grant the permission.
I understand that. I'm just saying that there should be a middle ground between "give Facebook full access to my entire contact list" and "cannot use the app at all". For example, WhatsApp should be able to trigger a contacts picker, without needing to have access to the full list of contacts. And it could even be able to show a styled view for "the contact name for this phone number" without needing to know what the name is.
WhatsApp does work if you revoke Contacts permission after setting it up, but IIRC you can't onboard when you first install the app if you don't grant it. Forcing the granting of the permission should be against App Store rules.
I use WhatsApp after revoking its contacts permission and it's pretty much fine. As an aside, same with Signal, and I really don't understand why a supposedly privacy-focused app like Signal nags hard to get contacts permission when it works perfectly fine without; it even shows people's chosen nicknames next to their numbers.
You can easily solve this with UX. Select new message, show the picker, select one or more contacts and then you have the identifier you need and you don’t need blanket permissions.
In fact, I'd much prefer to share with apps like WhatsApp (although I don't use it anymore) and Telegram only the contacts I manually select. There's no reason for them to know all the barbershops I ever called too, and I don't even really need to know that an acquaintance of mine started using Telegram; I only contact people who I already know use Telegram anyway.
“Instead of sharing your entire Contacts list in third-party apps, you can now type individual names to automatically fill their corresponding phone numbers, addresses, or email addresses in fields that request it. The autofill happens on your device, and contacts are not shared with third-party developers without your consent.“
This is pretty clear to me, you type a name, it’s looked up in your contacts by the OS, data is retrieved if there is a match and placed in the form. This is not the same as sharing an individual contact and allowing the app to continue to read it later, but still gives you a means to give contact data to an app without giving it access to the entire list.
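If that description is accurate, the app-side hook is presumably just the existing content-type hints on text fields, e.g. (a sketch; the field setup is hypothetical, and I'm assuming the autofill keys off these hints):

```swift
import UIKit

// Sketch of a form field that opts into system AutoFill. With a content type
// set, iOS can suggest a matching contact's number above the keyboard; the
// app only receives whatever the user accepts.
let phoneField = UITextField()
phoneField.placeholder = "Recipient's phone number"
phoneField.keyboardType = .phonePad
phoneField.textContentType = .telephoneNumber
```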
Apple's walled garden approach is not necessary for any of this, though (nor does it even make it easier). You can introduce sandboxing, fine-grained permissions, etc. without locking devs and consumers into a controlled app store - these are OS features, not app store features.
Fine-grained permissions aren’t useful if an application is going to request access to everything anyway - and non-technical or non-privacy-conscious users will click-through any and all permission prompts so [they can see the dancing bunnies](https://blog.codinghorror.com/the-dancing-bunnies-problem/).
In the case of very popular, aggressively marketed apps like TikTok and Facebook's, the lack of easy side-loading or alternative app stores (with looser auditing) means they're forced to comply with Apple's rules against unnecessary permission prompts, which means they simply can't take advantage of users' ignorance (or their overriding desire to see the dancing bunnies) to get them to grant unnecessary permissions.
I don't think this is the case. The way iOS tells users that an app is tracking location in the background has led to a large increase in users opting out in all the apps I've worked on. There are ways to be very effective at this, as Apple has shown since that article was written over 15 years ago.
Second, this effectiveness doesn’t require the walled garden and forcing apps to pay 30% of revenue to Apple.
Sounds like it's working, then? If an app asks me for location, I personally go "wtf, no, why do you need to know?" - and most apps honestly don't need it. I stick with Apple for exactly this reason.
The OS can stop an app from using certain APIs. This has nothing to do with the walled garden. iOS tells you when an app is using your location in the background; it can do this for side-loaded apps as well. It then allows you to disable it for that app, which it can also do for side-loaded apps.
Currently, App Store review doesn't just cover safety and UX; it also covers whether Apple simply likes your idea or whether you are competing with a feature they've integrated into the OS.
If the App Store remains the only method for installing apps, and Apple continues to reject apps that they simply don't like, then it's not a healthy platform for consumers in the end.
This isn't a value judgement, but if enough users sideload or use the open market, governments will probably have to step in to mandate permissions checking, because Apple won't have the influence to regulate developer behavior on their platform(s).
You can sort of see this on other platforms - the Mac App Store has very few quality apps listed, and Apple is further moving towards locking down root permissions because users can download apps or install software from anywhere on the web. It's typical for users to install anti-malware software on new Android devices, etc.
At least in Android (not familiar with iOS) you can deny apps access to any and all permissions, the features just won't work. I.e. if you deny Snapchat access to the camera you can still browse the app, read messages etc - you just won't be able to take any photos.
That’s not my point: I’m arguing that apps like TikTok and Facebook are big enough that they could convince non-technical users (who are either ignorant-of, or just don’t care about, app permissions and privacy) to switch to an unofficial app-store where they could list their app without it being denied approval by Apple or Google for unreasonable app permission prompts.
...but the fact that unofficial app-stores for unjailbroken iOS devices do not exist makes this impossible for now.
It’s very easy to imagine a TV ad or movie trailer ad for a TikTok or Facebook app with the cheerfully-voiced narrator saying “Just visit the TikTok Android App Store” or “Just open the Facebook iOS App Store” - then when the app is installed and first-opened the app would use a single “grant everything” permission prompt - or if the OS doesn’t allow that it could bombard the user with many prompts all-at-once and if the user denies any of them then a curtly-worded new messagebox would say “you must grant these permissions to use our app” otherwise the app quits. There’s not much Apple or Google could do to stop this that those app developers couldn’t work-around. Apple’s iOS App Store rejections for privacy reasons is a human solution to a non-technical problem, as it’s well-established that technical solutions to non-technical problems are ineffectual.
It can be argued this is possible on Android - which does allow for other app-stores - and I did wonder why this isn’t already happening with Android users - then I realised that probably most Android users have those horrible carrier and OEM locked-down devices that make it harder (if not impossible) to change system settings or add other app-stores.
The scenario you're speaking of hasn't happened on Android.
A famous example of a popular app that eventually caved in to Google's demands is Fortnite [1], and children are tech-savvy (or at least motivated) enough to install from outside the app store. If Fortnite couldn't do it, then no, it's not easy to imagine TikTok doing it, especially given that TikTok's user base is mostly mobile - no PC, no PS4, no Xbox.
There are indeed alternative app stores from Samsung, Amazon, and maybe others; however, Google Play absolutely dominates the Android ecosystem.
I'm an iOS user myself, however this whole reasoning is bullshit. The only reason Apple keeps such a tight control is because they want to keep that 30% commission on all sales, which is highway robbery. And I also suspect them of wanting to have enough reason and leverage to get rid of any app that threatens their own products.
I see grownups and children alike using the web successfully all the time. The web can be secure without a gatekeeper because browsers do a reasonable job at sandboxing. In fact it is the competitive nature of the market that makes it secure, consider that's how extensions and ad blockers happened (in the meantime I still don't have a browser on iOS capable of using uBlock Origin).
And yes the web has dark corners, yet we live with it just fine. Look, we're having this conversation on a web page that's not gated by Apple and we're still alive.
This is true in most parts of the world, but in China where almost all phones are Android and Google apps are not preinstalled on any of them, the alternative app store hijacking definitely happens.
In particular, Tencent is notorious for not being the default app store on any phone, but somehow, "mysteriously", if you follow links from WeChat or QQ or even certain websites, they will try to make your phone download the Tencent app store to install the app instead of just using your phone's default app store. Even though your phone warns you not to do it, people still install it. And, sure enough, the Tencent app store is now the biggest app store in China, with 25% of the market.
TikTok is owned by ByteDance, which doesn't even have an app store in China, so I can't see them making a play.
Fortnite, on the other hand, is owned by Epic, who definitely used the popularity and income from Fortnite to leverage their way into the PC gaming marketplace, disrupting the major player (Valve). They might not have won this battle for the phone marketplace, but by the sounds of it they still haven't given up the war.
So, I do think it's fair for the grandparent poster to consider a future where users bypass whatever protections came from their phone manufacturer and end up shooting themselves in the foot. But I also think you're right that it doesn't matter. That's the "price of freedom".
We already see it a little bit now, where some people choose Android over iOS (or vice versa) for ideological reasons. Loosening manufacturer restrictions even further seems reasonable to me. Some people would choose ultra-safety through open source; others would choose closed source from a company they consider trustworthy. Most would not care and would just use whatever environment they are most familiar with, installing whatever plugins and cleaners they need to feel more secure. That's basically the PC market right now, and I think it's largely fine.
As extortionate as Apple’s fees are, I’m actually glad that they have a business model that isn’t dependent on invasion of privacy. Without them continually calling attention to it, Google would have little incentive to improve privacy.
Google paid Apple $12 billion in 2019 to remain the default search engine in Safari.
I hear this line about their business model all the time; however, it is bullshit. Given the opportunity, all companies will take the money. And I fear that it is nothing more than a conspiracy theory without much evidence, much like anti-vaxxing.
Google these days is a very big target. The EU would love to have reason to slap them with another fine, given all the legal tax evasion they've been doing. Yet they've always been transparent about what they collect and have always been responsible with user data (versus Facebook).
Don't get me wrong, I enjoy the privacy features of my iPhone, it always fared better than Android in that regard, but it has nothing to do with Apple's tight grip of its App Store.
> Google paid Apple $12 billion in 2019 to remain the default search engine in Safari.
It's a large amount, even for Apple, but they would survive losing that. Besides that, they are even taunting Google by putting DuckDuckGo in their marketing copy:
I think they are slowly preparing to loosen that tie.
> I hear this line about their business model all the time; however, it is bullshit. Given the opportunity, all companies will take the money.
I agree. Apple's incentives are just temporarily aligned with customers' privacy. Their margins on hardware, services, etc. are so large that they can afford to make privacy a differentiator. If they were not in that comfortable position anymore, they would monetize the vast user data trove.
But while this is the status quo, I am happy to use an iPhone for privacy.
I really think Apple will buy DuckDuckGo at some point. The question is, to what extent will Apple make DuckDuckGo (or whatever they'll rename it to) available for non-Apple platforms?
This is definitely happening already. All the Mac App Store devs that left that store for their various reasons, some of them pushing updates only to their own sites, forced me to move away from the App Store, and therefore away from Apple's guidelines making them behave properly. So this is happening and it is being abused - see Zoom using preinstall scripts. This wouldn't have happened if the Mac App Store were the only way to install an app on a Mac.
>they could convince non-technical users to switch to an unofficial app-store
They could, but they're absolutely not going to. Every barrier you put between a user and installing your app is a percentage of those installs that you're losing. Doubly so for "non-technical" users, who can barely work the app store in the first place. No company of that size is going to sacrifice that many installs just to grab a few extra permissions.
>then I realised that probably most Android users have those horrible carrier and OEM locked-down devices that make it harder (if not impossible) to change system settings or add other app-stores.
Stock Android makes you jump through hoops to install third-party apps, and for good reason. No, it's not because of "OEM locked-down devices"; the reason you don't see it on Android is that it doesn't make business sense.
This has been my feeling for years, and why I think it'd probably be in Apple's interest to let people sideload apps. They could still require them to be signed, but otherwise be hands-off. The vast majority of users wouldn't go through whatever hoops were necessary to set that up -- even if it's just the single hoop of flipping an "allow non-App Store apps" switch in Settings -- but making it possible to do that gets them out of a lot of the regulatory imbroglio they've been heading toward. (I also can't help but feel it's necessary in the long run if they're serious about the iPad in particular being a general purpose computing device rather than an application console.)
When it comes to a mainstream app like Facebook or TikTok that already has network effects and a critical mass of users, people will put up with significant efforts to alleviate their fear of missing out, including sideloading the app.
You can deny any runtime permission beginning with apps built for Marshmallow. You cannot deny other permissions. Some apps will absolutely block you from using them unless the permissions are granted (this is by design of the app, not an OS limitation).
Internet is a permission that is required if your app expects to go online. You cannot turn this permission off in the OS. If you modify Android to allow changing this permission (usually via Xposed) or rebuild the app to remove it from the manifest, many apps will actually crash when they try to go online; this is part of the reason why people use a firewall even on devices with Xposed installed. My vague understanding is that this is how Android works when an app tries to do something it can't - it closes the app. IIRC there is an Xposed module that filters by URL, but I'm guessing it fakes the network response (more complex than simply disabling the permission), and it doesn't work with the NDK.
With Marshmallow, runtime permissions were introduced for a number of existing permissions, where the system would prompt you the first time the app tried to access privileged data. If an app is older than Marshmallow (i.e., written for Lollipop or KitKat), disabling any of its enabled permissions is liable to crash the app as soon as it tries to use them.
By and large this is true. However, the Android Citibank mobile app refuses to do anything useful if you don't give it access to your entire file system upfront.
I don't think Apple would allow that kind of permissions abuse, but apparently Google does.
> However, the android Citibank mobile app refuses to do anything useful if you don't give it access to your entire file system upfront.
Considering Citi’s corporate culture, I’d attribute this to incompetence rather than malice or a desire to spy on users.
I’ll bet they’re using a third-party anti-spyware library to examine the Android FS for keyloggers/etc to protect their users’ security. It’s well-intentioned, but still idiotic.
This is the same Citibank that’s been engaged in an idiotic arms-race with Google about blocking password-safes on their online banking login page for the past 5+ years - while also allowing me to do phone-banking without any real authentication - and STILL haven’t given me an EMV Chip+PIN credit-card, while the EMV Chip+Sign card I do have from them DOES have NFC without a purchase limit... anyone could steal my wallet and “tap” a couple grand off it. Arggghhhhhh.
The “banks who think they’re smarter about security than platform vendors” trope is getting real old.
To the consumer it doesn't really matter if it's malice or ineptitude or laziness. Fact is Apple will remove your app if you try something like that, but it is not uncommon to encounter this on Android.
Oh, of course - I understand most (if not all?) major banks have serious ethics issues from the top-down - but money-laundering is a business-objective and is distinct and separate from online banking security.
The $70m fine (a joke to a multi-billion-dollar company) is insignificant to the potential damages from a class-action lawsuit from a wide-ranging vulnerability in their online banking platform - hence their focus and over-engineering on their online banking security - while the risks from credit-card abuse and individual identity-theft are much more limited in scope - and are a known-quantity.
That model was pioneered by Apple in iOS long before Android started taking it up with Android 6 (runtime permissions instead of collective install time permissions). Android took a few years to catch up and increase the range of runtime permissions, and apps on Android at that time would actually crash if some permission wasn't given.
Even today, there are apps on Android that ask for needless permissions and refuse to continue unless the permissions are granted. The same app on iOS would provide more functionality (whatever is possible without the permissions). There seems to be a very different mindset among Android developers compared to iOS developers.
> Fine-grained permissions aren’t useful if an application is going to request access to everything anyway
If you build it right, it is totally doable. Implement it like Health, so that the app just gets empty data and doesn't really know whether it has access or not.
If the app doesn’t function properly with an empty data set, reject such an app through App Store guidelines.
I wonder if location prompts would be more effective if, instead of asking "Allow 'Example App' to access your location while you are using the app?", they explicitly state "'Example App' would like to know precisely where you are. Would you like to share your exact location?". I feel like those adjectives, "precisely" and "exact", would go a long way toward encouraging people to put more thought into the decision. Similar wording could be used for other permissions, maybe in conjunction with a one-or-two-second timer on the buttons.
This is a neat idea. Also, many apps don't need your EXACT location, just something like a zip code for convenience. AFAIK there is no mechanism for an app to request permission to get a "rough" idea of where you are as opposed to a precise location.
> I saw a post the other day (I'm not sure where, otherwise I'd cite it) that proclaimed that a properly designed system didn't need any anti-virus or anti-spyware software.
> Forgive me, but this comment is about as intelligent as "I can see a worldwide market for 10 computers" or "no properly written program should require more than 128K of RAM" or "no properly designed computer should require a fan".
> The reason for this is buried in the subject of this post; it's what I (and others) like to call the "dancing bunnies" problem.
> What's the dancing bunnies problem?
> It's a description of what happens when a user receives an email message that says "click here to see the dancing bunnies".
> The user wants to see the dancing bunnies, so they click there. It doesn't matter how much you try to dissuade them; if they want to see the dancing bunnies, then by gum, they're going to see the dancing bunnies. It doesn't matter how many technical hurdles you put in their way: if those hurdles stop the user from seeing the dancing bunny, they're going to work around them and see the dancing bunny.
> There are lots of techniques for mitigating the dancing bunny problem. There's strict privilege separation - users don't have access to any locations that can harm them. You can prevent users from downloading programs. You can make the user invoke magic commands to make code executable (chmod +e dancingbunnies). You can force the user to input a password when they want to access resources. You can block programs at the firewall. You can turn off scripting. You can do lots, and lots of things.
> However, at the end of the day, the user still wants to see the dancing bunny, and they'll do whatever's necessary to bypass your carefully constructed barriers in order to see the bunny.
> We know that users will do whatever's necessary. How do we know that? Well, because at least one virus (one of the Beagle derivatives) propagated via a password-encrypted .zip file. In order to see the contents, the user had to open the zip file and type in the password that was contained in the email. Users were more than happy to do that, even after years of education and dozens of technological hurdles.
> All because they wanted to see the dancing bunny.
> The reason for a platform needing anti-virus and anti-spyware software is that it forms a final line of defense against the dancing bunny problem - at their heart, anti-virus software is software that scans every executable before it's loaded and prevents it from running if it looks like it contains a virus.
> As long as the user can run code or scripts, viruses will exist, and anti-virus software will need to exist to protect users from them.
—————
This was written in 2005, before the iPhone and iPad. One could argue that the whole App Store/Gatekeeper/Notarization system itself is one big, giant, patronizing piece of anti-malware software by Apple - or focus on the last sentence: on iOS, the user can't run non-sandboxed scripts and code.
But it is also a case where Apple again did "think different".
> I saw a post the other day that proclaimed that a properly designed system didn't need any anti-virus or anti-spyware software.
> Forgive me, but this comment is about as intelligent as "I can see a worldwide market for 10 computers" or "no properly written program should require more than 128K of RAM" or "no properly designed computer should require a fan".
My Android phone warns me if an app is trying to use features that require permission while in the background, and asks me if I want to revoke the permission, enable it only while the app is active, or let the app use it always.
Pretty easy to use, and anyone can guess that the bus or car-sharing app doesn't need to use GPS all the time.
When the controller is a "smart" app store, you know what they delete, but you don't know what they keep or why they keep it.
They choose for you and never ask if you're okay with it or not, so basically it's not your phone, it's their phone.
"-This clock app needs to access your photos, contacts, all the hardware the phone has and all your cloud accounts
-No
-The app cannot function without the required permissions."
That's a bit of a stretch. On the Play Store, permission bombing is prevalent, and no one is installing any app with less than 4 stars, so I doubt the "malware" makes enough money to sustain this.
Yes it is, because devs aren't going to respect it.
Even Android is going in this direction, locking down APIs, access to Linux syscalls (not even considered part of the official NDK APIs), background execution modes, and file access.
In this respect, I see your point - a properly designed and secure OS, with a user and installer in "non admin" mode, should be able to do these things without locking the source of an app down to one location.
- Fine-grained permission control that offers more than iOS: control whether apps can access the network, which directories an app can access, if it can print, and even whether or not it can access PulseAudio.
- Cross-platform: runtimes are OCI container images and can be targeted on any distro that supports Flatpak (which is almost all of them).
It's gained adoption from a number of recognizable FOSS and proprietary names: Zoom, Spotify, Steam, Firefox, VLC, Discord, Libreoffice, Skype, Inkscape, both Minecraft and Minetest, Microsoft Teams, Krita, IntelliJ IDEs (both Community and Professional), and Blender are available as Flatpaks through Flathub.
GNOME and KDE release almost all their apps as Flatpaks through the `gnome` and `kdeapps` Flatpak repos, and copy them over to Flathub when they're confident that Flatpak-ing didn't introduce any bugs.
Flatpak also clutters your hard disk with gigabytes of copied libraries and other data. I had to uninstall it to prevent a system crash, because my root partition rapidly ran out of space - the source of the problem: two Flatpak apps.
Isn't this a problem with iOS apps too? They can't share libs or .so files between them, which is why each iOS app is colossal for no good reason, e.g. Google Sheets is 180MB alone, Google Docs also 180MB, YouTube 280MB... insane sizes for these.
Similar in character, but it's probably a factor of 10 worse with Flatpak. With your examples on iOS, the bloat is stuff that's common to the google apps but not part of the platform. With flatpak, it includes stuff that is part of the platform but can't be relied on to be the right version.
It would be nice if Apple would let packages signed by the same key share versioned libraries between them, but I suspect relatively few developers would be able to take advantage of that. Maybe only google and microsoft, to a rough order of approximation.
Does its sandboxing support fake (or altered) access? That might be the additional permission control needed. For example, to grant fake access to the audio, the program will work but there will be no audio output (and all audio input will be silent); or you can specify to save audio to a file instead of making it immediately audible, or change the volume control for that program only.
Android has. (Also I assume a pile of other, no longer available/not very successful mobile OSes, but the ecosystem is just Apple and Android at the moment).
Android has far too many apps that refuse to work unless you give certain permissions. One of my banking apps needs the camera permission to work at all. The permission prompt states it's to digitally cash checks, but it asks at startup and if you deny permission the app quits immediately. Most delivery apps won't work without giving GPS permission. The iOS app store does not allow this kinda bs.
Android similarly has been continually improving the privacy/permissions model of the OS when it comes to third party apps. I am not sure that Apple has any obvious advantage in that department specifically.
Even if one were to ignore Google's data collection, any non-vanilla Android installation will have been butchered by the vendor (Samsung, Motorola, etc.) to the point that any expectation of security (and in turn privacy) is lost to the least secure pre-installed app.
I had ES File Explorer installed on a Nexus 7 tablet I barely used. One day I started it to find that charging had switched to "smart charging", where this software shows a banner ad on the home/charging screen. There is no end to the madness of what each app can do, or is even allowed to ask for.
I often use Motorola devices as I find they are one of the OEMs which applies the fewest customizations to the OS. However Samsung is definitely a problem when it comes to that.
I am not sure what happened in your case with ES or how that would be possible. It sounds like maybe the app just pushed you an advertisement as a notification. Notifications can be disabled on a per-app basis but I think it is pretty reasonable that they are enabled by default.
Apparently it is not a new change [1]. I just happened to notice it now since I rarely used the tablet before. Now it functions as a handy Zoom whiteboard [2] drawing tablet. Good thing I didn't throw it out.
> I often use Motorola devices as I find they are one of the OEMs which applies the fewest customizations to the OS.
They got worse, including with updates, since they were acquired by Lenovo. Last time I surveyed the Android landscape (~2 years ago), Nokia was the place to go for a pristine Android experience with quick updates.
I was running Lineage on the tablet and I am usually very careful about what I allow. I have no idea how it got set to take over the main screen at charging time.
Btw, all these distributions (Lineage, and CyanogenMod before it) don't benefit from automated updates. So that is another headache: remembering to manually reflash/upgrade.
No, and I consider that a feature :) I have a Zuk Z2, which incidentally is still the only phone in the world that matches my short list of requirements: reasonably modern, not huge, has Lineage support.
I didn't check properly before I bought a Moto G5... the only Lineage image there is crashes its sound server whenever you connect a Bluetooth headset. I'm trying to get the dev to share their build steps so I can look into what is happening there, but they're not responding :(
I've only ever been an Android user, but for a while I was paying a lot of attention to iOS too. It seems to me like they've been back and forthing, as one side figures out some improvements, the other side more-or-less re-implements them on their next release with their own unrelated improvements.
From my perspective it seems like Apple keeps releasing new privacy features and Android keeps being forced to catch up. What are the major privacy enhancements that Google has put out first?
Sorry, I don't have a history of this at hand. I'm going off my sense that a fair portion of the time when I hear "this new privacy feature is coming to iOS" I think "oh, I've had that for a little while", and the rest of the time something equivalent shows up later on Android.
In the beginning, Android showed you what an app could do before you installed it, and it was an all-or-nothing approach - if you didn't want the app to do those things, your only choice was to not install it.
In the beginning, iOS didn't have this, and instead it prompted you for permission the first time an app wanted permission to do something. Additionally, app review had rules that apps had to operate correctly if you refused permission and that apps couldn’t ask for permissions irrelevant to what you are doing. So you can install an app, then pick and choose what you grant permission for.
Later, Android added the prompts to work the same as iOS. iOS hasn't changed to include the Android approach.
In the beginning iOS basically didn't have permissions at all. There were some random ones like GPS (and push notifications?!), but there was for example no permission to access contacts. Or photos. That didn't come until iOS 6.
So in the very very very beginning it was:
iOS: prompts for permissions, but almost nothing (including accessing user data) requires permissions anyway
Android: Granular permissions for everything, but only asked at install time.
Since then iOS has become "more Android-y" in adding increasingly more granular permissions, and Android has become "more iOS-y" in those permission grants being on-demand and time-gated.
Android had install time permissions for a long while. The user either had to allow all permissions asked by the app during install or not be allowed to install the app. On the other side, iOS had runtime permissions (nothing during installation) that were prompted by the system whenever the app needed a permission. When runtime permissions were added to Android 6, apps used to crash when not granted the permissions (so much so that some Android versions also started faking location data to apps when the user denied location access).
This took a few years to improve, but even today, there are Android apps that will refuse to work if you don't grant some (unnecessary, in the view of the user) permission. That kind of behavior is very, very rare among iOS apps.
As a checkbox prompt at install, yes. iOS has always used, and will continue to use, a slightly different model of throwing a modal question up to the user at the moment the action that requires the permission first occurs. This gives the user more context for the question, but also more alert fatigue. Which is better is left as an exercise to the reader.
Android has supported piecemeal permissions for ages (only let app X access this one photo, or this one contact), which Apple seems to be starting to copy in iOS 14 (though, as usual, seemingly without a single thought to the long-term UX).
You can send an Intent to request that the user pick a contact, without having the contacts permission. The app gets a copy of the contact that the user picked.
Not being an Android user, one of the negatives that was often talked about (maybe no longer true?) is that a lot of phones could not upgrade to new versions of Android. Is that still a thing, or was that limited to the lower tier phones?
It's crazy that this ever became a thing. When Android first came out I assumed it would essentially be like Windows for the smartphone.
But imagine if Windows worked the way Android does. You buy a Dell Windows laptop and then you receive all your OS updates directly from Dell, they limit you to only 2 years of updates (if that), and put some bloated skin over the whole OS.
Updates are still pushed by the manufacturer. I have the Xiaomi Mi A2 Lite (with Android One) and only got Android 10 last week.
The biggest advantage is that it's pure Android with no bloatware from the manufacturer. Also, IIRC, you are guaranteed at least two version upgrades for the phone (my Xiaomi came with Android 8, so 10 should be the last one), and most of all, security updates.
Phones only getting 2 major software updates (which are released yearly, so 2 years) is a bit of a deal breaker for me. I understand older versions of Android are pretty well supported at least by apps in Google Play.
Also what bothers me about Android devices: I got a Galaxy S8 to do development for work, and not all manufacturers are created equal in updates of course; IIRC I waited almost a full year after the Google flagships to receive Android 9 — in fact I think I got Android 8 around the time Android 9 came out.
That's only because there are no other vendors to muddy the meaning of "pure". Only one vendor of devices for the OS means only one conception about what "pure iOS" means. It doesn't necessarily mean that Apple's conception of iOS is the best possible conception of iOS.
For some of us, having a locked down, reliable, secure, and pure phone is a great solution.
I don’t want iOS on my random experimental project laptop, for that I have Linux or windows, or vms. But, cellphones are not something I need to hack around on.
Because "unpure" Android barely receives updates and the few occasions where it does, it's extremely delayed. So that's pretty "worse"
You can also choose to jailbreak an Apple device if you really want to be unpure, and luckily doing so won't prevent you from updating your phone in the future.
Fun. My Xiaomi came with ads baked into the app installer and a very questionable use of tracking in the web browser. Yes, Android One is great but not every device is in that program.
It's still true. You'd be lucky to get updates beyond two years (so choosing the brand with this in mind is more important within the Android ecosystem). There are also devices that don't get updates after a few months of launch. Updates are also delayed by several months depending on the brand.
Except for all the Samsung devices that don't, of course. Mine periodically says "security update available", I click "install", it says "install failed" and that's the end of that. There appears to be no way to know what the update was or to retry the install.
Apples vs oranges. With iOS the norm seems to be that apps support 1 version back, if even that. So as soon as your device is no longer updated (or if you dislike the update!), you're stuck.
Android puts much more emphasis on backwards compatibility, with Google moving more and more functionality into app libraries rather than system frameworks. Our app still supports Android 4.4 (released in 2013!).
You can walk into any Best Buy and buy new devices on the shelf that are running versions of Android so old they can't run apps like Netflix -- devices that are out of date now, and will never get an update. This is part of the reason Android developers need strong backwards compatibility.
No, the norm on iOS is the current version and two versions back.
It is also worth mentioning that iOS devices are usually supported for much longer than Android devices.
iOS 15 will be supported by the iPhone 6S, which was released in 2015. If an app supports iOS 12 (which most do, as it is only one version below the current iOS 13), that means it supports the iPhone 5S, which was released in 2013.
And Apple users are likely to update: 92% run the latest version, and 7% are on iOS 12.
It is true that many vendors abandon their devices more quickly than I would like, but it depends on the vendor and the "flagshipness" of the device. Also, since version 8.0, Google has made a number of improvements to the modularity of the OS which make it easier for vendors to release updates ("Project Treble"), so the problem has been reduced somewhat.
While fact checking my other comment here, it seems like Samsung has been slow to roll out major version upgrades _since_ Android 8. Have other manufacturers improved in this regard since Project Treble?
The iPhone 3GS was released in June 2009, and the first iOS release that didn't support it was iOS 7 in September 2013.
4 years is much better than current Android phones, though it's true it doesn't quite live up to the current Apple lineup where iOS 14 is going to support the 6S which will be 5 years old by the time it launches.
Google doesn't sell data, it sells targeted access to users based on that data. Let's be precise if we're going to discuss the practical privacy implications of both platforms.
Meanwhile if you use iCloud backups all your data is one subpoena away from law enforcement.
Correct. But why does Google deserve to know everything about you? What does it matter that they merely resell targeting if they can still browse everything about you? It's all still bad. I agree the warrantless wiretaps are a problem for every American company, Google included.
My point stands: Apple's revenue does not come from privacy-violating advertising, and they have no motivation for data collection beyond product improvement.
Remember the fun we had making fun of Apple Maps? Why in the world would Apple have dropped Google as the back end for their original Maps program, right? Well, it was because back in 2011 or so, Google refused to give Apple access to true turn-by-turn navigation features unless Apple gave them more access to user data. Rather than do that, Apple decided to go it themselves, even though that made the Maps product worse for years. This is consistent with Apple's behavior in other fields. (Hey, Siri!)
There are a lot of criticisms to be made of Apple, but "they're selling your data by proxy" just doesn't seem to be one of them.
Use any application that lets you view your network traffic and you'll see that Apple doesn't phone home anything it doesn't need to. Google phones home for everything.
That's not exactly true. Assuming you didn't mean TVs without any internet connection whatsoever, you'll find that pretty much since that capability was added, the very manufacturers have been attempting to do just that.
Doesn't help much when the OS itself contains a million first party privacy violations, unfortunately.
Both systems have serious downsides stemming from the strategy each has taken: Apple's walled garden, and Google's reliance on tracking, because it is an ad company and that is how it makes money.
Overall, Apple's certainly the lesser of two evils, but I think I'll be considering a PinePhone next, once the software ecosystem has matured a bit. I'm going to start voting with my wallet instead of settling.
Yes and no: they still allowed developers to do it for a very long time. I remember years ago when a social app was chastised for uploading your entire contact book to their servers without asking, because at the time there was no permission barrier for it.
In this instance, apps have been doing this for years. Apple knows this. It’s not entirely clear why they only decided to act on it now.
“Either iOS is secure or it’s not”
You must not be up on security, and that’s cool, we all have stuff to learn.
So let’s define our terms a little bit. What is our walled garden and what does it bring to the table in terms of security?
Off the top of my head I’m thinking we get more eyes during the review process, maybe some static/malware analysis, etc..
Another thing is the “soft controls”: not allowing certain classes of apps that Apple doesn’t feel “belong” in their garden. Well, that’s a net security benefit too. Fewer apps means less exposure, and with more careful, selective choosing of the apps allowed, you’re going to see less shady stuff and abuse.
Compare the two app stores, it’s not even close.
So the security benefits of a tight review/control process are pretty clear, and they play out in the malware outbreak track records of Google's and Apple's app stores.
The question of trading freedom for security is another topic, one that cuts deep into the fabric of western society. Too deep for this convo!
It's not wrong. Browsers don't have a walled garden. They just try to be secure, period. Apple's fiction is that their walled garden saves you from bad apps. It doesn't. There have been and will be plenty of bad apps. A secure platform saves you from bad apps, period.
Again, you speak in this world of binary absolutes. Passionate, but clearly not an experienced security practitioner.
“A secure platform saves you from bad apps, period”
How did you come to this statement? Because my initial reaction is not a flattering one for you, but hey, I’m learning too and I find this topic super interesting.
Could you provide an example of a secure platform securing against threats in such an absolute way? Maybe QubesOS?
I’d like to hear your reasoning a bit more.
Also, I want to touch on your statement of browsers not having walled gardens, and being secure in a general sense.
Are you under the impression that modern browsers are equivalent to all other kinds of apps in regards to their threat profile?
Also, are you aware that most modern browsers phone home to check URLs against a malicious-site list (e.g. Google Safe Browsing)? I look at this as “walled-garden lite”.
Personally, I keep flip-flopping between macOS, Windows 10 with WSL2 + Fedora, and an 8GB RPi 4B, which for the last few days has been doing OK as a desktop.
My point? The security vs freedom debate is complicated, and for many it rages back and forth, even within the same person. There are very few absolutes in this world. Save your hills to die on for points you KNOW you’re right about, because in this, you’re waaaaaaay off base.
Yes. First, because "walled garden" is just a metaphor for the type of system the App Store represents. Second, if you have "walls and fences" but an open door that anyone can walk through by paying 30% of their revenue, then those walls are there for revenue generation, not security.
The point of the comment isn't to claim security is binary, but rather that if we're making a distinction between "secure" or "not secure", the walled garden contributes nothing to the distinction.
Maybe the Chinese government forced Apple, the same way they’re forcing them to secretly share [Chinese] users’ [in China] most sensitive data with the government (photos, videos, notes and everything automatically backed up by iCloud).
You do know that iCloud isn’t owned or operated by Apple in China? Much like Microsoft/Office 365, it’s owned and operated by a state owned business. This is true of a lot of large cloud services that run in China.
I’m not saying it’s any better, or indeed worse, rather it’s an important distinction.
I do and I don’t think it’s an important distinction at all.
That being said, I of course know that the Chinese government didn’t force Apple to implement this privacy violating feature, and I know that the Chinese government didn’t force Apple to allow TikTok to abuse it. But many people see Apple as a privacy conscious company thanks to its marketing, and its good to remind people that for Apple, privacy is simply a marketing gimmick.
Do you know if the new iOS will bring back (?) fine-grained controls over microphone access? Currently you can either give an app access or no access at all, which is really bad; IIRC the "while using only" option in earlier iOS versions was more respectful of customer privacy. If Apple removed granular mic controls before the current iOS version, I really don't understand why.
I've asked in the Twitter thread if early adopters of iOS 14 could also check whether any apps access the microphone while in the background (even though their use case shouldn't require it).
I have the suspicion that some apps listen in on conversations to apply speech recognition and NLP for targeted ads, and maybe even more malicious practices. Though I'm guessing networks close to FB, for example, would have been smart enough to have turned off those "features" in their apps by now, but maybe not.
And that's the dilemma of our society. How do we trust all these people and big companies we don't know? Maybe we can't!
It seems like an intractable problem but I think we still have a few tricks up our sleeves. For one thing, we can look at what public companies report in their quarterly earnings and what major business decisions they make. For Apple, the vast majority of their revenue comes from hardware and services. A very, very tiny amount of their revenue comes from advertising.
Compare that with Google and Facebook. These companies make nearly all of their revenue from advertising. Why is this distinction important? Because advertising is all about data collection. Hardware sales? Much less so.
So when I need to figure out whether I should trust Apple vs trusting Google, I look at where their incentives are and how much they align with my own. Google's incentive is to collect as much of my data as possible and monetize it while keeping me engaged with search and YouTube. Apple's incentive, on the other hand, is to sell me new hardware and get me to subscribe to their services.
It seems pretty clear to me that Apple has far less incentive to snoop on my personal data and abuse my privacy than Google does, so I deem them more trustworthy. Is this a complete picture? Probably not. But I think it's still a valuable one.
So then why can't people run their own hand-inspected code on iOS devices they own? Why can't people run good, privacy-respecting, GPLv3-licensed applications?
The walled garden actually makes it harder to run software that you and others can verify to be secure; instead you have to depend on some opaque QA process somewhere inside Apple to check that for you.
Not to mention you can't elect to use software whose features are blocked by default for security reasons, even when you are sure it will not misuse them because you have audited the source (or even written it yourself!).
That's not what iPhones are for. The Android market has historically been more for the tinkerer; Apple is just supposed to work the way it was intended, and it always was that way. Android is moving in that direction too, so I'm not sure what to choose now.
Well, I have never used Android or iOS on my primary mobile device. I went directly from a cell phone to Maemo on the Nokia N900, then MeeGo on the Nokia N9, to Sailfish OS, where I am to this day, running Sailfish OS on supported Xperia devices.
Hopefully, with reasonably open mobile hardware (PinePhone), other open mobile OS efforts will get more traction now, as the main obstacle to mobile OS development has always been closed hardware and all the crappy proprietary software bundled with it.
There was a time when people could run open source software on their primary computing device, and total privacy control was available immediately, not doled out and taken away at the whims of their corporate overlords. Apple has done more to kill open source, and thus eliminate privacy, than any other company in the history of computing.
I'm probably going to get downvoted to hell for saying this (again) but this still doesn't solve the problem of whether Apple themselves are abusing your privacy.
Also, the closed-source OS means it's impossible to see what things are doing under the hood, or modify the behavior of the OS itself to be more privacy friendly.
For example, on Apple, if you aren't happy with an app snooping on your IMU data, you're out of luck and can only choose not to use the app. On Android (by modifying the OS) you can actually send back fake IMU data to make the app think it got the permissions it wanted when it really didn't. Or you can allow access to your photos, but only let it see a walled garden of a few select photos.
> this still doesn't solve the problem of whether Apple themselves are abusing your privacy
Eventually you have to trust someone. This added transparency from Apple is commendable. I support open source for publicly funded software, but if it's privately owned and funded, you can choose to buy and use it or not. Private companies are not under any moral obligation to open source their code or methods.
Some people are successful while open sourcing everything, and that is commendable too.
>Eventually you have to trust someone. This added transparency from Apple is commendable. I support open source for publicly funded software, but if it's privately owned and funded, you can choose to buy and use it or not. Private companies are not under any moral obligation to open source their code or methods.
Funny how you change your argument mid-paragraph. "It's okay! You have to trust somebody. Or maybe you don't. But anyway, it's a private company, so just don't buy it!"
Well the latter is not what GP was arguing in this thread is it? -.-
Perhaps, but Apple is the last company I would trust. They work in a culture of secrecy and engineer for obscurity rather than transparency, and that does not make them trustworthy at all.
> Private companies are not under any moral obligation to open source their code
But I will give far more trust to those who do so, or at least the privacy-critical parts. With Android I need to trust no-one; I can modify things on the OS level that do not necessarily execute apps in the way those apps expect to be executed, and that is the ultimate privacy guarantee.
I own the hardware, so how my hardware runs software should be my choice; the entire set of instructions and APIs for creating phone apps is merely a suggestion for how the OS should execute apps, and how a stock OS executes apps, but does not necessarily reflect how I choose to have my hardware execute them.
Because you would need to write and/or audit your entire technology toolchain — software, build tools, operating systems, hardware, etc — which isn't feasible for anyone.
I mean, I've known a few developers who worked at Apple. I trust them every bit as much as the average developer who reads open source code.
They vouched that the code they worked with at Apple was attempting to be secure and wasn't trying to steal users' data.
If I'm not going to read the source code myself either way, why should I trust some random open source code-reader X who says "yup, didn't see any malicious code" over a developer friend who works for Apple and says "yup, no malicious code"?
Honestly, more often than not there's a lot of overlap between those people... And I'd bet there are a ton of eyes I'd trust on iOS's source code, given how many devs Apple pays to work on it, while I think there are far fewer on most non-corporate open source projects.
I don't think the issue we should worry about will always just be one bad actor in an organisation.
A corporation like Apple (or TikTok) will sometimes decide to implement features that are antithetical to its users' best interests, because there's a commercial imperative to do so.
If the code is closed-source, it's more difficult to assess whether this is the case.
Being able to inspect the code is a necessary but insufficient measure for being secure. If you cannot inspect the source code, you are not fully aware of how the program works.
Tools like strace can help you analyze a program's behavior from the outside, but you get limited insight into its internals (e.g., what algorithms is it using?).
Being open source does not automatically make software more secure. A successful compilation doesn't automatically make your code bug-free. Yet both are necessary to achieve the desired goal: security and correctness, respectively.
How often do you inspect the code of the open source programs you use every day? How about when updates come out? Do you check again?
Or, do you trust that someone has looked at it? How much faith do you have in someone out there in the community?
My point isn't that open source isn't a good thing--my point is that it's not the silver bullet a lot of people blindly assume it to be. Hence the second line of my post:
> In theory it means something could maybe be safer, but it far from guarantees it.
Sure there is: tools like strace, dtrace and eBPF let you “peek under the hood” of nearly any application, not to mention disassemblers like IDA Pro and Ghidra.
I have debugged all sorts of issues with closed source applications using these tools.
Neither did I say it's automatically safe, nor did I define "safe" as bug-free.
The comment I replied to implied that one would walk through the entire stack, every line of code, to audit e.g. an app running on a phone. That is most certainly not what OP meant. Rather, on the whole, OSS is mostly transparent, while proprietary software is not. There are of course bugs, but that's not the focus of "safety" here.
What we care about is intention. Private companies have a track record of implementing features that go directly against the interests of their end-users, e.g. tracking or vendor lock-in. These anti-features, like the one described in the article, are much harder to detect precisely because the software is proprietary.
Sure, but then you're putting your trust in those people. The point was that an individual can't possibly have the time or resources to do it themselves and so, at some point, they have to put their trust in someone else.
I read in a newspaper that one time a search party completely missed the young girl they were looking for.
Your logic would dictate that search parties are a waste of time.
Is open source a panacea? No, clearly, as you stated. Are more eyes, even untrained ones, better than no eyes? Ask all the people saved by search party volunteers every year.
Wait a minute. So it's better to have government finances be closed to the public, right? Because after all, you would need to audit 100,000s of pages of documents to understand what's going on -- which isn't feasible for anyone.
Homebrew software and hacks aren't feasible solutions for the general population. You can't expect a tech-illiterate person to put up with all of that to protect their right to privacy. Like it or not, solutions like Apple's are much more efficient at protecting 1 billion (exaggeration) people at once.
Not just illiterate. How many people here have the time or desire to inspect every app they use? It's a ridiculous proposition. We all have better things to do.
Obviously the onus isn't on every single user to single handedly verify themselves that the software is correct. Do you check your government's finances yourself? So why should they even publish it? By your logic, that is.
The answer is that if it's open then multiple people with differing interests, such as competitors, or even independent organisations such as non-profit auditors, can check the code.
That's like saying that freedom of speech should be banned because most people have nothing to say. User rights needs to be there for the people who need it. The iPhone is currently a black box where it's very hard to know what is happening in it, that's a big issue in terms of privacy and accountability of the platform, not to mention the anti-competitive behaviours.
You're right, but you are also missing the point. Consumers in general have access to one of only a few options for computing tasks. To date, of these mainstream options, only Apple is taking a meaningful stance on privacy. Is it self-serving, sub-optimal, or otherwise broken? Probably. Is it the best thing for privacy currently widely available? Probably.
Apple’s approach improves privacy in most contexts for all of its users, enough to shape market practices. Your approach works for a tiny, tiny sliver of people - which, by the nature of population-level snooping and marketing, are irrelevant.
It's a self-fulfilling prophecy. Don't bring it up next time. Reverse psychology (in an attempt to not get downvotes) doesn't work here, it just pisses people off (and is against the rules to complain about), so you WILL receive downvotes for mentioning it.
It happens whether or not I say that actually. Every time I disagree with Apple I get downvoted to oblivion. There are just too many Apple fanboys here who believe Apple is the be-all end-all of everything privacy.
Or what you're saying doesn't add to the conversation. Simply blaming "fanboys" isn't useful. I don't think any of the responses to your post believe that Apple is the "be-all end-all" of privacy, but they are arguing that for the typical consumer (something that is too often overlooked) Apple currently provides the best solution available. The critical bit there is typical consumer. If you're a "power user" (what a horrid expression) and want to go another route, have at it! You have a choice.
You say “anybody”
But I’m not a programmer, I would have no idea what I’m looking at. And trust me, neither would the vast majority of humans.
Beyond that, have you considered the actual average user? You either never worked tech support or forgot. It’s bad out there, it’s like people are moving backwards with computer skills because of phones.
> You say “anybody” But I’m not a programmer, I would have no idea what I’m looking at. And trust me, neither would the vast majority of humans.
That's still a way better outcome than having a single unaccountable company be able to do audits.
> Beyond that, have you considered the actual average user? You either never worked tech support or forgot. It’s bad out there, it’s like people are moving backwards with computer skills because of phones.
People are becoming worse at computer skills because manufacturers try very hard to remove owners from how their machines work, and I'm not sure going further in this direction will help. On a very locked-down device like the iPhone, the owner can't understand what it's doing even if they wanted to.
There's nothing to be gained by trusting a large corporation.
By defending user privacy, Apple is able to have a direct effect on the bottom line of its rival corporations.
All corporations behave according to incentives that help them progress further than their competitors. If they don't, rival corporations will take the lead.
So why should I trust anyone? The crux of these arguments is essentially anti-capitalism. That's fine. But be upfront about it. There is a debate to be had around the ethics of privacy in a capitalist free market economy. In this instance, for me at least, Apple is the devil I know. Your point around trusting corporations is totally valid, but I've yet to see a similarly valid argument for trusting open source projects. "You can read/compile the source code" is only valid if it is all-encompassing. At some point in that line of argument, trust still has to be deferred.
Conceptually, trust suggests we don't need to question, or exercise due diligence. I'd argue, in a capitalist system, you definitely shouldn't trust anyone! Capitalism is based on incentive.
--
So following on from this. Is the crux of these arguments anti-capitalist?
No .. it's not anti-capitalism. It's realistic. Even more realistic if you believe in a capitalism system and want to compete.
I feel that capitalism works most successfully when it provides more people with a better quality of life.
Fundamentally, there are conceptual problems with a capitalist system .. factors that lead to scenarios that are likely to lead to the system operating sub-optimally.
An example of a problem; if a corporation is allowed to grow without limit, how can new companies realistically compete?
The remedy? I guess it's up for discussion. But I think regulation can prevent the worst form occurring, and many markets have measures in place to carry out this type of regulation.
--
Why am I saying this?
Because I feel that open-source is simply a remedy to a conceptual problem that exists within our capitalist economy. It is not antithetical to capitalism.
Namely, the question it serves to answer is; how do we balance the power provided by digital technologies with the incentive companies have to exploit these powers for commercial gain?
Open-source is a logical answer to this. You could argue, we don't have the tools available to highlight any injustices contained in a code-base. My answer would be .. that doesn't mean we conceptually couldn't or won't in the future.
Closed source is a conceptual dead end - and won't lead to a better future.
Yes but you could do this on Android since version 1 because it was open source. You could always modify the OS to spit out fake sensor data and appease apps that would otherwise not run if you didn't give them permissions.
My privacy shouldn't depend on Tim Cook's product management timeline.
You forgot option 3: use Lineage+microG and enjoy the most private and secure platform available on a smartphone today. But if you chose the first one then you can't, because Apple won't let you choose the software that runs on the equipment you bought.
I used to run LineageOS without gapps and I loved it, but then my Nexus 5X died and I got a Pixel 2, and LineageOS support stopped at the Pixel 1. It seems to me that the age of custom ROMs is over. Android got good enough that the only reasons for a custom ROM were privacy and extended updates. Since the Pixel 2 is still updated, the only reason left is privacy, and I guess no one was willing to put in the work for just that.
> Since the pixel 2 is still updated there is no reason other than privacy
Features.
- The ability to run Android without Google.
- The ability to change the location of your clock.
- The ability to have some apps open with the status bar and navbar and some not.
- The ability to change your WiFi network without opening settings (yes, I'm still upset they removed that. I use an Android 8 tablet daily, so I'm unlikely to forget).
- Adding invisible left/right dpad buttons and a menu button in my navbar.
- Change the number of quick settings.
- Change the screen dpi (this is implemented these days, I think, but only with 3-5 settings available)
- use adb without first plugging into a computer. Maybe I'm going somewhere and I'll need to restart my phone.
- run Linux in a chroot (far better than proot)
- install Fdroid privileged extension without losing access to security updates.
You can say full well that you don't care about these, you don't have to care. However, these are all things that you can't do without an alternative ROM (or without rooting, which loses access to updates).
In that Reddit thread the author of a Reddit app mentions that they look at the clipboard to see if you have a Reddit link, and offer to open that page in the app (as iOS offers no better way).
On Twitter I saw an app developer mention they trigger the notification on every key press because they have a custom "paste" button that only shows when you have something copied.
For Apollo's use case there's a solution: iOS 14 adds a new API that lets you perform a pattern match against the contents of the clipboard. That way Apollo can attempt to match a Reddit link, and only actually read the clipboard if that match is successful.
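For the curious, a minimal sketch of how that can look (this is my reading of the iOS 14 UIPasteboard API; the Reddit-specific check is just illustrative):

```swift
import UIKit

// Sketch: ask whether the pasteboard *probably* contains a web URL.
// detectPatterns(for:) reports which patterns matched without reading the
// contents, so it shouldn't trigger the iOS 14 paste banner; only the
// actual read inside the success branch does.
UIPasteboard.general.detectPatterns(for: [.probableWebURL]) { result in
    if case .success(let patterns) = result, patterns.contains(.probableWebURL) {
        DispatchQueue.main.async {
            // Reading .url is the real access, and the one users will see.
            if let url = UIPasteboard.general.url,
               url.host?.hasSuffix("reddit.com") == true {
                print("Offer to open \(url) in the app")
            }
        }
    }
}
```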
Sounds like that's the Apollo app. It's a pretty nice feature, but I find it only seems to work when I open the app, so I have to force close it. There's probably a button for it somewhere, but having one on the home page (or similar) seems like it would be equally good or better UX while also getting the user's consent to read their clipboard (since they're tapping the button).
Yep, for Apollo app, it makes sense why he does it. But for other apps like TikTok, it makes no sense. I wonder if the app also sends the clipboard data after that. Someone should look into network requests being made.
iOS supports universal links, so a website and iOS application can indicate that when you open a link to the website but have the application installed, the application opens and takes you to the content. This is what TikTok can use to open links to TikTok in the application.
But this isn’t a general purpose website => app association. Only the website owner can allow an application to do this. You wouldn’t want, say, Google to set up their application to open DuckDuckGo URLs, for instance.
So when it comes to third-party Reddit clients, they can’t automatically open reddit.com URLs because they don’t own reddit.com. The clipboard trick is a workaround for that.
It’s worth noting that if iOS allowed for setting default apps like other operating system this wouldn’t be as much of an issue because then I could always open a reddit link in the app of my choice and no longer need the clipboard workflow.
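For reference, the app-side plumbing for universal links is tiny; roughly this, with the actual routing omitted (a sketch of the standard scene delegate hook):

```swift
import UIKit

// Sketch: lives in the app's UISceneDelegate. iOS calls this when the user
// opens a link to a domain listed in the app's associated-domains
// entitlement and authorized by that site's apple-app-site-association
// file, which is why a third-party client can't claim reddit.com this way.
func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
    guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
          let url = userActivity.webpageURL else { return }
    print("Route in-app to \(url)")  // navigate to the matching content here
}
```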
Okay, this could simply be a dynamic links library checking for a deep link in the clipboard.
Why do this? To preserve the state after install.
Firebase does it. When you click on a deep link but you don't have the app installed, the webpage copies the URL to the clipboard and opens the App Store; after you install the app and open it, Firebase checks the clipboard and takes you to the correct screen.
The apps in the video don't need to be malicious, they simply could be checking if there's a deep link in the clipboard to restore user session.
Of course, with iOS 14 the best practice would be to do this only once after the install.
Yes? That's the definition of a deep link? The way you get notified is you open their app...
This is a special case for when you don't already have the app installed. Being able to read the clipboard without warning is its own thing, but this specific deep link use case is extremely benign...
Ideally, we’d be able to differentiate between local parsing (which I’d deem acceptable) and a remote request, but that quickly enters a gray area. How long after parsing do we validate a request? Is it even viable for a compiler to track the status of a property beyond assignment? And for how long? etc.
It’s for two factor auth tokens. When I fill in a password it copies my two factor code to the clipboard, then ~30 seconds later it restores the original clipboard content.
All pretty silly all things considered but I can’t think of an easier way to do things.
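For what it's worth, there may be an easier way: as I understand it, UIPasteboard's write options (around since iOS 10) include an expiration date, so the system clears the code on its own and the app never has to restore the old contents. A sketch:

```swift
import UIKit
// UTType needs iOS 14; on older targets use the raw
// "public.utf8-plain-text" identifier string instead.
import UniformTypeIdentifiers

// Sketch: copy a one-time code that the system clears by itself.
func copyOneTimeCode(_ code: String) {
    UIPasteboard.general.setItems(
        [[UTType.utf8PlainText.identifier: code]],
        options: [
            .localOnly: true,                                // no Universal Clipboard sync
            .expirationDate: Date(timeIntervalSinceNow: 30)  // auto-clear after ~30 s
        ]
    )
}
```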
The app does a check for URLs and will offer to open the URL using its own browser if it detects one.
Reading around, it looks like there are better APIs for doing this, where you can ask iOS if the clipboard contains a string matching a pattern without actually getting access to the content.
"It seems like a ton of apps are abusing this feature:"
Honest question: Why do we need this feature?
As a user, I am happy to sacrifice whatever benefit it provides -- to end users -- to stop the abuse. Obviously the feature provides benefits to app developers collecting personal data.
Discord and other apps will paste in one-time codes from the clipboard. Copy code from an authenticator app and switch back to the app. The code gets auto-pasted into the box.
It's pretty handy since paste itself is a somewhat cumbersome shortcut on many keyboardless devices.
A dictionary app I use looks at the clipboard when opened and automatically displays a translation iff 1) the copied text is new compared to the previous time the app was used; and 2) the text is in the source language.
It's a pretty neat use of the feature, although I'd still gladly let go of it if it means that the other 20 apps I regularly use don't get to rummage through my clipboard for no good reason.
Agreed that this downside is not worth the convenience. But I think iOS 14 provides a great solution to this problem: you can pattern-match against the contents of the clipboard without reading them, and then only read the clipboard if the pattern matches.
One I use regularly is that Chrome on iOS does it when you open a new tab. It offers to search for the text in clipboard or go to the URL if it's URL format.
I have a feeling like many of the apps are reading the clipboard on the launch because they probably act on copied text (e.g. if it's a link or a number, they can use that to show a suggested user action).
Obviously I might be wrong, but we live in times where many applications are reverse engineered and their traffic is MITM'ed all the time. I'm not sure if anyone’s sending those clipboard contents outside the device. That would be a problem.
How, exactly, are they abusing it? Are you suggesting that they send the contents of the clipboard back to their servers? Do you have any proof that they are using the clipboard for nefarious purposes?
It’s disappointing to see the lack of skepticism applied on a site like Hacker News.
The contents of your clipboard _can_ be directly related to the functionality of an app.
For example, a link saving app like Pocket might check if your clipboard currently contains a URL when you open it. That allows the app to turn a slightly tedious operation (tap/hold input field to bring up context menu, tap paste, tap button to save) into a single tap ("save copied URL?").
Whether or not the convenience is worth it might be debatable, but I fail to see how one would call that nefarious.
There are APIs in iOS (which existed before iOS 14) that allow you to ask the OS whether the clipboard content matches a pattern (e.g. is it a URL?) without triggering the warning in iOS 14.
It does appear that lots of apps don't use these APIs (the developers probably never knew they existed till now), but there is a privacy-preserving way of building the functionality you talk of.
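Concretely, I believe these are the `has…` properties on UIPasteboard, which date back to iOS 10. A sketch:

```swift
import UIKit

// Sketch: check the *type* of the clipboard contents without reading them.
// hasURLs / hasStrings / hasNumbers / hasImages only report whether
// something of that kind is present; reading .url is the actual access.
if UIPasteboard.general.hasURLs {
    print("Clipboard contains at least one URL; maybe offer a paste action")
}
```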
Aren't they just checking with a UTI? So, for example, a package tracking app can't ask if the clipboard contains strings matching patterns describing common tracking code formats?
True (and I'm aware of those APIs). Just pointing out that "I want to know what's on the system clipboard without the user explicitly pasting" does not automatically equal trying to hoover up your data and phone home with it.
The maker of the Apollo Reddit app chimed in on the reddit thread that his app would check to see if a reddit URL was in the clipboard and offer to take you to that page.
Chrome uses it so the URL appears when you select the address bar.
Some apps detect if you have a 2FA auth code copied and auto-pastes it. I think Discord might be one? I can't remember which, but I've seen it on at least one app.
I can totally expect others to detect copied URLs that may belong to the app's domain and then offer to direct you to that particular URL (for example, I think the SomethingAwful app on iOS does that - if it detects a forums.somethingawful address it'll offer to load that particular thread for you).
That said, I definitely want to see more visibility about when and why this is done. Apple are absolutely in the right to show me a popup whenever it happens, so apps are forced to be clear and transparent about it.
I can guarantee you that there are tons of apps that send your clipboard verbatim to their analytics services, because some product manager "wants to see the data".
Generally, no. But there are ways for an app to stay active in the background. Some are grounds for App Store suspension. Music playing apps may stay somewhat active in the background. However, I don't know if the clipboard API will still work when an app is in the background.
Create a bitly account if you don't have one, log in, and create a bitly link for anything; it doesn't matter what it is.
Copy that bitly link to your clipboard and repeat what you're doing in that video.
Monitor the bitly link for clicks.
Better still do it on a website you control with a unique URL that won't get indexed by a search engine and monitor the web server log files for hits and keep a record of the IP addresses.
It is honestly still kind of crazy that Apple still hasn't fixed this gaping hole in their security model, along with others. A notification is not solving the problem.
I wonder if Apple is playing 4D chess here though. As people learn about this, they will become outraged and care more about privacy. This in turn benefits Apple since that's their marketing stance.
I wish they just cut the bullshit and fixed these holes though, they've been around for years. It's really depressing to see Apple's fantastic security work in other parts of the stack be completely and utterly compromised by OS design decisions like this.
Agreed. Reading the clipboard should require the user to choose "paste", just like in the browser. You can't read it unless the user is expressly trying to paste into your app.
Manipulating the clipboard is not the problem, reading it is. AFAIK there is no way to read the clipboard from JavaScript without user interaction. If there is please post a repo.
It used to be true but all that was fixed like 10 years ago.
An app being able to declare operations to perform on the clipboard (overwrite, append, etc.) that don't return any information about the contents is not a problem from a privacy or security perspective. At most it can be annoying.
If your clipboard contents were overwritten with an arbitrary wallet address immediately before you paste into an exchange's "Send BTC to..." field, that would be a pretty big problem.
I wouldn't expect that most people would double-check to make sure a series of gibberish characters matches what they expect, especially if they don't have any reason to suspect that paste wouldn't output exactly what they copied a moment ago.
This is a really good point. Those popups are disgusting, and this is yet another reason not to disturb your users. I'm fine with a button in the footer of every page to do this; I think displaying a popup for this is terrible, and I won't do it unless compelled to. Does anyone know what the rules are? I think they are much vaguer than most people suggest.
Sadly, user interaction does not have to be something done by the user with intent. As another comment below mentions, the code to manipulate the clipboard can be hidden yet still kicked off by the user interacting normally with the website.
If you're using JS to read the clipboard, you don't need to paste. Assume that the purpose of reading the clipboard is not something useful for the user but a nefarious use by the app/site maintainer. They would just send the read data back to their server without the user knowing.
This is true for nearly all apps, but it is important to consider special cases. I use a keyboard app (SwiftKey) that shows contents of clipboard, if recently added, as a shortcut button. It's pretty nice. I can probably turn it off but I don't intend to.
Accessibility apps likely have a lot of examples like this too.
There are cases where it's fine to access the clipboard without me pasting. My reddit app for example auto-detects reddit-links on my clipboard and asks if I want to open the link in the app if it finds one
Well, anything that requires jumping (knights) must require at least a third dimension. Now, 3D chess I’ve never learned, but how do pieces move there?
It's even easier than that, just use ngrok. I recorded a demo video of a web interface running locally via ngrok. Left the tunnel up while the video uploaded to YouTube so I could send it privately to a colleague and during processing I started to see requests on my tunnel. YT scraped the URL from the video and was requesting it one char at a time until the entire address was complete. IIRC this was almost two years ago. That was also the day I decided no more visible private links in YT videos.
That's wild! What do you mean they were requesting it one character at a time? The URL itself? If so, how do you know that? Do you also own URLs in that "character space" leading up to your URL?
My guess is that each frame of the video was OCR'd for text, so as the author typed a URL in one character at a time it was producing unique substrings on-screen and the youtube bot dutifully tried to fetch each of those unique substrings
That seemed like the gist of it. I wasn't surprised that it was OCR'ing text, but I was floored that it recognized a URL and started firing off requests.
I was in a situation like this once recently. I was trying to send a password pusher link to my brother over Signal. The password pusher was set to expire after 1 day or 1 view. And my brother kept saying the link didn't work.
Of course it doesn't work when Signal fetches it before him to make a preview!
Had to set it to exactly 2 views and then he could view the password.
And iirc the Signal-desktop release for Linux I was using could not disable previews of links. And even if it could, his end might have previewed it.
I've had these inspections deplete single-use links such as a Slack magic login link. I actually side with "GET should be idempotent" but it's still annoying.
Fastmail as well. It minimizes attack surface on a client. If you send an email to bob@fastmail.com, fastmail's IP is recorded as accessing the image, not Bob's home IP address.
Using Fiddler, I briefly looked and didn't see TikTok sending my clipboard contents anywhere.
Edit: However, TikTok is one of the chattiest apps I've looked at. They have a huge number of tracking/logging/collection endpoints constantly slurping data in the background. See my hosts list, which aims to block this:
Genuine question, why don't companies proxy tracker data through a single host that the app also depends on to serve data?
That way it wouldn't be possible for users to block individual hosts to prevent tracking. I guess it's not worth the effort though because laypeople won't care either way?
They aren't proxied because advertisers want to ensure they control the domain and can verify requests are going to it, without having to trust the content provider.
Some content providers don't allow you to view content if you block the advertising hosts.
Then it becomes a matter how you want to deal with failure, do you want the site to break if your ads don't load?
Most trackers are 3rd parties that require you to go through their hosts. Additionally, these trackers are usually picked out by business analysts / product managers on different teams, so marketing may have their own tracking solution, sales might have their own, and engineering might have their own. It tends to be faster/easier to plop JavaScript on a page than to engineer your own tracking/analytics pipeline at an early stage startup.
> Additionally, these trackers are usually picked out by business analysts / product managers on different teams
This. I remember at my last company, we ended up with 8 or 9 different analytics tools all getting different data and showing one or two "cool views" the PM had put together.
It's such a big problem that solutions like segment.io exist to broker your events to N different downstream solutions.
> Using fiddler I briefly looked and didn't see tiktok sending my clipboard contents anywhere.
Maybe you just didn't copy anything TikTok was interested in keeping track of. I can think of a lot of really obnoxious things you can do with clipboard data, everything from scanning the contents to collect interests, scanning for URLs, collecting information about what applications are installed. A lot of this could be analyzed on device and it would only update infrequently.
If I had to design it, I would not directly send the data to the central server. I would first store it for a period of time, then compress everything and send it while the phone is inactive and plugged in. Or I would build a model from the data on the phone, then send that to the central server. Both ways would be asynchronous.
I know there are a few apps which will check the clipboard in order to provide functionality to the user. For example, some shipping apps will check the clipboard to see if the user has a copied tracking code and if so, ask the user if they want to track their copied code.
Not sure if TikTok does something similar, but there are certainly innocent reasons for checking the clipboard.
I'm disappointed that so many people think "hey they could just be doing this for innocuous reasons" instead of "oh maybe nobody should be doing this even if it's the absolutely most straightforward way to do it."
Even if you're only looking for a shipping tracking number and then only so that you can provide useful auto-populate, will you lose out by only checking the clipboard when the user hits your text input field? Is it that much to ask that you find the least offensive way to serve your user?
On the other hand, what will you lose when the news gets out that you've created a keylogger? What about when someone else at your company pushes you to monitor for something else for strategic advantages? Or what about when another developer doesn't understand the implications and now your app is responsible for revealing passwords or other sensitive information? Are all of these worth saving one click?
This is an OS bug, not an app bug. You can't expect millions of app developers to get this right. An app should not be allowed to read the clipboard until the user chooses to "paste". That's on the OS for allowing this behavior. It's silly to think that if you make it possible to read the clipboard at any time, millions of app developers will somehow all use it correctly, even if they have no malicious intent.
A high speed collision happens on a freeway, killing both drivers. This is a car manufacturer bug, not a human bug. You can't expect millions of drivers to get driving right. A car should not be allowed to drive fast.
How about: stop apologizing for billion dollar corporations. Fault can be placed on both the OS and applications. I expect better, from everyone.
The difference is one problem is trivially solvable. If an app isn't allowed to read the clipboard the problem is solved. Smart people choose solutions that actually solve the issue when those solutions exist rather than just making some guideline and praying people read it.
Companies developing apps should be held accountable for their decisions, i.e. spying on the clipboard in this case. Don't excuse them with the reason "well it was easy to do, so it's ok" that's like saying "well my car was stolen, but it was easy to break the lock so it's ok"
By this logic, how do you trust Apple? They're a famously black-box company with access to all your data.
Just because they say they're not data mining you doesn't mean they are not.
slippery slope arguments go all the way to the bottom
> An app should not be allowed to read the clipboard until a user chooses to "paste".
Sounds good. There are clipboard apps on iOS that work with a share sheet and also get the clipboard content when launched. They could be modified to have the user actively paste in the app to store something if the app is launched in the foreground (just like crude apps on a desktop would).
>will you lose out by only checking the clipboard when the user hits your text input field? Is it that much to ask that you find the least offensive way to serve your user?
In your toy example of my app's main screen being a text box where the user can insert a tracking code, yes, I do lose by making the user wonder every time "you know I have a tracking code why are you making me type it in?" In a more realistic example of, say, Amazon, where the "track my previous order" button is not the main screen of the app, the convenience is further increased by doing the detection automatically.
And what about non-text clipboard contents? Not every interface is a text-style document into which content can be embedded. Even "Copy URL" requires knowing that "share website" shared a URL and not a website. It doesn't make sense for images at all.
I'm not an iOS developer, but I think "Sharing and Actions" [0] is the right thing? In Android, 5 years or whatever ago it was by binding to an "Intent" IIRC.
If I copy a link on Android, I can go to Chrome, click on the address bar, and it suggests "link you copied: $whatever". This is how this should work. If there's an image in my copy buffer, it doesn't need to do anything. I don't need Chrome monitoring my copy buffer when I'm doing other things in case I copy something that looks link-like.
Clearly there needs to be a way for the parcel tracking app to tell the OS, “I am looking for plain text strings that match this pattern.”
Which iOS and macOS already have, they are called data detectors. When the message arrives notifying you of the tracking number, you can select “track this parcel” from the context menu. No need for an app to snoop on the clipboard.
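The public face of that machinery is NSDataDetector, for what it's worth. Note its checking types (links, dates, phone numbers, addresses...) don't include parcel tracking numbers, so a link stands in as the example in this sketch:

```swift
import Foundation

// Sketch: run a data detector over a string and pull out anything link-like.
let text = "Out for delivery: https://example.com/track/1Z9999999999999999"
let detector = try! NSDataDetector(types: NSTextCheckingResult.CheckingType.link.rawValue)
let range = NSRange(text.startIndex..., in: text)
for match in detector.matches(in: text, range: range) {
    if let url = match.url {
        print("Detected link:", url)  // hand off to a tracking screen, etc.
    }
}
```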
I know what data detectors are, but I most certainly don't get any such tracking option for any of my messages containing tracking codes. I still need to copy those codes into Parcel, and find the automatic pasteboard reading useful. And UIPasteboard doesn't offer such an API, AFAICT.
I'm not sure that looking at what a user is typing into the comment box for the application really qualifies as a significant privacy violation. That presumes a privacy model that isn't terribly intuitive or practical.
Now, if they had evidence that the data from the keyboard was being sent up to a server, that'd be a different story.
In this case we're talking about apps monitoring the copy buffer when they aren't foregrounded. Your example of monitoring a comment box isn't what we're discussing. In the copy buffer example, I think a privacy model of "an app in the background can't read what I'm copy-pasting in other apps" seems very reasonable and fits my idea of what a "standard user" would expect.
Personally, I would prefer a world where nothing can pull from the copy buffer, it needs to be actively pushed by the user. It seems crazy to me that that isn't the case.
I agree. I am very surprised Apple started with a notification. This absolutely should be permission-based, like location or anything else. There are zero cases where TikTok or Facebook should need my clipboard.
A notification that they’ve already done it is not enough. It should tell you every time even if you approved it.
The only explanation I can come up with for why Apple isn’t making this opt in like location is it is so widespread it would break many apps. I just can’t understand how, or why they wouldn’t announce a transition-by date like with Sign in with Apple.
Well, the problem is we don't know why they are using it.
That almost makes the notifications useless.
Apple puts app developers through an annoying review process. It seems like the least they could do is check with the developer on why they want that access and see if it's legit or not.
> Well, the problem is we don't know why they are using it.
>
> That almost makes the notifications useless.
If your app is checking the clipboard or my location on a frequent basis, it's your job as the developer to communicate to me why you are doing this.
If the notifications are frustrating, users will turn off the permissions or remove the application entirely. Crappy snooping applications are gone. Mission accomplished.
> If the notifications are frustrating, users will turn off the permissions or remove the application entirely. Crappy snooping applications are gone. Mission accomplished.
Will they? That didn’t happen with the UAC dialogs in Windows.
Why shouldn't apps use the API they are provided with?
It wasn't offensive to the user before iOS changed the rules. It is just a technical detail behind a small feature.
Every app can do bad things. For example, every app with a password field can use that data to crack your account on others services. You shouldn't reuse passwords but we all know that too few people follow that rule.
If you installed an app from some company, it means you trust it to some extent, and with that in mind it is reasonable to think the issue is innocuous. If you think a company wants to steal your passwords, why did you install its app in the first place? Clipboard or not, it will find a way to do bad things.
> Is it that much to ask that you find the least offensive way to serve your user?
Up until now there’s been no way the user has been offended because they haven’t known it’s happening. So there’s no real incentive to do it when you focus on a text field vs anything else.
And more broadly, there isn’t a downside if you use the API honestly: e.g. to check for a numeric code that matches whatever regex for one of your orders and otherwise disregard the data immediately. I’d bet a good number of users find it useful.
"The OS snitching on us and then annoying the user" is an interesting definition of "offensive", but definitely not the one I meant. Let me put it another way: if you put the appropriate amount of effort into asking "what's the robust, minimally invasive, least-likely to be misinterpreted and/or abused, way for me to accomplish this?" you are likely to create a better product and less likely to have something like this pop up.
So let's say you're making the tracking notifier, and you work for UPS. The regex is `1Z[0-9]{16}`. All good, you're being nice: someone opens your app and you already know what shipment they're interested in. Then a "growth hacker" joins your group and mentions that it'd be nice to know how many of your customers also use FedEx, so the regex is changed to also grab FedEx tracking numbers (`(1Z)?[0-9]{16}`, I think). And now someone gets the genius idea of checking up on packages shipped by competitors and popping up a notification "tired of waiting on DHL? UPS delivers within 2 days 99.995% of the time" when they miss a delivery. Even though they never asked UPS about their DHL package.
See how that progresses? See how it's offensive, even if you're not annoying your user more than you normally would with spammy push notifications, and even before your user suspects that you're spying like this? Do you see how this whole series of escalations aren't available, or at least not as easy, if you only check the copy buffer when it's likely a user is about to paste? Instead of "tweak what we already have" you have to "include a new snooping routine".
If you're thinking "all is fair in love and war" here, and this seems like genius marketing: 1) this is your heads up that your morals are not in line with society's, and 2) do you think this will be a marketing win if the regex is loosened enough that you pop up a UPS notification about "package with tracking number (phone number someone just gave me)"? What about if my UPS account for work notifies me about some very private personal packages? Especially some shipped via OnTrak?
Anyway, as I said in my first comment. I'm disappointed that people don't think they should try to worry about downsides and failure modes of their design and engineering work. Maybe it's a matter of norms and priorities being different in the consumer app/web world vs. many other domains.
Monitoring the copy buffer, from the background, is an overreach, and doing that and then pretending like it's the reasonable thing to do is what I describe as offensive. The point I was trying to make was that even doing less than that, but not designing with an intent to be minimally invasive from the start, sets you up to head into the "outrageous scenario" that yeah, I'll grant you I completely made up. But FWIW it is also completely in line with how I've seen people operate.
Suppose you want to offer this capability but only check the copy buffer when the user has signalled an intent to provide you with input. How is that not the least galling design decision? I'm having trouble figuring out how to express that it also serves as a personal (and team-internal) signal that "we are here to serve the user, and not to take advantage of them, even if that's inconvenient for us". Maybe that doesn't matter, or maybe lacking that is what leads to things like the Uber "Ride of Glory" blog post and worse?
Something I meant to imply in my first comment, but not the reply to you, is that furthermore limiting your exposure to user data limits the likelihood that a series of bugs puts it into your logs and then leaked out to the world. No, it's not done on purpose, but no amount of good intentions fixes it. Defaulting to being less invasive also reduces your likely level of impact.
This is exactly why I am happy to see Apple cracking down more. I will grant you that this could very much be "innocent", but it's still completely unacceptable to me as a user.
And just about every PM or founder I've ever worked with seems to think like this - when given the choice between the straightforward, obvious way to do something, and some crazy, brittle, privacy-abusing hack that might improve conversion by 5%, they'll choose the latter every time. This is a perfect example of this kind of thing, "let's just check the contents of the clipboard every 2 seconds in case the user doesn't know how to paste." It might have even been born out of real complaints from a fraction of users who really can't figure out how to paste. And somewhat depressingly, in the defense of PMs it does actually seem to work sometimes and improve conversions. And all your competitors are probably doing it already anyway. So the only way to stop this crap is for the platform itself to step in.
Another example that's also fixed in iOS 14 - Apple already provides a perfectly usable share sheet for choosing a photo, where I can look through my whole phone and pick just a single photo to give to the app. But for some crazy reason, seemingly every app wants to recreate this themselves. So they require you to give them access to your entire photo album so that they can display it back to you with a marginally different design and a different color background than the default page. Now in iOS 14, Apple is giving the user an option to just pick individual photos to share anyway, and then presumably make it appear to the app that those are all the photos you have on your phone.
Oh man, that's awesome! I wish they'd do the same thing for contacts. There are a bunch of messaging apps that "need" access to every contact in your phone. I have 1 friend using their service, I don't want to give them access to my other contacts. So I don't use that app. But if I could restrict them to only the one contact I care about, I'd probably use them.
A similar problem is that if you want to save a photo from an app to your album, you need to give the app access to your entire album for some reason. I'm not sure whether it's because there's no API for write-only access or because app developers choose not to use it, but it's really annoying (and a little worrying: what if the app accidentally deletes one of my photos?).
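For what it's worth, an add-only path does seem to exist: apps can declare the NSPhotoLibraryAddUsageDescription key, and iOS 14 adds an explicit add-only authorization level, so an app can save images without being able to read or delete anything. A minimal Swift sketch, assuming iOS 14 and that the key is present in Info.plist:

```swift
import Photos
import UIKit

// Sketch: request add-only access (iOS 14+) and save an image without
// ever being able to enumerate, read, or delete the existing library.
// Requires NSPhotoLibraryAddUsageDescription in Info.plist.
func saveToLibrary(_ image: UIImage) {
    PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
        guard status == .authorized else { return }
        PHPhotoLibrary.shared().performChanges({
            // Creates a brand-new asset; nothing else is touched.
            PHAssetChangeRequest.creationRequestForAsset(from: image)
        }, completionHandler: { _, error in
            if let error = error { print("Save failed: \(error)") }
        })
    }
}
```

So at least part of the problem looks like developers not adopting the narrower permission, rather than the permission not existing.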
The mentioned photo change sounds awesome! I was not aware of this iOS 14 feature. Giving an app access to my entire photo album always seemed overly broad, especially given that photo EXIF data can give away other details like location.
You'd think if the user knew to copy a tracking code into their clipboard, they could also paste it into the appropriate field without the app needing to extract it for them.
Not having to paste is very useful. Especially on mobile where pasting is slow and tedious.
A few months ago I would have said the same thing as you, but then I used some applications that looked at what I had copied and automatically did all the hard work. It's a pleasant surprise to see it happen, and having experienced it, I am happy that the applications have this functionality.
Of course, if you're living in a world where all the code on your device is considered hostile, then you may not want this. But I use almost exclusively free software, and there you can generally start from a presumption of goodwill instead of a feeling of distrust, as with TikTok.
I mean, sure, you're technically correct, but you're missing the point. For just one example, it's so heckin' convenient when an app recognizes that you have a 2FA token in your clipboard copied from your 2FA app and "pastes" it for you automatically.
Yes, but this is still convenient. For example, Pocket (a read-it-later service) checks your clipboard when you open the app. Normally to add a new URL to your reading list, you need to tap through a menu or two from the front page of the app. But if it detects your clipboard has a URL on it, it provides a small one-tap "Add this copied URL to your list?" button at the bottom of the screen, reducing the friction.
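Done carefully, that doesn't require hoovering up the clipboard. A hedged Swift sketch of the pattern (the names are mine, not Pocket's): check for the presence of a URL first, and only read the contents at the moment the one-tap prompt is shown.

```swift
import UIKit

// Sketch: gate the "Add this copied URL?" prompt on a presence check.
func offerCopiedURLIfAny() {
    let pasteboard = UIPasteboard.general
    // hasURLs reports only whether a URL exists; it does not read the
    // contents and does not fire the iOS 14 paste banner on its own.
    guard pasteboard.hasURLs, let url = pasteboard.url else { return }
    // Reading .url is the step that actually accesses the clipboard
    // (and is what surfaces the banner), so do it only at this point.
    print("Offer one-tap save for \(url)")
}
```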
Google/Apple maps does this too; if you've copied an address and then open them it offers you the copied address as a suggestion (usually with a clipboard icon) or as an option to navigate to instead of having to copy/paste manually.
Google Maps does this, and it saves a couple taps. If you have an address on your clipboard and then tap on the address input it will be the first suggested result.
I use a shipping app that does that. Doesn't the shipping app know when it's newly in focus, and shouldn't it only then check the clipboard, not constantly check?
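It can. Apps are told when they come to the foreground, so the check can be tied to that event instead of a timer. A minimal UIKit sketch, assuming the app checks once per activation:

```swift
import UIKit

// Sketch: consult the clipboard exactly once each time the app becomes
// active, never on a polling loop running in the background.
final class ClipboardOnFocusChecker {
    private var token: NSObjectProtocol?

    init(onActivate: @escaping () -> Void) {
        token = NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil,
            queue: .main
        ) { _ in
            onActivate() // e.g. look for a tracking-number-shaped string here
        }
    }

    deinit {
        if let token = token { NotificationCenter.default.removeObserver(token) }
    }
}
```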
FWIW, the one I use only checks once — upon startup. It's sometimes annoying that I have to kick it out and re-launch if I've copied a tracking URL from e-mail after the delivery tracking app is already open, but now that I know that's the price of privacy, I'm perfectly OK with it.
Deliveries on my iPhone does this. Though I'm pretty sure that if Apple opened up an API that let you provide a hard-coded regex, it would solve a number of privacy issues.
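iOS 14 gets part of the way there. Rather than an arbitrary regex, an app can ask whether the clipboard matches a fixed set of system-defined patterns (web URL, number, web search) without the contents being revealed and, as documented, without triggering the paste banner. A sketch, assuming the iOS 14 detectPatterns API:

```swift
import UIKit

// Sketch (iOS 14+): ask whether the clipboard *looks like* a URL without
// reading it. Only a later read of the actual contents shows the banner.
func checkForTrackingLink() {
    UIPasteboard.general.detectPatterns(for: [.probableWebURL]) { result in
        switch result {
        case .success(let patterns) where patterns.contains(.probableWebURL):
            print("Clipboard looks like a URL; offer a paste action")
        case .success:
            print("Nothing URL-shaped on the clipboard")
        case .failure(let error):
            print("Detection failed: \(error)")
        }
    }
}
```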
Chrome accesses the clipboard every time you touch the address bar, to check if you have a URL to go to. Safari does the same, but without a pop-up. Apple must be cheating somehow, because even they know how annoying it is.
> Not sure if TikTok does something similar, but there are certainly innocent reasons for checking the clipboard.
If I find someone digging through my mailbox, I don't start rationalizing their reasons for it - I ask them to explain themselves, and that explanation had better fucking be a good one.
For sure. I am not in a position to determine whether TikTok is looking at clipboard contents for good or evil. I simply wanted to point out that there exist innocent reasons for looking at the clipboard.
The argument is that there is no innocent reason to be looking at the clipboard. Either the user is going to paste the thing into your app, or it's not yours to fiddle with in the first place.
I copy passwords from a password vault. I don’t want your app knowing that I just opened Hacker News five seconds ago and have this password-looking text in the clipboard.
In the coming days I feel as if we are going to see a lot more threads about the new privacy features in iOS 14. Apple has a video overview of a lot of the new stuff [1]. I suggest watching it (at 2x speed - they talk very slowly). Especially at 16:30 where they emphasize that SDKs are part of your app and that you the developer are fully responsible for the data collection of any SDK you include in your app. This is going to be a huge problem for a lot of developers.
That's it. I'm switching to iOS. Android is like child's play when it comes to privacy compared to the features introduced just in this version of iOS. I'm sure I'll hate the lack of customizability but at least the privacy features are solid.
If by problem you mean end of laziness then sure. You can no longer drag and drop FB or google analytics SDK into your project, type in a few lines of glue code, email the product owner that you have added tracking capability into the app and head down to the pub to spend some of that well earned six figure developer salary.
Now I am gleeful about this because it's win-win for me as a consumer and software engineer. Better privacy and use of my data as a consumer. More $$ work as an engineer when all those analytic pipelines need to be re-architected and rebuilt.
It looks like there could be a reasonable explanation for this. There are apps that have different behavior whether or not there is text in the clipboard (e.g. enabling a "paste" button), and they're only checking that the text exists, not what it is. There's a new API that will let devs do that without triggering the user notification.
If TikTok is actually constantly loading the clipboard, that's obviously terrible. I'd bet this behavior is gone by the next release, and that shows how useful this new notification is.
> There are apps that have different behavior whether or not there is text in the clipboard (e.g. enabling a "paste" button)
People keep saying this but I've never seen one of these app-specific paste widgets. And even if I did, I wouldn't miss it in the slightest for the sake of not allowing every app to be reading my clipboard at all times.
It's inexcusable to me that there isn't a permissions prompt for this. Two of my most commonly copy-pasted kinds of strings are URLs and passwords.
I can't imagine being one of the PMs/higher-ups that decided to allow clipboard content to be shared willy-nilly like this. Like, what must be going through their minds when they make decisions like this? "User experience at all costs?" Seems contrary to their stance on privacy.
Personally, I couldn't care less about Google seeing whatever junk I sometimes have in my clipboard if it means I can get to the address I'm looking for more quickly.
And I'm sure there are many who couldn't care less about the malware running on their Windows machine as long as they can browse Facebook. What's your point?
I don't use any Google apps so I don't know, but even if I did, it's such a minuscule amount of effort that's being saved by having a custom prompt when the generic path is so easy.
It's tricky; little things like this may not seem like much, but in aggregate they can be quite frustrating to users. Personally, this is something I do often enough that it noticeably reduces friction, so I live with the tradeoff because it's very convenient.
However, it seems like there should still be a way to provide nearly as much convenience to users while still protecting their privacy.
Here's an example, highlighting a band name on a festival website. Stock Chrome (and Spotify) on a stock Android on Pixel 3A https://imgur.com/a/1GqrRVs
I use bitwarden as my password manager. Out of paranoia, I have been logging into Bitwarden only while an empty tab is open in case some random website is able to access my keystrokes while I use the plugin.
I am a web developer, but I wasn't actually able to find information about whether this is a real risk or not last year when I began doing it. Can anybody clarify?
The problem is that iOS doesn't differentiate between a call that merely checks for the presence of a clipboard entry (e.g. so you can enable "Paste" in a menu/submenu) and actually copying the contents of the clipboard.
The workaround (for legitimate apps) is to simply always keep that "Paste" option enabled--even if the clipboard is empty. That way you won't freak out your users and only suffer the most minor of usability consequences.
Having said that I don't think TikTok has any relevant functionality such as enabling a "Paste" option so... Most likely nefarious!
I believe any time you paste with the native keyboard UI up it doesn't go through the app. You only need to snoop the clipboard if you want to proactively use the data without having the user paste.
It seems to me that Touch ID is slightly less secure, given that it stores your fingerprint on the device and you rely on the Apple module and crypto to be implemented securely.
With a Yubikey, the only thing you have to do to stay secure is to not lose it or let others use it.
The attack surface is larger, but it's still not an attack almost anyone needs to worry about: a 0-day on the T2 (which has never been publicly found/reported) is something only worth using in nation-state attacks on other nation states.
I took a short look at this and it appears to be a work in progress that varies between browsers...
Chrome implements a "clipboard-read" permission that can be requested by calling navigator.clipboard.read(). When a page calls that it will display a permission request dialog (like those asking for permission to show notifications on seemingly every single news web page). A little clipboard icon will also appear in the nav bar showing the status of the permission (visible after a read attempt is made during that visit)
Firefox is apparently on track as well, although for now the clipboard.read function is not implemented for pages and can only be called by extensions. I'm not sure what the permission dialog for extensions is like.
So... it may be safe. But it is a work in progress and each browser is different. I've only checked the most well-documented method for reading the clipboard... maybe there is some other half-implemented feature or event listener that happens to leak some clipboard data...
I had the same exact concern, and I haven't been able to find reliable confirmations on this being completely impossible either.
It's easily testable, though, that a webpage that isn't focused (because an extension's pane is open) doesn't receive input events. Likewise, Chrome [1] and Firefox [2] extensions themselves cannot bind to relevant keys for example. All in all I would say that going through an empty tab is unnecessary - even though I got into the habit of doing it as well - and even if this wasn't true 2FA should be enough to thwart most malicious actors.
Well, you probably couldn't find any confirmation because nothing in the security world is completely impossible :) Webpages aren't supposed to be able to receive input events when the extension popup is open, but there could still be an unknown vulnerability in Chrome/Firefox.
So if you care enough it's best to mitigate that risk by using the standalone application for your password manager, or better yet use a completely separate device like your phone!
On Chrome at least, Bitwarden has its own extension popup window with the vault. This is a completely separate web context. There's no way for the open website to detect your keystrokes barring a vulnerability in Chrome itself.
If you really want to be safe, you should use the standalone desktop apps and skip the browser extension altogether. Doing that empty tab thing probably doesn't protect you from anything.
The crazy thing is that I would open TikTok on my phone and occasionally see a loading dialog that said "Pasting Clipboard" (from my MacBook Pro). I finally freaked out back in March, Googled it, and found this...
https://www.forbes.com/sites/zakdoffman/2020/03/12/simple-ap...
iOS already prompts for other things like "app Foo would like to know your location: Never, Only when open, Always". Users are used to this. I'd love a popup like:
"TikTok wants to see what you've copied into your clipboard: Never, Once, Always, Uninstall that spyware".
Then I could make informed decisions, like: sure, my package tracker can see if there's a FedEx URL in my clipboard. I'm OK with that. There's literally no reason why I'd ever want Instagram to check my clipboard, though. Maybe you do, and you could give it permission.
Yeah, not just "what you've copied into your clipboard", but "Careful! Sometimes you have important confidential data in your clipboard, such as passwords!"
Chinese investment money is everywhere, so they try to control the narrative. I have seen for some time that on HN we are open to criticizing, but we don't take a standpoint unless it has something to do with a Western government.
I think this is due to the flame-war and rating system of HN, where active discussions are relegated to oblivion. Instead of trusting biased and funded media, we here need to introspect, without being silenced.
Maybe we are mostly on the left, so we don't hold a much stronger opinion unless it concerns the right; is that the case? Just curious: why the slack?
Apps abusing the clipboard can steal passwords from password managers when the user copies one, and then tie a password to a user account via parallel construction: e.g. matching the time the password was lifted from the clipboard against the time of my HN comment.
I've long been wary of this. Android 10 has made some changes, like allowing only the IME and in-focus apps to access the clipboard, but that's not a foolproof way to prevent the issue.
One more reason to destroy app duopoly, switch to pure Linux OS [1][2][3] and force app publishers to stick with web apps/PWA with more user control.
Perhaps there should be a separate security level for “access whatever was just Copied in the last 4 seconds, if the only other action taken was to switch to the app requesting the clipboard”. Almost any app could clear that bar, since under those conditions it probably means “user grabbed something and wants to use it here”.
What is the use case for “read whatever was copied from anywhere for any reason at any time”? If there is one (e.g. full-fledged word processor maybe), that should still be a separate entitlement and require a higher bar, e.g. extensive app review.
It appears that a lot of third-party apps are just using an older API that forces this notification to show; a lot less nefarious than it originally appears.
It seems that iOS 14 offers a specific new API to check whether there's something on the clipboard without actually seeing it, which is what all these apps are trying to do.
The API to check if the clipboard has contents has existed since iOS 10.
> Starting in iOS 10, the UIPasteboard class provides properties for directly checking whether specific data types are present on a pasteboard, described in Checking for Data Types on a Pasteboard. Use these properties, rather than attempting to read pasteboard data, to avoid causing the system to needlessly attempt to fetch data before it is needed or when the data might not be present.
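Concretely, those existence checks look like this in Swift (available since iOS 10; they report presence only and, on iOS 14, don't fire the paste banner):

```swift
import UIKit

let pasteboard = UIPasteboard.general

// Each property answers "is this type present?" without fetching the data.
if pasteboard.hasStrings { /* enable a Paste button */ }
if pasteboard.hasURLs    { /* offer an "open copied link" action */ }
if pasteboard.hasImages  { /* enable an image-paste affordance */ }
```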
I suppose that the current scheme is that apps are monitoring paste events, and when one happens, they have a look at the clipboard for the copied data.
Perhaps the clipboard shouldn't be visible at all, and only when the user decides to paste content should the targeted app receive a "paste" message with the copied data (or perhaps some more complicated selection mechanism with a list of recent copies, à la Emacs). This would essentially merge the two-step process outlined above into a single operation.
I was shocked to find out that Firefox enabled websites to access the clipboard sometime in the last five years or so... and I'm left stressed out that I could have had important passwords or keys stolen this way.
I don't think that site's accurate. I searched around and there seem to be two ways of reading clipboard contents: making a fake DOM element and waiting for the user to paste, or using the async clipboard API. The first doesn't seem abusable because it requires you to actually paste something. The second doesn't seem to be supported by Firefox, at least according to this demo: https://googlechrome.github.io/samples/async-clipboard/.
I'm certain Apple knew in advance that a fair number of apps would trip this, and which ones were going to run into issues. It's interesting that they went ahead and turned this on immediately without warning developers it was coming. Perhaps they wanted people to see what they were preventing.
But at the same time I can't help but be bitter. The smartphone scene has been very active since, I don't know, 2011? All the companies and shady information dealers have gathered mountains of private information.
Is this not too little, too late? This would have been welcome at the iPhone 5 release. Nowadays I wonder what difference these measures would even make.
> It is steps in the good directions, and as far as I know they are way more advanced than Android on this point.
I have the same impression. Which is saddening, because I had thought of going back part-time to Android to experiment with homelab builds and p2p architectures on a few spare Android devices. But I am not comfortable with how much low-level access Google and the phone's vendor have, and I am sure that no matter how I secure an app with access to photos/contacts/etc., the kernel could likely still extract the info it needs... I don't know, but I am quite paranoid about smartphones lately.
> let's not blame them now that they do what's right.
Agreed. Better late than never. I simply feel that marketing trumps privacy concerns here: Apple wants something to brag about every year, so features that should come once every 2-3 months arrive annually instead.
In my Android 10 phone, if I copy a piece of text that looks like an address and open Google Maps, Google Maps will immediately offer the copied text as a search option. Google Maps doesn't do this if I copied, say, some random numbers.
I'm not sure how the Google Maps app can do this without snooping on my clipboard.
I know this and I get it....but damn it's so much more entertaining than Instagram now. The content is more varied, more raw, and the for you page filtering is so well-tailored.
Fortunately for TikTok, such rules only apply to small apps. They will probably get a polite phone call from a VP asking them to please stop doing that, but nothing more.
Oh yes! Please! I have wild fantasies about TikTok being banned like that.
"Why was TikTok banned?"
"Because the violated the basic capitalistic principal of existing not to make money but to amass a Nazi-like ledger/database of every person in the world on behalf of a nation state."
The clipboard belongs to the user. It is quite obvious how clipboard-snooping could disclose sensitive information, and as far as writing to it, I should be able to expect the thing I copied last to be there when I hit paste next.
I'd be in favor of banning application-initiated access entirely. I realize this would interfere with 1PW and similar. That was always a hack, and the fact that so many apps snoop on the clipboard is a great reason for it to stop! Sensitive dataflows for things like passwords need far better protection.
The real solution is to make the clipboard behave like a channel (both apps must be running and the target must be in the foreground), not like a buffer. No permission mess.
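To make the idea concrete, here is a hypothetical sketch of that model (none of these names are real UIKit APIs): the app never calls a read method at all, and only receives clipboard content as the payload of a paste the user performed while the app was frontmost.

```swift
// Hypothetical API sketch, not a real interface: the system owns the
// clipboard, and an app only ever sees content pushed to it at the
// moment of a user-initiated paste.
protocol PasteChannelDelegate: AnyObject {
    // Called by the system when the user pastes while this app is in the
    // foreground; there is no read method for the app to call on its own.
    func didReceivePaste(_ payload: String)
}
```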
There is a part of me that is a little sad Apple did not just wait to add this feature until the iOS 14 release, or one of the last GMs.
If only because it would have been a huge wake-up call for users about what their apps are doing and possibly collecting, instead of it all being patched out now. As it is, we are seeing basically no traction on this outside of tech circles.
TikTok and many popular Chinese apps, including Taobao, Baidu, and Alipay, to name a few, snoop clipboard contents because Tencent censors and forbids almost all kinds of link sharing in WeChat and QQ.
It would be easy to see foul play here... But I can only imagine the size of the development team behind such an app, and it's probably some dumb feature commit, implemented by someone who isn't there anymore, that checks the clipboard to insert "Tok" every time someone types "Tik".
I find it interesting that people automatically assume nefarious reasons for accessing the clipboard... If your privacy is so important, don't use platforms you cannot trust. If that means not using a smart phone, then maybe you will do something positive for yourself.
Zoom, WeChat, and Weibo have all been exposed as sending users' data back to Chinese servers. Why would TikTok be different? It's already alarming enough that a Chinese social media app is at the top of the App Store.
Can one app copy data into the clipboard and then another app paste it out?
They could even collude behind the scenes, once they've communicated over the clipboard as a channel to establish a link, to replace the original data in the clipboard.
This is actually a big reason why I always prefer to use the mobile site instead of the app. The modern browser treats the web much more as an “unsafe” place. Not to mention I prefer standard ways to do things like “go back”. Plus you get Adblock.
Do most of these apps mention this in their privacy terms? I would imagine it's somewhere in there, but who has the time to read all of that. This reminds me of HEY.com; Apple is following suit in terms of notifying the user about privacy...
Why is this still on their store? I assume it breaks their walled garden approach. It seems Apple is putting kids at risk, in fact it looks like kids are having their privacy actively abused.
My bank (bunq) offers to pay to the IBAN (European bank account number) as soon as I open the app with an IBAN on the clipboard. That's the same, isn't it?
In the Twitter video, TikTok is pasting from Instagram, though. I also don't know why TikTok would be looking for text on the clipboard (to be fair, I've never used TikTok).
I use Bitwarden on my iPhone and sometimes copy passwords to the clipboard for some reason. Does this mean any app could have snooped on it without me knowing before?
Is it even clear whether this data is being sent back to TikTok or any company, or whether it's simply checked for activating keywords to add app functionality?
> It can be an elegant design choice, but also a design choice that appears to be or is an abuse of privacy.
For decades, all programs running on your computer had access to the clipboard. A primary intended purpose of the clipboard is precisely to share information between different programs. Maybe it is in fact a security issue that should be rethought, but calling it an abuse of privacy seems extreme.
Perhaps the destination should define whether or not you can paste into it and the OS can provide that option instead. This looks like a violation of “tell, don’t ask”.
I should be able to paste into specific GUI elements that allow pasting. The application presenting the GUI element to me should be blind to whether I typed that data, or pasted it into there.
If only they'd do this for apps accessing the microphone, or do the same thing as they do with the camera, making the application visibly request access.
Then there should be a setting that has to be manually approved to allow the clipboard interactivity feature.
I use a password manager on my iPhone and I am copying and pasting my passwords all the time. If some random app is scraping my clipboard silently and sending the data to a third party, that means my passwords are compromised. I am very much NOT OK WITH THIS.
Keep in mind, this permission should be fundamentally different than the permissions for just manually copying and pasting. I don't want to have to deal with permissions to "allow clipboard use" that I have to approve every time I want to paste something. That would be obnoxious. I am only worried about restricting permissions for invisible passive snooping.
The developer can fix it by either (1) querying "is there clipboard content?" instead of copying it every few keystrokes, or (2) stopping altogether if there is no legitimate purpose.
It sounds like this feature is working as intended: closing what was a silent security risk.
Here is an explanation from a 3rd party reddit app developer:
tl;dr: Since Apple doesn't give a way to open URLs in 3rd party apps, he inspects the paste buffer for reddit URLs, but he aptly points out that he could read anything in the paste buffer if he wanted to:
"Hey! I make Apollo for Reddit and a few people asked me about this and if Apollo does anything with the clipboard so I wanted to answer.
Since iOS doesn't have a mechanism to open URLs in a specific third party app Apollo has a feature where if you open the app with a Reddit URL on your clipboard it'll offer to open that URL in Apollo, I think I copied this from Instapaper awhile ago.
This does cause a potentially creepy looking notification with Apollo sometimes, but just wanted to explain why/what it's doing. It's literally just like "Hey iOS, is there a URL on the clipboard? Oh there is, is it a Reddit one? Okay cool let me ask them if they want to open it." Obviously at no point does anything else happen like it leaving the device or anything. It'll show this banner even if there's not a Reddit URL because it needs to check the URL to see if it's a Reddit URL in the first place. Schrodinger's Reddit URL.
But the clipboard API (prior to iOS 14) was very open, as someone else said, what if medical records were on your clipboard as text? Well in Apollo's case, that doesn't qualify it as a URL, so it wouldn't even "look". (And even for URLs, it doesn't store a list of them even on the device, it just opens it if you ask to, and then saves the most recent URL so it won't keep repeatedly prompting you if you say no.)
But that doesn't mean other apps couldn't be! They could be doing some Creepy Shit™ so I think this API change is good. It means I'll have to be more clear with Apollo doing this, and I've already had a few Apple engineers reach out with ways, but I think it's a very good change for user security."
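For reference, the flow he describes amounts to roughly the following (a sketch under my own names, not Apollo's actual code):

```swift
import UIKit

// Sketch of the launch-time check described above: confirm a URL exists,
// read it, and act only if it points at reddit.com.
func offerToOpenCopiedRedditLink() {
    let pasteboard = UIPasteboard.general
    guard pasteboard.hasURLs, let url = pasteboard.url else { return }
    // Reading .url is what shows the iOS 14 banner, even when the URL
    // turns out not to be a Reddit one ("Schrodinger's Reddit URL").
    guard url.host?.hasSuffix("reddit.com") == true else { return }
    print("Ask the user whether to open \(url) in the app")
}
```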
Sure, and the counterpoint is that those legit reasons are not worth the breach of privacy. The average user is horrified when they learn this happens, and "just trusting the developer to not abuse your clipboard" isn't an acceptable security model.
We can expect every security hole possible to be exploited by these Chinese military apps. Heck, Fortnite installs a rootkit on your PC for "anti-cheat", which is even more egregious, yet it is allowed. The situation on Android is probably worse: most apps request access to everything these days, and users are afraid to say "no" for fear of breakage.
You're driving along and get t-boned by a semi. If someone says, "I don't know why anybody's surprised; the crash happened at an intersection where two roads cross perpendicularly, and vehicles were traveling on those roads," are you going to nod at their profound observation?
Yes - I mean why hasn't the App Store review process already caught this in previous versions of iOS? Why do we need to wait for the technical capability to be surfaced to users?
How are we supposed to know if the app's continuous request for clipboard access is ok or not?
I feel like that was literally the only value-add in the App Store review process, and the fact that it's now a feature in iOS seems to indicate that Apple is throwing in the towel here and saying "we don't know if this is a privacy issue or not, so we will just notify you about it".
Is this really news? Google Maps offers to navigate to an address you copied to the clipboard. No app should have permission to access the clipboard without you explicitly selecting "paste".
I think that, in general, operating systems shouldn't just ask the user for permission when an app wants to do something. They should also provide a log of when, to what extent, and how often apps use that permission. For example, when I give an app permission to read messages, I still have no clue how often the app does it.
This is the problem with Chinese apps. You just can't rely on them. They simply don't have the same principles around privacy that US companies do. Even Facebook won't do something like this.
I don’t even have a TikTok account. I simply pay enough to have realized a long time ago that the contents of the system clipboard are available to the application that’s in the foreground on pretty much every OS I’ve used. Hence I have no need to jump to pearl-clutching xenophobia over it.
Fu*king A... uh maybe we need to make Open Source required by law... just anything but this.
We as a society cannot allow this impunity on the part of a very powerful corporation, a nation-state-sponsored corporation, no less. Where does it end? Why do we allow this to occur? Why do members of our own society not know the perils of using this technology?
Finally someone tackles the real issue here. You can't trust proprietary software, no matter which country or company is behind it. Aim for free software instead of putting your trust in a company's black box.
Fairly impressive if true - they must have already gobbled up tens of millions of passwords and other sensitive data that users had no idea was being stolen from their clipboard.
Since you're obviously using this site primarily for ideological battle, and that's against the rules here, we've banned the account.
No, not because we're communists. Rather, this is an existential issue for HN: if we want to have a forum for curious conversation, we have to limit the amount of damage people can inflict on it in this way.
Please don't create accounts to break HN's guidelines with.
(sourced from reddit: https://old.reddit.com/r/apple/comments/hejb9i/ios14_catches...)