I see a lot of complaints about closed code; it's the first thing people bring up with Apple and security. But how does open source change things here? No company open sources its server-side components. Even if Google released its server code, we would have no confirmation that the same code is deployed on their servers, and they are not vouching for that. We have a company here that genuinely seems to want to do well on security and privacy. Jumping straight to the closed-source argument is lazy and unhelpful.
Of course, the good part about a crowd is that all views come out, so the closed-source objection has its place, but we should at least give them their due and some kudos. We know people will try to evaluate the implementation and see what happens. Right now this is just a PR article; let's wait for them to release details and see whether it holds up. Maybe the protocol alone will be enough to give us confidence that their claim is true. We don't know yet.
Signal has an open source client and an open source server. Matrix now has crypto support (it needs more auditing, but you can turn it on, and to my knowledge nobody has cracked it yet), and that's fully open source.
No, open source does not guarantee they are running the advertised secure algorithms on their servers. But what open source does let you do is run your own server, which you can put much more trust in. People spin up their own Signal and Matrix servers all the time for exactly that reason.
But Apple isn't in that business; they are trying to make ordinary users' communications private. Open sourcing only helps the power user (who can audit the code or run their own server), as you described.
Signal and iMessage both don't guarantee true privacy as we can't see the servers.
Any system where you don't control every component is inherently insecure. I find it laughable that so many (especially on HN) will evangelize security while evangelizing Google, a company that literally wrote the book on and perfected the art of harvesting user data and selling it to advertisers, while tearing down Apple just because Apple refuses to let us see their code.
No service that uses a third party server can guarantee privacy unless they let you see the servers.
For ordinary users, the question is whether you trust the server operator; if not, consider your communications insecure. What they do about that is up to them, and it is up to these providers to earn that trust.
The problem with this is that it eliminates the value proposition of E2EE, which is designed specifically to protect you when you don't trust the server operator. If you already trust the server operator, you've obviated the issue that E2EE ostensibly solves.
> No service that uses a third party server can guarantee privacy unless they let you see the servers.
And there are no services that do that, since it doesn't seem to be possible. Open sourcing the backend doesn't help here either: you still have to blindly trust the server operators.
Personally, I can't trust Apple for a specific reason: on iOS they've made it quite difficult to turn Location Services off unless you jailbreak the device itself. You have to go into Settings -> Privacy -> Location Services -> toggle off -> "Are you sure you want to turn this off?" -> yes...
I mean, when a company makes it that annoying to turn location on and off, it's pretty clear they want to monitor your location.
About those steps: I'm a privacy junkie myself and I want Location Services off most of the time, except when I'm about to use Uber or Google Maps. Even for me, someone willing to go the extra mile to turn it off, it's off-putting to have to go through so many steps to do it.
Not sure what exactly you want from Apple here. Hardly anyone regularly switches Location Services on or off, especially at the global level as opposed to per app.
And even then it's only four taps, which is one more tap than changing Wi-Fi networks, something a lot of people do.
Well, on Android it's just a swipe down on the system bar (or whatever it's called) and a tap, from anywhere. So yes, of course people constantly turn it on and off, mainly because it saves loads of battery, but also for the obvious privacy reasons.
OK, so if your tech-inclined friend sets up a Signal server for little to nothing in return, that's automatically more trustworthy than a multi-billion-dollar corporation with an ingrained and (seemingly) solid attitude about NOT selling users' data?
I run an OwnCloud server for me and my family, and I know that I'm not invading their privacy, but I could very easily do so without leaving a trace that even a pro user could find, let alone a layman. To say that Apple is less trustworthy than a given power user solely because they use closed source is ridiculous; there is zero correlation there.
> Power users make sure it's safe for them; if it is, it is safe for 'normal' users as well.
Power users can make sure the code they're running is safe for them. There's no guarantee that Signal for example is running the same code they release to others.
None of that addresses the point. Without third-party, independent verification of their servers and the code that is running, open source provides limited (if any) improvement to privacy and security.
That's part of the point rather than separate from it. A closed system, with no way to prove what it's running, often can't get third-party verification or consistently believable user trust. An open-source, tamper-resistant system can. That's quite a difference. Once verifications come in, reputation effects allow less technical users to learn what's trustworthy.
For a full solution, it would be a start toward what they need. The main benefit of an open-source server is that people can audit it for vulnerabilities; in other words, the vendor gets free-to-cheap labor to reduce their liabilities. People might also submit extensions they find useful. The usual benefits of FOSS for suppliers.
"...if it is, it is save for 'normal' users as well..."
Not sure about that. I think even if a power user were to say "this is safe", you still have a boatload of integrity problems with the software.
- How do we know the power users can be trusted?
- Even if they can, what guarantee do we have that the service provider we use is using the same open source code that the power user validated?
- Etc, etc, etc. Basically, a lot of trust issues.
I agree with asadlionpk, open source can generally only be proven to help power users in a trustless environment. And where security is concerned, we cannot ascribe the "safe" attribute to any system where that safety cannot be proven.
>both don't guarantee true privacy as we can't see the servers.
What do you define as true privacy? Why isn't other privacy "true"?
What do you mean by "see the servers"? Surely you can see them as computers at the other end of a TCP connection, and the server cannot read the cleartext of an E2E encrypted message.
Helping the power user is the whole point imo: let those who know what they're doing see and run the code themselves. As such, it doesn't matter whether Apple is in some business or not, it's simply a security measure.
In a system that tries to be secure against server attacks, nobody cares about the server-side code, because we can't trust that it's the same exact code (i.e. without backdoors) anyways. Therefore, it would be possible to make assertions about how secure iMessage is with only client code.
To some extent the same problem applies to application code. As long as the hardware and platform are closed, we don't really know what is going on when the application is executed. Despite this, opening up the application code would of course be a step in the right direction: not because it would guarantee that Apple is not doing bad things, but because it would help others spot problems that may be there by accident.
Even if the server code is good, frequently they're hosted on servers that have other vulnerabilities anyway. Because we're not the sysop, we have no idea what they're running on and if it's patched properly and up to date. The only way to be safe is to become an expert in digital security, an expert in systems management, an expert in reading and mitigating security problems in open source code, an expert in securely deploying secure communications systems and host it yourself.
For the rest of us, even those of us reasonably well versed in systems and application security, we kind of have to put our trust in someone and hope they don't fuck it up.
> Even if the server code is good, frequently they're hosted on servers that have other vulnerabilities anyway. Because we're not the sysop, we have no idea what they're running on and if it's patched properly and up to date. The only way to be safe is to become an expert in digital security, an expert in systems management, an expert in reading and mitigating security problems in open source code, an expert in securely deploying secure communications systems and host it yourself.
If the server is assumed to be compromised by the threat model, then none of this matters.
> For the rest of us, even those of us reasonably well versed in systems and application security, we kind of have to put our trust in someone and hope they don't fuck it up.
It is better to have to put your trust in any number of independent auditors than to have to put your trust in a single corporation.
Still, the other problems the parent mentioned apply. Can you verify the build against the source? Can you verify the binary on your phone against the build?
No, of course not. I also can't build an iPhone from scratch or look at the hardware design or firmware. It's especially bad when you consider the Qualcomm baseband and the problems it has.
Hell, I can't even trust that with my desktop computer. Further, how do I know the light in front of me isn't fabricated and that I'm not a brain floating in fluids connected to a simulation?
Realistically, I believe Apple's intentions are in the right place. And I believe that, for the most part, iPhone backdoors are not a thing yet. Being able to look at the client code is not something I expect to happen, but I believe it would be good if it did, because then the security of iMessage could be independently verified much more easily. It is true that Apple could just lie and publish different code, but assuming their intentions are in the right place, it seems like a win-win for everyone.
You can't build an iPhone from scratch, but you can build one from parts. Of course, it would need the magic signature from Apple in order to make Touch ID work unless the home button and system board were bought together.
Client software (anything that runs on the user's device) should be open source for two reasons:
1. It's easier to audit, although binary analysis is still useful (and reverse engineers are often better at finding security holes than someone doing a source code review).
We don't have to speculate how Apple could possibly handle account recovery without entirely sacrificing security, because it's spelled out in their iOS security whitepaper: https://www.apple.com/business/docs/iOS_Security_Guide.pdf
TL;DR: Keychain recovery relies on a cluster of hardware security modules to enforce the recovery policy. After 10 tries to guess your PIN, the HSM will destroy the keys. Apple support gates all but the first few of these tries. The paper also implies that you can use a high entropy recovery secret as an alternative, though I can't figure out how you would enable that.
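For anyone who wants the policy spelled out in code, here's a minimal, purely illustrative Python sketch of a try-limited escrow record of the kind the whitepaper describes. The class, the local attempt counter, and the PBKDF2 parameters are my own stand-ins; in the real design the counter and the key destruction are enforced inside the HSMs, not in software an attacker can reach, and Apple support additionally gates most of the attempts.

```python
import hashlib
import hmac
import os

class EscrowRecord:
    """Illustrative try-limited escrow, loosely modeled on the whitepaper's
    description: after too many wrong PIN guesses, the wrapped secret is
    destroyed. A sketch, not Apple's HSM implementation."""

    MAX_ATTEMPTS = 10

    def __init__(self, pin: str, secret: bytes):
        self._salt = os.urandom(16)
        # Derive a wrapping key from the PIN (parameters are arbitrary here).
        kek = hashlib.pbkdf2_hmac("sha256", pin.encode(), self._salt, 100_000)
        # "Wrap" the escrowed secret with a keystream derived from the KEK.
        stream = hashlib.pbkdf2_hmac("sha256", kek, b"wrap", 1, dklen=len(secret))
        self._wrapped = bytes(a ^ b for a, b in zip(secret, stream))
        self._check = hmac.new(kek, b"check", "sha256").digest()
        self._attempts_left = self.MAX_ATTEMPTS

    def recover(self, pin: str) -> bytes:
        if self._wrapped is None or self._attempts_left == 0:
            raise PermissionError("escrow record destroyed")
        kek = hashlib.pbkdf2_hmac("sha256", pin.encode(), self._salt, 100_000)
        probe = hmac.new(kek, b"check", "sha256").digest()
        if not hmac.compare_digest(self._check, probe):
            self._attempts_left -= 1
            if self._attempts_left == 0:
                self._wrapped = None  # destroy the key material for good
            raise ValueError(f"wrong PIN, {self._attempts_left} attempts left")
        stream = hashlib.pbkdf2_hmac("sha256", kek, b"wrap", 1, dklen=len(self._wrapped))
        return bytes(a ^ b for a, b in zip(self._wrapped, stream))
```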
This seems like a pretty reasonable point in design space to me. Of course, you are relying on Apple's trustworthiness and competence to implement this design. But that is true without recovery, since the client software is also implemented by Apple.
That's a good point, but the question is not just how to maintain security and usability without account recovery, but how to do so without device redundancy. There's no speculation about how to maintain true E2EE with a network of trusted key pairs, but without multiple devices the user is very vulnerable to permanently losing access. I think the recovery key is a clue.
As I speculated elsewhere in this thread, I think they're going to do it with multiple recovery keys ostensibly written down by the user and never transferred directly to Apple, which each then redundantly encrypt all user data before transmitting respective copies to iCloud.
That would pull it off, and it basically just shifts the trusted device redundancy problem to a trusted key redundancy problem. The only remaining usability obstacle is to make sure the user has safely recorded all recovery keys.
The "HSM cluster" serves as a redundant "device" which is in Apple's possession rather than yours, but which you must trust to withstand tampering, even by Apple.
The option to record the keys yourself is also described in the whitepaper:
"If the user decided to accept a cryptographically random security code, instead of
specifying their own or using a four-digit value, no escrow record is necessary. Instead,
the iCloud Security Code is used to wrap the random key directly. "
As I said, I can't actually find this option in my iOS settings. Maybe you have to disable Keychain first?
I think this relates to a time before 2 Factor Authentication, when Apple used 2 Step Verification. At that time (if you were using iOS 6 to iOS 8) you could choose various recovery options.
I'm no expert but this is my recollection.
In the scenario where you have two devices, one on iOS 9/10 that has migrated from 2SV to 2FA and the other on iOS 6/7/8, you can still access the recovery menus on the iOS 8 device, but it does weird things to the keychain if you mess about with it.
For secure, private messages, your sane current options are Signal, WhatsApp, and Wire. Signal is the best option, but you're going to make some UX sacrifices for security. WhatsApp and Wire are extremely comparable. If you worry about implementation or operational security flaws, WhatsApp has the Facebook security team behind it, and a long-term relationship with OWS; no cryptographically secure messenger is better staffed. If you're worried about Facebook seeing your metadata, which is a sane worry, Wire is approximately as slick and usable as WhatsApp with mostly the same underpinnings.
Regardless of the underlying cryptography, in the absence of a well-reviewed published crypto messaging protocol, iMessage is basically just an optimization over SMS/MMS. It's great for that, but it shouldn't be anyone's primary messenger.
Facebook mines WhatsApp metadata [1]. I'm not sure I'd elevate WhatsApp over iMessage, despite my respect for the OWS team. Signal is definitely the most secure.
> as implemented in versions of iOS prior to 9.3 and Mac OS X prior to 10.11.4
This fall, these will both be two major versions behind the current state of the software. Considering how openly pro-privacy/pro-security Apple has been over the past couple years, I'd expect they've fixed these issues and more by now.
I'm not making an argument that Apple is infallible; I acknowledge that it's possible there are still security flaws in iMessage. But I think that Apple is the only one of the really big tech companies who seem to be taking a privacy stand that the users can get behind, which is something to applaud.
Signal Desktop is hardly a web app in the sense that you open a website and you're done. You need to use Chrome and you need your phone. It's probably more secure yet rather inconvenient.
AFAIK, Moxie (the OWS founder) was responsible for integrating the Signal protocol into WhatsApp. So if there are any design or implementation flaws in the underlying protocol, wouldn't they also be in WhatsApp? Again, this might not be the case and everything might be properly vetted, but it slightly increases the chance of an error propagating into both messengers.
It would be nice if we could reason about security and privacy properties of software simply by considering which company or organization provided it, but that is not how software security --- particularly cryptographic software security --- works in the real world.
If the message content is encrypted, what metadata should I "worry about" when using WhatsApp if I'm okay with not being anonymous (e.g. for everyday use with friends)? What can FB learn about me based on just WhatsApp? (Let's say they haven't connected my identities with FB yet.)
Here's another way to think about it: privacy and security are related but not the same thing. If you care more about PRIVACY, use Signal or Wire. If you care more about SECURITY, use Signal or WhatsApp. I care more about security, because privacy controls don't much matter if there's an exploitable vulnerability anywhere in the messaging stack. But you could reasonably argue that privacy is an everyday concern and security is a black-swan concern --- that is, when we're comparing Double Ratchet protocol messengers like Wire, Signal, and WhatsApp.
The question is whether Apple will allow recovery if you lost all your devices.
If they don't, I don't think it is that hard for Apple to extend their current security model to iCloud. They currently rely on senders encrypting messages with each destination device's public key, so they can store the individually encrypted messages separately in iCloud.
When a new device arrives, they could have an existing device perform re-encryption of the messages for it (after the user authorizes that the device should be added).
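To make the fan-out idea concrete, here's a sketch of what "encrypt one copy per destination device, and have an existing device re-encrypt history for a new one" could look like. I'm using X25519 + HKDF + AES-GCM from the Python cryptography package purely as stand-ins; Apple's actual iMessage construction uses different primitives and is not reproduced here.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def _session_key(private_key: X25519PrivateKey, peer_public: X25519PublicKey) -> bytes:
    # ECDH shared secret, stretched to a 256-bit AES key.
    shared = private_key.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"imessage-fanout-sketch").derive(shared)

def encrypt_for_devices(message: bytes, device_publics: list) -> list:
    """One independently encrypted copy per registered recipient device
    (so 3 devices means 3 ciphertexts per message)."""
    copies = []
    for pub in device_publics:
        eph = X25519PrivateKey.generate()  # fresh ephemeral sender key per copy
        key = _session_key(eph, pub)
        nonce = os.urandom(12)
        copies.append({
            "eph_pub": eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw),
            "nonce": nonce,
            "ct": AESGCM(key).encrypt(nonce, message, None),
        })
    return copies

def decrypt_copy(device_private: X25519PrivateKey, copy: dict) -> bytes:
    eph_pub = X25519PublicKey.from_public_bytes(copy["eph_pub"])
    key = _session_key(device_private, eph_pub)
    return AESGCM(key).decrypt(copy["nonce"], copy["ct"], None)

def reencrypt_history_for_new_device(device_private: X25519PrivateKey,
                                     history: list,
                                     new_device_public: X25519PublicKey) -> list:
    """An already-trusted device decrypts its own copies and fans the
    plaintexts out again for the newly authorized device."""
    return [encrypt_for_devices(decrypt_copy(device_private, c), [new_device_public])[0]
            for c in history]
```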
Even without the new iCloud functionality, Apple has always been in control over the key exchange, which would allow a malicious employee / government to write code that could add a new authorized device/key silently and thus allow Apple to eavesdrop from that point on in future conversations.
> allow a malicious employee / government to write code (...) from that point on in future conversations
This is exactly the kind of thing Apple fought against "recently". Once a government entity forces backdoors on citizens, privacy goes out the window. Apple knows this, and I think you can expect them to make a gigantic media mess if they were forced to comply. It would ruin their business worldwide.
My thinking is that Apple will try to gradually replace password-oriented access control with a mix of PKI and key redundancy.
Based on the company's movements towards public-key based two-factor authentication, I think they can reasonably get away with phasing out password-based account recovery by relying on two methods:
1) The user has more than one trusted device authenticated to the iCloud account; account recovery can take place using the other trusted device and passwords are not required
2) The user only has one trusted device; the user has a primary public/private key pair that encrypts all data on the client, but in addition there are 9 backup keys which are generated on the client, never transferred to Apple and (hopefully) written down by the user
In the second scenario, Apple bypasses the obstacle to full PKI-based access control by implementing authenticating key redundancy instead of authenticating device redundancy. User data can be end-to-end encrypted by each key, transferred to iCloud, and if the user loses access to the device they can recover their account data using one of the recovery keys.
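To be clear, this is only my speculation, but here's roughly what that key-redundancy scheme could look like in code: generate N recovery keys on the client, show them to the user once, and upload one independently encrypted copy of the data per key, so that any single key is enough to recover. The function names and parameters are made up for illustration; nothing here is confirmed by Apple.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def make_recovery_keys(n: int = 9) -> list:
    """Random 256-bit keys, to be written down (e.g. base32-encoded) by the
    user and never sent to the server."""
    return [AESGCM.generate_key(bit_length=256) for _ in range(n)]

def encrypt_redundant_copies(user_data: bytes, recovery_keys: list) -> list:
    """One independently encrypted copy of the data per recovery key; these
    ciphertexts are what would be synced to iCloud."""
    copies = []
    for key in recovery_keys:
        nonce = os.urandom(12)
        copies.append({"nonce": nonce, "ct": AESGCM(key).encrypt(nonce, user_data, None)})
    return copies

def recover(copy: dict, recovery_key: bytes) -> bytes:
    """Any single recovery key decrypts its corresponding copy."""
    return AESGCM(recovery_key).decrypt(copy["nonce"], copy["ct"], None)

# Usage sketch:
keys = make_recovery_keys()
copies = encrypt_redundant_copies(b"example keychain blob", keys)
assert recover(copies[3], keys[3]) == b"example keychain blob"
```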
The headline is misleading. There are two features here, iMessage syncing and iCloud device backups. All Apple has announced is better iMessage syncing with no change in (already maximal) privacy. There's no indication that Apple is going to stop backing things up the way they do now, which is not maximally private but is capable of surviving a forgotten password, which is probably a good default setting for consumer backups.
If Apple has changed backups to function in a more private manner, then they would announce that, not something exclusive to iMessage.
More detail: iMessage syncing has always been maximally private from day one. However a drawback to the current implementation is that new devices cannot sync message history. The reason is that each message is encrypted separately by senders for each currently registered device for the receiver. And yes that means if you have 3 devices on your iCloud account, whenever someone sends you an iMessage, 3 separately encrypted copies get sent. Apple has gone to great lengths to ensure that private keys are never shared by devices.
So what's new is apparently Apple's figured out a way to sync history via iCloud. I'm interested to hear the implementation details, but there can be no doubt that it still respects the design goal of never sharing private keys.
Now, the privacy goals for backups are different. You obviously want them to be as private as possible, but most people generally want to be able to recover their life in the event of a simple forgotten password. There are certainly scenarios where you want to encrypt your backups, but it always should be an informed, opt-in choice. You should clearly be aware that if you forget your password, you lose your backups. So generally it's desirable to default to having a fallback recovery method.
Like I said earlier, if Apple has figured out a fallback recovery method that somehow does not involve storing your data in a manner they can decrypt, that would be something they announce as part of iCloud Backup... not just for iMessage. But it seems almost a fundamental design constraint. You can either have something impossible for anyone else to decrypt or conveniently recoverable backups, not both.
> iMessage has always been maximally private from day one. However a drawback to the current implementation is that new devices cannot sync message history.
No it hasn't and yes they can. I've done it several times. The ability to restore messages to a device is specifically what breaks the otherwise end-to-end encrypted iMessage architecture, which is why Federighi talking about the new iOS 11 capabilities is intriguing.
To your last point, my personal hypothesis is that Apple has designed a cryptosystem that uses a PKI with redundant key pairs to extend the redundant encryption. That shifts the recovery usability solution from redundant trusted devices to redundant keys that are written down.
> I've done it several times. The ability to restore messages to a device...
Interesting. To be clear, you're not just talking about restoring messages to the same device, like after resetting it?
Looking more closely at Apple's security whitepaper, perhaps restoring history on a new device is possible if you enable iCloud Keychain. Looks like that would in fact share the private decryption keys among devices.[1]
Ah, and that more clearly points at what this iMessage change may be: Mandatory iCloud Keychain, at least as far as iMessage keys are concerned. Which would suggest another, hidden improvement: no more need to redundantly encrypt a copy of every message for every recipient device!
I want to add however that this still does not suggest anything about changing the security of backups, which was the implication of the article. Nor would I necessarily characterize iCloud keychain as "breaking" encrypted architecture.
It sure sounds like you're talking about restoring from backups. There's nothing in that article that suggests you'd normally be able to sync iMessages to a new device apart from restoring from a backup. And that's expected behavior.
I just noticed that you misquoted me. I said iMessage syncing has always been maximally private, and drew a distinction between that and backups. The article you cite mentions the tradeoff between strong cryptography (maximal privacy) and user pain (losing your data forever). Apple has made the intentional design choice of enabling the former for syncing while allowing backups to survive an account password reset. I think it's pretty clear that's a good choice for a consumer device. You can always turn off iCloud backups and back up locally via iTunes if you want maximal backup privacy.
Well, unless you're Chinese, this doesn't apply to you anyway.
And I'm not sure what we've "seen this week". If you mean the link posted below, that's unrelated to some "unencrypted cloud"; it seems to be about data on Apple customers (names, addresses, emails, numbers, etc.) from some POS/CRM system or such.
This looks totally unrelated to some "unencrypted cloud" access the parent mentioned.
It says "The suspects allegedly used an internal company computer system to gather users’ names, phone numbers, Apple IDs, and other data, which they sold as part of a scam worth more than 50 million yuan (US$7.36 million)."
The trade-off is that if you lose your keys, you're shut out.
I would recommend having an option to generate keys based on something you have and something you know that you won't easily forget, such as a passphrase. That way you can always recover them later!
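Something like this, for example. A minimal sketch: the scrypt parameters and the idea of combining a passphrase (something you know) with a device-held secret (something you have) are just illustrative, not a vetted recommendation.

```python
import hashlib
import os

def derive_key(passphrase: str, device_secret: bytes, salt: bytes) -> bytes:
    """Something you know (passphrase) + something you have (device_secret).
    Losing either one means the key cannot be re-derived."""
    return hashlib.scrypt(passphrase.encode(),
                          salt=device_secret + salt,
                          n=2**14, r=8, p=1, dklen=32)

# The salt is stored alongside the ciphertext; the device secret would live
# in hardware (e.g. a secure element) and never leave the device.
salt = os.urandom(16)
device_secret = os.urandom(32)
key = derive_key("correct horse battery staple", device_secret, salt)
assert key == derive_key("correct horse battery staple", device_secret, salt)
```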
What if you could enter a special private iMessage chat with someone where to decrypt and read/reply the participants had to verify each message with Touch ID?
Has there been any known exploit (by a government or any other actor) that worked by breaking advanced cryptography? I feel a zero day is a much easier way to exploit anything. Also, there are limited ways in which one can attack cryptography, unlike zero days, for which there is a free market and a continuous supply.
To your first question: I'm interpreting you to mean a zero day of the form, "The NSA is aware of a cryptanalytic weakness in this encryption algorithm"; as opposed to a backdoor, e.g. "Microsoft provided a way for the NSA to bypass Skype's encryption without breaking it."
I don't recall any specific examples off the top of my head, but I believe it's probably happened and does happen. But backdoors are much more common; so much so in fact, that I'm led to believe the NSA doesn't have significantly greater cryptanalytic capabilities than academia and industry these days, given that their modus operandi is usually to demand a backdoor rather than breaking it. Their advantages probably stem from access to superior computing power or simply much more of it. I imagine a lot of the agency's research is in fundamental paradigm shifts that can broadly attack many algorithms (like quantum computing) - my edit at the bottom gives an example of this.
To your (implied) second question: it's probably not true that zero days are easier. When a company like Apple develops a novel cryptosystem, the NSA is not likely to break it for years (barring conspiracy-theoretic capabilities that we have no way of verifying). Such zero days incur massive amounts of research and development time to go from identifying a useful cryptanalytic weakness (i.e. getting an attack from exponential to sub-exponential to quadratic time) to deploying an exploit. All the while, earnest cryptographers in industry and academia are attempting the same thing, except they'll publish their results. And if the NSA has a functional exploit, they will use it like you would a classified weapon: sparingly.
EDIT: Actually, your question reminded me of differential cryptanalysis. That's more of a paradigm of attacks against a variety of algorithms instead of a zero day against any one particular encryption algorithm; still, the NSA apparently developed differential cryptanalysis and maintained it as a classified capability before the public community independently came up with it. That probably qualifies for your question.
I was referring to zero days as a way to gain the ability to run malicious code on the user's device, preferably as root, which cannot be stopped by updating the software; something like what was done in jailbreaking via the browser (a long time ago), or at pwn2own. I was not thinking of encryption-related zero days specifically. If someone gets root access, they get access to all the content, no matter what transport security is used.
I stopped using iPhone. However, I believe they did at least ostensibly fix that problem by not trusting newly added devices until you approve them on at least one other device that already has the keys.
I think the last remaining piece of trust is trusting that Apple's servers are correctly advertising your iMessage public keys, at least if I'm understanding how it works correctly.
After all, if you lose access to all devices you can still reset from scratch and get back on iMessage and at least receive messages. Your recipients won't know this happened. How would anyone know if Apple is performing a MITM using this mechanism?
If it's like how iCloud Keychain works now, you have to approve the new device on every existing device, and I believe each device pair generates keys known only to them.
Apple can't simply register devices into your account, they would have to have access to your iCloud account, which they can only get using one of those keys.
I guess it works if the devices are only willing to share keys with other devices that can individually prove they have the iCloud password. I wonder whether it's an online procedure or whether they just upload the keys to iCloud, encrypted with the user's iCloud password.
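If it's the "upload the keys, wrapped under the iCloud password" variant, the shape of it might be something like the sketch below. This is pure guesswork about the protocol; the iteration count and names are mine. Note that a scheme like this is only as strong as the password itself, which is exactly the concern raised elsewhere in the thread.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _kek_from_password(icloud_password: str, salt: bytes) -> bytes:
    # Stretch the iCloud password into a key-encryption key.
    return PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                      iterations=600_000).derive(icloud_password.encode())

def wrap_device_key(device_key: bytes, icloud_password: str) -> dict:
    """What a device might upload: the device key, never in the clear,
    wrapped under a password-derived key."""
    salt = os.urandom(16)
    kek = _kek_from_password(icloud_password, salt)
    nonce = os.urandom(12)
    return {"salt": salt, "nonce": nonce,
            "wrapped": AESGCM(kek).encrypt(nonce, device_key, None)}

def unwrap_device_key(blob: dict, icloud_password: str) -> bytes:
    kek = _kek_from_password(icloud_password, blob["salt"])
    return AESGCM(kek).decrypt(blob["nonce"], blob["wrapped"], None)
```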
> "Our security and encryption team has been doing work over a number of years now to be able to synchronize information across your, what we call your circle of devices—all those devices that are associated with the common account—in a way that they each generate and share keys with each other that Apple does not have."
> It's unclear exactly how Apple is able to pull this off, as there's no explanation of how this works other than from those words by Federighi. The company didn't respond to a request for comment asking for clarifications. It's possible that we won't know the exact technical details until iOS 11 officially comes out later this year.
> Meanwhile, cryptographers are already scratching their heads and holding their breath.
This might be uncharitable, but in my mind I think this writing and presentation of facts (probably unintentionally) implies that this capability is novel, when it's not. Sharing keys between multiple devices is a straightforward issue if you're willing to make user experience trade offs. Cryptographers are not scratching their heads wondering how Apple could achieve E2EE with a network of devices, they're wondering how they did it without sacrificing account recovery. It's not clear to me that readers would automatically understand this, because the real head scratcher isn't addressed until near the end of the article, which brings me to my next point:
> "The $6 million question: how do users recover from a forgotten iCloud password? If the answer is they can't, that's a major [user experience] tradeoff for security. If you can, maybe via email, then it's [end-to-end] with Apple managed (derived) keys," Kenn White, a security and cryptography researcher, told Motherboard in an online chat. "If recovery from a forgotten iCloud password is possible without access* to keys on a device's Secure Enclave, it's not truly e2e. It's encrypted, but decryptable by parties other than the two people communicating. In that sense, it's closer to the default security model of Telegram than that of Signal."*
I'm hesitant on how much faith to put in Apple's scheme here. On the one hand I generally trust Apple very highly when it comes to security and cryptography in particular. On the other hand I don't see them making account recovery impossible.
However, over the past few years they have been increasingly pushing two-factor verification, and then full two-factor authentication based on a network of trusted devices. The iCloud password used to be enough to manage the account's security and trust, but now it frequently defaults to requiring authenticated approval from a trusted device (instead of e.g. security question responses).
I could see Apple abandoning conventional account recovery if they keep proceeding down this path by providing a huge amount of access redundancy. For example, they could keep redundant copies of all user data synced in iCloud which are respectively end-to-end encrypted on the client with a user's backup keys. Each authenticated user device might have 10 backup keys, with a typical warning that they should be written down and will not be displayed again, etc. The keys could be downloaded from the device and stored by the user but never given to Apple, and would primarily be useful in circumstances where a user only has one trusted device authenticated to iCloud. Then if a user loses primary access to any given Apple device, the user has two ways to recover data:
1) Authenticated approval from another of the user's trusted devices, or
2) Use the backup keys, which do not provide a method of changing the account password, but which instead decrypt the redundant user data corresponding to the key.
The basic idea is that removing conventional password-based account recovery requires inordinate redundancy to counter the usability loss; you can get it with redundant authenticated devices (each with its own keys), or you can simulate it on one device with redundant keys that are ideally harder to lose.
First priority for making iMessage more private: disable iMessage syncing by default when iCloud sync is enabled, or at least give users the option to keep iMessage backups disabled when iCloud sync is enabled.
With true end-to-end encryption there is no need for a middleman.
Each user
1. encrypts her data at the source i.e. on her own computer and
2. sends the encrypted blob over the untrusted network, or so-called "dumb pipes".
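A minimal sketch of those two steps. Fernet and the pre-shared key are stand-ins; any well-reviewed construction and any out-of-band key exchange would do.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # exchanged with the recipient out of band
blob = Fernet(key).encrypt(b"meet at noon")  # step 1: encrypt at the source
# step 2: the "dumb pipe" (email, HTTP, a USB stick...) only ever carries `blob`
assert Fernet(key).decrypt(blob) == b"meet at noon"
```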
The hardware company that makes the user's computer tries to dictate whether and how #1 can be done.
Not necessary.
The software for doing #1 does need to be open source.
On mobile, does such software even exist?
And even if it does, is a mobile phone really the user's computer? It is an effectively locked enclosure containing several computers controlled by third parties.
The way to do secure mobile messaging would be to encrypt the message on a computer the user controls, then move the message to the "mobile phone" and then send to the untrusted network.
Alternatively, do not use a mobile phone for messaging if you are worried about others having access to the messages. Wait for a pocket-sized portable computer that can be tinkered with. No baseband, etc.
Not all computers are Intel or AMD and not all Intel or AMD computers have the "features" to which you refer.
Even more, nothing requires these computers to be connected to the untrusted network. ME requires an internet connection.
The message encrypted on the user-controlled computer can be moved to the "mobile phone" via a wired local network, serial link or removable media. Is such transfer to and from the device made difficult by the way these mobile phones are constructed and configured? Yes, and probably this is intentional. Companies want user data and the way they get it is by encouraging users to store data in the "cloud".
Most importantly, these Intel and AMD based computers are not the only computers capable of encrypting messages.
If ME scares you then do not use computers that have it.
If you have computers with ME then disconnect them from the internet and get a computer without ME for your internet needs.
This message was typed on a computer that does not have ME, would not be considered a "desktop", and would probably not qualify as "modern". It makes no difference: no problems encrypting messages on it.