> We disclosed our results to Apple on May 24, 2024. Apple's Product Security Team has acknowledged our report and proof-of-concept code, requesting an extended embargo beyond the 90-day window. At the time of writing, Apple had not shared any schedule regarding mitigation plans for the results presented in this paper.
The vulnerability is over half a year old, and we're more than a quarter past the end of the embargo window.
I'm not saying it's bad (I'm pretty close to a disclosure absolutist), though I don't really know what the norms are for hardware attacks --- both papers are well past the normal coordinated disclosure window for software.
As someone who has reported several (software) security vulnerabilities to Apple, I couldn’t care less. Some of the things I reported are trivial to fix, and dealing with their team has been a frustrating experience. They don’t deserve extensions, they need to get their shit together and stop releasing new crap every year. Clearly they can’t do it properly.
It could be unfixable without a significant performance penalty, but at minimum they could make Safari do proper process isolation like every other browser does.
Right, but that was before they knew about this exploit. My point was that even if they decided they needed to urgently switch to a multiprocess architecture because it's the only way to mitigate this exploit, they might not be done yet.
This class of attacks is not new. Spectre demonstrated the possibility in 2018, and Apple has been targeted by speculative-execution attacks before, e.g. https://gofetch.fail/ or https://ileakage.com/.
They'll weaponize them at some point. How exactly remains to be seen, but if people associate your product with domains you do not control (e.g. via SEO searches and hyperlinks left in public places), then everyone is on the hook the moment those domains stop redirecting to your service.
Useful to know. If a given problem can be encoded as a function of three bits (eight possible input states), then there's an efficient instruction to evaluate it.
The naming is unfortunate, though, since TERNLOG suggests ternary logic - an operation working on trits, with possible values -1, 0, 1, instead of just 0 and 1. Instead, it's ternary in the same vein as C's
foo ? bar : baz
takes three operands, and is therefore ternary, and it performs logical operations - hence TERN-LOG.
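To make that concrete, here's a rough software model in Common Lisp (a plain function for illustration, not the actual AVX-512 instruction; C compilers expose the real thing via intrinsics such as _mm512_ternarylogic_epi32). The 8-bit immediate is a truth table: bit i gives the output for the input combination i = (a<<2)|(b<<1)|c, applied bitwise across the operands.

```lisp
;; Software model of VPTERNLOG, for illustration only: IMM8 is an
;; 8-entry truth table over three bitwise inputs.  Bit I of IMM8 is
;; the output for the input combination I = (A<<2)|(B<<1)|C.
(defun ternlog (imm8 a b c &optional (width 64))
  (let ((mask (1- (ash 1 width))))
    (flet ((operand (x on) (logand mask (if on x (lognot x)))))
      (loop with result = 0
            for i below 8
            ;; Each set bit of IMM8 contributes the bit positions where
            ;; A, B, C match that input combination.
            when (logbitp i imm8)
              do (setf result
                       (logior result
                               (logand (operand a (logbitp 2 i))
                                       (operand b (logbitp 1 i))
                                       (operand c (logbitp 0 i)))))
            finally (return result)))))

;; Majority-of-three is the classic example: output 1 for input
;; combinations 3, 5, 6, 7, i.e. IMM8 = #b11101000 = #xE8.
(ternlog #xE8 #b1100 #b1010 #b0110) ; => #b1110
```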
> In a reduction, each application of the binary operator decrements the number of values by 2−1=1, but each application of the ternary operator decrements it by 3−1=2!
Ha, there's an exclamation mark there! It's not 2, it's actually 2 factorial, which i... Oh.
You can have the same step-by-step in Lisp via LET*. The problem outlined by the author happens when you have a function call where 1) the operator has a long name and 2) there are many arguments. The TREE-TRANSFORM-IF call is such a case, and you can't solve it by just "going step by step".
Yes, I think for whatever reason, Lisp languages tend to use long names for funcs and vars whereas Haskell and other FP languages prefer symbols like >>=, <$>, etc. Personally, I prefer somewhere in the middle.
> The TREE-TRANSFORM-IF is such a case and you can't solve it by just "going step by step".
> Common Lisp in particular is extremely unfriendly to threading macros. Arrows imply consistently thread-first or thread-last functions. But CL's standard lib is too inconsistent for that to work. So we're left with picking an indentation style we don't necessarily like.
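Concretely, a small sketch of that inconsistency (the -<> "diamond wand" and its <> placeholder come from the arrow-macros library mentioned downthread):

```lisp
;; SUBSEQ takes its sequence FIRST, while REMOVE-IF takes it LAST, so
;; neither a pure thread-first -> nor a pure thread-last ->> can cover
;; this pipeline without an escape hatch:
(subseq (remove-if #'oddp '(1 2 3 4 5 6)) 0 2) ; => (2 4)

;; The usual workaround is the diamond wand with an explicit placeholder:
;; (-<> '(1 2 3 4 5 6)
;;      (remove-if #'oddp <>)
;;      (subseq <> 0 2))
```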
All in all, this post reads like a rant, and I realized that upon reading "now what I'm about to suggest is likely not to your taste". That style of indentation is something I use often when writing calls to long-named functions like COMPUTE-APPLICABLE-METHODS, and I've never thought of it as being not to my taste, or even ugly, as the author suggests.
Well, it's all based on my experience. In two Lisp companies I worked at, colleagues complained about, or even tried to fix, how I indent things. So it's not just some vague anxiety; it's experience that needs explanation and justification.
I'm aware of arrow-macros/diamond wands, and am using them myself. But I often have to write dependency-less libraries for moderately complicated algos, and that's where indentation style is the only thing I can rely on.
> for me, actually cracking a real-world in-use key crosses an ethical line that makes me uncomfortable
They've contacted the company with the vulnerability and resolved it before publishing the article - search the original article for the substring "now no longer available".
Usually, you demonstrate that an online system is vulnerable by exploiting that vulnerability in good faith, documenting the research, and submitting it for review. It does not matter if you're cracking an encryption scheme, achieving custom code execution for a locked-down game console, proving that you can tamper with data in a voting machine, or proving that you can edit people's comments on a Google Meet Q&A session - the process is the same.
If you say something's vulnerable, people can shrug it off. If you say and prove something's vulnerable, the ability to shrug it off shrinks. If you say and prove something's vulnerable and that you'll publish the vulnerability - again, using the industry standard of disclosure deadlines and making the vulnerability public after 60-or-so days of attempting to contact the author - the ability to shrug it off effectively disappears.
I read the article, and I don't think it changes my view. If you crack someone's key, they might be well within their rights to pursue a criminal prosecution. Of course it would also have a Streisand effect, and there are reasons not to, but I personally wouldn't allow or recommend a security researcher to do it. It's needlessly risky.
In general, subverting security and privacy controls tends to be illegal in most jurisdictions. Best case is when you have clear permission or consent to do some testing. Absent that there's a general consensus that good faith searching for vulnerabilities is ok, as long as you report findings early and squarely. But if you go on to actually abuse the vulnerability to spy on users, look at data etc ... you've crossed a line. For me, cracking a key is much more like that second case. Now you have a secret that can be used for impersonation and decryption. That's not something I'd want to be in possession of without permission.
> If you crack someone's key, they might be well within their rights to pursue a criminal prosecution.
If that were true, there would be no market for white-hat hackers collecting bug bounties. You need to be able to demonstrate cracking the working system for that to be of any use at all. No company will listen to your theoretical bug exploit, but show them that you can actually break their system and they will pay you well for disclosure.
What is or isn't illegal depends on where you live. Where I live, using any kind of digital secret to do something you shouldn't be doing is technically illegal. Guessing admin/admin or guest/guest is illegal, even if they're public knowledge, as long as you could reasonably know you're not supposed to log in.
Generally, law enforcement and judges don't blame you as long as you use best practices, but you need to adhere to responsible disclosure very strictly in order for this not to be something the police might take an interest in.
Demonstrating the insecurity of a 512-bit key is easy to do without cracking a real-life key someone else owns; just generate your own to show it can be done, then use that as proof when reporting these issues to other companies. The best legal method may be to only start cracking real keys if they ignore you or deny the vulnerability, or to simply report that you can do it and that the company or companies you've reached out to deny the security risk.
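For the generate-your-own route, a minimal Common Lisp sketch using the Ironclad library (the exact GENERATE-KEY-PAIR signature and return order are assumptions from memory of its docs):

```lisp
;; Generate a throwaway 512-bit RSA pair to demonstrate on, instead of
;; targeting someone else's production key.
(ql:quickload :ironclad)

(multiple-value-bind (private-key public-key)
    (ironclad:generate-key-pair :rsa :num-bits 512) ; assumed API
  (declare (ignore private-key))
  ;; Publish PUBLIC-KEY as a DKIM record on a domain you control, then
  ;; factor its modulus (e.g. with CADO-NFS on rented hardware) to show
  ;; the recovery is practical; no third-party key is ever touched.
  public-key)
```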
Companies that pay for disclosure won't get you into trouble either way, but companies run by incompetent people will panic and turn to law enforcement quickly. White-hat hackers get sued and arrested all the time. You may be able to prove you're right in the courtroom, but at that point you've already spent a ton of money on lawyers and court fees.
In this case, the risk is increased by not only cracking the key (which can be argued is enough proof already; just send them their own private key), but also using it to impersonate them to several mail providers to check which ones accept the cracked key. That last step could easily have been done with one's own domains, keeping impersonation as a last resort if the company you're reporting the issue to denies the risk.
> Demonstrating the insecurity of a 512-bit key is easy to do without cracking a real-life key someone else owns; just generate your own to show it can be done
As I said in my post, no company will listen to your hypothetical exploit.
Show them you've hacked their system and they listen.
Bug bounties are a form of consent for testing and usually come with prescribed limits. Prescribed or not, actually getting user data tends to be a huge no-go. Sometimes it can happen inadvertently, and when that happens it's best to have logs or evidence that can demonstrate you haven't looked at it or copied it beyond the inadvertent disclosure.
But to pursue data deliberately crosses a bright line, and it is not necessary for security research. Secret keys are data that can be used to impersonate or decrypt. I would be very, very careful.
I see it the other way around. If some hacker contacted me and proved they had cracked my business's encryption keys and was looking for a reward, I don't think I would be looking to prosecute them and antagonise them further.
They can pursue what they want; it doesn't mean it will go through.
Looking at public data and using other public knowledge to figure out something new is not inherently illegal. They didn't crack it on the company's systems, they didn't subvert those systems, and they did not use the key against those systems. I'd love to see specific examples of what exactly this could be prosecuted under, because "that door doesn't actually have a lock" or "the king doesn't actually have clothes" is not practically prosecutable anywhere normal, just like that.
Especially in the EU, making such cryptographic blunders might even fall foul of NIS2, should it apply to you.
It's more like the door has a weak lock that can be picked. Just like many real world doors do. Here's how it would go in court:
"Are you aware that this key could be used to decrypt information and impersonate X?"
"Are you aware that this key is commonly called a Private key?"
"Are you aware that this key is commonly called a Secret key?"
"Are you aware that it is common to treat these with high sensitivity? Protecting them from human eyes, using secure key management services and so on?"
"Was it even necessary to target someone else's secret private key to demonstrate that 512-bit keys can be cracked?"
"Knowing all of this, did you still willfully and intentionally use cracking to make a copy of this secret private key?"
I wouldn't want to be in the position of trying to explain to a prosecutor, judge, or jury why it's somehow ok and shouldn't count. The reason I'm posting at all here is because I don't think folks are thinking this risk through.
If you want to continue with the analogies, looking at a lock and figuring out it's fake does not constitute a crime.
That key cannot be used to decrypt anything. Maybe impersonate, but the researchers haven't done that. It's also difficult to claim something is very sensitive, private, or secure if you're publicly broadcasting it, given that the operation to convert one to the other is so absolutely trivial.
And they did not make a copy of the company's private key, nor did they access its systems in a forbidden way. They calculated a new one from publicly accessible information, using publicly known math. It's like visually looking at something and then thinking about it hard.
I wouldn't want to explain these things either, but such a prosecution would be both bullshit and a landmark one at the same time.
> Bluesky is being valued at around $700M in new funding round.
Also known as:
> Bluesky takes a loan of other people's money. It'll then need to pay it back, probably by enshittifying its product in order to extract the money out of its users.
Unless Bluesky finds a source of money that is not venture capitalist funding, the above will happen.
There are two broad classes of financial assets: debt and equity. VC money is almost always the latter, not a loan.
I would rather you had written something like, "Bluesky has sold part of itself. If the leadership of Bluesky is not committed enough to Bluesky's making a profit, the owners of Bluesky can vote to have the leadership replaced. This sale makes it less likely that the owners will overlook any laxness on the part of the leadership in pursuit of profit."
There are two kinds of financial assets, but both of them work the same way - you pay back the investors or face the consequences, i.e. the loss of control over how the company functions.
In other terms, it doesn't matter to the end user who exactly introduces the resulting enshittification - Bluesky's current leadership that doesn't want to lose control of the company, or Bluesky's future leadership that replaces the current one.
Well, you don't even have to pay them back in some cases. If your market value keeps increasing, that's usually fine for them too. Most shareholders aren't in it for the dividend but are gambling on the value going up.
Of course what goes up can't go up forever. This is when the enshittification starts.
I thought the whole point is that it’s built on an open protocol and decentralized? Granted a number of people couldn’t or wouldn’t, but you could self host if it gets too enshittified is what I’ve always understood.
Decentralized? Not in practice. ATproto fundamentally requires every participating node to index/process every message being sent around the network. The whole thing about AppView can be summed up as "a global indexed view of every event from every node".
ATproto delegates all the crappy parts (identity management, moderation) to those that would like to participate in the network, but all of that is pointless if your messages are not shared with the AppView servers.
This has been Bluesky's intent from day one: by letting people host their own data at their own PDS, but by ensuring that the communication is only meaningful if passing through their indexers, they are effectively aiming to be to the "Social Web" what Google became to the old www.
Let me present you with a use case: let's say that the New York Times (55.2 million followers on Twitter) decides to leave Musk's land and that they will fully embrace decentralized social media. Let's pretend that they learned their lesson and that they swore to never rely on someone else's platform to broadcast their own information, so they are looking to do anything they need to own the infra needed to publish all their content and reach anyone who wants to access them.
In our scenario, they don't care where their 55M followers are hosted, but they do care about giving accounts to their ~5k employees and they want to be able to follow any of the newsworthy people and institutions: politicians, entertainment celebrities, athletes, etc.
What would be the requirements and associated costs for the NYT to provide this, and how do these costs change if/when more institutions decide to follow suit?
An appview doing a partial sync of the atmosphere is going to have two cost drivers: 1. the number of users being synced, and 2. the number of direct users of the appview. #2 matters because some additional work is done for the direct users, such as computing the following feed.
I won't have exact numbers until we actually finish that distro and deploy it, but my ballpark for 5k direct users with, say, 100 follows on average (leading to ~500k synced users) is $100-200/mo, with a max of $500. If you're not running a big instance, you're not paying big costs.
EDIT: I should clarify that the partial-sync model we're developing will still operate on atproto's "shared heap" approach, which means that activity outside of the selected sync set (the 500k users) will not be visible to the NYT users. How we solve that limitation is an open question. We could introduce message-passing to notify about that activity, for instance. I think that's the most notable downside compared to a comparable AP deployment, which inherently operates on message-passing.
You absolutely buried the lede. Of course the idea is for the NYT to see all messages from its followers as well, not just their own follows. They should also be able to search and discover anyone that is on the atmosphere (after all, the first thing that journalists will want to do when investigating a person is to see their social media presence). With that in mind, what would be a real estimate for operating costs?
I'm really taking your "EDIT" as admission that ATProto will only be able to claim "actually decentralized" once it changes into something that resembles ActivityPub.
It hasn't fully delivered on that part just yet, so it's still in the "maybe it might happen" zone. The moment it happens, I'll change my mind.
In addition, every round of VC funding also requires them to explain to their VCs how implementing an open protocol and decentralization is going to help pay their investment back - I do not expect such talks to go well the moment the VCs realize that both of those make it possible for the platform to effectively resist enshittification attempts.
The difference is that theft is a criminal offense, where you’ll be prosecuted by the state.
Violation of a software license is not a criminal offense but a breach of contract, opening you up to civil suits. So, it’s up to the rights holder to file suit and drag you to court for damages.
Both break civil law; you can absolutely sue the thief for damages for lost property.
Typically this is not done because it is not worth the lawyer expenses, as recovery chances are pretty slim (unless it's a kleptomaniac billionaire, maybe); instead, you claim insurance to recover your losses.
Similarly, copyright theft is the same as any other property theft: you can charge under criminal law as well. The success rate is typically not high, but people have gone to prison over pirating movies or bootlegging stuff, etc.
So what's the case law for violations of the GPL? Was anyone criminally prosecuted for violating any software license at all, and was anyone convicted? I'm only aware of civil suits in this regard.
A (possibly stupid) question: is it possible to generalize this pattern, and would it bring any kind of improvement? I'm thinking that for functions shaped like (LAMBDA () X), where X is a literal object or a lexical variable, COMPILE could keep a hash table with the X values as its weak keys and the respective compiled functions as their values. Whenever a form like (LAMBDA () X) is encountered as an argument to COMPILE, it could "intern" the function in the table (if necessary) and return it.
I have no idea if it would bring any gains, small or large, beyond the cases of NIL (and maybe T), because of CONSTANTLY and closures, but it's a thought that came to my mind upon first reading this commit.
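A user-level sketch of that interning idea (weak hash tables aren't in standard CL; the :weakness argument below is the SBCL extension, and other implementations spell it differently):

```lisp
;; Cache compiled constant-returning closures, keyed weakly on the value
;; so the cache entry dies once the value itself becomes garbage.
(defvar *constant-fn-cache*
  (make-hash-table :test #'eql #+sbcl :weakness #+sbcl :key))

(defun intern-constant-fn (x)
  "Return a compiled function of no arguments that returns X,
reusing a previously compiled one when X has been seen before."
  (or (gethash x *constant-fn-cache*)
      (setf (gethash x *constant-fn-cache*)
            (compile nil `(lambda () ',x)))))

;; Interning means repeated calls with the same object share one function:
;; (eq (intern-constant-fn 'foo) (intern-constant-fn 'foo)) => T
```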
https://ecl.common-lisp.dev/