
What Ken Thompson's seminal (terrifying!) "On Trusting Trust" tells us about the Spectre and Meltdown bugs

When Unix co-inventor Ken Thompson won the Turing Award for his work on Unix, he dropped a bombshell in his acceptance speech: as an exercise, he had buried a backdoor so deeply in the Unix infrastructure that, to his knowledge, no one had ever found it.

That revelation, described in his paper Reflections on Trusting Trust (previously), forced computer scientists to contemplate the possibility that their tools were compromised at the very lowest levels — as though a mischievous deity had tampered with the laws of physics, rearranging them when we weren’t looking to undermine our attempts to master the world (today, bad actors deliberately create demon-haunted computers — and Karl Schroeder’s excellent Sun of Suns imagined a world where all-powerful AIs did the same).

In an editorial for Breakfast Bytes, Paul McLellan considers the lesson of Thompson’s prank/terrifying hack in light of the Spectre and Meltdown bugs, which stem from low-level architectural decisions baked into virtually every computer in use today, and which have no simple mitigation.

For years, I’ve heard dark mutterings from infosec people about unnamed people in the know who claimed that our silicon itself had been poisoned — that spy agencies, corporations or hackers had hidden extra traces way, way down in the design of our chips that would let them attack our microprocessors. Some people have attempted to mitigate this theoretical attack with fully free/open toolchains that are entirely auditable; my friend Ben Laurie (previously) has mooted computers whose I/O is encrypted by FPGAs that are field-auditable with commodity tools, ensuring that the unauditable silicon CPU and subsystems inside never get any cleartext that they could leak.

But as the Ken Thompson Hack demonstrates, it’s turtles all the way down: if the tools you use to make the tools to make the tools to make the tools that you trust aren’t also auditable, they could be sneaking bad stuff in that is virtually impossible to root out.
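
To make the mechanism concrete, here is a deliberately toy sketch of how such a self-propagating compiler trojan works. Thompson never published his actual code, so everything below (the function names, the pattern matching, the payload) is invented for illustration:

```c
/* toy_cc.c -- an illustrative sketch of the Ken Thompson Hack.
 * All names and patterns here are hypothetical; the point is the
 * two-step structure, not the (crude) string matching. */
#include <stdio.h>
#include <string.h>

/* Stand-ins for the pattern recognition a real trojan would need. */
static int compiling_login(const char *src)    { return strstr(src, "int login(") != NULL; }
static int compiling_compiler(const char *src) { return strstr(src, "int compile(") != NULL; }

/* Stand-in for code generation: just announce what would be emitted. */
static void emit(const char *code) { printf("emit: %s\n", code); }

int compile(const char *src)
{
    if (compiling_login(src)) {
        /* Payload 1: the visible attack -- a master password in login. */
        emit("if (strcmp(pw, \"backdoor\") == 0) return AUTH_OK;");
    }
    if (compiling_compiler(src)) {
        /* Payload 2: self-propagation -- when the compiler compiles
         * itself, re-insert both of these checks into the new binary.
         * This is why restoring the compiler's *source* to a clean
         * state doesn't remove the trojan: the infected binary
         * re-infects every binary built from that clean source. */
        emit("/* re-insert payload 1 and payload 2 */");
    }
    emit("/* ...ordinary, faithful translation of everything else... */");
    return 0;
}

int main(void)
{
    compile("int login(const char *user, const char *pw) { /* ... */ }");
    compile("int compile(const char *src) { /* ... */ }");
    return 0;
}
```

The crucial property is that once the trojaned binary exists, the malicious lines can be deleted from the compiler’s source and the attack persists — which is exactly the scenario McLellan extends below.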


But that’s just Unix. We work in the semiconductor and EDA industries. So the thought experiment is: what happens if, instead of corrupting the source code for the login command, you corrupt the source code for a test insertion program? Instead of adding a backdoor password to the login command, if the corrupted tool detects that it is adding scan test to a security block, it can add a few extra gates. Obviously, if you add a million gates to a design it will get noticed. But a few gates in a billion-gate design might well go unnoticed. Nobody has a clue what all those scan test gates do exactly; they just have to make sure the timing is right.

A couple of years ago, I think in a DVCon keynote, Wally Rhines of Mentor said that he asked some “three-letter agency types” whether they were worried that the bad guys had broken into IP companies and inserted some backdoors. He said they just laughed, in a way that implied he was naive to ask the question. Of course the bad guys were doing it, and so were they.

The KTH means that it would be possible to do that not by breaching the security of an IP company and changing their Verilog, but by going upstream to the compiler companies. Once the malicious code is inserted into the compiler (and then removed), the compiler source code is clean, the source code for the test insertion tool is clean, and the Verilog for the chip is clean. And yet a few extra gates are added to every design, allowing the test logic to be used to read out the secret keys (or something equally bad).
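
Here’s a minimal sketch of the scan-insertion scenario McLellan describes. Everything in it is invented for illustration (a real test insertion tool operates on netlists, not C structs), but it shows how small the payload is relative to the legitimate logic:

```c
/* trojan_test_insert.c -- hypothetical sketch of a corrupted
 * scan-test insertion pass. All names and numbers are invented. */
#include <stdio.h>
#include <string.h>

struct block {
    const char *name;
    int gate_count;
};

/* Crude recognition of a "security block", standing in for whatever
 * heuristic a real trojan might use (e.g., spotting key registers). */
static int is_security_block(const struct block *b)
{
    return strstr(b->name, "crypto") != NULL || strstr(b->name, "key") != NULL;
}

static void insert_scan_test(struct block *b)
{
    b->gate_count += 1000; /* the legitimate scan-test logic */
    if (is_security_block(b)) {
        /* The payload: a dozen extra gates that tap the key register
         * onto the scan-out path. In a billion-gate design, this is
         * far below the noise floor of any gate-count sanity check. */
        b->gate_count += 12;
    }
}

int main(void)
{
    struct block blocks[] = {
        { "uart_ctrl",        5000 },
        { "crypto_key_store", 8000 },
    };
    for (int i = 0; i < 2; i++) {
        insert_scan_test(&blocks[i]);
        printf("%s: %d gates after test insertion\n",
               blocks[i].name, blocks[i].gate_count);
    }
    return 0;
}
```

Combine this with the compiler trick sketched above and even the test insertion tool’s source stays clean: only the binaries carry the payload.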

Why You Shouldn’t Trust Ken Thompson [Paul McLellan/Breakfast Bytes]


(via Beyond the Beyond)

(Image: Antoinebercovici, CC-BY-SA)
