Zentool – AMD Zen Microcode Manipulation Utility

nomercy400

Wow, so providing a tool for bypassing the protection mechanism of a device (cpu) is accepted when it comes from google?

Try this on any game console or DRM-protected device and you are DMCAed before you know it.

nomercy400

Doesn't changing how your cpu's microcode works mean you can bypass or leak all kinds of security measures and secrets?

dzdt

The blog post that explains the exploit and how this whole thing works is at https://bughunters.google.com/blog/5424842357473280/zen-and-...

nyanpasu64

Is the mitigation something that has to be installed on every system boot and only protects against microcode exploits later on that boot?

RachelF

I would guess it is a BIOS patch, just like the microcode normally is.

So it probably needs to be installed at every system boot.

Perhaps someone more knowledgeable can correct my guesses?

bpye

> The fix released by AMD modifies the microcode validation routine to use a custom secure hash function. This is paired with an AMD Secure Processor update which ensures the patch validation routine is updated before the x86 cores can attempt to install a tampered microcode patch.

__turbobrew__

What if your cpu microcode already has malware which injects itself into the microcode update?

https://dl.acm.org/doi/10.1145/358198.358210

BonusPlay

Both AMD and Google note that Zen 1-4 are affected, but what changed with Zen 5? According to the timeline, it was released before Google notified AMD [1].

Is it using different keys but the same scheme (and could it possibly be broken via side-channels, as noted in the article)? Or perhaps AMD noticed something and changed up the microcode? Some clarification on that part would be nice.

[1] https://github.com/google/security-research/security/advisor...

transpute

This is not the first case of accidental reuse of example keys in firmware signing, https://kb.cert.org/vuls/id/455367

Would it be useful to have a public list of all example keys that could be accidentally used, which could be CI/CD tested on all publicly released firmware and microcode updates?

If there was a public test suite, Linux fwupd and Windows Update could use it for binary screening before new firmware updates are accepted for distribution to endpoints.
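
A first pass at that screening could literally be a byte search over the images. A minimal sketch in Python, assuming only that such keys would appear verbatim in the binary; the single list entry is the real AES-128 key from the RFC 4493 test vectors, everything else (file paths, structure) is illustrative:

    import sys
    from pathlib import Path

    # Published example/test keys that should never show up in production
    # firmware. The single entry here is the AES-128 key from the RFC 4493
    # (AES-CMAC) test vectors; a real suite would collect every known demo key.
    KNOWN_EXAMPLE_KEYS = {
        "RFC 4493 AES-CMAC example key": bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c"),
    }

    def scan_image(path):
        """Return the names of known example keys found verbatim in the image."""
        blob = path.read_bytes()
        return [name for name, key in KNOWN_EXAMPLE_KEYS.items() if key in blob]

    if __name__ == "__main__":
        for arg in sys.argv[1:]:
            for name in scan_image(Path(arg)):
                print(f"{arg}: contains {name}")

A verbatim byte match only catches the easy cases, of course; a key stored as an expanded key schedule or inside an encrypted container would need smarter, vendor-specific checks.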

bri3d

Hyundai used both this same NIST AES key _and_ an OpenSSL demo RSA key together in a head unit! (search “greenluigi1” for the writeup).

Using CMAC as both the RSA hashing function and the secure boot key verification function is almost the bigger WTF from AMD, though. That’s arguably more of a design failure from the start than something to be caught.
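
To spell out why that's a design failure: a MAC only resists second preimages against people who don't know the key, so once the key is public, anyone can build a different message with the same "digest" and the original RSA signature keeps verifying. Here is a minimal sketch of that construction for AES-CMAC in Python; it only illustrates the primitive-level weakness, it is not the actual attack from the writeup, and the message contents are made up:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.cmac import CMAC

    def _aes_ecb(key, block, decrypt=False):
        cipher = Cipher(algorithms.AES(key), modes.ECB())
        op = cipher.decryptor() if decrypt else cipher.encryptor()
        return op.update(block) + op.finalize()

    def _subkey_k1(key):
        # K1 per RFC 4493: left-shift AES_K(0^128) by one bit, XOR 0x87 on carry.
        l = int.from_bytes(_aes_ecb(key, bytes(16)), "big")
        k1 = (l << 1) & ((1 << 128) - 1)
        if l >> 127:
            k1 ^= 0x87
        return k1.to_bytes(16, "big")

    def second_preimage(key, prefix, target_tag):
        # Append one 16-byte block to `prefix` (a multiple of 16 bytes) so that
        # AES-CMAC(key, prefix + block) == target_tag. Only possible because the
        # MAC key is known, which is exactly why a keyed MAC is not a hash.
        state = bytes(16)  # CBC-MAC chaining value
        for i in range(0, len(prefix), 16):
            state = _aes_ecb(key, bytes(a ^ b for a, b in zip(state, prefix[i:i + 16])))
        want = _aes_ecb(key, target_tag, decrypt=True)  # AES_K^-1(target_tag)
        last = bytes(a ^ b ^ c for a, b, c in zip(want, state, _subkey_k1(key)))
        return prefix + last

    KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")  # RFC 4493 example key
    original = b"original patch contents pad 32B!"  # stand-in data, 32 bytes
    c = CMAC(algorithms.AES(KEY))
    c.update(original)
    tag = c.finalize()

    forged = second_preimage(KEY, b"tampered patch contents pad32B!!", tag)
    v = CMAC(algorithms.AES(KEY))
    v.update(forged)
    v.verify(tag)  # does not raise: different contents, same "hash"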

dtgriscom

Are there any examples of using this for non-nefarious reasons? For instance, could I add new instructions that made some specific calculation faster?

amluto

Something worth noting:

CPUs have no non-volatile memory -- microcode fully resets when the power is cycled. So, in a sensible world, the impact of this bug would be limited to people temporarily compromising systems on which they already had CPL0 (kernel) access. This would break (possibly very severely and maybe even unpatchably) SEV, and maybe it would break TPM-based security if it persisted across a soft reboot, but would not do much else of consequence.

But we do not live in a sensible world. The entire UEFI and Secure Boot ecosystem is a complete dumpster fire in which the CPU, via mechanisms that are so baroque that they should have been disposed of in, well, the baroque era, enforces its own firmware security instead of delegating to an independent coprocessor. So the actual impact is that getting CPL0 access to an unpatched system [0] will allow a complete compromise of the system flash, which will almost certainly allow a permanent, irreversible compromise of that system, including persistent installation of malicious microcode that will pretend to be patched. Maybe a really nice Verified Boot (or whatever AMD calls its version) implementation would make this harder. Maybe not.

(Okay, it's not irreversible if someone physically rewrites the flash using external hardware. Good luck.)

[0] For this purpose, "unpatched" means running un-fixed microcode at the time at which CPL0 access is gained.

mjg59

> enforces its own firmware security instead of delegating to an independent coprocessor

That depends on how we define "independent" - AMD's firmware validation is carried out by the Platform Security Processor, which is an on-die ARM core that boots its firmware before the x86 cores come up. I don't know whether the microcode region of the firmware is included in the region verified by their Platform Secure Boot - skipping it on the basis that the CPU's going to verify it before loading it anyway seems like an "obvious" optimisation, but there's room to implement this in the way you want.

But raw write access to the flash depends on you being in SMM, and I don't know to what extent microcode can patch what SMM transitions look like. Wouldn't bet against it (and honestly would be kind of surprised if this was somehow protected), but I don't think what Google's worked out here yet gives us a solid answer.

amluto

By “firmware security” I meant control of writes to the SPI flash chip that controls firmware. There are other mechanisms that try to control whether the contents of the chip are trusted for various purposes at boot, and you’re probably more familiar with those than I am.

As for my guesses about the rest:

As far as I know (and I am not privy to any non-public info here), the Intel ucode patch process sure seems like it can reprogram things other than the ucode patch SRAM. There seem to be some indications that AMD’s is different.

I would bet real money, at fairly strong odds, that this ucode compromise gives the ability to run effectively arbitrary code in SMM and CPL0, without even a whole lot of difficulty beyond reverse engineering enough of the CPU to understand what the uops do and which patch slots do what. I would also bet, at somewhat less aggressive odds, that ucode patches can do things that even SMM can’t, e.g. writing to locked MSRs and even issuing special extra-privileged operations like the “Debug Read” and “Debug Write” operations that Intel CPUs support in the “Red Unlock” state.

bri3d

SEV attestation does delegate to the PSP, no? I think it _might_ be reasonable to attest that upgraded microcode is both present and valid using SEV, without the risk of malicious microcode blinding the attestation, but I’m not positive yet - need to think on it a bit more.

amluto

This probably depends on a lot of non-public info: how does the PSP validate CPU state? where does PSP firmware come from? can the PSP distinguish between a CPU state as reported by honest ucode and that state as reported by the CPU running malicious ucode?

I think that, at least on Intel, the “microcode” package includes all kinds of stuff beyond just the actual CPU microcode, and I think it’s all signed together. If AMD is like this, then an unpatched CPU can be made to load all kinds of goodies.

Also, at least on Intel (and I think also on AMD), most of the SPI flash security mechanism is controlled by SMM code. So any ranges that the CPU can write, unless locked by a mechanism outside of the control of whatever this bug compromises, can be written. This seems pretty likely to include the entire SPI chip, which includes parts controlling code that will run early after the next power cycle, which can compromise the system again.

mjg59

PSP firmware is in system flash, but is verified by the PSP with its own signing key. PSP firmware is loaded before x86 comes up, and as long as the SEV firmware measures itself and as long as it patches the microcode loader before allowing x86 to run (which the description of the patch claims it does) I think SEV is rescuable.

mkj

Was the microcode signing scheme documented by AMD, or did the researchers have to reverse engineer it somehow? I couldn't see a mention in the blog post.

sanxiyn

From the blog post:

> We plan to provide additional details in the upcoming months on how we reverse engineered the microcode update process, which led to us identifying the validation algorithms

p1mrx

> You can use the `resign` command to compensate for the changes you made:

How does that work? Did someone figure out AMD's private keys?

yuriks

The intro document mentions

> Here's the thing - the big vendors encrypt and sign their updates so that you cannot run your own microcode. A big discovery recently means that the authentication scheme is a lot weaker than intended, and you can now effectively "jailbreak" your CPU!

But there's no further details. I'd love to know about the specifics too!

taviso

They accidentally used the example key from AES-CMAC RFC, the full details are in the accompanying blog post: https://bughunters.google.com/blog/5424842357473280/zen-and-...
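
For reference, that key is the AES-128 value printed right in the RFC 4493 test vectors, so it's about as public as a key can be. A quick illustrative check with Python's cryptography package reproduces the RFC's own example tag:

    from cryptography.hazmat.primitives.cmac import CMAC
    from cryptography.hazmat.primitives.ciphers import algorithms

    # AES-128 key published in the RFC 4493 (AES-CMAC) test vectors.
    key = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")

    # CMAC of the empty message; should match Example 1 in the RFC.
    c = CMAC(algorithms.AES(key))
    c.update(b"")
    print(c.finalize().hex())

Anyone with a copy of the RFC can recompute a tag over arbitrary contents, which is presumably what the `resign` command relies on for unpatched parts.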

RachelF

Yikes! One would have expected a little more code review or a design review from a hardware manufacturer, especially of a security system, the kind of thing people have been worried about since the Pentium FDIV bug.

I guess this one just slipped through the cracks?

timewizard

Taking "never roll your own" too far.

AHTERIX5000

This work is related to the recently found signing weakness, and supposedly the fake key used by `resign` works on unpatched CPUs.