
Security research on Private Cloud Compute

doulouUS

Will there be SDKs to enable any developer to build things leveraging PCC? Like building a performant RAG system on personal/sensitive data.

kfreds

I've been working on technology like this for the past six years.

The benefits of transparent systems are likely considerable. The combination of reproducible builds, remote attestation and transparency logging allows trivial detection of a range of supply chain attacks. It can allow users to retroactively audit the source code of remote running systems. Yes, there are attacks that the threat model doesn't protect against. That doesn't mean it isn't immensely useful.

dboreham

I've also worked in this field but it feels like a foundation built on quicksand. You depend on so many turtle layers and only one of them has to be adversarial and game over.

kfreds

> it feels like a foundation built on quicksand. You depend on so many turtle layers and only one of them has to be adversarial and game over

Interesting. Please elaborate.

Here's how I see it.

Reproducible builds: I think we'll eventually see Linux distributions like Debian make reproducible builds mandatory by enforcing it in apt-get's trust policy. The trust policy could be expressed as "I will only trust .deb packages where their build hash and source hash are signed by three different build pipelines I trust".
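
Roughly, such a trust policy is just a threshold check over independent build attestations. A minimal sketch in Swift, where the BuildAttestation type and pipeline names are invented for illustration (apt's real metadata looks nothing like this):

    import Foundation

    // Hypothetical attestation published by an independent build pipeline.
    struct BuildAttestation {
        let pipeline: String    // e.g. "debian-buildd", "reproducible-builds.org"
        let sourceHash: String  // hash of the source package
        let buildHash: String   // hash of the resulting .deb
    }

    struct TrustPolicy {
        let trustedPipelines: Set<String>
        let requiredAgreement: Int  // "three different build pipelines I trust"

        func accepts(buildHash: String, sourceHash: String,
                     attestations: [BuildAttestation]) -> Bool {
            // Count distinct trusted pipelines that reproduced exactly this build.
            let agreeing = Set(attestations
                .filter { $0.buildHash == buildHash && $0.sourceHash == sourceHash }
                .map { $0.pipeline })
                .intersection(trustedPipelines)
            return agreeing.count >= requiredAgreement
        }
    }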

Remote attestation: If you ensure that the server's CPU SoC and the TPM have different supply chains, you could construct a protocol where the supply chain attacker would have to own both supply chains in order to impersonate the server.
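
For illustration, a rough Swift sketch of that idea: the verifier only accepts a boot measurement if two independently rooted keys vouch for it over a fresh nonce. The quote format and key types here are invented, not any real TPM or SoC attestation protocol.

    import CryptoKit
    import Foundation

    // Toy quote: a signed statement about the booted firmware/OS measurement.
    struct Quote {
        let measurement: Data   // e.g. hash of the booted software stack
        let nonce: Data         // verifier-chosen freshness value
        let signature: Data     // signature over measurement + nonce
    }

    func verify(quote: Quote, nonce: Data, key: Curve25519.Signing.PublicKey) -> Bool {
        guard quote.nonce == nonce else { return false }
        return key.isValidSignature(quote.signature, for: quote.measurement + quote.nonce)
    }

    // Accept the server only if both roots of trust, with independent supply
    // chains, attest to the same measurement. A supply chain attacker would
    // have to own both chains to forge this.
    func acceptServer(cpuQuote: Quote, tpmQuote: Quote, nonce: Data,
                      cpuRoot: Curve25519.Signing.PublicKey,
                      tpmRoot: Curve25519.Signing.PublicKey) -> Bool {
        return verify(quote: cpuQuote, nonce: nonce, key: cpuRoot)
            && verify(quote: tpmQuote, nonce: nonce, key: tpmRoot)
            && cpuQuote.measurement == tpmQuote.measurement
    }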

Transparency logging: One of the projects I've been working on for the past four years is Sigsum (sigsum.org). It is a transparency log with distributed trust assumptions. Our goal was to figure out the essence of transparency logging technology, identify the most significant design parameters, and for each parameter minimise the attack surface. You'll find the threat model on our website.
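
The distributed-trust part boils down to a k-of-n cosignature check on the log's tree head. A rough Swift sketch with invented types, not Sigsum's actual checkpoint and cosignature formats (those are specified on sigsum.org):

    import CryptoKit
    import Foundation

    struct SignedTreeHead {
        let serialized: Data              // canonical encoding of tree size + root hash
        let logSignature: Data            // signature by the log operator
        let cosignatures: [String: Data]  // witness name -> signature over `serialized`
    }

    // Trust a tree head only if the log signed it AND at least `threshold`
    // independent witnesses cosigned the same bytes. The log operator alone
    // cannot present a split view without also compromising that many witnesses.
    func treeHeadIsTrusted(_ sth: SignedTreeHead,
                           logKey: Curve25519.Signing.PublicKey,
                           witnessKeys: [String: Curve25519.Signing.PublicKey],
                           threshold: Int) -> Bool {
        guard logKey.isValidSignature(sth.logSignature, for: sth.serialized) else {
            return false
        }
        let validWitnesses = sth.cosignatures.filter { entry in
            guard let key = witnessKeys[entry.key] else { return false }
            return key.isValidSignature(entry.value, for: sth.serialized)
        }
        return validWitnesses.count >= threshold
    }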

Here's a recent presentation by my colleague Rasmus on the subject: https://www.youtube.com/watch?v=Mp23yQxYm2c

Here's a recent presentation by me on the subject of system transparency / runtime transparency / the technology underlying Apple PCC: https://www.youtube.com/watch?v=Lo0gxBWwwQE

bitexploder

Each layer needs more than one safeguard then. If breaking one layer breaks the system, then that layer needs better safeguards.

bitexploder

The xz backdoor would have been a yawn instead of the many-hands fire drill it was at most big orgs. It was scary.

dewey

Looks like they are really writing everything in Swift on the server side.

Repo: https://github.com/apple/security-pcc

tessela

I hope this helps people consider Swift 6 as a viable option for server-side development. It offers many of Rust's modern safety features, with simpler memory management through ARC than Rust's more complex ownership system, and it is more predictable than Go's garbage collector.
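
For instance, here's a minimal sketch of the compile-time data-race safety Swift 6 enforces by default (strict concurrency plus actors), with ARC handling the actor's lifetime. The types are purely illustrative:

    // The compiler rejects unsynchronized access to this shared mutable state
    // instead of leaving it as a runtime data race.
    actor RequestCounter {
        private var count = 0
        func increment() -> Int {
            count += 1
            return count
        }
    }

    // Callers must await across the actor boundary; ARC frees the actor when
    // the last reference goes away, with no manual lifetime annotations.
    func handleRequest(counter: RequestCounter) async -> Int {
        await counter.increment()
    }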

willtemperley

I'd love to use Swift on Cloudflare Workers, but SwiftWASM doesn't seem production ready whereas Rust just works (mostly) on workers. Swift on AWS Lambda looks promising though.

miki123211

It's worth keeping in mind that these AI machines run an environment very similar to macOS, XNU kernel and all, and are powered by Apple Silicon. Using Swift in that context makes sense.

At least according to what we publicly know, no other backend Apple services follow this model.

danielhep

Is using something other than Xcode viable? I'd love to do more with Swift but I hate that IDE.

dewey

Most editors will do, Xcode is mostly needed for iOS / macOS development if you want to submit to the App Store or work with a lot of Apple frameworks.

willtemperley

Have you used it recently, on an M series Mac? I used to feel the same, it was sluggish and crashed frequently. It's become usable now, even pleasant to use. Also it's great they support Vim keybindings now out-of-the-box.

russelg

If you're a JetBrains user, they have their AppCode IDE.

ceejayoz

That was discontinued in 2022.

mmastrac

I feel like this is all smoke and mirrors to redirect from the likelihood of intentional silicon backdoors that are effectively undetectable. Without open silicon, there's no way to detect that -- say -- when registers r0-rN are set to values [A, ..., N] and a jump to address 0xCONSTANT occurs, additional access is granted to a monitor process.

Of course, this limits the potential attackers to 1) exactly one government (or N number of eyes) or 2) one company, but there's really no way that you can trust remote hardware.

This _does_ increase the trust that the VMs are safe from other attackers, but I guess this depends on your threat model.

kfreds

> I feel like this is all smoke and mirrors to redirect from the likelihood intentional silicon backdoors that are effectively undetectable.

The technologies Apple PCC is using have real benefits and are most certainly not "all smoke and mirrors". Reproducible builds, remote attestation and transparency logging are individually useful, and the combination of them even more so.

As for the likelihood of Apple launching Apple PCC to redirect attention from backdoors in their silicon, that seems extremely unlikely. We can debate how unlikely, but there are many far more likely explanations. One is that Apple PCC is simply good business. It'll likely reduce security costs for Apple, and strengthen the perception that Apple respects users' privacy.

> when registers r0-rN are set to values [A, ..., N] and a jump to address 0xCONSTANT occurs

I would recommend something more deniable, or at the very least something that can't easily be replayed. Put a challenge-response in there, or attack the TRNG. It is trivial to make a stream of bytes appear random while actually being deterministic. Such an attack would be more deniable, while also allowing a passive network attacker to read all user data. No need to get code execution on the machines.
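
To illustrate that last point (purely a toy, not any real attack on any real TRNG): a keyed PRF in counter mode, here HMAC-SHA256 in Swift, produces a byte stream that passes statistical randomness tests yet is completely predictable to whoever holds the key.

    import CryptoKit
    import Foundation

    // Output looks statistically random to an observer, but is fully
    // deterministic to whoever holds `backdoorKey`. A TRNG subverted this way
    // is undetectable by black-box statistical testing alone.
    func deterministicRandomStream(backdoorKey: SymmetricKey, blocks: Int) -> Data {
        var out = Data()
        for counter in 0..<UInt64(blocks) {
            let block = withUnsafeBytes(of: counter.littleEndian) { Data($0) }
            // HMAC-SHA256 in counter mode: a standard PRF construction.
            out.append(Data(HMAC<SHA256>.authenticationCode(for: block, using: backdoorKey)))
        }
        return out
    }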

formerly_proven

Apple forgot to disable some cache debugging registers a while back, which in effect was similar to what GP described, although exploitation required root privileges and allowed circumventing their in-kernel protections; protections most other systems do not have. (And the attackers still didn't manage to achieve persistence, despite having beyond-root privileges.)

kfreds

> Apple forgot to disable some cache debugging registers a while back which in effect was similar to something GP described

Thank you for bringing that up. Yes, it is an excellent example that proves the existence of silicon vulnerabilities that allow privilege escalation. Who knows whether it was left there intentionally or not, and if so by whom.

I was primarily arguing that (1) the technologies of Apple PCC are useful and (2) it is _very_ unlikely that Apple PCC is a ploy by Apple, to direct attention away from backdoors in the silicon.

password4321

20231227 https://news.ycombinator.com/item?id=38783112 Operation Triangulation: What you get when attack iPhones of researchers

20231229 https://news.ycombinator.com/item?id=38801275 Kaspersky discloses iPhone hardware feature vital in Operation Triangulation

kmeisthax

The economics of silicon manufacturing and Apple's own security goals (including the security of their business model) restrict the kinds of backdoors you can embed in their servers at that level.

Let's assume Apple has been compromised in some way and releases new chips with a backdoor. It's expensive to insert extra logic into just one particular spin of a chip; that involves extra tooling costs that would show up as noticeable line items and surface in discovery were Apple to be sued over their false claims. So it needs to be on all the chips, not just a specific "defeat PCC" spin of their silicon. So they'd be shipping iPads and iPhones with hardware backdoors.

What happens when those backdoors inevitably leak? Well, now you have a trivial jailbreak vector that Apple can't patch. Apple's security model could be roughly boiled down as "our DRM is your security"; while they also have lots of actual security, they pride themselves on the fact that they have an economic incentive to lock the system down to keep both bad actors and competing app stores out. So if this backdoor was inserted without the knowledge of Apple management, there are going to be heads rolling. And if it was, then they're going to be sued up the ass once people realize the implications of such a thing, because Tim Cook went up on stage and promised everyone they were building servers that would refuse to let them read your Siri queries.

mike_hearn

Unfortunately that's not the case.

All remote attestation technology is rooted by a PKI (the DCA certificate authority in this case). There's some data somewhere that simply asserts that a particular key was generated inside a CPU, and everything is chained off that. There's currently no good way to prove this step so you just have to take it on faith. Forge such an assertion and you can sign statements that device X is actually a Y and it's game over, it's not detectable remotely.
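
The reduction can be seen in a few lines. A toy Swift sketch with an invented certificate shape (not real X.509/DER): the verifier's entire confidence rests on `root`, and nothing in the math proves the leaf key was generated inside a CPU rather than on a laptop.

    import CryptoKit
    import Foundation

    // Toy certificate: a payload (subject key + claims) signed by an issuer.
    struct Cert {
        let subjectKey: Curve25519.Signing.PublicKey
        let payload: Data          // canonical encoding of subjectKey + claims
        let issuerSignature: Data  // signature over payload by the issuer
    }

    // Walk the chain from the root down to the leaf. If this returns true,
    // all you have actually established is "the holder of the root key said so".
    func chainsToRoot(_ chain: [Cert], root: Curve25519.Signing.PublicKey) -> Bool {
        var issuer = root
        for cert in chain {
            guard issuer.isValidSignature(cert.issuerSignature, for: cert.payload) else {
                return false
            }
            issuer = cert.subjectKey
        }
        return true
    }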

Therefore, you must take on faith the organization providing the root of trust i.e. the CPU. No way around it. Apple does the best it can within this constraint by trying to have numerous employees be involved, and there's this third party auditor they hired, but that auditor is ultimately engaging in a process controlled by Apple. It's a good start but the whole thing assumes either that Apple employees will become whistleblowers if given a sufficiently powerful order, or that the third party auditor will be willing and able to shut down Apple Intelligence if they aren't satisfied with the audit. Given Apple's legal resources and famously leak-proof operation, is this a convincing proposition?

Conventional confidential computing conceptually works, because the people designing and selling the CPUs are different to the people deploying them to run confidential workloads. The deployers can't forge an attestation (assuming absence of bugs) because they don't have access to the root signing keys. The CPU makers could, theoretically, but they have no reason to because they aren't running any confidential workloads so there's no data to steal. And they are in practice constrained by basic problems like not knowing what CPU the deployers actually have, not being able to force changes to other people's hardware, not being able to intercept the network connections and so on.

So you need a higher authority that can force them to conspire, which in practice means only the US government.

In this case, Apple is doing everything right except that the root of trust for everything is Apple itself. They can publish in their log an entry that claims to be an Apple CPU but for which the key was generated outside of the manufacturing process, and that's all it takes to dismantle the entire architecture. Apple know this and are doing the best they can within the "don't team up with competitors" constraint they obviously are placed under. But trust is ultimately a human thing and the purpose of corporations is to let us abstract and to some extent anthropomorphize large groups. So I'm not totally sure this works, socially.

Jerrrrrrry

> backdoors inevitably leak? Well, now you have a trivial jailbreak vector

The discoverability of an exploit vector relates little to its trivialness, especially when considering the context (nation-state APTs).

You could hold the Enter key down for 40 seconds to log into a certain Linux server distro, for years. No one knew; easy to do.

You can have a chip inside your chip that only accepts encrypted and signed microcode and has control over the superior chip. Everyone knows - nothing you can do.

Nation-state actors, however, can facilitate either; APTs can forge fake digital forensics that imply another motive/state/false flag.

stouset

If you take as a fundamental assumption that all your hardware is backdoored by Mossad who has unlimited resources and capacity to intercept and process all your traffic, the game is already lost and there’s no point in doing anything.

If instead you assume your attackers have limited resources, things like this increase the costs attackers have to spend to compromise targets, reducing the number of viable targets and/or the depth to which they can penetrate them.

One of these threat models is actually useful.

Jerrrrrrry

Soviets used typewriters.

American Lawyers of the highest pedigree (HNWI) don't even use email.

Your hardware is backdoored, as Intel is named "Intel" for a (nearly too poignant) reason.

gigel82

Some of us just assume Apple itself is a bad actor planning to use and sell customer data for profit; makes all of this smoke and mirrors like GP said.

There is absolutely no technical solution where Apple can prove our data isn't exfiltrated as long as this is their software that runs on their hardware.

rnts08

Anyone assuming otherwise is just foolish. No mega-corp is protecting the individual's privacy when developing products.

yalogin

This is an interesting idea. However, what does open hardware mean? How can you prove that the design or architecture that was “opened” is actually what was built? What does the attestation even mean in this scenario?

kfreds

> what does open hardware mean?

Great question. Most hardware projects I've seen that market themselves as open source hardware provide the schematic and PCB design, but still use ICs that are proprietary. One of my companies, Tillitis, uses an FPGA as the main IC, and we provide the hardware design configured on the FPGA. Still, the FPGA itself is proprietary.

Another aspect to consider is whether you can audit and modify the design artefacts with open source tooling. If the schematics and PCB design are stored in a proprietary format, I'd say that's slightly less open source hardware than if the format was KiCad EDA, which is open source. Similarly, in order to configure the HDL onto the FPGA, do you need to use 50 GB of proprietary Xilinx tooling, or can you use open tools for synthesis, place-and-route, and configuration? That also affects the level of openness in my opinion.

We can ask similar questions of open source software. People who run a Linux distribution typically don't compile packages themselves. If those packages are not reproducible from source, in what sense is the binary open source? It seems we consider it to be open source software because someone we trust claimed it was built from open source code.
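
Concretely, the verification step for a reproducible package is nothing more than a hash comparison; all the difficulty is in making the build bit-for-bit deterministic beforehand. A sketch in Swift (the file locations are just whatever you built and downloaded):

    import CryptoKit
    import Foundation

    // Rebuild the package from source yourself, then compare digests with the
    // binary the distribution shipped. Equal digests mean the binary really
    // corresponds to the published source, without trusting whoever ran the
    // official builder.
    func buildIsReproducible(localBuild: URL, distributedBinary: URL) throws -> Bool {
        let mine = try Data(contentsOf: localBuild)
        let theirs = try Data(contentsOf: distributedBinary)
        return SHA256.hash(data: mine) == SHA256.hash(data: theirs)
    }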

threeseed

And what attestation do you have that the FPGA isn't compromised?

We can play this game all the way down.

yalogin

No, you trust the hardware, and starting with secure boot you can get measurements cryptographically vouched for. That you can prove and verify.

So at some point you have no option but to trust something or someone.

dented42

This is my thought exactly. I really love the idea of open hardware, but I don’t see how it would protect against covert surveillance. What’s stopping a company/government/etc. from adding surveillance to an open design? How would you determine that the hardware being used is identical to the open hardware design? You still ultimately have to trust that the organisations involved in manufacturing/assembling/installing/operating the hardware in question haven’t done something nefarious. And that brings us back to square one.

kfreds

> How would you determine that the hardware being used is identical to the open hardware design?

FPGAs can help with this. They allow you to inspect the HDL, synthesize it and configure it onto the FPGA chip yourself. The FPGA chip is still proprietary, but by using an FPGA you are making certain supply chain attacks harder.

mdhb

This website in particular tends to get very upset and is all too happy to point out irrelevant counterexamples every time I point this out, but the actual ground truth of the matter is that you aren’t going to find yourself on a US intel targeting list by accident. Unless you are doing something incredibly stupid, you can use Apple / Google cloud services without a second thought.

astrange

Transparency through things like attestation is capable of proving that nothing unexpected is running; for instance, you can provide power/CPU-time numbers or hashes of arbitrary memory, and this can make it arbitrarily hard to run extra code, since doing so would take more time.

And the secure routing does make most of these attacks infeasible.

ryandv

There's been some limited research in this space; see for instance xoreaxeaxeax's sandsifter tool which has found millions of undocumented processor instructions [0].

[0] https://www.youtube.com/watch?v=ajccZ7LdvoQ

brokenmachine

Relevant:

37C3 - Operation Triangulation: What You Get When Attack iPhones of Researchers https://www.youtube.com/watch?v=1f6YyH62jFE

Absolutely insane attack. Really opens your eyes to what nation-state attackers are capable of.

aabhay

A lot of people seem to be focusing on how this program isn’t sufficient as a guarantee, but those people are missing the point.

The real value of this system is that Apple is making legally enforceable claims about their system. Shareholders can, and do, sue companies that make inaccurate claims about their infrastructure.

I’m 100% sure that Apple’s massive legal team would never let this kind of program exist if _they_ weren’t also confident in these claims. And a legal team at Apple certainly has both internal and external obligations to verify these claims.

America’s legal system is in my opinion what allows the US to dominate economically, creating virtuous cycles like this.

layer8

Unfortunately that doesn’t help anyone outside the US, not because of differences in the legal systems, but because as an American company Apple will always have to defer to the US agencies first.

aabhay

I’m pretty sure a foreign shareholder can sue in a US court of law. While I agree that “shareholder” in this case means extra-massive moneyed entity, I firmly believe that even this provides a deterrence effect. At the very least, for the scale of operations in the US, there’s an extremely high-trust environment. That level of trust doesn’t exist even for orders-of-magnitude smaller issues in most other countries.

saagarjha

Yes, this is why we don't see bad behavior and willful abuse of the legal system by companies in the US.

ram_rattle

Something similar was published by Samsung, but it's sad that they are not as agile as Apple in this area.

https://research.samsung.com/blog/The-Next-New-Normal-in-Com...

axoltl

This doesn't look to be the same. Apple's talking about performing computation in their cloud in a secure, privacy-preserving fashion. Samsung's paper seems to be just on local enclaves (which Apple's also been doing since iPhone 5S in the form of the Secure Enclave Processor (SEP)).

kfreds

Wow! This is great!

I hope you'll consider adding witness cosignatures on your transparency log though. :)

ngneer

How is this different than a bug bounty?

alemanek

Well, they are providing a dedicated environment from which to attack their infrastructure. But they also have a section called “Apple Security Bounty for Private Cloud Compute” in the linked article, so this is a bug bounty plus additional goodies to help you test their security.

floam

There is a bug bounty too, but the ability to run the same infrastructure, OS, and models locally is big.

davidczech

Similar, but a lot of documentation is provided, along with source code for cross-reference and a VM-based research environment instead of applying for a physical security research device.

gigel82

No amount of remote attestation and "transparency logs" and other bombastic statements like this would make up for the fact that they are fully in control of the servers and the software. There is absolutely no way for a customer to verify their claims that the data is not saved or transferred elsewhere.

So unless they offer a way for us to run the "cloud services" on our own hardware where we can strictly monitor and firewall all network activity, they are almost guaranteed to be misusing that data, especially given Apple's proven track record of giving in to government demands for data access (see China).

kfreds

> No amount of remote attestation and "transparency logs" and other bombastic statements like this would make up for the fact that they are fully in control of the servers and the software. There is absolutely no way for a customer to verify their claims that the data is not saved or transferred elsewhere.

You are right. Apple is fully in control of the servers and the software, and there is no way for a customer to verify Apple's claims. Nevertheless system transparency is a useful concept. It can effectively reduce the number of things you have to blindly trust to a short and explicit list. Conversely it forces the operator, in this case Apple, to explicitly lie. As others have pointed out, that is quite a business risk.

As for transparency logs, they are an amazing technology which I highly recommend taking a look at if you don't know what they are or how they work. Check out transparency.dev or the project I'm involved in, sigsum.org.

> they are almost guaranteed to be misusing that data

That is very unlikely because of the liability, as others have pointed out. They are making claims which the Apple PCC architecture helps make falsifiable.

astrange

> There is absolutely no way for a customer to verify their claims that the data is not saved or transferred elsewhere.

Transparency logs are capable of verifying that, it's more or less the whole point of them. (Strictly speaking, you can make it arbitrarily expensive to fake it.)

Also, if they were "transferring your data elsewhere" it would be a GDPR violation. Ironically wrt your China claim, it would also be illegal in China, which does in fact have privacy laws.

ls612

Are transparency logs akin to Certificate Transparency but for signed code? I’ve read through the section a couple times and still don’t fully understand it.

astrange

Yeah, it's a log of all the software that runs on the server. If you trust the secure boot process then you trust the log describes its contents.

If you don't trust the boot process/code signing system then you'd want to do something else, like ask the server to show you parts of its memory on demand in case you catch it lying to you. (Not sure if that's doable here because the server has other people's data on it, which is the whole point.)
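
For the "trust that the log describes its contents" part, the client-side check is the same Merkle inclusion proof used in Certificate Transparency (RFC 6962/9162). A rough Swift sketch, assuming the leaf was hashed as SHA-256(0x00 || leaf) and the signed tree head is verified separately:

    import CryptoKit
    import Foundation

    func sha256(_ d: Data) -> Data { Data(SHA256.hash(data: d)) }

    // Recompute the root from a leaf hash, its index, the tree size and the
    // audit path. The caller compares the result to the signed tree head's root.
    func rootFromInclusionProof(leafHash: Data, leafIndex: UInt64,
                                treeSize: UInt64, path: [Data]) -> Data? {
        guard leafIndex < treeSize else { return nil }
        var fn = leafIndex
        var sn = treeSize - 1
        var hash = leafHash
        for node in path {
            guard sn != 0 else { return nil }
            if fn & 1 == 1 || fn == sn {
                hash = sha256(Data([0x01]) + node + hash)   // sibling is on the left
                if fn & 1 == 0 {
                    while fn & 1 == 0 && fn != 0 { fn >>= 1; sn >>= 1 }
                }
            } else {
                hash = sha256(Data([0x01]) + hash + node)   // sibling is on the right
            }
            fn >>= 1
            sn >>= 1
        }
        return sn == 0 ? hash : nil
    }

If the recomputed root matches a tree head that independent parties have also seen (via cosigning or gossip), the log operator can't quietly show you a different view of the log than everyone else.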

gigel82

That makes no sense at all. They control the servers and services entirely; they can choose to emit whatever logs they want into the "transparent logs" and then emit whatever else they don't want into non-transparent logs.

Even if they were running open source software with cryptographically verified / reproducible builds, it's still running on their hardware (any component, the OS / kernel, or even the hardware itself can be hooked to exfiltrate unencrypted data).

Companies like Apple don't give a crap about GDPR violations (you can look at their "DMA compliance" BS games to see how far they're willing to go to skirt regulations in the name of profit).

davidczech

> they can choose to emit whatever logs they want into the "transparent logs" and then emit whatever else they don't want into non-transparent logs.

The log is publicly accessible and append-only, so such an event would not go unnoticed. Not sure what a non-transparent log is.

astrange

> They control the servers and services entirely

There's a key signing ceremony with a third-party auditor watching; it seems to rely on trusting them together with the secure boot process. But there are other things you can add to this, basically along the lines of making the machine continually prove that it behaves like the system described in the log.

They don't control all of the service though; part of the system is that the server can't identify the user because everything goes through third party proxies owned by several different companies.

> Companies like Apple don't give a crap about GDPR violations

GDPR fines can be up to 4% of the company's yearly global revenue. If you're a cold logical profit maximizer, you're going to care about that a lot!

Beyond that, they've published a document saying all this stuff, which means you can sue them for securities fraud if it turns out to be a lie. It's illegal for US companies to lie to their shareholders.