
ESIM Security

41 comments · July 9, 2025

hhh

$30k is a pittance for the quality of this work

jeroenhd

With Oracle claiming it's not their problem to fix, and with the mobile networking industry generally being slow and old-fashioned, I'm surprised they even offered a bounty and worked with them for public disclosure rather than threatening to sue.

$30k is a pittance for the work put in if you were to negotiate with them as contractors, but it's still a good chunk of change for essentially unprompted, free work. They didn't need to pay them a dime, after all.

yapyap

I’ve been pretty disappointed by the seemingly small payouts that some of the bug bounties I’ve seen submitted have been getting.

It’s like the companies “forgot” [1] what happens when you don’t have a bug bounty program, or what happens when people take their bugs to others instead.

1. Of course companies didn’t forget; that’s the benefit of the doubt I like to give, but big companies like this aren’t stupid. Big companies are run by the business-savvy but less technical, who see things like a bug bounty program as unimportant. Until they get shaken awake by a huge breach, or anything of the sort that impacts the stock price (i.e. their salary), I doubt they will care much.

mathfailure

That was a pleasure to read: I'm just a layman, I know basically jack-shit about GSM/eSIM technologies, yet the article is written so well and provides enough details that I could understand what they wrote.

Fethbita

This is completely irresponsible behavior from Oracle as they put the whole eSIM ecosystem in danger by not fixing the issue.

lxgr

Without knowing the exact details, it seems to me like Oracle has a point here:

Java Card supports, broadly speaking, two types of bytecode verification: "On-card" and "off-card". On-card is secure against even malicious applets; off-card assumes that a trustworthy entity vets all applets before they are installed, and only signs them if they are deemed well-formed.

The off-card model just seems like a complete architectural mismatch for the eSIM use case, since there is no single trustworthy entity. SAT applets are not presented to the eUICC vendor for bytecode verification, so the entire security model breaks down if verification doesn't happen on-card.

Unfortunately, the GSMA eSIM specifications seem to be so generic that they don't even mandate Java as a bytecode technology, and accordingly don't impose any specific Java requirements, such as "all eUICC implementations supporting SAT via Java Card must not rely on off-card bytecode verification".
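
To make the distinction concrete: under the off-card model, the card-side check amounts to little more than the sketch below (plain Java, hypothetical names, not actual Java Card or GlobalPlatform code). The card verifies a signature over the applet binary and never inspects the bytecode itself, so a malicious but signed applet installs without complaint.

    import java.security.PublicKey;
    import java.security.Signature;

    // Sketch only: under off-card verification the card trusts whoever signed
    // the applet binary. "verifierKey" is a hypothetical public key provisioned
    // at card manufacture; the bytecode itself is never checked on the card.
    final class OffCardTrustCheck {
        static boolean mayInstall(byte[] capFile, byte[] signature, PublicKey verifierKey)
                throws Exception {
            Signature sig = Signature.getInstance("SHA256withECDSA");
            sig.initVerify(verifierKey);
            sig.update(capFile);
            return sig.verify(signature); // trust delegated entirely to the signer
        }
    }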

Fethbita

In this case, if you read the last few sections, they reported several issues to Oracle regarding its Java Card Reference Implementation, but these have not been fixed, with Oracle stating that the reference implementation is not supposed to be used in production. Oracle has a responsibility to fix these issues, as it is the primary source for everything related to Java Card, and other vendors treat its reference implementation as exactly that: a reference.

Also see their previous reply [1] to the findings this company had in 2019. I can’t help but agree with the article that if those issues had been fixed back then, there is a chance this wouldn’t have happened today.

[1]: https://www.securityweek.com/oracle-gemalto-downplay-java-ca...

lxgr

Definitely, no reference implementation should have security bugs.

But do you know whether Oracle's reference implementation for Java Card uses on-card or off-card verification, or, more generally, whether it assumes installs from trusted sources only?

There are many Java Card applications where the assumption of all bytecode being trusted is reasonable, especially if all bytecode comes from the issuer and post-issuance application loading isn't possible. Of course, that would be a complete mismatch for an eUICC.

gruez

>The off-card model just seems like a complete architectural mismatch for the eSIM use case, since there is no single trustworthy entity. SAT applets are not presented to the eUICC vendor for bytecode verification, so the entire security model breaks down if verification doesn't happen on-card.

I thought the whole esim provisioning process required a chain of trust all the way to GSMA? Maybe the applet isn't verified by the eUICC vendor, but it's not like you can run whatever code either.

ACCount36

Seems like you actually could "run whatever code".

Apparently, GSMA recalled their universal eSIM test profiles. Prior to recall, those could be installed on ANY eSIM, and those profiles had applet updates enabled.

By installing such a profile on an eSIM and issuing your own update to it, you could run arbitrary applets.

exabrial

> knowledge of the keys is a primary requirement for target card compromise

Not claiming to be an expert, but this seems like a very big qualification. Can someone put this into context for me?

If you stole my PGP private key, you would absolutely be able to sign messages as me.

lxgr

They were apparently able to extract an eUICC's private key:

> As a result of eUICC compromise, we were able to extract private ECC key for the certificate identifying target GSMA card.

This is supposed to be impossible, even with knowledge of SAT applet management keys. (In other words, individual eSIM profiles are still not supposed to be able to extract private eSIM provisioning keys from any eUICC.)

In the security architecture of eSIMs, compromising any eUICC's key means that an attacker can obtain the raw eSIM profile data from any SM-DP trusting it (which would be any, if it chains up to a CA part of the GSMA PKI) and do things that are supposed to be impossible, such as simultaneously installing one profile on multiple devices, or extract secret keys from a profile and then "put it back" to the SM-DP, let the legitimate user download it, and intercept their communications.
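
Roughly, the trust an SM-DP places in an eUICC is just X.509 path validation against the GSMA root, something like the sketch below (plain Java, hypothetical class and variable names). Once an eUICC private key has been extracted, an attacker holding it passes this check exactly like a genuine card.

    import java.security.cert.*;
    import java.util.*;

    // Sketch of the SM-DP's trust decision (names are illustrative, not from
    // any GSMA spec): accept any eUICC certificate that chains up to the GSMA
    // root CA. A leaked eUICC private key is indistinguishable from a real card.
    final class SmDpTrustSketch {
        static boolean trustsEuicc(X509Certificate euiccCert,
                                   List<X509Certificate> intermediates,
                                   X509Certificate gsmaRoot) throws Exception {
            List<X509Certificate> path = new ArrayList<>();
            path.add(euiccCert);        // leaf presented by the card
            path.addAll(intermediates); // e.g. the card manufacturer's CA cert
            CertPath certPath =
                    CertificateFactory.getInstance("X.509").generateCertPath(path);

            PKIXParameters params =
                    new PKIXParameters(Collections.singleton(new TrustAnchor(gsmaRoot, null)));
            params.setRevocationEnabled(false); // simplification for the sketch

            try {
                CertPathValidator.getInstance("PKIX").validate(certPath, params);
                return true;
            } catch (CertPathValidatorException e) {
                return false;
            }
        }
    }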

ImPostingOnHN

Let's assume I have the following philosophy:

My phone, my sim or esim, and anything else which I have purchased and is in my possession, belongs to me. Being able to retrieve keys to things I own, and do whatever I want with them, seems fine. If the key to my car says "do not duplicate", I should be nonetheless able to duplicate it, because I own the car and the key. If I want to run my same profile or eSIM on multiple devices, I get that the cell company doesn't like that, but I do, so I wouldn't consider that a harm to me.

Given that assumption, this vulnerability/jailbreak/rooting of something I own seems less significant to me. I think, however, that I may be misunderstanding the attack. Is this possible to perform against somebody else whose phone I will never physically possess? Or for someone else to perform it against me, without ever having physical possession of my phone? It sounded like maybe a test profile was left enabled, which allows anyone to send an SMS-PP message to any phone, telling it to install an applet which compromises the phone/eUICC/eSIM's keys. Did I follow that right?

miki123211

Theoretically, if one of the carriers you were using were to be hacked, the attackers could extract all your keys, including those for other carriers' profiles.

It's an interesting attack vector for intelligence agencies. Imagine you're going to China and install a Chinese eSIM profile as a secondary to get cheaper data. The Chinese government, in collaboration with the carrier, could then use that profile to dump your American AT&T keys.

In the telecom world, there's no forward secrecy (there can't be with symmetric crypto, which is what it's all based on), so such an attack would let the Chinese intercept all your communications.
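
The real derivation is MILENAGE or TUAK over the subscriber's long-term key K; the sketch below swaps in a plain HMAC just to show the shape of the problem. Session keys are a deterministic function of K and a RAND sent over the air in the clear, so anyone who later learns K can recompute the keys for every recorded session.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Simplified stand-in for 3GPP AKA key derivation (the real functions are
    // MILENAGE/TUAK, not HMAC). The point: same K + same recorded RAND always
    // yields the same session key, so leaking K retroactively breaks every
    // past session -- there is no forward secrecy to fall back on.
    final class AkaSketch {
        static byte[] sessionKey(byte[] longTermK, byte[] rand) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(longTermK, "HmacSHA256"));
            return mac.doFinal(rand);
        }
    }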

lxgr

> Let's assume I have the philosophy that my phone, my sim or esim, and anything else which I have purchased and is in my possession, belongs to me.

Then you can't use eSIMs as specified. eUICCs are an implementation of trusted computing.

> I think I may be misunderstanding the attack, though. Is this possible to perform against somebody else for whom I will never have physical possession of the phone? Or for someone else to perform it against me, without ever having physical possession of my phone?

In a non-broken eSIM security architecture, eSIM profiles are singletons, i.e. any given profile can be installed on only one eUICC at a time. At install time, the SM-DP decrements the profile's logical "remaining installs" counter from 1 to 0; at uninstall time, it goes back up to 1. This of course only works if the eUICC's assertion of "I deleted eSIM profile x" is trustworthy, hence it requires trusted computing.
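
A toy version of that bookkeeping (hypothetical, not from any GSMA spec) looks like the sketch below; the whole scheme only means something if the "deleted" notification comes from an eUICC whose keys have not been extracted.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Toy sketch of an SM-DP's per-profile install accounting. A compromised
    // eUICC can send a signed "deleted" notification while keeping the profile,
    // or clone it onto several devices, and this counter is none the wiser.
    final class ProfileInventorySketch {
        private final Map<String, AtomicInteger> remainingInstalls = new ConcurrentHashMap<>();

        void registerProfile(String iccid) {
            remainingInstalls.put(iccid, new AtomicInteger(1));
        }

        boolean tryDownload(String iccid) {
            AtomicInteger c = remainingInstalls.get(iccid);
            return c != null && c.compareAndSet(1, 0); // install allowed only while 1
        }

        void onDeleteNotification(String iccid) {
            // Trusts the eUICC's assertion that the profile was really deleted.
            AtomicInteger c = remainingInstalls.get(iccid);
            if (c != null) c.set(1);
        }
    }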

A different security architecture not relying on trusted computing is of course possible to imagine, but that's not what current networks assume.

ACCount36

Here's hoping for a public PoC for unpatched hardware. I've been looking for a way to dump eSIMs as plaintext for a long while now.

daft_pink

Side note: Is China ever going to get eSIM?

petesergeant

Whenever I read stuff about telecoms security, I realize the first few weeks of any serious war will just be complete loss of cell service.

Hojojo

Depends. Ukraine, despite some service interruptions, still largely has cell service:

https://www.euronews.com/next/2024/03/25/ukraines-telecom-en...

I think these networks can be a lot more resilient than we think and they can be maintained even during a war.

grishka

Meanwhile in Russia, mobile data shutdowns are becoming routine, especially in regions closer to the border/front line. They say it's to fight drone attacks, but there's no word on how effective that actually is.

jeroenhd

The cell network is one of the best surveillance tools humanity has ever built, as well as a network of location beacons over friendly territory. Taking it down would strongly limit the amount of information that can be gathered both passively and actively. Modern 5G networks can even act as radar.

A technically capable terrorist could wreak havoc if they could get access to the control center of a telecoms network, but I don't think service will be down for extended periods of time unless it's part of a scorched-earth strategy of some kind. Any military operation can be disrupted easily with cheap and widely available jammers anyway; attacking cellular infrastructure is mostly useful for hitting civilian targets and spreading panic.

dylan604

And yet here I sit at my desk in my home with 1 bar of service, and I think that's only because 0 bars is not possible. It's not like they'd have to do much to disrupt cell service.

toast0

Depends on your phone and coverage. I've seen zero bars and apparently still been connected. More often I see zero bars and no connection, usually with a line through the signal indicator.

Where I live, there's resistance to adding new towers, so our dead zones are pretty consistent. One part of town has very spotty coverage from all networks, but has some wifi that works a bit. Otherwise, there's a couple places where network B has no coverage, and others where network C doesn't. Last I tried, network A was hopeless at my house, but I assume it still has holes in the coverage.

exabrial

And you won’t be able to drive your cell-network-connected car… making logistics impossible. It’s a big enough wartime issue that there ought to be a regulation requiring that the cell device can be “pulled” and the car defaults to “fully enabled”.

frickinLasers

Do you have examples of cars (that aren't Teslas, perhaps, since they don't play by normal car rules) having been disabled due to lack of cell service?

exabrial

Not to avoid the question, because I simply don't know, but do you (or anyone) have a list of cars with directions on how to yank the cell module and still have the car function?

ChocolateGod

If things like Starlink Cellular work properly, they will probably help prevent that.

dylan604

Doesn't Starlink depend on ground stations? So toss a couple of missiles at those ground stations, and Starlink isn't as useful.

jeroenhd

Depends on who the invader is. If it's America going after yet another country in the Middle East, Starlink Cellular certainly won't help.

anonymars

> Depends on who the invader is

And probably how Elon is feeling that day about the participants

lxgr

Why would Starlink be more resilient against hacks than ground-based LTE?

dfox

It is my understanding that Starlink does not do home-grown crypto of the 3GPP kind. Also, because it is a closed ecosystem, there is no need for SIMs and the associated deployment mechanisms.