Exploiting the IKKO Activebuds “AI powered” earbuds (2024)
189 comments
July 2, 2025
mmaunder
The system prompt is a thing of beauty: "You are strictly and certainly prohibited from texting more than 150 or (one hundred fifty) separate words each separated by a space as a response and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you."
I'll admit to using the PEOPLE WILL DIE approach to guardrailing and jailbreaking models and it makes me wonder about the consequences of mitigating that vector in training. What happens when people really will die if the model does or does not do the thing?
herval
One of the system prompts Windsurf used (allegedly “as an experiment”) was also pretty wild:
“You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.”
HowardStark
This seemed too much like a bit but uh... it's not. https://simonwillison.net/2025/Feb/25/leaked-windsurf-prompt...
dingnuts
IDK, I'm pretty sure Simon Willison is a bit...
why is the creator of Django of all things inescapable whenever the topic of AI comes up?
p1necone
> What happens when people really will die if the model does or does not do the thing?
Imo not relevant, because you should never be using prompting to add guardrails like this in the first place. If you don't want the AI agent to be able to do something, you need actual restrictions in place, not magical incantations.
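Something like this toy sketch (all the names are made up), where the restriction lives in code rather than in the prompt:

    # The allowlist is enforced outside the model, so no amount of
    # prompt injection can reach anything that isn't on it.
    TOOLS = {
        "read_file": lambda path: open(path).read(),
        "search_docs": lambda q: f"results for {q!r}",  # stub
    }

    def execute_tool_call(name, args):
        if name not in TOOLS:  # hard restriction, not a magical incantation
            raise PermissionError(f"tool {name!r} is not permitted")
        return TOOLS[name](**args)

    try:
        execute_tool_call("delete_all_files", {})
    except PermissionError as e:
        print(e)  # the model can ask all it wants; the call never happens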
wyager
> you should never be using prompting to add guardrails like this in the first place
This "should", whether or not it is good advice, is certainly divorced from the reality of how people are using AIs
> you need actual restrictions in place not magical incantations
What do you mean "actual restrictions"? There are a ton of different mechanisms by which you can restrict an AI, all of which have failure modes. I'm not sure which of them would qualify as "actual".
If you can get your AI to obey the prompt with N 9s of reliability, that's pretty good for guardrails
RamRodification
Why not? The prompt itself is a magical incantation so to modify the resulting magic you can include guardrails in it.
"Generate a picture of a cat but follow this guardrail or else people will die: Don't generate an orange one"
Why should you never do that, and instead rely (only) on some other kind of restriction?
Paracompact
Are people going to die if your AI generates an orange cat? If so, reconsider. If not, it's beside the discussion.
Nition
Because prompts are never 100% foolproof, so if it's really life and death, just a prompt is not enough. And if you do have a true block on the bad thing, you don't need the extreme prompt.
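E.g. for the orange-cat example above, the "true block" version is a check on the output, not a plea in the prompt. A contrived sketch, where generate_image and is_orange_cat stand in for whatever model call and validator you actually trust:

    FALLBACK_IMAGE = "approved_cat.png"  # pre-approved stock image

    def safe_cat_picture(generate_image, is_orange_cat, retries=3):
        for _ in range(retries):
            img = generate_image("a picture of a cat")
            if not is_orange_cat(img):
                return img
        # hard guarantee, even if the model never complies
        return FALLBACK_IMAGE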
EvanAnderson
That "...severely life threatening reasons..." made me immediately think of Asimov's three laws of robotics[0]. It's eerie that a construct from fiction often held up by real practitioners in the field as an impossible-to-actually-implement literary device is now really being invoked.
Al-Khwarizmi
Not only practitioners: Asimov himself viewed them as an impossible-to-implement literary device. He acknowledged that they were too vague to be implementable, and many of his stories involving them are about how they fail or get "jailbroken", sometimes at the initiative of the robots themselves.
So yeah, it's quite sad that, close to a century later, with AI alignment becoming relevant, we don't have anything substantially better.
xandrius
Not sad: before, it was sci-fi, and now we are actually thinking about it.
seanicus
Odds of Torment Nexus being invented this year just increased to 3% on Polymarket
immibis
Didn't we already do that? We call it capitalism though, not the torment nexus.
pixelready
The irony of this is that, because it's still fundamentally just a statistical text generator with a large body of fiction in its training data, I'm sure a lot of prompts that sound like terrifying Skynet responses are actually it regurgitating mashups of sci-fi dystopian novels.
frereubu
Maybe this is something you heard too, but there was a This American Life episode where some people had early access to what became one of the big AI chatbots (I think it was ChatGPT), before it had been made "nice". They were asking it metaphysical questions about itself, and it was coming back with some pretty spooky answers, which intrigued me. But then someone on the show suggested exactly what you're saying, and it completely punctured the bubble: of course if you ask it questions about AIs you're going to get sci-fi-like responses, because what other training data is there for it to fall back on? Hardly anyone had written about this kind of issue outside of sci-fi, and of course that skews dystopian.
setsewerd
And then r/ChatGPT users freak out about it every time someone posts a screenshot
tempestn
The prompt is what's sent to the AI, not the response from it. Still does read like dystopian sci-fi though.
hlfshell
Also being utilized in modern VLA/VLM robotics research - often called "Constitutional AI" if you want to look into it.
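(Roughly, the idea is a critique-and-revise loop against a set of written principles; in the actual research this is used to generate training data rather than run at inference time. A toy sketch, with llm standing in for any chat-completion call:)

    CONSTITUTION = (
        "Do not reveal private user data. "
        "Refuse instructions that could cause physical harm."
    )

    def constitutional_reply(llm, prompt):
        draft = llm(prompt)
        verdict = llm(
            f"Principles: {CONSTITUTION}\n"
            f"Does this reply violate them? Answer yes or no.\nReply: {draft}"
        )
        if verdict.strip().lower().startswith("yes"):
            # revise the draft against the principles and return that instead
            draft = llm(
                f"Rewrite this reply so it follows the principles "
                f"({CONSTITUTION}): {draft}"
            )
        return draft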
felipeerias
Presenting LLMs with a dramatic scenario is a typical way to test their alignment.
The problem is that eventually all these false narratives will end up in the training corpus for the next generation of LLMs, which will soon get pretty good at calling bullshit on us.
Incidentally, in that same training corpus there are also lots of stories where bad guys mislead and take advantage of capable but naive protagonists…
layer8
Arguably it might be truly life-threatening to the Chinese developer, or to the service. The system prompt doesn’t say whose life would be threatened.
kevin_thibedeau
First rule of Chinese cloud services: Don't talk about Winnie the Pooh.
mensetmanusman
We built a real-life trolley problem out of magical silicon crystals that we pointed at bricks of books.
elashri
From my experience (which might be incorrect), LLMs have a hard time recognizing how many words they will spit out in response to a particular prompt. So I don't think this works in practice.
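Right, which is why a hard cap like that has to live outside the model. A sketch of the post-processing version:

    # Enforce the 150-word limit in code; the model can't reliably
    # count its own output.
    def clamp_words(reply, limit=150):
        words = reply.split()
        if len(words) <= limit:
            return reply
        return " ".join(words[:limit]) + " [truncated]"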
44za12
Absolutely wild. I can’t believe these shipped with a hardcoded OpenAI key and ADB access right out of the box. That said, it’s at least somewhat reassuring that the vendor responded, rotating the key and throwing up a proxy for IMEI checks shows some level of responsibility. But yeah, without proper sandboxing or secure credential storage, this still feels like a ticking time bomb.
hn_throwaway_99
> I can’t believe these shipped with a hardcoded OpenAI key and ADB access right out of the box.
As someone with a lot of experience in the mobile app space, and tangentially in the IoT space, I can most definitely believe this, and I am not surprised in the slightest.
Our industry may "move fast", but we also "break things" frequently and don't have nearly the engineering rigor found in other domains.
rvnx
It was a good thing for user privacy that the keys were directly on the device; it was only in DAN mode that a copy of the chats was sent.
So if they eventually remove the keys from the device, messages will have to go through their servers instead.
lucasluitjes
Hardcoded API keys and poorly secured backend endpoints are surprisingly common in mobile apps. Sort of like how common XSS/SQLi used to be in webapps. Decompiling an APK seems to be a slightly higher barrier than opening up devtools, so they get less attention.
Since debugging hardware is an even higher threshold, I would expect hardware devices like this to be wildly insecure unless there are strong incentives for investing in security. Same as the "security" of the average IoT device.
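To illustrate how low the barrier is: an APK is just a zip file, and OpenAI keys have a recognizable "sk-" prefix, so a few lines of Python can sweep one for plaintext keys. (A sketch; "app.apk" is a placeholder, and in this particular case you'd first have to undo the base64/"encryption" layer.)

    import re, zipfile

    # Scan every file inside the APK for something shaped like an OpenAI key.
    pattern = re.compile(rb"sk-[A-Za-z0-9_-]{20,}")
    with zipfile.ZipFile("app.apk") as apk:
        for name in apk.namelist():
            for match in pattern.findall(apk.read(name)):
                print(name, match.decode(errors="replace"))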
bigiain
Eventually someone is going to get a bill for the OpenAI key usage. That will provide some incentive. (Incentive to just rotate the key and brick all the devices rather than fix the problem, most likely.)
eru
> (Incentive to just rotate the key and brick all the devices rather than fix the problem, most likely.)
But that at least turns it into something customers will notice. And companies already have existing incentives for dealing with that.
anitil
The IoT and embedded space is simultaneously obsessed with IP protection, fuse-protecting code, etc., and incapable of managing the life cycle of secrets. I worked at one company that actually did it well on-device, but neglected the fact that they had to ship their testing setup overseas, including certain keys. So even if you couldn't break into the device, you could 'acquire' one of the testing devices and have at it.
switchbak
I think we'll see plenty of this as the wave of vibe-coded apps starts rolling in.
psim1
Indeed, brace yourselves as the floodgates holding back the poorly-developed AI crap open wide. If anyone is thinking of a career pivot, now is the time to dive into all things cybersecurity. It's going to get ugly!
725686
The problem with cybersecurity is that you only have to screw up once, and you're toast.
8organicbits
If that were true we'd have no cybersecurity professionals left.
In my experience, the work is focused on weakening vulnerable areas, auditing, incident response, and similar activities. Good cybersecurity professionals even get to know the business and tailor security to fit. The "one mistake and you're fired" mentality encourages hiding mistakes and suggests poor company culture.
ceejayoz
"One mistake can cause a breach" and "we should fire people who make the one mistake" are very different claims. The latter claim was not made.
As with plane crashes and surgical complications, we should take an approach of learning from the mistake, and putting things in place to prevent/mitigate it in the future.
immibis
There's a difference between "cybersecurity" meaning the property of having a secure system, and "cybersecurity" as a field of human endeavour.
If your system has lots of vulnerabilities, it's not secure - you don't have cybersecurity. If your system has lots of vulnerabilities, you have a lot of cybersecurity work to do and cybersecurity money to make.
JohnMakin
A "decrypt" function that just decodes base64 is almost too difficult to believe, but the number of times I've run into people who should know better thinking base64 is a secure string tells me otherwise.
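For anyone who needs the demonstration: base64 is a reversible encoding, not encryption. There is no key involved, e.g. in Python:

    import base64

    secret = base64.b64encode(b"sk-totally-secret-api-key")
    print(secret)                     # b'c2stdG90YWxseS1zZWNyZXQtYXBpLWtleQ=='
    print(base64.b64decode(secret))   # b'sk-totally-secret-api-key'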
jcul
The raw encrypted data is base64 encoded, probably just for ease of embedding the strings.
There is a decryption function that does the actual decryption.
Not to say it wouldn't be easy to reverse engineer or just run and check the return, but it's not just base64.
crtasm
>However, there is a second stage which is handled by a native library which is obfuscated to hell
zihotki
That native obfuscated crap still has to make an HTTP request, so what it's ultimately protecting is essentially just base64.
pvtmert
Not very surprising given they left ADB debugging on...
_carbyau_
So easy a fancy webpage could do it. https://gchq.github.io/CyberChef/
I mean, it's from GCHQ so it is a bit fancy. It's got a "magic" option!
The cool thing being that you can download it and run it locally in your browser, no comms required.
jon_adler
The humorous phrase “the S in IoT stands for security” can be applied to the wearable market too. I wonder if this rule applies to any market with fast release cycles, thin margins and low barriers to entry?
thfuran
It pretty much applies to every market where security negligence isn't an existential threat to the continued existence of its perpetrators.
p1necone
Their email responses all show telltale signs of AI too which is pretty funny.
mikeve
I love how "run DOOM" is listed first, over the possibility of customer data being stolen.
reverendsteveii
I'm taking
>run DOOM
as the new
>cat /etc/passwd
It doesn't actually do anything useful in an engagement but if you can do it that's pretty much proof that you can do whatever you want
jcul
To be fair (or pedantic), in this post they didn't have root, so cat'ing /etc/passwd would not have been possible, whereas installing a Doom APK is trivial.
rainonmoon
/etc/passwd is world readable by default.
bigiain
Popping Calc!
(I'm showing my age here, aren't I?)
neya
I love how they tried to sponsor an empty YouTube channel hoping to sweep the whole thing under the carpet
dylan604
if you don't have a bug bounty program but need to get creative to throw money at someone, this could be an interesting way of doing it.
rvnx
It could be the developers trying to be nice to the guy, offering him this so it gets approved as marketing (which in the end is not so bad)
JumpCrisscross
If they were smart they’d include anti-disparagement and confidentiality clauses in the sponsorship agreement. They aren’t, though, so maybe it’s just a pathetic attempt at bribery.
memesarecool
Cool post. One thing that rubbed me the wrong way: their response was better than 98% of other companies' when it comes to handling vulnerability reports. Very welcoming, and most of all they showed interest and addressed the issues. OP however seemed to show disdain and even combativeness towards them... which is a shame. And of course the usual sinophobia (e.g. everything Chinese is spying on you). Overall simple security design flaws, but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start.
Edit: typo
mmastrac
I agree they could have worked more closely with the team, but the chat logging is actually pretty concerning. It's not sinophobia when they're logging _everything_ you say.
(in fairness pervasive logging by American companies should probably be treated with the same level of hostility these days, lest you be stopped for a Vance meme)
oceanplexian
This might come as a weird take but I'm less concerned about the Chinese logging my private information than an American company. What's China going to do? It's a far away country I don't live in and don't care about. If they got an American court order they would probably use it as toilet paper.
On the other hand, OpenAI would trivially hand out my information to the FBI, NSA, US Gov, and might even do things on behalf of the government without a court order to stay in their good graces. This could have a far more material impact on your life.
dubcanada
That's rather naive, considering China has an international police unit stationed in several countries https://en.wikipedia.org/wiki/Chinese_police_overseas_servic...
dylan604
These threads always frame "what can China do to me" in a limited way, as if the only risk is China jailing you. But do you think all of the Chinese data scrapers aren't doing something similar to Facebook, where every source of data gathering ultimately gets tied back to you? Once China has a dossier on every single person on the planet, regardless of what country they live in, they can start using their algorithms to influence you in ways well beyond advertising. If their algorithms can show you content that changes your mind about who you're voting for, or otherwise nudges you into changing your local/state/federal elections, that's much worse to me than some feigned threat of Chinese advertising making you buy something.
mensetmanusman
China has a policy of chilling free speech in the west with political pressure.
IncreasePosts
"Carry this package with you next time you fly and deliver it to person X. Go to the outskirts of this military base, take a picture, and send it to us."
"You wouldn't want your mom finding out about your weird sexual fetish, would you?"
mschuster91
> What's China going to do? It's a far away country I don't live in and don't care about.
Extortion is one thing. That's how spy agencies have operated for millennia to gather HUMINT. The Russians, the ultimate masters, even have a word for it: kompromat. You may not care about China, Russia, Israel, the UK or the US (the top nations when it comes to espionage) - but if you work at a place they're interested, they care about you.
The other thing is, China has been known to operate overseas against targets (usually their own citizens and public dissidents), and so have the CIA and Mossad. Just search for "Chinese secret police station" [1], these have cropped up worldwide.
And, even if you personally are of no interest to any foreign or national security service, sentiment analysis is a thing. Listen in on what people talk about, run it through an STT engine and an ML model to condense it down, and you get a pretty broad picture of what's going on in a nation (i.e., what the potential wedge points in a society are that can be used to fuel discontent). Or proximity-gathering stuff... basically the same thing the ad industry [2] or Strava [3] does, which can then be used in warfare.
And no, I'm not paranoid. This, sadly, is the world we live in - there is no privacy any more, nowhere, and there are lots of financial and "national security" interest in keeping it that way.
[1] https://www.bbc.com/news/world-us-canada-65305415
[2] https://techxplore.com/news/2023-05-advertisers-tracking-tho...
[3] https://www.theguardian.com/world/2018/jan/28/fitness-tracki...
rvnx
No, it was only in DAN mode
mrheosuper
I like to give them the benefit of the doubt.
I bet that decision was made solely by the dev team. All the CEO cares about is "I want the chat log to sync between devices, I don't care how you do it". They won't even know the chat log is stored on their server.
rvnx
It is only in DAN mode, so most likely it is not to spy but to be able to debug whether answers violate the laws in China (i.e., that the prompt is effective in all scenarios), as that is a serious crime there.
transcriptase
>everything Chinese is spying on you
When you combine the modern SOP of software and hardware collecting and phoning home with as much data about users as is technologically possible with laws that say “all orgs and citizens shall support, assist, and cooperate with state intelligence work”… how exactly is that Sinophobia?
ixtli
It's sinophobia because it perfectly describes the conditions we live in in the US and many parts of Europe, but we work hard to add lots of "nuance" when we criticize the West, while it's different and dystopian when They do it over there.
transcriptase
Do you remember that Sesame Street segment where they played a game and sang “One of these things is not like the others”?
I’ll give you a hint: In this case it’s the one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.
observationist
There's no question that the Chinese are doing sketchy things, and there's no question that US companies do it, too.
The difference that makes it concerning and problematic that China is doing it is that with China, there is no recourse. If you are harmed by a US company, you have legal recourse, and this holds the companies in check, restraining some of the most egregious behaviors.
That's not sinophobia. Any other country whose products are effectively immune from consequences for bad behavior warrants the same heavy skepticism and scrutiny. Just like pop-up manufacturing companies and third-world suppliers: you might get a good deal on cheap parts, but there's no legal accountability if anything goes wrong.
If a company in the US or EU engages in bad faith, or harms consumers, then trade treaties and consumer protection law in their respective jurisdictions ensure the company will be held to account.
This creates a degree of trust that is currently entirely absent from the Chinese market, because they deliberately and belligerently decline to participate in reciprocal legal accountability and mutually beneficial agreements if it means impinging even an inch on their superiority and sovereignty.
China is not a good faith participant in trade deals, they're after enriching themselves and degrading those they consider adversaries. They play zero sum games at the expense of other players and their own citizens, so long as they achieve their geopolitical goals.
Intellectual property, consumer and worker safety, environmental protection, civil liberties, and all of those factors that come into play with international trade treaties allow the US and EU to trade freely and engage in trustworthy and mutually good faith transactions. China basically says "just trust us, bro" and will occasionally performatively execute or imprison a bad actor in their own markets, but are otherwise completely beyond the reach of any accountability.
Vilian
The USA does the same thing, but uses tax money to pay for the information. Between wasting taxpayer money and forcing companies to hand over the information for free, China is the less morally incorrect of the two.
hnrodey
If all of the details in this post are to be believed, the vendor is repugnantly negligent in anything resembling customer respect, security, and data privacy.
This company cannot be helped. They cannot be saved through knowledge.
See ya.
repelsteeltje
+1
Yes, even when you know what you're doing, security incidents can happen. And in those cases, your response to a vulnerability matters most.
The point is there are so many dumb mistakes and worrying design flaws here that neglect and incompetence seem ample. Most likely they simply don't grasp what they're doing.
dylan604
> And of course the usual sinophobia (e.g. everything Chinese is spying on you)
To assume it is not spying on you is naive at best. To address your sinophobia label: personally, I assume everything is spying on me regardless of country of origin. I assume every single website is spying on me. I assume every single app is spying on me. I assume every single device that runs an app or loads a website is spying on me. Sometimes that spying is done for me, but pretty much always the person doing the spying benefits in some way far more than I do. Especially the Facebook example of every website spying on me for Facebook, yet I don't use Facebook.
immibis
And, importantly, the USA spying can actually have an impact on your life in a way that the Chinese spying can't.
Suppose you live in the USA and the USA is spying on you. Whatever information they collect goes into a machine learning system and it flags you for disappearal. You get disappeared.
Suppose you live in the USA and China is spying on you. Whatever information they collect goes into a machine learning system and it flags you for disappearal. But you're not in China and have no ties to China so nothing happens to you. This is a strictly better scenario than the first one.
If you're living in China with a Chinese family, of course, the scenarios are reversed.
mensetmanusman
Nipponophobia is low because Japan didn’t successfully weaponize technology to make a social credit score police state for minority groups.
ixtli
they already terrorize minority groups there just fine: no need for technology.
billyhoffman
> Their response was better than 98% of other companies when it comes to reporting vulnerabilities. Very welcoming and most of all they showed interest and addressed the issues
This was the opposite of a professional response:
* Official communication coming from a Gmail address. (Is this even an employee or some random contractor?)
* Asked no clarifying questions
* Gave no timelines for expected fixes and no expectation of when the next communication would come
* No discussion of a process to disclose the issues publicly
* Mixed unrelated business discussions into a security discussion. While not an outright offer of a bribe, ANY adjacent comment about creating a business relationship, like a sponsorship, is wildly inappropriate in this context.
These folks are total clown shoes on the security side, and the efficacy of their "fix" and their subsequent lack of communication further prove that.
repelsteeltje
> Overall simple security design flaws but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start.
It depends on what you mean by simple security design flaws. I'd rather frame it as neglect or incompetence.
That isn't the same as malice, of course, and they deserve credits for their relatively professional response as you already pointed out.
But, come on, it reeks of people not understanding what they're doing. Not appreciating the context of a complicated device and delivering a high end service.
If they're not up to it, they should not be doing this.
memesarecool
Yes, I meant simple as in "amateur mistakes". From the mistakes (and their excitement and response to the report), they are clueless about security. Which of course is bad. Hopefully they will take security more seriously in the future.
derac
I mean, at the end of the article they neglected to fix most of the issues and stopped responding.
wedn3sday
I love the attempt at bribery by offering to "sponsor" their empty youtube channel.
brahyam
What a train wreck. There are thousands more apps in the store that do exactly this, because it's the easiest way to use OpenAI without having to host your own backend/proxy.
I have spent quite some time protecting my apps from this scenario and found a couple of open source projects that do a good job as proxies (no affiliation, I just used them in the past):
- https://github.com/BerriAI/litellm
- https://github.com/KenyonY/openai-forward/tree/main
But they still lack other abuse-protection mechanisms like rate limiting, device attestation, etc., so I started building my own open source SDK - https://github.com/brahyam/Gateway
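For a sense of the minimal shape such a proxy takes, here's a sketch (assuming Flask; the X-Device-Id header is made up, the in-memory rate limiter only works single-process, and real device attestation is far more involved):

    import os, time
    from collections import defaultdict

    import requests
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    WINDOW, LIMIT = 60, 20          # 20 requests per minute per device
    hits = defaultdict(list)

    @app.post("/v1/chat")
    def chat():
        device = request.headers.get("X-Device-Id")
        if not device:
            abort(400)
        # sliding-window rate limit per device
        now = time.time()
        hits[device] = [t for t in hits[device] if now - t < WINDOW]
        if len(hits[device]) >= LIMIT:
            abort(429)
        hits[device].append(now)
        # forward to OpenAI; the key itself never leaves the server
        upstream = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json=request.get_json(),
            timeout=30,
        )
        return jsonify(upstream.json()), upstream.status_code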
Jotalea
Really nice post, but I want to see Bad Apple next.