One-Click RCE in Asus's Preinstalled Driver Software
219 comments
·May 11, 2025
IlikeKitties
Responsible disclosure and its consequences have been a disaster for the human race. Companies need to feel a lot more pain a lot more often in order to take the security of their customers a lot more seriously. If you just give them a month to fix an issue and spoon-feed them the solution, it's just another ticket in their backlog. But if every other security issue becomes big enough news online that their CEOs get involved and a solution must be found in hours, not months, they will become a lot more proactive. Of course it's the end users who would suffer most from this. But then again, they buy ASUS, so they suffer already...
jeroenhd
I think ASUS' turnaround time on this was quite good, I don't see the problem here. ASUS didn't deny the bug, didn't threaten to prosecute anyone for reverse engineering their software, and quickly patched their software. I have no doubt that before the days of responsible disclosure, this process would've taken months and might have involved the police.
Normal people don't care about vulnerabilities. They use phones that haven't received updates in three years to do their finances. If you spam the news with CVEs, people will just get tired of hearing about how every company sucks and become apathetic once there's a real threat.
The EU is working on a different solution. Stores are not permitted to sell products with known vulnerabilities under new cybersecurity regulations. That means if ASUS keeps fucking up, their motherboards become dead stock and stores won't want to sell their hardware anymore. That's not just computer hardware, but also smart fridges and smart washing machines. Discover a vulnerability in your dishwasher and you may end up costing the dishwasher industry millions in unusable stock if the manufacturers haven't bothered to add a way to update the firmware.
ycombinatrix
>They say “This issue is limited to motherboards and does not affect laptops, desktop computers”, however this affects any computer including desktops/laptops that have DriverHub installed
>instead of them saying it allows for arbitrary/remote code execution they say it “may allow untrusted sources to affect system behaviour”.
Sounds like Asus did in fact deny the bug.
Polizeiposaune
"Stores are not permitted to sell products with known vulnerabilities under new cybersecurity regulations."
Do stores have to patch known vulnerabilities before releasing the product to customers or can customers install the patch?
buzer
> Stores are not permitted to sell products with known vulnerabilities under new cybersecurity regulations.
What are the specifics on that? Does the vulnerability need to be public, or is it enough if just the vendor knows about it? Does everyone need to stop selling a product right away when a new vulnerability is discovered, or do they get some time to patch it? I'm pretty sure software like Windows has unfixed vulnerabilities that Microsoft knows about and is in the process of fixing on every single day of the year. Currently, even when they have a fix, they end up postponing it until the next Patch Tuesday.
And what even is "vulnerability" in this context? Remote RCE? DRM bypass?
holowoodman
"Responsible" disclosure is paradoxically named because actually it is completely irresponsible. The vast majority of corporations handle disclosures badly in that they do not fix in time (i.e. a week), do not attribute properly, do not inform their users and do not learn from their mistakes. Irresponsibly delayed limited disclosure reinforces those behaviors.
The actually responsible thing to do is to disclose immediately, fully and publically (and maybe anonymously to protect yourself). Only after the affected company has repeatedly demonstrated that they do react properly, they might earn the right for a very time-limited heads-up of say 5 work days or something.
That irresponsibly delayed limited disclosure is even called "responsible disclosure" is an instance of newspeak.
stavros
I make software. If you discover a vulnerability, why would you put my tens of thousands of users at risk, instead of emailing me and having the vulnerability fixed in an hour before disclosing?
I get that companies sit on vulnerabilities, but isn't fair warning... fair?
ang_cire
> why would you put my tens of thousands of users at risk, instead of emailing me and have the vulnerability fixed in an hour before disclosing
You've got it backwards.
The vuln exists, so the users are already at risk; you don't know who else knows about the vuln, besides the people who reported it.
Disclosing as soon as known means your customers can decide for themselves what action they want to take. Maybe they wait for you, maybe they kill the service temporarily, maybe they kill it permanently. That's their choice to make.
Denying your customers information until you've had time to fix the vuln is really just about taking away their agency in order to protect your company's bottom line, by not letting them know they're at risk until you can say, "but we fixed it already, so you don't need to stop using us to secure yourself, just update!"
neilv
I think one point being made is that (in this example) you would've been much less careless about shipping the vulnerability, if you knew you'd be held accountable for it.
With current practice, you can be as sloppy and reckless as you want, and when you create vulnerabilities because of that, you somehow almost push the "responsibility" onto the person who discovers it, and you aren't discouraged from recklessness.
Personally, I think we need to keep the good part of responsible disclosure, but also phase in real penalties for the parties responsible for creating vulnerabilities that are exploited.
(A separate matter is the responsibility of parties that exploit the vulnerabilities. Some of those may warrant stronger criminal-judicial or military responses than they appear to receive.)
Ideal is a societal culture of responsibility, but in the US in some ways we've been conditioning people to be antisocial for decades, including by elevating some of the most greedy and arrogant to role models.
beeflet
Because there is an information disparity I could profit from instead of doing free work for you. Even if that disparity is just "posting the vuln to my blog" to get e-famous.
technion
The problem with a fair warning is that once I email you such a warning, I'll never be able to anonymously publish it, no matter how much you ignore the report. Then the fair thing becomes that I never go public, because I'm confident you'll call lawyers.
rakoo
According to the post above, once you have earned enough reputation you might be given that one-hour window for fixing before disclosure. The issue isn't so much whether there should be a "private" window but how long it lasts, especially when the vendor is a multi-billion-dollar company.
holowoodman
Fair warning through "responsible" disclosure was abused again and again and again. Why should I trust company no. 1,000 after 999 have misled bug reporters, the public, their customers, and the rest of the world about their own "just an hour"?
efdee
Strange wording. You are the one that put tens of thousands of your users at risk. Not the one who discovers the problem.
cenamus
You already put your tens of thousands of users at risk. It's the people putting bugs in the software who do that, not the ones discovering them.
giantg2
That's because nobody actually cares about security, nor do they want to pay for it. I'm a security champion at my company, and security-related work gets pushed off as much as possible to focus on feature work. If we actually wanted security to be a priority, they would employ security champions whose only job was to work on security aspects of the system, instead of having them balance security and feature work, because feature work will always prevail.
Retr0id
It's such a loaded term that I refuse to use it. "vendor-coordinated disclosure" is a much better term, imho
(and in the world of FOSS you might have "maintainer-coordinated" too)
rfl890
What about damage control? I would argue your "anonymous, immediate disclosure" to the public (filled with bad actors) would be rubbing salt in the wound, allowing more people to exploit the vulnerability before it's fixed. That's why nobody publishes writeups before the vuln is fixed. Even if corporations don't fix vulns in time, I can only see harm being done by not privately reporting them.
pixl97
>I can only see harm being done from not privately reporting them
Because you need to look at the fuller picture. If every vuln were published immediately, the entire industry would need to be designed differently. We wouldn't push features at a hundred miles per hour, but would instead have pipelines more optimized for security and correctness.
There is almost no downside currently for me to write insecure shit, someone else will debug it for me and I'll have months to fix it.
IlikeKitties
I mean, to be a bit more reasonable, there's a middle ground here. Maybe disclosing a massive RCE vulnerability in software used by a lot of companies on the 25th of December is not a good idea. And perhaps an open source dev with a security@project email deserves a tad more help and patience than a megacorp with a record of shitty security management. And a company that takes security seriously and is responsive to security researchers' inquiries deserves at least the chance to fix it fast, before it becomes public.
It's just that there are some companies EVERYONE knows are shitty. ASUS is one of them.
holowoodman
You are right about open source developers, who do this on the side as a hobby, and who even when they don't are usually underpaid and understaffed. They do deserve more time and a different approach.
But corporations making big bucks from their software need to be able to fix things quickly. They took money for their software, so it is their responsibility. If they cannot react on a public holiday, tough luck. Just look at their payment terms. Do they want their money within 30 days or 25 work days? Usually it is the former, they don't care about your holidays, so why should anyone care about theirs? Also, the bad guys don't care about their victims' holidays. You are just giving them extra time to exploit. The only valid argument would be that the victims might not be reading the news about your disclosure on a holiday. But since you are again arguing about software used by a lot of companies (as opposed to private users), I don't see a problem there. They also have their guards on duty and their maintenance staff on call for a broken pipe or something.
What's most important is that I'm saying we should revert the "benefit of the doubt". A vast majority of corporations have shitty security handling. Even the likes of Google talk big with their 90-day window from private irresponsible disclosure to public disclosure. And even Google regularly fails to fix things within those 90 days. So the default must be immediate, public, and full disclosure. Only when companies have proven their worth by correctly reacting to a number of those can they be given the "benefit of the doubt" and a heads-up.
Because otherwise, when the default is irresponsible private disclosure, they will never have any incentive to get better. Their users will always be in danger unknowingly. The market will not have information to decide whether to continue buying from them. The situation will only get worse.
delusional
> "Responsible" disclosure is paradoxically named because actually it is completely irresponsible.
It's only paradoxical if you've never considered the inherent conflicts present in everything before.
The "responsible" in "responsible disclosure" relates to the researchers responsibility to the producer, not the companies responsibility to their customers. The philosophical implication is that the product does what it was designed to do, now you (the security researcher) is making it do something you don't think it should do, and so you should be responsible for how you get that out there. Otherwise you are damaging me, the corporation, and that's just irresponsible.
As software guys we probably consider security issues a design problem. The software has a defect, and it should be fixed. A breakdown in the responsibility of the corporation to their customer. "Responsible disclosure" considers it external to the software. My customers are perfectly happy, you have decided to tell them that they shouldn't be. You've made a product that destroys my product, you need to make sure you don't destroy my product before you release it.
The security researcher is not primarily responsible to the public, they are responsible to the corporation.
It's not a paradox, it's just a simple inversion of responsibility.
einsteinx2
> The security researcher is not primarily responsible to the public, they are responsible to the corporation.
Unless the researcher works for the corporation on an in-house security team, what’s your reasoning for this?
Why are they more responsible to the corporation they don't work for than to the people they're protecting (depending on the personal motivations of the individual security researcher, I guess)?
drowsspa
With "simple reversion of responsibility" do you mean your twisted logic of "everyone should think first and foremost about my profits"?
oezi
The problem is just one of liability legislation. Car manufacturers are ordered to recall and fix their cars, but software/hardware companies face far too little pressure. I think customers should be able to get a full refund for broken devices (ones with unfixed CVEs, for example).
mjevans
The devices and their core functionality (including security updates, which are fixes to broken core functionality) must survive the manufacturer and should not require ongoing payments of any type. (Newly created updates? Maybe. Access to corrections of basic behavior? Bug/security fixes should remain free.)
oezi
Yes. I would envision at least 5 years of such update fixes, and another 5 years available for purchase, capped at 20% of the device price.
All manufacturers must pay an annual fee to an insurance scheme which covers the case of insolvency of manufacturers.
okanat
Citing CGPGrey: Solutions that are the first thing you can think of are terrible and ineffective.
Good safety/security culture encourages players to not hide their problems. Corporations are greedy bastards. They'll do everything to hide their security mistakes.
You are also making legitimate issues that are fixable within a month available to everyone, which greatly increases their chances of being exploited.
IlikeKitties
> You are also making legitimate issues that are fixable within a month available to everyone, which greatly increases their chances of being exploited.
I don't think you can fathom the number of people whose primary device is a phone that hasn't seen an Android update in roughly 3 years, and who use it for all their digital services: banking, texting, doomscrolling, porn, ...
Users, especially those most likely to be exploited, are already vulnerable to so much shit, and even when there's a literal finished fix available, these vendors do shit about it. Only when their bottom line is threatened, because even my mom knows "Don't buy anything with ASUS on it, your bank account gets broken into if you do", will we see change.
okanat
> I don't think you can fathom the number of people whose primary device is a phone that hasn't seen an Android update in roughly 3 years, and who use it for all their digital services: banking, texting, doomscrolling, porn, ...
I do. I'm an embedded software developer in a team that cares about having our software up-to-date a lot.
> Users, especially those most likely to be exploited, are already vulnerable to so much shit, and even when there's a literal finished fix available, these vendors do shit about it. Only when their bottom line is threatened, because even my mom knows "Don't buy anything with ASUS on it, your bank account gets broken into if you do", will we see change.
Yes, individuals are quite exploitable. That's why I really like the EU's new regulations, the Cyber Resilience Act and the new Radio Equipment Directive. When governments enforce reasonable disclosure and fixing timelines, and then threaten your company's ability to sell in a market altogether if you don't comply, it works wonders. Companies hate not being able to make money. So all the extra security policies and vulnerability tracking we have been experimenting with, and secure-by-default languages, are now the highest priority for us.
The EU regulation makes sure that you're not going to be sold a router that's instantly hackable within a year. It will also force chip manufacturers to have meaningful maintenance windows, like 5-10 years, due to pressure from ODMs. That's why you're seeing all the smartphone manufacturers extend their support timelines; it is not pure market pressure. They didn't give a fuck about it for more than 10 years. When the EU came with a big stick, though...
Spreading word-of-mouth knowledge works up to a point. Having your entire product line banned from entering a market works almost every time.
layer8
The fact about people running outdated OS versions is totally true, but it also indicates that the risk of being vitally harmed by those vulnerabilities is quite low in reality, if you’re not an individually targeted person. And that’s why not a lot of people care about them.
einsteinx2
I’m not sure that’s a great example as they would be vulnerable to many responsibly disclosed and previously fixed issues anyway since they never update.
In fact they would be just as vulnerable to any new responsibly disclosed issues as they would if they were immediately “irresponsibly” disclosed because again, they never update anyway.
Avamander
> Good safety/security culture encourages players to not hide their problems. Corporations are greedy bastards. They'll do everything to hide their security mistakes.
This is why I despise the Linux CNA for working against the single system that tries to hold vendors accountable. Their behavior is infantile.
hamandcheese
Business idea. Maybe this already exists. A disclosure aggregator/middle man which:
- protects the privacy of folks submitting
- vets security vulns. Everything they disclose is exploitable.
- publishes disclosures publicly at a fixed cadence.
- allows companies to pay to subscribe to an "early feed" of disclosures which impact them. This money is used to reward those submitting disclosures, pay the bills, and take some profit.
A bug bounty marketplace, if you will. That is slightly hostile to corporations. Would that be legal, or extortion?
hashstring
Thought of something along the lines of this too before.
I think there is serious potential for this.
ajcp
It does indeed already exist in many sectors as trade publications and journalism.
darkwater
Isn't that basically HackerOne?
Avamander
No, HackerOne gets paid by the companies, so they're heavily incentivized to work for their benefit.
I've had three really bad experiences with unskilled H1 triagers, so the next vuln I find in a company that uses H1 will go instantly public. I'm never again going to spend that much effort just to get a triager who would actually bother to triage.
asmor
except there you spend several months walking an underpaid person in India who can barely use a shell through reproduction steps, get a confirmation after all that work, and the vendor still ignores you
xmodem
HackerOne, BugCrowd, et al don't appear to make any serious effort to vet reports themselves.
pjmlp
As I keep saying, liability like in any other industry.
Most folks don't put up with faulty products unless by choice, like those 1-euro/dollar shops, so why should software get a pass?
fulafel
Or we could just have regulation or at least the same product liability for software as everything else.
Gys
> I asked ASUS if they offered bug bounties. They responded saying they do not, but they would instead put my name in their “hall of fame”. This is understandable since ASUS is just a small startup and likely does not have the capital to pay a bounty.
:(
eterm
It's understandable for such small companies, like Cisco, which does the same for the myriad of online offerings they've acquired over the years.
Cisco has gone even further, by forgetting about their security announcements page, so any recognition is now long lost into the void.
ang_cire
Cisco pays bounties, tho?
https://sec.cloudapps.cisco.com/security/center/resources/ci...
eterm
When I reported something, and this was probably around 8 years ago, they only had bounties for their equipment, not for "online properties".
I reported a vulnerability in some HR software they owned, but alas I can't even find where it used to live on the internet now.
nubinetwork
> Asus is just a small startup
I'm not sure where they got that from; Asus has been making motherboards and other PC parts since at least the '90s...
int_19h
The words "small startup" in the TFA are a link to https://companiesmarketcap.com/asus/marketcap/
Xelbair
no bug bounty, so onto the black market the exploit goes.
that, or full public disclosure.
hypercube33
Maybe something for Gamers Nexus to light a fire under them.
LadyCailin
I wonder how worried they would get if more people actually started selling exploits on the black market, instead of reporting and not getting a bug bounty. If you don’t offer a bug bounty program in the first place, my gut feeling is that they probably wouldn’t care in that case either. Either way, this is a super good reason to not do business with such a company.
NooneAtAll3
I wonder if centralized "sell program vulnerabilities here" government shops could be set up.
While intelligence agencies are an obvious beneficiary, this would also give the government leverage over capital.
Xelbair
if the fire is lit under them, after their software leads to a widespread hack, they will care.
that's the point: to put pressure on them to CARE.
throaway920181
This makes me never want to buy another ASUS product again.
pohuing
For me it's them lying about providing a way to unlock the bootloader of my soon-to-be 1000€ paperweight (2 Android updates only) called an Asus Zenfone 10.
jeroenhd
If they actually lied about it, that kind of money could be worth it to take them to (whatever your local equivalent of) small claims court over.
FirmwareBurner
Out of curiosity, what got you to spend 1000 euros on a Zenfone 10 when the Samsung S23 was net superior, cheaper, and provides around 5 years of updates? It's not like previous phones from Asus had a better track record. I kept warning people to stay away from the Zenfone, yet the online community kept overhyping it for some reason as the second coming of Christ or something.
GuestFAUniverse
Doesn't surprise me. Their software sucks, and security-wise they are repeat offenders, considering their lack of prevention.
https://www.techspot.com/news/95425-years-gigabyte-asus-moth...
https://www.reddit.com/r/ASUS/comments/tg3u2n/removing_bloat...
https://www.reddit.com/r/ASUS/comments/ojsq80/nahimic_servic...
indrora
Not even the first of firsts:
https://cve.mitre.org/data/board/archives/2016-06/msg00006.h...
(my old blog is long gone from tumblr, but I archived it:)
https://gist.github.com/indrora/2ae05811a2625a6c5e69c677db6e...
antmldr
>so I could see if anyone else had a domain with driverhub.asus.com.* registered. From looking at other websites certificate transparency logs, I could see that domains and subdomains would appear in the logs usually within a month. After a month of waiting I am happy to say that my test domain is the only website that fits the regex, meaning it is unlikely that this was being actively exploited prior to my reporting of it.
This only remains true in so far as no-one directly registered for a driverhub subdomain. Anyone with a wildcard could have exploited this, silent to certificate transparency?
ZoneZealot
A wildcard certificate is only valid for a single label level: '*.example.com.' would not allow 'test.test.example.com.', but would allow 'test.example.com.'. If someone issued a wildcard for '*.asus.com.example.com.', they could present a webserver under 'driverhub.asus.com.example.com.' and be seen as valid.
throaway920181
Yes... I believe you've successfully reworded what your comment's parent said.
a2128
I think the point is that it wouldn't be silent to certificate transparency, because having a certificate for *.asus.com.example.com would be a clear indication of something suspicious
ZoneZealot
Parent comment is making a point that it might have been possible for an attacker to avoid discovery via certificate transparency logs, because anyone 'with a wildcard' could pull off the attack, which is not correct.
I'm pointing out that a wildcard at the apex of your domain (which is what basically everyone means when saying 'a wildcard'), would not work for this attack. Instead if you were to perform the attack using a wildcard certificate, it would need to be issued for '*.asus.com.example.com.' - which would certainly be obvious in certificate transparency logs.
kstrauser
Furthermore:
- Would a self-signed cert work? Those aren’t in transparency logs.
- Does it have to be HTTPS?
MrBruh
Nice idea, just checked it now and can confirm there was nothing suspicious in the wildcard records.
ethan_smith
You're right about the wildcard certificate blind spot. An attacker with a wildcard cert for *.example.com could have exploited this without appearing in CT logs specifically for driverhub.asus.com.* domains. This is why CT log monitoring alone isn't sufficient for detecting these types of subdomain takeover vulnerabilities.
ZoneZealot
It's 'driverhub.asus.com.example.com.' not 'driverhub.example.com.', therefore entirely discoverable in CT logs by searching for (regex): (driverhub|\*)\.asus\.com\.
rkagerer
> I asked ASUS if they offered bug bounties. They responded saying they do not, but they would instead put my name in their “hall of fame”. This is understandable since ASUS is just a small startup[1] and likely does not have the capital to pay a bounty.
93po
alternatively, sarcasm.com ;)
lucb1e
I'm surprised to find that this is just a random person's blog. Was very prepared for an ad page, scalped domain, or some corporation trying to make money out of it. On the sadder side, it doesn't seem like this person makes any use of the domain's name at all; they could have had firstlast.cctld for their blog and given this to someone who wants to put a sarcastic joke on it. But better this than ad farms so I don't blame them for keeping it!
satyanash
> MY ONBOARD WIFI STILL DOESN’T WORK, I had to buy an external USB WiFi adapter. Thanks for nothing DriverHub.
All this, for literally nought
Avamander
It's a nice blogpost though.
ThrowawayTestr
The latest WiFi drivers don't work; you have to use an older version.
josephcsible
> When submitting the vulnerability report through ASUS’s Security Advisory form, Amazon CloudFront flagged the attached PoC as a malicious request and blocked the submission.
Reminder that WAFs are an anti-pattern: https://thedailywtf.com/articles/Injection_Rejection
liendolucas
> This is understandable since ASUS is just a small startup.
A small startup with a market cap of only 15 B. What is more than understandable is that they give a shit neither about their crappy products nor about the researcher who did HUGE work for their customers.
I truly feel bad for researchers doing this kind of work only to get dismissed/trashed like this. So unfair.
The only thing that ought to be done is not to purchase ASUS products.
cobalt60
> MY ONBOARD WIFI STILL DOESN’T WORK, I had to buy an external USB WiFi adapter. Thanks for nothing DriverHub.
I feel sorry for this guy, having been sidetracked from the original issue. Though it'd only take a couple of seconds to note the WLAN chipset from the specs or OEM packaging and then head to station-drivers.
This is also the very reason I dislike Asus: I don't want a BIOS flag/switch that natively interacts with a component in the OS layer.
IshKebab
Wow, no bug bounty is insane. No more ASUS products for me...
_pdp_
they are a "small startup"
charcircuit
They have over 14,500 employees. I wouldn't call that small.
Pesthuf
You missed a small amount of sarcasm there
swinglock
Both Asus software and customer support are atrocious and always have been.
sigmaisaletter
Obligatory "Scumbag Asus" video link:
Invidious https://inv.nadeko.net/watch?v=cbGfc-JBxlY
YouTube https://youtube.com/watch?v=cbGfc-JBxlY
"ASUS emailed us last week (...) and asked if they could fly out to our office this week to meet with us about the issues and speak "openly." We told them we'd be down for it but that we'd have to record the conversation. They did say they wanted to speak openly, after all. They haven't replied to us for 5 days. So... ASUS had a chance to correct this. We were holding the video to afford that opportunity. But as soon as we said "sure, but we're filming it because we want a record of what's promised," we get silence."
Edit: formatting
jeffparsons
So are there any "basically respectable" motherboard manufacturers? Or is there a similar story about each of the big players?
Asking for a friend who is thinking about building a new PC soon.
Arnavion
Asrock (sub-brand of Asus but seemingly independent in the product and dev side) has been fine for me over the ~10 years I've bought their mobos. There was the thing a few months ago with X870 mobos that were apparently frying CPUs, but I think that was not sufficiently proven to be their fault?
That said, in their X670 / B650 they have the same setting as what this article is about, and it could be equally as broken on the software side as Asus's is, but I wouldn't know because I don't use Windows so I disabled it.
oynqr
Asus and ASRock have been separate since 2010.
encom
All the consumer brands are pozzed. My last build (i7-14700K) used an MSI board. Their Secure Boot is still broken. The BIOS setup is a complete mess, and all the settings are reset after a BIOS update. I have to unplug and replug my USB keyboard after a power-off, or it doesn't work. But I insisted on a board without RGB lights, and that limited the selection. Computers are over.
ribcage
There really needs to be an open source project for a PC motherboard.
Barbing
This makes me angry, so can anyone think of a legitimate steelman of their position?
I expect my view is consistent with reality, though: they're chasing profits and getting away with it, so why go on the record and look bad when they can ignore it and spend that time on marketing?
vachina
ASUS doesn’t want to deal with the social media horde, who can and will cherry pick words and take things out of context.
If a person comes to talk business with a camera attached to his head, I know he does not come in good faith.
sigmaisaletter
It's a journalist coming, because you said you want to talk to the journalist, because of the bad press you had before, because you fucked up.
Seems fair to take a camera.
smileybarry
I like ASUS products but I disable the UEFI-installed support app every single time. IIRC it used to be a full ROG Armory Crate installation, which is really annoying to uninstall.
When ASUS acquired the NUC business from Intel, they kept BIOS updates going but at some point a “MyASUS” setup app got added to the UEFI like with their other motherboards. Thankfully, it also had an option to disable and IIRC it defaults to disabled, at least if you updated the BIOS from an Intel NUC version.