
Ask HN: Our AWS account got compromised after their outage


28 comments

October 21, 2025

Could there be any link between the two events?

Here is what happened:

Some 600 instances were spawned within three hours before AWS flagged it and sent us a health event. Numerous domains were verified, and we could see that an SES quota increase request had been made.

We are still investigating the vulnerability at our end. Our initial suspect list has two suspects: an exposed API key, or console access where MFA wasn't enabled.
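
Not the poster's method, just a minimal boto3 sketch for checking the second suspect: pull the IAM credential report and flag console users without MFA plus any still-active access keys. Column names assume the standard credential report layout.

```python
import csv
import io
import time

import boto3

iam = boto3.client("iam")

# Ask IAM to (re)generate the account credential report, then download it
# once it's ready.
iam.generate_credential_report()
while True:
    try:
        report = iam.get_credential_report()
        break
    except iam.exceptions.CredentialReportNotReadyException:
        time.sleep(2)

# Content is a CSV (bytes); column names follow the standard report layout.
rows = csv.DictReader(io.StringIO(report["Content"].decode("utf-8")))
for row in rows:
    console_no_mfa = (row["password_enabled"] == "true"
                      and row["mfa_active"] == "false")
    key_active = (row["access_key_1_active"] == "true"
                  or row["access_key_2_active"] == "true")
    if console_no_mfa or key_active:
        print(row["user"],
              "console-login-without-MFA" if console_no_mfa else "",
              "active-access-key" if key_active else "")
```

Any user flagged on both counts would be a good place to start rotating credentials.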

timdev2

I would normally say "that must be a coincidence," but I had a client account compromised as well, and it was very strange:

The client was a small org, and two very old IAM accounts suddenly had recent (yesterday) console logins and password changes.

I'm investigating the extent of the compromise, but so far it seems all they did was open a ticket to turn on SES production access and increase the daily email limit to 50k.

These were basically dormant IAM users from more than 5 years ago, and it's certainly odd timing that they'd suddenly pop on this particular day.

tcdent

Smells like a phishing attack to me.

Receive an email that says AWS is experiencing an outage. Log into your console to view the status, authenticate through a malicious wrapper, and compromise your account security.

timdev2

These were accounts that shouldn't have had console access in the first place, and were never used by humans to log in, AFAICT. I don't know exactly what they were originally for, but they were named like "foo-robots" and were very old.

At first I thought maybe some previous dev had set passwords for troubleshooting, saved those passwords in a password manager, and then got owned all these years later. But that's really, really unlikely. And the timing is so curious.

SoftTalker

Good point. Phishers would certainly take advantage of a widely reported outage to send emails related to "recovering your services."

Even cautious people are more vulnerable to phishing when the message aligns with their expectations and they are under pressure because services are down.

Always, always log in through bookmarked links or by typing the URL manually. Never use a link in an email unless it's in direct response to something you initiated, and even then examine it carefully.

CaptainOfCoit

Is it possible that people who had already managed to get access (and confirmed it worked) have been waiting for any hiccup in AWS infrastructure in order to hide among the chaos when it happens? Maybe the access token was exposed weeks or months ago, but instead of acting right away, they sat idle until something big was going on.

Certainly feels like a strategy I'd explore if I were on that side of the aisle.

iainctduncan

Absolutely. I work in due diligence, and we are hearing about attackers laying the groundwork and then waiting for company sales. The sophisticated ones are certainly smart enough to take advantage of this kind of thing, even prepping in advance and waiting for golden opportunities.

jinen83

I am from the same team and I can concur with what you are saying. About two years ago I did see a warning, from some random person in an email, about the same key that was used in today's exploit, but there was no exploitation until yesterday.

ThreatSystems

CloudTrail events should be able to show WHAT created the EC2s. Off the top of my head, I think it's the RunInstances event.
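
A minimal sketch of that lookup with boto3, assuming the activity is still inside CloudTrail's 90-day event history and that you repeat it for each affected region:

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

# Who/what called RunInstances in the last 48 hours in this region.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
start = datetime.now(timezone.utc) - timedelta(hours=48)

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "RunInstances"}],
    StartTime=start,
)
for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        print(event["EventTime"],
              detail["userIdentity"].get("arn"),          # which principal
              detail["userIdentity"].get("accessKeyId"),  # which credential
              detail.get("sourceIPAddress"))              # from where
```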

jinen83

This is helpful. I will look for the logs.

Also, some more observations below:

1) Some 20 organisations were created under our Root, all with email IDs on the same domain (co.jp) - see the sketch after this list for one way to confirm these

2) The attacker had created multiple Fargate templates

3) They created resources in 16-17 AWS regions

4) Quota change requests were made for SES and AWS Fargate resource rate, plus SageMaker notebook maintenance - we have no need for any of these (we received an email from AWS for all of this)

5) In some of the emails I started seeing a new name added (random name @outlook.com)
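
For point 1, a small sketch for confirming those member accounts and who created them, assuming you run it from the management (Root) account with organizations:ListAccounts and cloudtrail:LookupEvents permissions, and that the creations are still inside the 90-day event history:

```python
from datetime import datetime, timedelta, timezone

import boto3

# List every member account in the organization; attacker-created ones
# should stand out by JoinedTimestamp and the co.jp email domain.
org = boto3.client("organizations")
for page in org.get_paginator("list_accounts").paginate():
    for account in page["Accounts"]:
        print(account["JoinedTimestamp"], account["Email"],
              account["Status"], account["Id"])

# Cross-check which principal issued the CreateAccount calls; Organizations
# is a global service, so its CloudTrail events land in us-east-1.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
start = datetime.now(timezone.utc) - timedelta(days=7)
pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "CreateAccount"}],
    StartTime=start,
)
for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])
```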

ThreatSystems

It does sound like you've been compromised by an outfit that has automation to run these types of activities across compromised accounts. A Reddit post[0] from 3 years ago seems to describe similar activity.

Do what you can to triage and see what's happened, but I would strongly recommend getting a professional outfit in ASAP to remediate (if you have insurance, notify them of the incident as well; they can often offer services to support remediation), and notify AWS that an incident has occurred.

[0] https://www.reddit.com/r/aws/comments/119admy/300k_bill_afte...

ThreatSystems

I'm officially off AWS so I don't have any consoles to check against, but I'm back on a laptop.

Based on the docs and the concern about this happening to someone else, I would probably start with the following (a rough scripted version follows the list):

1. Check who/what created those EC2s[0] using the console to query: eventSource:ec2.amazonaws.com eventName:RunInstances

2. Based on the userIdentity field, query the following actions.

3. Check if someone manually logged into Console (identity dependent) [1]: eventSource:signin.amazonaws.com userIdentity.type:[Root/IAMUser/AssumedRole/FederatedUser/AWSLambda] eventName:ConsoleLogin

4. Check if someone authenticated against Security Token Service (STS) [2]: eventSource:sts.amazonaws.com eventName:GetSessionToken

5. Check if someone used a valid STS Session to AssumeRole: eventSource:sts.amazonaws.com eventName:AssumeRole userIdentity.arn (or other identifier)

6. Check for any new IAM Roles/Accounts made for persistence: eventSource:iam.amazonaws.com (eventName:CreateUser OR eventName:DeleteUser)

7. Check if any already-vulnerable IAM Roles/Accounts were modified to be more permissive [3]: eventSource:iam.amazonaws.com (eventName:CreateRole OR eventName:DeleteRole OR eventName:AttachRolePolicy OR eventName:DetachRolePolicy)

8. Check for any access keys made [4][5]: eventSource:iam.amazonaws.com (eventName:CreateAccessKey OR eventName:DeleteAccessKey)

9. Check if any production / persistent EC2s have had their IAMInstanceProfile changed, to allow for a backdoor using EC2 permissions from a webshell/backdoor they could have placed on your public facing infra. [6]

etc. etc.
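
A rough boto3 sketch that automates checks 1-8 (plus the instance-profile changes in 9) across regions, under the assumption that everything happened within CloudTrail's 90-day event history. Note that global IAM/STS events only show up in us-east-1, and the event names below are just the standard CloudTrail ones for these actions:

```python
from datetime import datetime, timedelta, timezone

import boto3
from botocore.exceptions import ClientError

# Event names matching the checklist above: instance creation, console
# logins, STS usage, IAM persistence, and instance-profile tampering.
SUSPICIOUS_EVENTS = [
    "RunInstances", "ConsoleLogin", "GetSessionToken", "AssumeRole",
    "CreateUser", "DeleteUser", "CreateRole", "DeleteRole",
    "AttachRolePolicy", "DetachRolePolicy",
    "CreateAccessKey", "DeleteAccessKey",
    "AssociateIamInstanceProfile", "ReplaceIamInstanceProfileAssociation",
]

start = datetime.now(timezone.utc) - timedelta(days=3)

# Global IAM/STS events are delivered to us-east-1; everything else is regional.
for region in boto3.session.Session().get_available_regions("cloudtrail"):
    cloudtrail = boto3.client("cloudtrail", region_name=region)
    for name in SUSPICIOUS_EVENTS:
        try:
            pages = cloudtrail.get_paginator("lookup_events").paginate(
                LookupAttributes=[{"AttributeKey": "EventName",
                                   "AttributeValue": name}],
                StartTime=start,
            )
            for page in pages:
                for event in page["Events"]:
                    print(region, event["EventTime"], event["EventName"],
                          event.get("Username"), event.get("AccessKeyId"))
        except ClientError:
            # Region not enabled for this account, or access denied; skip it.
            continue
```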

But if initial investigations show you have had a compromise, it's probably worthwhile getting professional support to do a thorough audit of your environment.

[0] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/c...

[1] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/c...

[2] https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-...

[3] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/s...

[4] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credenti...

[5] https://research.splunk.com/sources/0460f7da-3254-4d90-b8c0-...

[6] https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_R...

sylens

RunInstances

sousastep

A couple of folks on Reddit said that while they were refreshing during the outage, they were briefly logged in as a whole different user.

__turbobrew__

Maybe DynamoDB was inconsistent for a period and, since that backs IAM, credentials got scrambled? Do you have references for this? Because if it's true, that is really, really bad.

afandian

Got references? This is crazy.


gwbas1c

Years ago I worked for a company where customers started seeing other customers' data.

The cause was that a bad hire decided to do a live debugging session in the production environment. (I stress "bad hire" because after I interviewed them, my feedback was that we shouldn't hire them.)

It was kind of a mess to track down and clean up, too.

CaptainOfCoit

> A couple of folks on Reddit said that while they were refreshing during the outage, they were briefly logged in as a whole different user.

Didn't ChatGPT have a similar issue recently? Would sound awfully similar.

sunaookami

Steam also had this, classic caching issue.

liviux

A friend of a friend knows a friend who logged in to Netflix root account. Source: trust me bro

yfiapo

Highly likely to be a coincidence. Typically it's an exposed access key. An exposed password for console access without MFA happens too, but is less common.

itsnowandnever

I can't imagine it's related. If it is related, hello Bloomberg News or whoever will be reading this thread, because that would be a catastrophic breach of customer trust that would likely never fully return.

jddj

You say that, but Azure and Okta have had a handful of these and life over there has more or less gone on.

Inertia is a hell of a drug

geor9e

If I were a burglar holding a stolen key to a house, waiting to pick a good day, a city-wide blackout would probably feel like a good day.

brador

A lot of keys and passwords were panic-entered on insecure laptops yesterday.

Do not discount the possibility of regular malware.

bdcravens

Any chance you did something crazy while troubleshooting the downtime (before you knew it was an AWS issue)? I've had to deal with a similar situation, and in my case I was lazy and pushed a key to a public repo. (Not saying you did; just saying that in my case it was a leaked API key.)
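
If you want to rule the leaked-key scenario out, here's a quick sketch that scans a repo's full history for AWS access key IDs. The AKIA/ASIA prefixes are the documented key-ID formats; everything else is just an illustration, and a dedicated scanner such as gitleaks or trufflehog will do a far more thorough job:

```python
import re
import subprocess

# Long-term AWS access key IDs start with "AKIA", temporary ones with "ASIA",
# followed by 16 uppercase letters/digits.
KEY_PATTERN = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

# Dump every patch in the repository's history (all branches) and scan it.
log = subprocess.run(
    ["git", "log", "-p", "--all"],
    capture_output=True, text=True, errors="replace", check=True,
).stdout

for key_id in sorted(set(KEY_PATTERN.findall(log))):
    print("possible leaked access key ID:", key_id)
```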

klysm

Sounds like a coincidence to me

AtNightWeCode

It's not uncommon for machines to get exposed during troubleshooting. Just look at the CrowdStrike incident the other year: people enabled RDP on a lot of machines to "implement the fix," and now many of those machines are more vulnerable than if they had never installed that garbage security software in the first place.