
Allianz Life says 'majority' of customers' personal data stolen in cyberattack

Buttons840

I say this often, and it's quite an unpopular idea, and I'm not sure why.

Security researchers, white-hat hackers, and even grey-hat hackers should have strong legal protections so long as they report any security vulnerabilities that they find.

The bad guys are allowed to constantly scan and probe for security vulnerabilities, and there is no system to stop them, but if some good guys try to do the same they are charged with serious felony crimes.

Experience has shown we cannot build secure systems. It may be an embarrassing fact, but many, if not all, of our largest companies and organizations are probably completely incapable of building secure systems. I think we try to avoid this fact by not allowing red-team security researchers to be on the lookout.

It's funny how everything has worked out for the benefit of companies and powerful organizations. They say "no, you can't test the security of our systems, we are responsible for our own security, you cannot test our security without our permission, and also, if we ever leak data, we aren't responsible".

So, in the end, these powerful organizations are both responsible for their own system security, and yet they also are not responsible, depending on whichever is more convenient at the time. Again, it's funny how it works out that way.

Are companies responsible for their own security, or is this all a big team effort that we're all involved in? Pick a lane. It does feel like we're all involved when half the nation's personal data is leaked every other week.

And this is literally a matter of national security. Is the nation's power grid secure? Maybe? I don't know, do independent organizations verify this? Can I verify this myself by trying to hack the power grid (in a responsible white-hat way)? No, of course not; I would be committing a felony to even try. Enabling powerful organizations to hide their security flaws in their systems, that's the default, they just have to do nothing and then nobody is allowed to research the security of their systems, nobody is allowed to blow the whistle.

We are literally sacrificing national security for the convenience of companies and so they can avoid embarrassment.

pojzon

Have you seen Google or Facebook or Microsoft customer databases breached?

The issue is there are too few repercussions for companies making software in shitty ways.

Each data breach should hurt the company in proportion to its size.

Equifax breach should have collapsed the company. Fines should be in tens of billions of dollars.

Then, under such a banhammer, software would be built correctly, security would be cared about, internal audits would be made (real ones), and people would care.

Currently, as things stand, there is ZERO reason to care about security.

slivanes

I’m all for companies not ignoring their responsibility for data management, but I’m concerned that type of punishment could be used as a weapon against competitors. I can imagine that certain classes of useful companies would just not be able to exist. It's a tricky balance to make companies actually care without crippling insurance costs.

tempnew

Microsoft just compromised the National Nuclear Security Administration last week.

Facebook was breached what last month?

Google is an ad company. They can’t sell data that’s breached. They basically do email, and with phishing at epidemic levels, they’ve failed the consumer even at that simple task.

All are too big to fail, so there is only Congress to blame. While people like Ro Khanna focus their congressional resources on the Epstein intrigue, citizens are having their savings stolen by Indian scammers, and there is clearly no interest and nothing on the horizon to change that.

conception

Microsoft lost their root keys to Azure. ¯\_(ツ)_/¯

GlacierFox

Didn't Sharepoint get hacked the other day? :S

jaynate

Yes, but those were on-prem deployments of SharePoint, not Microsoft's infrastructure.

sugarpimpdorsey

Do you think we should have strong legal protections for people who go around your neighborhood trying unlocked car doors and opening front doors (with a backpack full of burglary tools) and when confronted claim they're uh doing it for your security?

Buttons840

That's a failed analogy I won't entertain.

You're trying to say companies should have sole responsibility over their systems. I say, let them have sole legal and financial liability as well then.

xboxnolifes

The great thing about analogies is that they're just analogies. We can have different laws for different things. Cybersecurity vs physical security.

sugarpimpdorsey

Hey your front door was unlocked where is my bug bounty?

Some people still live in places where you can leave your doors unlocked and not worry.

Leave it to the tech industry to bring Internet of Shit locks to your doorstep.

Would you be upset if in the course of their unsolicited work, these white/grey hats found your wife's nudes in the digital equivalent of kicking over a rock? Full legal protection of course.

Never mind if they kept a copy for themselves for later use; they promised to delete them <wink>.

atmosx

If companies faced real consequences, like substantial fines from a regulatory body with the authority to assess damage and impose long-term penalties, their stock would take a hit. That alone would compel them to take security seriously. Unfortunately, most still don't. More often than not, they walk away with a slap on the wrist, if that.

Ylpertnodi

> I say this often, and it's quite an unpopular idea, and I'm not sure why.

> Etc...etc...etc....

Me neither, if that helps.

pengaru

  > I say this often, and it's quite an unpopular idea, and I'm not sure why.
  >
  > Security researchers, white-hat hackers, and even grey-hat hackers should have
  > strong legal protections so long as they report any security vulnerabilities
  > that they find.
  >
  > The bad guys are allowed to constantly scan and probe for security
  > vulnerabilities, and there is no system to stop them, but if some good guys
  > try to do the same they are charged with serious felony crimes.
So let me get this straight, you want to give unsuccessful bad actors an escape hatch by claiming white-hat intentions when they get caught probing systems?

Buttons840

If we did give bad actors an escape hatch, what harm would it do in a world already filled with untouchable bad actors?

doubled112

What about a white hat hacker license? Not sure what the criteria would be, but could it be done?

Then there would be some sort of evidence the guy was a "good guy". Like when a cop shoots your dog and suffers no consequences.

msgodel

The internet is really a lot like the ocean, things left unmaintained on it are swallowed by waves and sea life.

We need something like the salvage law.

bongodongobob

No. You cannot come to my home or business while I'm away and try to break in to protect me unless I ask, full stop. Same goes for my servers and network. It's my responsibility, not anyone else's. We have laws in place already for burglars and hackers. Just because they continue to do it doesn't give anyone else the right to do it "for the children" or whatever reasoning you come up with.

krior

But you would like to be notified by your neighbours if you have left your window open while away, right? Or are you going to sue them for attempted break-in?

The issue is not that it's illegal to put on a white hat, break into the user database, and steal 125 million accounts as proof of a security issue.

The problem is people getting sued for saying "Hey, I stumbled upon the fact that you can log into any account by appending the account-number to the url of your website.".

There certainly is a line separating ethical hacking (if you can even call it hacking in some cases) from prodding and probing at random targets in the name of mischief and chaos.

tjwebbnorfolk

Adding "full stop" doesn't strengthen your case, it just makes it sound like you are boiling the world down to be simple enough for your case to make any sense.

There are a lot of shades of grey that you are ignoring.

Buttons840

You claim sole responsibility. Do you accept sole legal and financial liability?

cmiles74

It seems like passing legislation that imposes harsher penalties for data breaches is the way to go.

thatguy0900

I mean, the problem is people will break things. How do you responsibly hack your local electric grid? What if you accidentally mess with something you don't understand, and knock a neighborhood out? How do we prove you just responsibly hacked into a system full of private information then didn't actually look at a bunch of it?

Buttons840

If a security researcher knocks out the power grid of a town, we should consider ourselves lucky that the vulnerability was found before an opposing nation used it to knock out the power of many towns.

sublinear

If we're strictly talking about software there should be some way to test in a staging environment. Production software that cannot be run this way should be made illegal.

sunrunner

> How do we prove you just responsibly hacked into a system full of private information then didn't actually look at a bunch of it?

Pinky promise?

slashdev

All these endless data breaches could be reduced if we fixed the incentives, but that's difficult. We could never stop it, because humans make mistakes, and big groups of humans make lots of mistakes. That doesn't mean we shouldn't try.

It seems to me a parallel path that should be pursued is to make the impact less damaging. Don't assume that things like birth dates, names, addresses, phone numbers, emails, SSNs, etc are private. Shut down the avenues that people use to "steal identities".

I hate the term "stealing identity", because it implies the victim made some mistake to allow it to happen, when what really happened is that the company was too lazy to verify that the person they're doing business with is actually who they say they are. The onus and liability should be on the company involved. If a bank gives a loan to someone under my name, it should be their problem, not mine. The problem would go away practically overnight if that were changed: companies would be strict about verifying people, because otherwise they'd lose money. Incentives align.

Identity theft is not the only issue with data leaks/breaches, but it seems one of the more tractable.

DicIfTEx

> I hate the term "stealing identity", because it implies the victim made some mistake to allow it to happen, when what really happened is that the company was too lazy to verify that the person they're doing business with is actually who they say they are. The onus and liability should be on the company involved.

You may enjoy this sketch: https://www.youtube.com/watch?v=CS9ptA3Ya9E

slashdev

That was hilarious, thanks for sharing!

MichaelZuo

It is really strange that this is not already the case.

Buttons840

"It's really strange that the status-quo favors those with more wealth and power."

JumpCrisscross

> these endless data breaches could be reduced if we fixed the incentives, but that's difficult

It’s honestly unclear if the damage from data breaches exceeds the cost of eliminating it. The only case where I see that being clear is in respect of national security.

ponector

>> if the damage from data breaches exceeds the cost of eliminating it.

Definitely not. The damage is done to customers, but the costs of eliminating breaches fall on the company. Why should a company invest more if there are no meaningful consequences for it?

JumpCrisscross

> Definitely not. Damage is done to customers

What is the evidence for this?

The cost of identity fraud clocks in around $20bn a year [1]. A good fraction of that cost gets picked up (and thus managed) by financial institutions and merchants.

I’m sceptical we could harden our nation’s systems for a few billion a year.

[1] https://javelinstrategy.com/research/2024-identity-fraud-stu...

AlotOfReading

The more important point is that the people who would have to pay to avoid data breaches (companies) are not the ones who suffer when they happen (the public). It's the same problem as industrial pollution.

afarah1

The solution already exists: MFA and IdP federation.

One factor is something you know (data) and the other is something you possess, or something you are (biometrics).

IdP issues both factors, identification is federated to them.

This kind of happens when you're required to supply a driver's license: technically it's something you possess, and it's a federated ID if checked against a government system, but it can easily be forged with knowledge factors alone.

Unfortunately, banks and governments here use facial recognition for the second factor, which has big privacy concerns, and the tendency, I think, will be toward the federal government as sole IdP. Non-biometric factors might have practical difficulties at scale, but fingerprints would be better than facial recognition. They're already taken in most countries and could easily be federated. Not perfect, but better than the alternatives imo.
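For the "something you possess" factor, the standard non-biometric option is a device-bound one-time code. A minimal sketch in the style of RFC 6238 TOTP (SHA-1 variant), using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238-style time-based one-time password (SHA-1 variant).

    The shared secret lives on the user's device, so a breach of a
    server-side knowledge database alone is not enough to log in.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.digest(key, struct.pack(">Q", counter), "sha1")
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

The point of the sketch is that the verifier stores a per-user secret, not reusable personal data, so leaking it compromises logins but not someone's identity.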

SoftTalker

I'm unconvinced that biometrics are a good approach. You can't change them if a compromise is discovered.

afarah1

I also don't like it but it seems to be what most institutions are going for.

It's a strong factor if required in person, the problems start when accepting it remotely. But having to go to the bank seems like the past.

eptcyka

So what? My data will still get sold online and then agencies/businesses will take advantage of it to do differential pricing. 2fa does not solve the problem of data leaks.

giantfrog

This will never, ever, ever stop happening until executives start going bankrupt and/or to jail for negligence. Even then it won’t stop, but it would at least decrease in frequency and severity.

SoftTalker

Unless there is willful negligence (very difficult to prove) or malicious behavior, I don't think putting people in jail will help. Most of this stuff happens by accident, not by intent.

Financial consequences to the company might be a deterrent, of course then you're dealing with hundreds or thousands of people potentially unemployed because the company was bankrupted by something as simple as a mistake in a firewall somewhere or an employee falling victim to a social engineering trick.

I think the path is along the lines of admitting that cloud, SaaS and other internet-connected information systems cannot be made safe, and dramatically limiting their use.

Or, admitting that a lot of this information should be of no consequence if it is exposed. Imagine a world where knowing my name, SSN, DOB, address, mother's maiden name, and whatever else didn't mean anything.

DanHulton

Imagine using this defence with regards to airline crashes. "The crashes happen by accident not by intent" would be a clearly ludicrous defence, as it ought to be here as well.

If we were serious about preventing these kinds of things from happening, we could.

SoftTalker

If we're OK with regulating SaaS companies (and anyone who connects their information systems to the internet) the way we do the airline industry, that may be an argument.

Bottom line, though: a good many folks here would loudly resist that kind of oversight of their work and their businesses, and for somewhat valid reasons. Data breaches hardly ever cause hundreds of deaths in a violent fireball.

If the consequences of an airline crash were just some embarrassment and some inconvenience for the passengers, crashes would happen a lot more often.

Also, people almost never go to jail for airline crashes, even when they cause hundreds of deaths. We investigate them, and maybe issue new regulations, not to punish mistakes but to try to eliminate the possibility of them happening again.

fn-mote

> Most of this stuff happens by accident not by intent.

Consider the intent of not hiring enough security staff and supporting them appropriately. It looks a lot like an accident. You could even say it causes accidents.

SoftTalker

Hiring more people does not eliminate the chance of mistakes. It may even increase it. I know places that spend lavishly on security (and employee education w/r/t social engineering, etc.) and have still been breached.

lynx97

Haha, I still vividly remember how they tried to make me believe that the GDPR was going to be a big hammer because it would finally make executives liable for breaches. I silently laughed back then. I am still laughing.

I should probably clarify: there were two types of people who claimed that back then, those trying to gaslight us, and those naive enough to actually believe the gaslighting. Severe negligence has to be proven, that is not easy, and there is a lot of wiggle room in court. Executives being liable for what they did during their term is just not coming, sorry kids.

time4tea

Mandatory £1000 fine per record lost. That would be company-terminal for companies with millions of customers, and that's right. Right now it's just cheaper not to care, then send a trite apology email when all the data inevitably gets stolen.

The status quo, in which nobody gives a crap and the regulators do literally nothing, cannot continue. In the UK, the ICO is as effective as Ofwat (the regulator that was just killed for being pointlessly and dangerously useless).

(Edit: fix autocorrect)
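To sketch the scale of that proposal (the £1000 figure is the parent's; the roughly 1.4 million customer count is the number reported for Allianz Life in coverage of this breach):

```python
def breach_fine(records_lost, fine_per_record=1_000):
    """Mandatory fine scaling linearly with the number of records lost."""
    return records_lost * fine_per_record

# Allianz Life reportedly holds data on about 1.4 million customers,
# so losing the lot at £1000/record would mean a fine on the order of:
print(breach_fine(1_400_000))  # 1400000000, i.e. ~£1.4bn
```

Whether linear scaling is the right curve is debatable, but it makes the "company-terminal" claim concrete.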

grapescheesee

A mandatory amount paid directly to the customer of record, instead of fractions of a cent on the dollar in year-long class action settlements, might help the disenfranchised 'customers'.

sunrunner

> Would be company-terminal

What happens to customers of the affected company in this case? Does this not now pass on a second problem to the people actually affected?

unsupp0rted

Would be national economy terminal too

amai

Actually Allianz offers an insurance against cyberattacks like this: https://www.allianz.de/aktuell/storys/cyberschutz-knoten-im-...

7373737373

Insurance is part of the problem - companies prefer to insure themselves rather than employ and support the research and development of secure software. As long as this is the more economical thing to do, nothing will change.

ok123456

Good to see the contractually required endpoint protection was working.

ofjcihen

That's partially due to SF devs not knowing enough about the product, but it's also due to Salesforce treating security as an afterthought. For a poorly configured implementation, it takes two web requests as an unauthenticated user to learn all of the data you can pull down, and then to pull it down.

Don't even get me started on the complete lack of monitoring. I basically had to design an entire security monitoring setup outside of Salesforce using their (absolutely awful) logs to get anything close to usable.

Edit: here's a guide someone wrote: https://www.varonis.com/blog/misconfigured-salesforce-experi... Seriously, you can automate this and then throw it at the end of recon to find SF sites. I've done it.
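A hedged sketch of how the first of those two requests can be automated, modeled on the publicly documented misconfiguration of Salesforce Experience ("Community") sites that the linked Varonis write-up covers. The endpoint paths and payload shape below are illustrative assumptions, not a tested exploit:

```python
import json
import urllib.error
import urllib.parse
import urllib.request

# Guest-accessible Aura endpoint locations commonly probed in the wild (assumed).
AURA_PATHS = ("/s/sfsites/aura", "/sfsites/aura", "/aura")

def aura_payload():
    """A deliberately empty Aura message: a live endpoint typically answers
    it with a framework-level JSON error rather than a plain 404."""
    return {
        "message": json.dumps({"actions": []}),
        "aura.context": json.dumps({"mode": "PROD", "fwuid": ""}),
        "aura.token": "undefined",
    }

def looks_like_aura(base_url, timeout=5):
    """Request 1 of 2: does this site expose an unauthenticated Aura endpoint?"""
    data = urllib.parse.urlencode(aura_payload()).encode()
    for path in AURA_PATHS:
        url = base_url.rstrip("/") + path
        try:
            with urllib.request.urlopen(url, data=data, timeout=timeout) as resp:
                body = resp.read().decode(errors="replace")
        except urllib.error.HTTPError as err:
            if err.code == 404:
                continue
            body = err.read().decode(errors="replace")
        except urllib.error.URLError:
            continue
        if "aura" in body.lower():
            return True  # Request 2 would then enumerate guest-readable objects.
    return False
```

Only ever point something like this at systems you are authorized to test.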

jmkni

> “On July 16, 2025, a malicious threat actor gained access to a third-party, cloud-based CRM system used by Allianz Life,” referring to a customer relationship management (CRM) database containing information on its customers.

So who the hell was the "third-party, cloud-based CRM system"?

ofjcihen

Another article mentioned Salesforce which has a knack for being poorly secured on the data owners side.

I’ve got another reply here with details but suffice it to say misconfigured Salesforce tenants are all over the internet.

eclipticplane

Even if SFDC is configured correctly, any sufficiently large or old instance of SFDC may have dozens of other systems plugged into it. Many of which get default access to everything because SFDC security and permission configuration is so byzantine.

milesskorpen

Does it matter? Wasn't a technical breach of their systems, but instead social engineering.

poemxo

If a cloud-based system doesn't support technologies that deter social engineering, it's still a problem. Some login portals to check your credit history don't even support 2FA.

So I think it matters, I think access systems should be designed with a wider set of human behaviors in mind, and there should be technical hurdles to leaking a majority of customers' personal information.

politelemon

It matters. That's often a generic phrasing used to make it look like it was a partner's fault. But very often it is simply a platform that was managed by and configured by the company itself, which would mean more than just social engineering. Take a look at the language used in other breaches and it's very similarly veiled.

MontagFTB

Depending on the CRM, is this not a HIPAA violation?

marcusb

Why would it be? Is Allianz Life a covered entity? If so, why would it depend on the specific CRM being used?

tfehring

Allianz Life publishes a HIPAA privacy notice at [0], which states:

> This notice applies to individuals who participate in any of the following programs under the closed line of business:

> • Long term care

> • Medical

> • Medical supplemental

> • Hospital income

> • Cancer and disease specific coverage

> • Dental benefits

> The Covered Entity’s actions and obligations are undertaken by Allianz employees as well as the third parties who perform services for the Covered Entity. However, Allianz employees perform only limited Covered Entity functions – most Covered Entity administrative functions are performed by third party service providers.

It sold long term care insurance policies until 2010.

(Disclosure, I happen to have worked at Allianz Life a long time ago, though I have no nonpublic information about any of this.)

[0] https://www.allianzlife.com/-/media/Files/Allianz/PDFs/about...

nothercastle

The punishment for poor data security is so low that it's not worth paying for it at most companies. And of course the government makes it nearly impossible to change your SSN, yet still uses it as a means of verification, so almost everyone is exposed by now.

barbazoo

Depending on which entity, this could affect hundreds of millions of people.

urquhartfe

Fundamentally the issue is that companies are just not investing enough in engineering and IT. When you farm out this work to offshore workers on a shoestring budget, the result is utterly predictable.

alephnerd

This isn't an offshore situation though.

I've worked with Allianz's cybersecurity personas previously on EBRs/QBRs, and the issue is they (like a lot of European companies) are basically a confederation of subsidiaries with various independent IT assets and teams, so shadow IT abounds.

They have subsidiaries numbering in the dozens, so there is no way to unify IT norms and standards.

There is an added skills issue as well (most DACH companies I've dealt with have only just started working on building hybrid security posture management - easily a decade behind their American peers), but it is a side effect of the organizational issues.

insomniacity

> They have subsidiaries numbering in the dozens, so there is no way to unify IT norms and standards.

That is their choice though - they could setup a technology services subsidiary, and then provide IT services to the other subsidiaries, transparently to the end users in those subsidiaries.

bee_rider

Ignoring the whole pain in the ass this will be for their customers—at what point does this become a tragedy of the commons failure? Actually, I don’t know the case-law on this sort of stuff. If your bank authenticates using credentials that are generally publicly known by black-hats for most people—stuff like your social security number and some random bits of trivia (mothers maiden name)—shouldn’t they be responsible for any breaches?

rr808

Kinda frustrating: over the last few months I've had to upload bank statements and payslips to rent a house and also to refinance a mortgage. I know all my financial details are out there floating around and will inevitably get leaked. I should be able to upload them somewhere temporary where the docs are checked and then safely deleted.