Shifting Cyber Norms: Microsoft security POST-ing to you
63 comments
· January 23, 2025
diggan
mjl-
what if microsoft decides they not only want to load the URL in messages, but also click links and click buttons? presumably this is to detect bad/dangerous content. bad people will also just put that dangerous stuff behind a link or button if it's that easy to evade their checking.
or why exactly are they following those links again? perhaps it is for previews instead of "security"?
from the article:
> Over time, it also became OK for software to visit links in email to find out what was behind them.
why would that be OK? if i email a secret link to someone, i fully expect it to stay between the recipient and me. not some company reading along. but that's why i don't have accounts with these types of companies...
RamblingCTO
Because of this I always implement magic links with an additional link to click. Robust and easy and not a thing you do 100 times a day anyway.
formerly_proven
What about double opt-in links? What about unsubscribe links? Do these need to be triple opt-in now because of Microsoft?
diggan
> now because of Microsoft?
I'm not sure why people believe this to be a new issue; I think the first time I implemented a workaround for it myself was about 4 years ago, and here is a Stack Overflow question about it from 7 years ago: https://stackoverflow.com/questions/43443947/how-to-stop-e-m...
Outlook/Microsoft are not the only ones who do this either; when I first had to work around it myself, it was because it was happening in Gmail, if I remember correctly.
formerly_proven
Accessing links (GET), yes, executing JS and POSTing is the new thing. Putting an XHR in DOMContentLoaded was the workaround for the behavior you are talking about.
mschuster91
> You're not supposed to send off a POST request directly as the page has been loaded to exchange the tokens, but require the user to click on a button to confirm the activity.
Since requiring the user to do anything drastically reduces the go-through rate, the current "standard" is to use either a meta-refresh tag, an HTTP Refresh header [1], setting location.href from JS, or a <form> that gets submitted via JavaScript, with a <button> inside a <noscript> for those that don't do JS.
That used to be enough, but nowadays content scanners run a legitimate embedded Chromium and whatnot, which obviously trips at least the refresh tag/header and the automated JavaScript solutions. And that's the problem.
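One of the variants described above (the JS-submitted <form> with a <noscript> fallback) can be sketched as a small page generator. The `/session` endpoint and the token field name here are hypothetical, just to make the shape concrete:

```javascript
// Sketch of an interstitial page: a <form> auto-submitted via JavaScript,
// with a <noscript> button as fallback for clients that don't run JS.
// The "/session" action and "token" parameter are made-up names.
function interstitialPage(token) {
  return `<!doctype html>
<html>
  <body>
    <form id="confirm" method="POST" action="/session">
      <input type="hidden" name="token" value="${token}">
      <noscript><button type="submit">Continue</button></noscript>
    </form>
    <script>document.getElementById("confirm").submit();</script>
  </body>
</html>`;
}
```

The catch, as the comment notes, is that a scanner running an embedded Chromium executes that `submit()` call just like a real browser would, which is exactly how the token gets consumed.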
diggan
> And that's the problem.
Which, again, is a known problem that has existed for many years and that we have a workaround for. If you want to trigger POST requests from URLs sent via email or other communication platforms, you need to make the user confirm the activity.
Users who want to sign in won't suddenly refuse to log in because they have to press a button after the page loads instead of being logged in automatically. They have a clear purpose when clicking the link, regardless of whether there's one more button to click afterwards.
And besides, if you don't implement it like that, you'll have even more users dropping off, because they literally won't be able to sign in at all, no matter how badly they want to.
im3w1l
Won't this shift the meta for malware to a "button" that is 99.9% transparent and overlays the whole page, in the hope that someone will accidentally click on it when trying to select text or something? A lot better than 0-click, I guess, but it might catch people now and then.
mschuster91
> Which again, it's a known problem, existed for many years already and we have a work around for.
Not really. Other than Google's crawlers it was rare to see full browser engines or even JS runtimes as part of "security" solutions.
JimDabell
If a page contains JavaScript that makes a POST request when the page loads, then it’s the developers of that page that are violating HTTP norms, not the developers of software that loads such pages. POSTs are unsafe requests that should be made as part of a user’s intent. Following a link should always be safe. Microsoft aren’t responsible for this problem – the site developers are.
We already went through this with Google Web Accelerator and 37Signals twice. They couldn’t accept that mere links were a bad idea for deleting things, and GWA came along and deleted their users’ data by following links. They tried to detect and block GWA instead of following HTTP rules and it happened again.
Links are supposed to be safe. Ignore this at your peril.
Avamander
Microsoft making this change also rather clearly indicates that malicious actors are actively abusing these spec violations (by walling their phishing pages behind a simple POST on page load).
grayhatter
Microsoft knows what they're doing. The problem is users don't. I don't know which you care about more, your users' convenience or their safety and security. But no chance in hell I'd allow MS to dictate the terms of the security of my services.
I'd solve this by warning users who attempt to reuse consumed links that the link has already been used, and likely list broken email providers that I suspect are using software that breaks this. And then wait. Let them complain to their IT, or whoever is responsible for picking broken software.
If I were feeling really petty: the clown that wrote this link-scanning software isn't in charge of the networking, so you could eventually enumerate the subnets that MS uses and blacklist them. Then the links might actually get to users connecting from non-MS data centers.
You could roll over and decrease your security so that MS can claim they increased theirs... But you shouldn't! They can only get away with turning security into a negative sum game if you play along. Please don't?
diggan
> I'd solve this by warning users who attempt to reuse consumed links that the link has been used already, and likely list broken email providers that I suspect are using software that breaks this. And then wait. Let them complain to their IT, or whomever is responsible for picking broken software.
The thing is, most websites already know how to work around that particular issue (https://news.ycombinator.com/item?id=42804447), so when users notice that something breaks with just your service and not everyone else's, you can say "Outlook is shit" all you want; the users will blame you for it, since you weren't able to fix what others could, for better or worse.
> You could roll over and decrease your security so that MS can claim they increased theirs
What part of your security would decrease if you "roll over", and what exactly does "roll over" entail here? Websites that have dealt with this issue implemented what I linked to above; they're no more or less secure than the websites that didn't manage to work around it, they're just less buggy for Outlook users, if anything.
grayhatter
Your description isn't a workaround, it's the exact same buggy behavior, just implemented in your JS instead of tripped by a malware scanner. GET or POST is less important than user action: if your code consumes a token without user action, it doesn't matter whether that happens server side or client side, GET or POST.
If Outlook is clicking on links or buttons that generate POST requests, then Outlook is broken. If Outlook is loading a page, and the page sends a POST without further interaction, that's just as bad, if not worse, than expiring the token on that same GET.
I wouldn't call that a work around, would you?
diggan
> If outlook is clicking on links or buttons that generate POST requests
It isn't, though; it's loading URLs it finds in emails and then running scripts on those loaded websites, like many "modern" scrapers/crawlers.
So if your JavaScript on that URL automatically does POST requests on load, those will happen automatically when their scanner loads the website.
But if you instead don't do a POST request on JavaScript load, and do that request when the user presses an actual button on the loaded website, the scanner won't trigger that button automatically (that would be a whole new level of craziness, we're not there (yet))
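The client-side difference is small but decisive. A minimal sketch, with a hypothetical `redeem` callback standing in for the actual POST:

```javascript
// Wrong (scanner-triggerable): fire the exchange as soon as the page loads.
//   document.addEventListener("DOMContentLoaded", () => redeem());

// Right: fire the exchange only on an explicit user gesture.
function wireConfirm(button, redeem) {
  let fired = false;
  button.addEventListener("click", () => {
    if (fired) return; // ignore double clicks
    fired = true;
    redeem(); // e.g. fetch("/api/redeem", { method: "POST", ... }) — hypothetical endpoint
  });
}
```

A scanner that merely loads and executes the page never produces the click, so the token survives until a real user presses the button.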
So yes, it is a workaround. I've implemented this tens of times already, because it's been an issue for a long time.
Edit: The submission article also makes it abundantly clear that this is the very problem they're suffering from:
> Microsoft’s security scanning will now not only visit the links you mail out, they will also run the JavaScript on your page, and will also send out any POSTs generated by that JavaScript:
The problem is that their JavaScript automatically issues POST requests on load, not that Outlook loads and executes JavaScript on the page (although that sucks too, don't get me wrong).
philipwhiuk
That's nice. Here's how that will go:
User: Hey IT, a site I'm trying to use says that I can't use Outlook, can we switch provider
IT: Heck no. Are there any alternatives?
User: I guess I'll try to find them.
grayhatter
That is only your problem if you make it your problem. Ideally, you wouldn't. But like I already said, it depends on whether you're willing to sacrifice the security of all your users because, money I guess?
Eventually you have to choose between doing what's right and doing what's expedient. I can't draw your line for you. Thankfully, I've always been able to draw mine at protecting users from abuse.
lexicality
Users care much more about convenience than security. If you made a phone that wiped itself after 3 incorrect pin attempts then you'd have a lot of very angry users wearing gloves. "It's for your own security!" wouldn't appease them.
ahoka
What prevents bad actors from detecting that the request is coming from MS and returning pictures of kittens, but the most wicked malware otherwise? You don’t actually have to answer.
richardwhiuk
Microsoft can try quite hard to make their requests look like a user.
sidewndr46
Microsoft owns the domain & owns the client. They are the user in this case
rad_gruchalski
> Microsoft knows what they're doing
Right. Thanks for consuming my SSO magic links.
NotYourLawyer
> enumerate the subnets that MS uses, and black list them. Then the links might actually get to users connecting from non MS data centers.
Or Microsoft might start blocking your email altogether for circumventing security procedures or whatever.
JohnMakin
Lots of things consume URL’s other than security scanners in similar ways - as smarter people than me have commented, the solution most use (and I did recently myself) is to make the signup/signin link direct to a frontend page which makes them click to confirm. Yea, that still sucks, but you can have fun with things like this - one of my favorite things to do to figure out if an employer is snooping on my private messages in an app like say, slack, is to post a honeypot link in a DM to myself - on slack at least, it will consume the link every time the message thread is opened. So if I know I didnt open it, and I get an alert, I know someone or something has read the message.
There’s tons of stuff out there like this, just assume apps are being disrespectful and plan accordingly.
michaelt
I can see how Microsoft ended up in this position, although it's unfortunate.
E-mails show up with obfuscated links all the time; you can't detect phishing just from the URL if legitimate e-mails are using https://us123.list-manage.com/track/click?u=87958734095826 as a URL.
You need to load the URL, if you want to check if a fake Google login page shows up or something like that.
And the phishers are trying to evade your URL scanner. If your URL scanner has an identifiable user-agent, or doesn't execute javascript, or there's anything else that makes it identifiable, they'll show a boring legitimate page to your scanner and only phish the real users.
As I understand it, self-serve ad networks have similar challenges detecting ads placed by scammers.
aaronax
It is an arms race that will end poorly for everyone. Whatever the legit companies do to make their links work, the bad guys will copy:
Good guys send out email links -> Bad guys start sending out email links -> Security scanners start checking link contents
Good guys hide behind automatic POST -> Bad guys start hiding behind automatic POST -> Security scanners start checking behind automatic POST
Good guys hide behind button click (we are here) -> Bad guys start hiding behind button click -> Security scanners start checking behind button click
Good guys hide behind link that doesn't work until 10 minutes after sending -> ...
Good guys hide behind captcha -> ...
Good guys hide behind plain text code that has to be copy/pasted from email -> ...
Eventually the security scanners will be indistinguishable in function and intelligence from a human being.
Might as well just merge it with your personal AI assistant and it will do everything for you...including getting scammed by the bad guys.
arkh
Wait, from reading the article it looks like the link they send to their users sends them to a page with some JavaScript which automatically POSTs data for them. Which MS will duly trigger: your "POST" is in effect a simple GET for most users with JavaScript on.
What you should do is send your user to a page with a form requiring a manual action from them.
badmintonbaseba
This just highlights how the security scanning is just theater though. So bad actors can now just have their evil content behind a form, and users get accustomed to the double opt-in workflow anyway.
arkh
Yup, the best thing would be to either have a form in your email / SMS or, let's get crazy, implement POST links (and maybe DELETE ones too). Your client knows following it will change things (so security clients and prefetchers won't open it), the server receives a POST request, and it's a one-click action for the user.
szundi
This already happened to me with, for example, Gmail on iOS: it wanted to open a login link inside its own browser popup (shame), thus using up the one-time login right away. I jumped out to Safari, my browser of choice, but the login link had already been expired by the Gmail window.
ohc
I agree that email scanners really shouldn't be sending POST requests... but there is already a solution for this; follow RFC 8058 - https://datatracker.ietf.org/doc/html/rfc8058
bawolff
Does that mean they are automatically pressing the "unsubscribe" link?
lexicality
I suppose that explains why so often I have to click a "yes really unsubscribe" button once the link loads. How annoying.
dspillett
Yep, along with many “magic link” emails now going to a stub form where you click a button to continue via POST request, instead of going direct to the target resource. Not that this is effective if the scanners are now submitting POST requests as well as following simple links.
Though also the “I'm sure I want to unsubscribe” is in part to deliberately add extra steps to unsubscribing.
bawolff
Can't wait to have to fill out a captcha to unsubscribe.
diggan
> emails now
Not sure what time-frames we're talking about here, but I remember having to work around this particular issue more than 4 years ago, and I'm sure it was an issue before that too. It's been like this for a long time.
philipwhiuk
Yes.
Its_Padar
Executing JavaScript on random pages seems like quite a bad idea; spammers could include links to JavaScript that does resource-intensive things, like that small and sketchy trend of embedding Bitcoin miners in websites.
efitz
NB:
1. Most “safe browsing” browser features do this to some extent
2. Any browser or mail extension with access to the URL might also do this
(1) and (2) might be done by remote servers, even in remote countries.
This is one of the reasons why you should avoid magic links (like login links or bearer tokens like S3 signed URLs), because you may inadvertently be handing them to other parties.
> Microsoft’s security scanning will now not only visit the links you mail out, they will also run the JavaScript on your page, and will also send out any POSTs generated by that JavaScript:
It seems to me that someone misunderstood how to implement the logic on the frontend here. You're not supposed to send off a POST request directly as the page has been loaded to exchange the tokens, but require the user to click on a button to confirm the activity.
So the flow would be:
- "Send email with url+token > Frontend loads, shows button to user to confirm > User clicks button and exchange happens".
Instead, wrong implementations do something like this:
- "Send email with url+token > Frontend loads and exchange happens", which we've known for years to not work correctly, because of this very issue.
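Server-side, the two flows differ in which HTTP method consumes the token. A rough simulation of the correct flow (the endpoint shapes and status codes are my own choices, not from the article):

```javascript
// Single-use token store. GET renders a confirmation page and never
// consumes the token, so a scanner can "visit" the link harmlessly;
// only an explicit POST redeems it, exactly once.
const tokens = new Map([["abc123", { used: false }]]);

function handleGet(token) {
  if (!tokens.has(token)) return { status: 404, body: "Unknown link" };
  return {
    status: 200,
    body: '<form method="POST"><button>Confirm sign-in</button></form>',
  };
}

function handlePost(token) {
  const entry = tokens.get(token);
  if (!entry) return { status: 404, body: "Unknown link" };
  if (entry.used) return { status: 410, body: "This link was already used" };
  entry.used = true;
  return { status: 200, body: "Signed in" };
}
```

With this split, any number of scanner visits leave the token intact, and a reused link gets an explicit "already used" response instead of silently breaking the sign-in.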
Discourse for example implemented this perfectly, so if you need an example, that would be where I'd go.
(I know it sucks that scrapers/crawlers do a lot of horrible shit, it truly does. But we've dealt with this for years now, it's not a new thing and it won't be the last shitty thing they do)