
A proposal to restrict sites from accessing a user's local network

mystifyingpoi

I like this at first glance. The idea of a random website probing arbitrary local IPs (or any IPs, for that matter) with HTTP requests is insane. I wouldn't care if it breaks some enterprise apps or integrations - enterprises could re-enable this "feature" via management tools, and normal users could configure it themselves; just show a popup "this website wants to control local devices - allow/deny".

buildfocus

This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.

The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.

This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.

xp84

Doesn't CORS just restrict whether the webpage JS context gets to see the response of the target request? The request itself happens anyway, right?

So the attack vector that I can imagine is that JS on the browser can issue a specially crafted request to a vulnerable printer or whatever that triggers arbitrary code execution on that other device. That code might be sufficient to cause the printer to carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.

jonchurch_

I think CORS is so hard for us to hold in our heads in large part due to how much is stuffed into the algorithm.

It may send an OPTIONS request, or not.

It may block a request being sent (in response to OPTIONS) or block a response from being read.

It may restrict which headers can be set, or read.

It may downgrade the request you were sending silently, or consider your request valid but the response off limits.

It is a matrix of independent gates essentially.

Even the language we use is imprecise. CORS itself is not really doing any of this or blocking things. As others pointed out, it's the Same-Origin Policy that is the strict one, and CORS is really an exception engine that lets us punch through that security layer.
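Those gates can be made concrete. Below is a rough, simplified sketch of the Fetch spec's "simple request" test - the rule that decides whether the browser sends a preflight OPTIONS at all (the real spec checks a few more cases, such as header value lengths and streaming bodies):

```javascript
// Simplified "does this need a preflight?" check, per the CORS rules:
// only these methods, headers, and content types count as "simple".
const SAFE_METHODS = new Set(["GET", "HEAD", "POST"]);
const SAFE_HEADERS = new Set([
  "accept", "accept-language", "content-language", "content-type",
]);
const SAFE_CONTENT_TYPES = new Set([
  "application/x-www-form-urlencoded", "multipart/form-data", "text/plain",
]);

function needsPreflight(method, headers = {}) {
  if (!SAFE_METHODS.has(method.toUpperCase())) return true;
  for (const [name, value] of Object.entries(headers)) {
    const lower = name.toLowerCase();
    if (!SAFE_HEADERS.has(lower)) return true;
    if (lower === "content-type" &&
        !SAFE_CONTENT_TYPES.has(value.split(";")[0].trim().toLowerCase())) {
      return true;
    }
  }
  return false; // sent immediately; CORS only gates reading the response
}

console.log(needsPreflight("POST", { "Content-Type": "text/plain" })); // false
console.log(needsPreflight("POST", { "Content-Type": "application/json" })); // true
console.log(needsPreflight("GET", { "X-Api-Key": "123" })); // true
```

Anything in the "false" bucket reaches the target server without the target ever being consulted first, which is the point several replies below are making.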

tombakt

No, a preflight (OPTIONS) request is sent by the browser before the request initiated by the application. I would be surprised if it were possible for the client browser to control this OPTIONS request beyond just the URL. I am curious if anyone else has any input on this topic though.

Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.

nbadg

Or simply perform a timing attack as a way of exploring the local network, though I'm not sure if the browser implementation immediately returns after the request is made (ex fetch API is called) but before the response is received. Presumably it doesn't, which would expose it to timing attacks as a way of exploring the network.

sidewndr46

This is also a misunderstanding. CORS only applies to the Layer 7 communication. The rest you can figure out from the timing of that.

Significant components of the browser, such as WebSockets, have no such restrictions at all.

James_K

Won't the browser still append the "Origin" field to WebSocket requests, allowing servers to reject them?

afiori

A WebSocket starts as a normal HTTP request, so it is subject to CORS if the initial request was (e.g. if it was a POST).

rnicholus

CORS doesn’t protect you from anything. Quite the opposite: it _allows_ cross origin communication (provided you follow the spec). The same origin policy is what protects you.

spr-alex

I made a CTF challenge 3 years ago that shows why local devices are not so protected. exploitv99 bypasses PNA with timing, as the other commenter points out.

https://github.com/adc/ctf-midnightsun2022quals-writeups/tre...

friendzis

> The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.

False. CORS only gates non-simple requests (via an OPTIONS preflight); simple requests are sent regardless of CORS config, with no gating whatsoever.

Aeolun

How would Facebook do that? They scan all likely local ranges for what could be your phone, and have a web server running on the phone? That seems more like a problem of allowing the phone app to start something like that and keep it running in the background.

londons_explore

Webrtc allows you to find the local ranges.

Typically there are only 256 IPs, so a scan of them all is almost instant.
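For illustration, a hypothetical sketch of what such a scan could look like from page JavaScript, once WebRTC (or plain guessing of common ranges) yields a prefix. The "no-cors" responses are opaque, but resolve/reject behaviour and timing still leak which hosts exist:

```javascript
// Enumerate all 256 addresses in a /24 (prefix like "192.168.1").
function lanCandidates(prefix) {
  return Array.from({ length: 256 }, (_, i) => `http://${prefix}.${i}/`);
}

// Probe one address; the page never gets to read the response, but
// whether the fetch rejects, and how long it takes, distinguishes
// live hosts from dead addresses.
async function probe(url, timeoutMs = 500) {
  const start = Date.now();
  try {
    await fetch(url, { mode: "no-cors", signal: AbortSignal.timeout(timeoutMs) });
    return { url, reachable: true, ms: Date.now() - start };
  } catch {
    return { url, reachable: false, ms: Date.now() - start };
  }
}

console.log(lanCandidates("192.168.1").length); // 256
// In a real page: await Promise.all(lanCandidates("192.168.1").map(u => probe(u)))
```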

esnard

Do you have a link about those recent Facebook tricks? I think I missed that story and would love to read an analysis of it.

IshKebab

I think this can be circumvented by DNS rebinding, though your requests won't have the authentication cookies for the target, so you would still need some kind of exploit (or a completely unprotected target).

MBCook

How? The browser would still have to resolve it to a final IP right?

jm4

This sounds crazy to me. Why should websites ever have access to the local network? That presents an entirely new threat model for which we don’t have a solution. Is there even a use case for this for which there isn’t already a better solution?

loaph

I've used https://pairdrop.net/ before to share files between devices on the same LAN. It obviously wouldn't have to be a website, but it's pretty convenient since all my devices I wanted to share files on already have a browser.

A4ET8a8uTh0_v2

Same use case, but I remember getting approval prompts ( though come to think of it, those were not mandated, but application specific prompts to ensure you consciously choose to share/receive items ). To your point, there are valid use cases for it, but some tightening would likely be beneficial.

necovek

Not a local network, but a localhost example: due to the lousy private certificate capability APIs in web browsers, this is commonly used for signing with electronic IDs in countries that issue smartcard certificates to their citizens (common in Europe). Basically, a web page contacts a web server hosted on localhost that is integrated with a PKCS library locally, providing a signing and encryption API.

One of the solutions on the market was open source up to a point (Nowina NexU), but it seems it's gone from GitHub.

For local network, you can imagine similar use cases — keep something inside the local network (eg. an API to an input device; imagine it being a scanner), but enable server-side function (eg. OCR) from their web page. With ZeroConf and DHCP domain name extensions, it can be a pretty seamless option for developers to consider.

Thorrez

>Why should websites ever have access to the local network?

It's just the default. So far, browsers haven't really given different IP ranges different security.

evil.com is allowed to make requests to bank.com . Similarly, evil.com is allowed to make requests to foo.com even if foo.com DNS resolves to 127.0.0.1 .

chuckadams

> It's just the default. So far, browsers haven't really given different IP ranges different security.

I remember having "zone" settings in Internet Explorer 20 years ago, and ISTR it did IP ranges as well as domains. Don't think it did anything about cross-zone security though.

EvanAnderson

> Is there even a use case for this for which there isn’t already a better solution?

I deal with a third-party hosted webapp that enables extra functionality when a webserver hosted on localhost is present. The local webserver exposes an API allowing the application to interact more closely with the host OS (think locally-attached devices and servers on the local network). If the locally-installed webserver isn't present, the hosted app hides the extra functionality.

Limiting browser access to the localhost subnet (127.0.0.0/8) would be fine with me, as a sysadmin, so long as I have the option to enable it for applications where it's desired.

Thorrez

>That presents an entirely new threat model for which we don’t have a solution.

What attack do you think doesn't have a solution? CSRF attacks? The solution is CSRF tokens, or checking the Origin header, same as how non-local-network sites protect against CSRF. DNS rebinding attacks? The solution is checking the Host header.
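A minimal sketch of that Host-header check (hostnames hypothetical): under DNS rebinding, the attacker's domain re-resolves to 127.0.0.1, but the browser still sends the attacker's hostname in the Host header, so an allowlist catches it:

```javascript
// Host values this local service expects to be addressed as.
const ALLOWED_HOSTS = new Set(["localhost:8080", "127.0.0.1:8080"]);

function isTrustedHost(hostHeader) {
  return ALLOWED_HOSTS.has((hostHeader || "").toLowerCase());
}

// Normal local use passes:
console.log(isTrustedHost("localhost:8080")); // true
// A rebound request still carries the attacker's Host and is rejected:
console.log(isTrustedHost("evil.example.com")); // false
```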

null

[deleted]

charcircuit

>for which we don’t have a solution

It's called ZTA, Zero Trust Architecture. Devices shouldn't assume the LAN is secure.

udev4096

Exactly, LAN is not a "secure" network field. Authenticate everything from everywhere all the time

esseph

You got grandma running ZTA now?

This is a problem impacting mass users, not just technical ones.

lucideer

> normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".

macOS currently does this (per app, not per site) & most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.

mastazi

Do we have any evidence that most users just click yes?

My parents who are non-technical click no by default to everything, sometimes they ask for my assistance when something doesn't work and often it's because they denied some permission that is essential for an app to work e.g. maybe they denied access to the microphone to an audio call app.

Unless we have statistics, I don't think we can make assumptions.

technion

The amount of "malware" infections I've responded to over the years that involved browser push notifications to Windows desktops is completely absurd. Chrome and Edge clearly ask for permissions to enable a browser push.

The moment a user gets this permissions request, as far as I can tell they will hit approve 100% of the time. We have one office where the staff have complained that it's impossible to look at astrology websites without committing to desktop popups selling McAfee. Which implies those staff, having been trained to hit "no", believe it's impossible to do.

(yes, we can disable with a GPO, which I heavily promote, but that org has political problems).

Aeolun

As a counter example, I think all these dialogs are annoying as hell and click yes to almost everything. If I’m installing the app I have pre-vetted it to ensure it’s marginally trustworthy.

lucideer

I have no statistics, but I wouldn't consider older parents the typical case here. My parents never click yes on anything, but my young colleagues in non-engineering roles in my office do. And I'd say even a decent % of the engineering colleagues do too - especially the vibe coders. And they all spend a lot more time on their computers than my parents do.

paxys

People accept permission prompts from apps because they consciously downloaded the app and generally have an idea about the developer and what the app does. If a social media app asks for access to your photos it's easy to understand why; same with a music streamer wanting to connect to your smart speaker.

A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.

poincaredisk

"Please accept the [tech word salad] popup to verify your identity"

Maybe this won't fool you, but it would trick 90% of internet users. (And even if it was 20% instead of 90%, that's still way too much.)

lucideer

To be clear: implementing this in browser on a per site basis would be a massive improvement over in-OS/per-app granularity. I want this popup in my browser.

But I was just pointing out that, while I'll make good use of it, it still probably won't offer sufficient protection (from themselves) for most.

lxgr

And annoyingly, for some reason it does not remember this decision properly. Chrome asks me about local access every few weeks, it seems.

Yes, as a Chromecast user, please do give me a break from the prompts, macOS – or maybe just show them for Airplay with equal frequency and see how your users like that.

grokkedit

Problem is: without allowing it, web UIs like Synology's won't work, since they require your browser to connect to the local network. As it is, it's not great.

planb

Why? I’d guess requests from a local network site to itself (maybe even to others on the same network) will be allowed.

jay_kyburz

This proposal is for websites outside your network contacting inside your network. I assume local IPs will still work.

mystified5016

I can't believe that anyone still thinks a popup permission modal offers any type of security. Windows UAC has shown quite definitively that users will always click through any modal in their way without thought or comprehension.

Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.

ameliaquining

I don't think anyone's under the impression that this is a perfect solution. But it's better than nothing, and the options are this, nothing, or a security barrier that can't be bypassed with a permission prompt. And it was determined that the latter would break too many existing sites that have legitimate (i.e., doing something the end user actively wants) reason to talk to local devices.

knome

I wonder how much of that is on the modal itself. What if we instead popped up an alert that said "blocked an attempt to talk to your local devices, since this is generally a dangerous thing for websites to do. <dismiss>. To change this for this site, go to settings/site-security" - making approval a more deliberate, multi-click affair, and defaulting the knee-jerk single-click dismissal to the safer option of refusal?

A4ET8a8uTh0_v2

Maybe. But eventually they will learn. In the meantime, other users, who at least try to stay somewhat safe ( if it is even possible these days ), can make appropriate adjustments.

lxgr

I think it does, in many (but definitely not all) contexts.

For example, it's pretty straightforward what camera, push notification, or location access means. Contact sharing is already a stretch ("to connect you with your friends, please grant...").

"Local network access"? Probably not.

xp84

This is so true. The modern Mac is a sea of Allow/Don't Allow prompts, mixed with the slightly more infantilizing alternative of the "Block" / "Open System Preferences" where you have to prove you know what you're doing by manually browsing for the app to grant the permission to, to add it to the list of ones with whatever permission.

They're just two different approaches with the same flaw: People with no clue how tech works cannot completely protect themselves from any possible attacker, while also having sophisticated networked features. Nobody has provided a decent alternative other than some kind of fully bubble-wrapped limited account using Group Policies, to ban all those perms from even being asked for.

broguinn

This Web Security lecture by Feross Aboukhadijeh has a great example of Zoom's zero-day from 2019 that allowed anyone to force you to join a zoom meeting (and even cause arbitrary code execution), using a local server:

https://www.youtube.com/watch?v=wLgcb4jZwGM&list=PL1y1iaEtjS...

It's not clear to me from Google's proposal if it also restricts access to localhost, or just your local network - it'd be great if it were both, as we clearly can't rely on third parties to lock down their local servers sufficiently!

edit: localhost won't be restricted:

"Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)"

Thorrez

>edit: localhost won't be restricted:

It will be restricted. This proposal isn't completely blocking all localhost and local IPs. Rather, it's preventing public sites from communicating with localhost and local IPs. E.g:

* If evil.com makes a request to a local address it'll get blocked.

* If evil.com makes a request to a localhost address it'll get blocked.

* If a local address makes a request to a localhost address it'll get blocked.

* If a local address makes a request to a local address, it'll be allowed.

* If a local address makes a request to evil.com it'll be allowed.

* If localhost makes a request to a localhost address it'll be allowed.

* If localhost makes a request to a local address, it'll be allowed.

* If localhost makes a request to evil.com it'll be allowed.

broguinn

Ahh, thanks for clarifying! It's the origin being compared, not the context - of course.

null

[deleted]

donnachangstein

[flagged]

kulahan

I agree fully with him. I don’t care what part of your job gets harder, or what software breaks if you can’t make it work without unnecessarily invading my privacy. You could tell me it’s going to shut down the internet for 6 months and I still wouldn’t care.

You’ll have to come up with a really strong defense for why this shouldn’t happen in order to convince most users.

Aeolun

It just means I run a persistent client on your device that is permanently connected to the mothership, instead of only when you have your browser open.

zaptheimpaler

I'm sure it will require some work, but this is the price of security. The idea that any website I visit can start pinging/exploiting some random unsecured testing web server I have running on localhost:8080 is a massive security risk.

duskwuff

Or probing your local network for vulnerable HTTP servers, like insecure routers or web cameras. localhost is just the tip of the iceberg.

Wobbles42

I do understand this sentiment, but isn't the tension here that security improvements by their very nature are designed to break things? Specifically the things we might consider "bad", but really that definition gets a bit squishy at the edges.

protocolture

This attitude kept IE6 in production well after its natural life should have concluded.

aaomidi

I’m sorry but this proposal is absolutely monumentally important.

The fact that I have to rely on random extensions to accomplish this is unacceptable.

socalgal2

I wish they (Apple/Microsoft/Google/...) would do similar things for USB and Bluetooth.

Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (a short list) that the app is allowed to connect to, and have the OS limit its connections to only those devices. For example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.

I might even prefer it if the app had to register the device IDs and then the user would be prompted, the same way camera/GPS access is prompted. Via the OS, it might see a device that the CVS app registered for in its manifest. The OS would pop up "CVS app would like to connect to device ABC? Just this once / only when the app is running / always" (similar to the way iOS handles location).

By ID, I mean some prefix that a company registers for its devices, e.g. bose.xxx: the app's manifest says it wants to connect to "bose.*" and the OS filters.

Similarly for USB and maybe local network devices. Come up with an ID scheme, and have the OS prevent apps from connecting to anything not matching that ID. Effectively, don't let apps browse the network, USB, or Bluetooth.

3eb7988a1663

I am still holding out hope that eventually at least Apple will offer fake permission grants to applications. Oh, app XYZ "needs" to see my contact list to proceed? Well it gets a randomized fake list, indistinguishable from the real one. Similar with GPS.

I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.

ordu

Yeah, I'd like that too. I can't use my bank's app, because it wants some weird permissions like access to contacts; I refuse to give them, because I see no use in it for me, and it refuses to work.

yonatan8070

Also for the camera, just feed them random noise or a user-selectable image/video

nothrabannosir

In iOS you can share a subset of your contacts. This is functionally equivalent and works as you described for WhatsApp.

shantnutiwari

>In iOS you can share a subset of your contacts.

the problem is, the app must respect that.

WhatsApp, for all the hate it gets, does.

"Privacy"-focused Telegram doesn't: it wouldn't work unless I shared ALL my contacts; when I shared a few, it kept complaining I had to share ALL.

WhyNotHugo

WhatsApp specifically needs phone numbers, and you can filter which contacts you share, but not which fields. So if your family uses WhatsApp, you'd share those contacts, but you can't share ONLY their phone numbers; WhatsApp also gets their birthdays, addresses, personal notes, and any other personal information you might have.

I think this feature is pretty meaningless in the way that it’s implemented.

It's also pretty annoying that applications know they have partial permission, so they keep prompting for full permission all the time anyway.

baobun

GrapheneOS has this feature (save for faking GPS) fwiw

quickthrowman

Apps are not allowed to force you to share your contacts on iOS, report any apps that are asking you to do so as it’s a violation of the App Store TOS.

totetsu

Like the github 3rd party application integration. "ABC would like to see your repositories, which ones do you want to share?"

kuschku

Does that UI actually let you choose? IME it just tells me what orgs & repos will be shared, with no option to choose.

rjh29

Safari doesn't support Web MIDI apparently for this reason (fingerprinting), but it makes using any kind of MIDI web app impossible.

Thorrez

Are you talking about web apps, mobile apps, desktop apps, or browser extensions?

socalgal2

All of them.

_bent

Apple does this for iOS 18 via the AccessorySetupKit

bsder

> Lately, every app I install, wants bluetooth access to scan all my bluetooth devices.

Blame Apple and Google and their horrid BLE APIs.

An app generally has to request "ALL THE PERMISSIONS!" to get RSSI, which most apps use as a (really stupid, bug-prone, broken) proxy for distance.

What everybody wants is "time of flight"--but for some reason that continues to be mostly unsupported.

paxys

It's crazy to me that this has always been the default behavior for web browsers. A public website being able to silently access your entire filesystem would be an absurd security hole. Yet all local network services are considered fair game for XHR, and security is left to the server itself. If you are developer and run your company's webapp on your dev machine for testing (with loose or non-existent security defaults), facebook.com or google.com or literally anyone else could be accessing it right now. Heck think of everything people deploy unauthed on their home network because they trust their router's firewall. Does every one of them have the correct CORS configuration?

3abiton

I majored in CS and I had no idea this was possible: public websites you visit have access to your local network. I need some time to process this. Besides what is suggested in the post, are there any ways to limit this abusive access?

Too

What’s even crazier is that nobody learned this lesson and new protocols are created with the same systematic vulnerabilities.

Talking about MCP agents if that’s not obvious.

thaumasiotes

> Does every one of them have the correct CORS configuration?

I would guess it's closer to 0% than 0.1%.

reassess_blind

The local server has to send Access-Control-Allow-Origin: * for this to work, right?

Are there any common local web servers or services that use that as the default? Not that it’s not concerning, just wondering.

meindnoch

No. Simple requests [1] - such as a GET request, or a POST request with a text/plain Content-Type - don't trigger a CORS preflight. The request is made, and the browser may block the requesting JS code from seeing the response if the necessary CORS response header is missing - but by that point the request has already been made. So if your local service has a GET endpoint like http://localhost:8080/launch_rockets, or a POST endpoint that doesn't strictly validate the body Content-Type, then any website can trigger it.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
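One hedged sketch of the server-side mitigation this implies (endpoint and names hypothetical): strictly validate Content-Type. A browser "simple request" can only carry text/plain, form-urlencoded, or multipart bodies, so requiring application/json forces a preflight the server can simply decline:

```javascript
// Accept only strict application/json POSTs; "simple" cross-origin
// requests (text/plain etc.) can never satisfy this, so drive-by pages
// are pushed into a preflight the server never approves.
function acceptsRequest(method, contentType) {
  if (method !== "POST") return false;
  const mime = (contentType || "").split(";")[0].trim().toLowerCase();
  return mime === "application/json";
}

console.log(acceptsRequest("POST", "application/json")); // true
console.log(acceptsRequest("POST", "text/plain")); // false
console.log(acceptsRequest("GET", undefined)); // false
```

This is a hardening measure, not a substitute for authentication on the endpoint itself.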

reassess_blind

I was thinking in terms of response exfiltration, but yeah, better put that /launch_rockets endpoint behind some auth.

pacifika

Internet Explorer solved this with their zoning system right?

https://learn.microsoft.com/en-us/previous-versions/troubles...

donnachangstein

Ironically, Chrome partially supported and utilized IE security zones on Windows, though it was not well documented.

pacifika

Oh yeah forgot about that, amazing.

bux93

Although those were typically used to give ActiveX controls on the intranet unfettered access to your machine because IT put it in the group policy. Fun days.

nailer

Honestly I just assumed a modern equivalent existed. That it doesn’t is ridiculous. Local network should be a special permission like the camera or microphone.

sroussey

I guess this would help Meta’s sneaking identification code sharing between native apps and websites with their sdk on them from communicating serendipitously through localhost, particularly on Android.

[0] https://www.theregister.com/2025/06/03/meta_pauses_android_t...

will4274

surreptitiously

skybrian

While this will help to block many websites that have no business making local connections at all, it's still very coarse-grained.

Most websites that need this permission only need to access one local server. Granting them access to everything violates the principle of least privilege. Most users don't know what's running on localhost or on their local network, so they won't understand the risk.

paxys

> Most users don't know what's running on localhost or on their local network, so they won't understand the risk.

Yes, which is why they also won't understand when the browser asks if you'd like to allow the site to visit http://localhost:3146 vs http://localhost:8089. A sensible permission message ("allow this site to access resources on your local network") is better than technical mumbo jumbo which will make them just click "yes" in confusion.

xp84

Either way they'll click "yes" as long as the attacker site properly primes them for it.

For instance, on the phishing site they clicked on from an email, they'll first be prompted like:

"Chase need to verify your Local Network identity to keep your account details safe. Please ensure that you click "Yes" on the following screen to confirm your identity and access account."

Yes, that's meaningless gibberish but most people would say:

• "Not sure what that means..."

• "I DO want to access my account, though."

kevincox

This is true, but you can only protect people from themselves so far. At some point you gotta let them do what they want to do. I don't want to live in a world where Google decides what we are and aren't allowed to do.

derefr

In an ideal world, the browser could act as an mDNS client, discovering local services, so that it could then show the pretty name of the relevant service in the security prompt.

In the world we live in, of course, almost nothing on your average LAN has an associated mDNS service advertisement.

mixmastamyk

They don’t? Every time I install an OS I turn that stuff off, because I don’t fully understand it. Or is avahi et al another thing?

skybrian

On a phone at least, it should be "do you want to allow website A to connect to app B."

(It's harder to do for the rest of the local network, though.)

nine_k

A comprehensive implementation would be a firewall. Which CIDRs, which ports, etc.

I wish there were an API to build such a firewall, e.g. as a part of a browser extension, but also a simple default UI allowing to give access to a particular machine (e.g. router), to the LAN, to a VPN, based on the routing table, or to "private networks" in general, in the sense Windows ascribes to that. Also access to localhost separately. The site could ask one of these categories explicitly.

kuschku

> I wish there were an API to build such a firewall, e.g. as a part of a browser extension,

There was in Manifest V2, and it still exists in Firefox.

https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...

That's the API Chrome removed with Manifest V3. You can still log all web requests, but you can't block them dynamically anymore.

skybrian

I think something like Tailscale is the way to go here.

rerdavies

I worry that there are problems with IPv6. Can anyone explain to me whether there actually is a way to determine if an IPv6 address is site-local? If not, the proposal is going to have problems on IPv6-only networks.

I have struggled with this issue in the past. I have an IoT application whose webserver wants to reject any requests from a non-local address. After failing to find a way to distinguish IPv6 local addresses, I ended up redirecting IPv6 requests to the local IPv4 address. And that was the end of that.

I feel like I would be in a better position to raise concerns if I could confirm that my understanding is correct: that there is no practical way for an application to determine whether an IPv6 address is link- or site-local.

I did experiment with IPv6 "link local" addresses, but these seem to be something else altogether (for use by routers rather than general applications), and don't seem to work for regular application use.
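For what it's worth, the closest practical heuristic I know of (not a full answer to the question) is prefix classification: fe80::/10 is link-local, and fc00::/7 is ULA, the nearest IPv6 analogue to RFC 1918 space. A sketch, ignoring zone IDs and IPv4-mapped forms:

```javascript
// Classify an IPv6 address by well-known prefix. A network that uses
// only globally-routed addresses will still come out "global", which is
// exactly the ambiguity being complained about here.
function ipv6Scope(addr) {
  if (addr === "::1") return "loopback";
  const first = parseInt(addr.split(":")[0] || "0", 16); // leading 16 bits
  if ((first & 0xffc0) === 0xfe80) return "link-local";   // fe80::/10
  if ((first & 0xfe00) === 0xfc00) return "unique-local"; // fc00::/7 (ULA)
  return "global";
}

console.log(ipv6Scope("fd12:3456:789a::1")); // "unique-local"
console.log(ipv6Scope("fe80::1"));           // "link-local"
console.log(ipv6Scope("2001:db8::1"));       // "global"
```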

There is some wiggle room provided by including .local addresses as local servers. But implementation of .local domains seems to be inconsistent across OSs at present. Raspberry Pi OS, for example, will do mDNS resolution of "someaddress" but not of "someaddress.local"; Ubuntu 24.04 will resolve "someaddress.local", but not "someaddress". And neither will resolve "someaddress.local." (which I think was recommended at one point, but is now deprecated and non-functional). Which does seem like an issue worth raising.

And it frustrates the HECK out of me that nobody will allow the use of privately issued certs for local network addresses. The "no HTTPS for local addresses" thing needs to be fixed.

gerdesj

IPv6 still has the concept of "routable". You just have to decide what site-local means in terms of the routing table.

In old school IPv4 you would normally assign octet two to a site and octet three to a VLAN. Oh and you start with 10.

With IPv6 you have a lot more options.

All IPv6 devices have link local addresses - that's the LAN or local VLAN - a bit like APIPA.

Then you start on .local - that's Apple and DNS and the like and nothing to do with IP addresses. That's name to address.

You can do Let's Encrypt (ACME) for "local network addresses" (I assume you mean RFC 1918 addresses: 10/8, 172.16/12, 192.168/16) - you need to look into DNS-01 and perhaps DNS CNAME. It does require quite some effort.

There is a very good set of reasons why TLS certs are a bit of a bugger to get working effectively these days. There are solutions freely available but they are also quite hard to implement. At least they are free. I remember the days when even packet capture required opening your wallet.

You might look into acme.sh if Certbot fails to work for you. You also might need to bolt down IP addressing in general, IPv4 vs IPv6 and DNS and mDNS (and Bonjour) as concepts - you seem a little hazy on that lot.

Bon chance mate

globular-toast

HTTPS doesn't care about IP addresses. It's all based on domain names. You can get a certificate for any domain you own. You can also set said domain to resolve to any address you like, including a "local" one.

NAT has rotted people's brains unfortunately. RFC 1918 is not really the way to tell if something is "local" or not. 25 years ago I had 4 publicly routable IPv4 addresses. All 4 of these were "local" to me despite also being publicly routable.

An IP address is local if you can resolve it and don't have to communicate via a router.

It seems too far gone, though. People seem unable to separate RFC 1918 from the concept of "local network".
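The "no router needed" definition above can be expressed mechanically: an address is on-link if it falls inside a prefix configured on one of the host's interfaces, regardless of whether it is RFC 1918. A minimal sketch in Python (the interface prefixes are assumed inputs here; a real implementation would read them from the OS):

```python
import ipaddress

def is_on_link(addr: str, interface_prefixes: list[str]) -> bool:
    """An address is on-link (local) if it falls inside a prefix
    configured on one of this host's interfaces - no router needed.
    Note this has nothing to do with RFC 1918: a publicly routable
    address can still be on-link."""
    ip = ipaddress.ip_address(addr)
    return any(ip in ipaddress.ip_network(p) for p in interface_prefixes)
```

This captures the point about the four publicly routable addresses: they sit inside a directly connected prefix, so they are local even though they are not RFC 1918.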

donnachangstein

> Can anyone explain to me if there is any way to determine whether an inbound IPv6 address is "local"?

No, because it's the antithesis of IPv6 which is supposed to be globally routable. The concept isn't supposed to exist.

Not to mention Google can't even agree on the meaning of "local" - the article states they completely changed the meaning of "local" to be a redefinition of "private" halfway through brainstorming this garbage.

Creating a nonstandard, arbitrary security boundary based on CIDR subnets as an HTTP extension is completely bonkers.

As for your application, you're going about it all wrong. Just assume your application is public-facing and design your security with that in mind. Too many applications make this mistake and design saloon-door security into their "local only" application which results in overreaction such as the insanity that is the topic of discussion here.

".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.

ryanisnan

It's very useful to have this additional information in something like a network address. I agree, you shouldn't rely on it, but IPv6 hasn't clicked with me yet, and the whole "globally routable" concept is one of the reasons. I hear that, and think, no, I don't agree.

donnachangstein

Globally routable doesn't mean you don't have firewalls in between filtering and blocking traffic. You can be globally routable but drop all incoming traffic at what you define as a perimeter. E.g. the WAN interface of a typical home network.

The concept is frequently misunderstood in that IPv4 consumer SOHO "routers" often combine a NAT and routing function with a firewall, but the functions are separate.

rerdavies

@donnachangstein:

The device is an IoT guitar pedal that runs on a Raspberry Pi. In performance, on stage, a Web UI runs on a phone or tablet over a hotspot connection on the Pi, which is NOT internet connected (since there's no expectation that there's a Wi-Fi router or internet access at a public venue). OR the Pi runs on a home wifi network, using a browser-hosted UI on a laptop or desktop. OR, I suppose, over an away-from-home Wi-Fi connection at a studio or rehearsal space.

It is not reasonable to expect my users to purchase domain names and certs for their $60 guitar pedal, which are not going to work anyway, if they are playing away from their home network. Nor is ACME provisioning an option because the device may be in use but unconnected to the internet for months at a time if users are using the Pi Hotspot at home.

I can't use password authentication to get access to the Pi web server, because I can't use HTTPS to conceal the password, and browsers disable access to JavaScript crypto APIs on non-HTTPS pages (not that I'd really trust myself to write JavaScript code to obtain auth tokens from the Pi server anyway), so doing auth over an HTTP connection doesn't really strike me as a serious option either.

Nor is it reasonable to expect my non-technical users to spend hours configuring their networks. It's an IoT device that should be just drop and play (maybe with a one-time device setup that takes place on the Pi).

There is absolutely NO way I am going to expose the server to the open internet without HTTPS and password authentication. The server provides a complex API to the client over which effects are configured and controlled. Way too much surface area to allow anyone on the internet to poke around in. So it uses IPv4 isolation, which is the best I can figure out given the circumstances. It's not like I haven't given the problem serious consideration. I just don't see a solution.

The use case is not hugely different from an IoT toothbrush. But standards organizations have chosen to leave both my (hypothetical) toothbrush and my application utterly defenseless when it comes to security. Is it any surprise that IoT toothbrushes have security problems?

How would YOU see https working on a device like that?

> ".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.

Yes. That was my point. It is currently widely ignored.

mixmastamyk

Grandparent explained that a firewall is also needed with IPv6.

I understand that setting it up to delineate is harder in practice. Therein lies the rub.

AStonesThrow

> can't even agree on the meaning of "local"

Well, who can agree on this? Local network, private network, intranet, Tailscale and VPN, Tor? IPv6 ULA, NAT/CGNAT, SOCKS, transparent proxy? What resources are "local" to me and what resources are "remote"?

This is quite a thorny and sometimes philosophical question. Web developers are working at the OSI Layer 6-7 / TCP/IP Application Layer.

https://en.wikipedia.org/wiki/OSI_model#Comparison_with_TCP/...

Now even cookies and things like CSRF were trying to differentiate "servers" and "origins" and "resources" along the lines of the DNS hierarchy. But this has been fraught with complication, because DNS was not intended to delineate such things, and can't do so cleanly 100% of the time.

Now these proposals are trying to reach even lower in the OSI model - Layer 3, Layer 2. If you're asking "what is on my LAN" or "what is a private network", that is not something that HTTPS or web services are supposed to know. Are you going to ask them to delve into your routing table or test the network interfaces? HTTPS was never supposed to know about your netmask or your next-hop router.

So this is only one reason that there is no elegant solution for the problem. And it has been foundational to the way the web was designed: "given a uniform locator, find this resource wherever it may be, whenever I request it." That was a simpler proposition when the Web was used to publish interesting and encyclopedic information, rather than deliver applications and access sensitive systems.

G_o_D

CORS doesn't stop a POST request, nor a fetch with 'no-cors' supplied in JavaScript. It's that you can't read the response; that doesn't mean the request is not sent by the browser.

Then again, a local app can run a server that proxies requests and adds CORS headers to the proxied responses, and then you can access any site via the JS fetch/XMLHttpRequest interface. Even an extension is able to modify headers to bypass CORS.
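A minimal sketch of the kind of header-stapling proxy described above, using only Python's standard library (the class name and example URL are invented for illustration; running something like this deliberately defeats the browser's same-origin protections):

```python
import http.server
import urllib.request

class CorsProxy(http.server.BaseHTTPRequestHandler):
    """Forwards GET /<absolute-url> upstream and returns the response
    with a permissive CORS header stapled on, so any page's fetch()
    can read it - exactly the bypass described above."""

    def do_GET(self):
        target = self.path.lstrip("/")  # e.g. /http://192.168.1.50/status
        with urllib.request.urlopen(target) as upstream:
            body = upstream.read()
            ctype = upstream.headers.get("Content-Type",
                                         "application/octet-stream")
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", "*")  # the bypass
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass
```

Note the proposal under discussion would still gate this: the browser's request to the localhost proxy itself is what would require user permission.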

Bypassing CORS is just a matter of editing headers. What's really hard or impossible to bypass is CSP rules.

Now, the Facebook app itself was running such a CORS proxy server. Even without it, a normal HTTP or WebSocket server is enough for a page to send metrics to.

Chrome already has a flag to prevent localhost access; still, as said, WebSockets can be used.

Completely banning localhost access would be detrimental.

Many users are using self-hosted bookmarking apps, note apps, and password-manager-like solutions that rely on a local server.

1vuio0pswjnm7

Explainer by non-Googler

Is the so-called "modern" web browser too large and complex?

I never asked for stuff like "websockets"; I have to disable it. Why?

I still prefer a text-only browser for reading HTML; it does not run Javascript, it does not do websockets, CSS, images or a gazillion other things; it does not even autoload resources

It is relatively small, fast and reliable; very useful

It can read larger HTML files that make so-called "modern" web browsers choke

It does not support online ad services

The companies like Google that force ads on www users are known for creating problems for www users and then proposing solutions to them. Why not just stop creating the problems?

1vuio0pswjnm7

Text-only browsers are not a "solution". That is not the point of the comment. Such simpler clients are not a problem.

The point is that gigantic, overly complex "browsers" designed for surveillance and advertising are the problem. They are not a solution.

HumanOstrich

Going back to text-only browsers is not the solution.

ronsor

Do note that since the removal of NPAPI plugins years ago, locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost.

It would be really annoying if this use case was made into an unreasonable hassle or killed entirely. (Alternatively, browser developers could've offered a real alternative, but it's a bit late for that now.)

michaelt

Doesn't most software just register a protocol handler with the OS? Then a website can hand the browser a zoommtg:// link, which the browser opens with Zoom?

Things like Jupyter Notebooks will presumably be unaffected by this, as they're not doing any cross-origin requests.

And likewise, when a command line tool wants you to log in with oauth2 and returns you to a localhost URL, it's a simple redirect not a cross-origin request, so should likewise be allowed?
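The loopback-redirect flow mentioned here is indeed a plain top-level navigation to 127.0.0.1, not a cross-origin fetch. A rough sketch of the listening side in Python (names are invented for illustration; a real CLI should also verify the `state` parameter and use PKCE):

```python
import http.server
import urllib.parse

class CodeCatcher(http.server.BaseHTTPRequestHandler):
    """One-shot handler for the OAuth redirect, i.e. the browser
    being sent to http://127.0.0.1:PORT/callback?code=abc123"""

    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        params = urllib.parse.parse_qs(query)
        self.server.auth_code = params.get("code", [None])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"You may close this tab.")

    def log_message(self, *args):  # keep the CLI output clean
        pass

def wait_for_code(server: http.server.HTTPServer) -> str:
    """Block until the single redirect arrives, then return the code."""
    server.handle_request()
    return server.auth_code
```

Since the provider redirects the whole tab rather than issuing a fetch from its origin, no cross-origin request to localhost is involved, which is why this pattern plausibly survives the proposal.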

kuschku

A common use case, whether for 3D printers, switches, routers, or NAS devices is that you've got a centrally hosted management UI that then sends requests directly to your local devices.

This allows you to use a single centrally hosted website as user interface, without the control traffic leaving your network. e.g. Plex uses this.

michaelt

I don't think this proposal will stop you visiting the management UI for devices like switches and NASes on the local network. You'll be able to visit http://192.168.0.1 and it'll work just fine?

This is just about blocking cross-origin requests from other websites. I probably don't want every ad network iframe being able to talk to my router's admin UI.
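The gist of that gating idea can be sketched in a few lines (this is a simplification for illustration, not Chrome's actual algorithm): the case that would prompt for permission is a request crossing from a public origin into private or loopback address space.

```python
import ipaddress

def needs_permission(origin_is_public: bool, target_ip: str) -> bool:
    """Sketch of the gating idea: a request from a public origin to a
    loopback/link-local/private target is the case that would require
    an explicit user permission prompt."""
    ip = ipaddress.ip_address(target_ip)
    return origin_is_public and (
        ip.is_loopback or ip.is_link_local or ip.is_private
    )
```

Navigating directly to the router's admin page involves no public origin at all, so under this sketch it stays unprompted, while the ad-network iframe probing 192.168.0.1 is exactly the blocked case.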

hypercube33

Windows Admin Center does this, but it's local-only, which I rather hate.

ronsor

That works if you want to launch an application from a website, but it doesn't work if you want to actively communicate with an application from a website.

fn-mote

This needs more detail to make it clear what you are wishing for that will not happen.

It seems like you're thinking of a specific application, or at least use-case. Can you elaborate?

Once you're launching an application, it seems like the application can negotiate with the external site directly if it wants.

RagingCactus

I don't believe this is true, as https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web... exists. It does need an extension to be installed, but I think that's fair in your comparison with NPAPI.

IshKebab

It would be amazing if that method of communicating with a local app was killed entirely, because it's been a very very common source of security vulnerabilities.

ImPostingOnHN

> locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost

if that software runs with a pull approach, instead of a push one, the server becomes unnecessary

bonus: then you won't have websites grossly probing local networks that aren't theirs (ew)

rhdunn

It's harder to view HTML and XML files that use XSLT by just opening them in a web browser (things like NUnit test-run output). To view these properly now -- to get the CSS, XSLT, images, etc. to load -- you typically have to run a web server at that file path.

Note: this is why the viewers for these tools will spin up a local web server.

With local LLMs and AI it is now common to have different servers for different tasks (LLM, TTS, ASR, etc.) running together where they need to communicate to be able to create services like local assistants. I don't want to have to jump through hoops of running these through SSL (including getting a verified self-signed cert.), etc. just to be able to run a local web service.

ImPostingOnHN

I'm not sure any of that is necessary for what we're talking about: locally-installed software that intends to be used by one or more public websites.

For instance, my interaction with local LLMs involves 0 web browsers, and there's no reason facebook.com needs to make calls to my locally-running LLM.

Running HTML/XML files in the browser should be easier, but at the moment it already has the issues you speak of. It might make sense, IMO, for browsers to allow requests to localhost from websites also running on localhost.

donnachangstein

[flagged]

afavour

> Googlers present a solution no one is asking for,

I'm asking for it. Random web sites have no business poking around my internal network.

moralestapia

>I'm asking for it.

Proof? Link to issue? Mailing list? Anything?

I think you just made that up.

udev4096

Deny any incoming requests using ufw or nftables. Only allow outbound requests by default

bmacho

One of the very few security-inspired restrictions I can wholeheartedly agree with. I don't want random websites to be able to read my localhost. I hope it gets accepted and implemented sooner rather than later.

OTOH it would be cool if random websites were able to open up and use ports on my computer's network, or even on my LAN, when granted permission of course. Browser-based file and media sharing between my devices, or multiplayer games.

avidiax

> OTOH it would be cool if random websites were able to open up and use ports on my computer's network

That's what WebRTC does. There's no requirement that WebRTC is used to send video and audio as in a Zoom/Meet call.

That's how WebTorrent works.

https://webtorrent.io/faq