URLs are state containers

130 comments

· November 2, 2025

jorl17

When I get my way reviewing a codebase, I make sure that as much state as possible is saved in a URL, sometimes (though rarely) down to the scroll position.

I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place. It's mind-boggling and actually insulting as a user. Or grabbing a URL and sending to another person, only to find out it doesn't make sense.

Developing like this on small teams also tends, in my experience, to lead to better UX, because it makes you much more aware of how much state you're cramming into a view. I'll admit it makes development slower, but I'll take the hit most days.

I've seen some people in this thread comment that having state in a URL is risky because it becomes a sort of public API that limits you. While I agree this can be a problem in some scenarios, I think there are many others where it is not, as copied URLs tend to be short-lived (bookmarks and browser history being the exception): they are mostly used for refreshing a page (which will later be closed) or for sharing. In the remaining cases, you can always plug in some code to migrate from the old URL to the new URL on load, which actually solves the issue if you got there via browser history (though it won't fix bookmarks).
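The "migrate old URLs on load" idea can be sketched as a small rewrite step. The parameter renames here ("q" to "query", "p" to "page") are invented examples, not from any real site:

```javascript
// Sketch: migrate legacy query params to their current names on page load.
// The old/new names ("q" -> "query", "p" -> "page") are made-up examples.
const RENAMES = { q: "query", p: "page" };

function migrateUrl(href) {
  const url = new URL(href);
  let changed = false;
  for (const [oldKey, newKey] of Object.entries(RENAMES)) {
    if (url.searchParams.has(oldKey)) {
      url.searchParams.set(newKey, url.searchParams.get(oldKey));
      url.searchParams.delete(oldKey);
      changed = true;
    }
  }
  return { url: url.toString(), changed };
}

// In the browser you would then rewrite the address bar without a reload:
// if (changed) history.replaceState(null, "", url);
```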

thijsvandien

While I like this approach as well, these URLs ending up in the browser history isn’t ideal. Autocomplete when just trying to go to the site causes some undesired state every now and then. Maybe query params offer an advantage over paths here.

DrewADesign

I think it’s a “use the right tool for the job” thing. Putting ephemeral information like session info in URLs sucks and should only be done if you need to pass it in a get request from a non-browser program or something, and even then I think you should redirect or rewrite the url or something after the initial request. But I think actual navigational data or some sort of state if it’s in the middle of an important action is acceptable.

But if you really just want your users to be able to hit refresh and not have their state change for non-navigational stuff like field contents, then unless you have a really clear use case where you need to maintain state while switching devices and don't want to do it server-side, local storage seems like the idiomatic choice.

linked_list

JS does have features for editing the history; the trade-off is not polluting the history too much while still letting the user navigate back and forth.

orphea

I'm glad to see that prismjs site mentioned by the blog is doing the right thing - when it updates the URL, it replaces the current history item.
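That replace-instead-of-push behavior comes down to which History API call you use. A minimal sketch (function and parameter names are my own; only the `history` calls are the real browser API):

```javascript
// Sketch: sync a piece of UI state into the URL without flooding history.
// Minor tweaks (e.g. toggling an option) replace the current entry;
// real navigation pushes a new one. Browser-only calls are in syncToUrl.
function urlWithParam(href, key, value) {
  const url = new URL(href);
  url.searchParams.set(key, value);
  return url.toString();
}

function syncToUrl(key, value, { navigation = false } = {}) {
  const next = urlWithParam(location.href, key, value);
  if (navigation) {
    history.pushState(null, "", next); // Back returns to the previous state
  } else {
    history.replaceState(null, "", next); // no extra history entry
  }
}
```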

hamdingers

Browser autocomplete behavior is reliably incorrect and infuriating either way, so it's not a good reason to avoid the utility of having bookmarkable/sharable urls.

SoftTalker

Yeah it's an annoyance more than it helps. I always disable it.

SoftTalker

Yeah I use a web app regularly for work where they have implemented their own "back" button in the app. The app maintains its own state and history so the browser back button is totally broken.

The problem here is that they've implemented an application navigation feature with the same name as a browser navigation feature. As a user, you know you need to click "Back", and your brain has that wired to the browser back button.

Very annoying.

Having "Refresh" break things is (to me) a little more tolerable. I have the mental association of "refresh" as "start over" and so I'm less annoyed when that takes me back to some kind of front page in the app.

apitman

> I make sure that as much state as possible is saved in a URL, sometimes (though rarely) down to the scroll position.

If your page is server-rendered, you get saved scroll position on refresh for free. One of many ways using JS for everything can subtly break things.

o11c

Even with JS, classical synchronous JS is much better than the modern blind push for async JS, which causes the browser to try to restore the scroll position before the JS has actually created the content.

endless1234

Still leaves the problem of not being able to simply send the current URL to someone else and know they'll see the same thing. Of course anchors can solve this, but not automatically

pests

Chrome (at least?) solves this via Text Fragments[0], which are a pure client-side thing and require no server or site support.

This URI for example:

https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...

Links to an instance of "The Referer", narrowed down via a start prefix (downgrade) and end prefix (origins).

These are used across Google I believe so many have probably seen them.

[0]https://developer.mozilla.org/en-US/docs/Web/URI/Reference/F...
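Building such a link is mostly a matter of assembling the `#:~:text=` fragment with the right delimiters. A sketch, assuming the fragment syntax described in the MDN page above (the function name and the example page URL are made up):

```javascript
// Sketch: build a text-fragment URL ("#:~:text=[prefix-,]start[,end][,-suffix]").
// Each piece is percent-encoded so literal commas and dashes in the text
// don't collide with the fragment's own delimiters.
function textFragmentUrl(pageUrl, { prefix, textStart, textEnd, suffix }) {
  const enc = (s) => encodeURIComponent(s).replace(/-/g, "%2D");
  const parts = [];
  if (prefix) parts.push(enc(prefix) + "-");
  parts.push(enc(textStart));
  if (textEnd) parts.push(enc(textEnd));
  if (suffix) parts.push("-" + enc(suffix));
  return pageUrl + "#:~:text=" + parts.join(",");
}
```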

divan

Also a reminder that "refresh" is just a code word for "restart (and often redownload) the whole bloody app". It's funny how in the web world people are so used to "refreshing" apps that they assume it's normal functionality (and not a failure mode).

fittom

I completely agree. In fact, I believe URL design should be part of UX design, and although I've worked with 30+ UX designers, I've never once received guidance on URLs.

mrexroad

As a UX designer that always gives guidance on URL design/strategy, I’ll say it’s not always well received. I’ve run into more than a few engineering or PM teams who feel that’s not w/in scope of design.

pyrolistical

As a dev mentor, one of my first lessons is that the one thing everybody has in common is design.

We all are trying to understand a problem and trying to figure out the best solution.

How each role approaches this has some low-level specializations, but the high-level learnings can be shared.

MattDaEskimo

I can understand "shareable" state (scroll position), but _as much as possible_ seems like overkill.

Why not just use localStorage?

layer8

> Why not just use localStorage?

So that I can operate two windows/tabs of the same site in parallel without them stealing each other’s scroll position. In addition, the second window/tab may have originated from duplicating the first one.

mejutoco

You could work around that if needed with a unique id per tab (I was curious myself)

https://stackoverflow.com/questions/11896160/any-way-to-iden...

phillipseamore

sessionStorage should treat the windows/tabs as separate
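A per-tab id along those lines can be sketched as follows. The storage object is injected so the sketch runs outside a browser; in a page you would pass `window.sessionStorage` (all names here are invented):

```javascript
// Sketch: a per-tab id kept in sessionStorage, which is scoped to the tab,
// so the id survives refresh but differs between tabs. Storage is injected
// to keep this testable; pass window.sessionStorage in a real page.
function getTabId(storage, random = () => Math.random().toString(36).slice(2)) {
  let id = storage.getItem("tabId");
  if (!id) {
    id = random();
    storage.setItem("tabId", id);
  }
  return id;
}
```

One caveat: duplicating a tab copies its sessionStorage, so the duplicate initially shares the same id.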

makeitdouble

> I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place.

The web has evolved a lot; as users we're seeing an incredible number of UX behaviors that make any single action take on different semantics depending on context.

On mobile in particular, there are many cases where getting back to the page's initial state the regular way is just a PITA, and refreshing the page is the fastest and cleanest action.

Some implementations of infinite scroll won't get you to the top of the content in any simple way. Some sites are a PITA regarding filtering and ordering, and you're stuck with choices buried inside collapsible blocks you don't even remember the location of. And there are myriad other situations where you just want the current page in a new, blank state.

The more you keep in the URL, the more of a chore resetting the UX becomes. Sometimes just refreshing is enough, sometimes cleaning the URL is necessary, sometimes you need to go back to the top and navigate back to the page you were on. And these are situations where the user is already frustrated over some other UX issue, so needing additional effort just to reset is adding insult to injury IMHO.

jraph

> I make sure that as much state as possible is saved in a URL

Do you have advice on how to achieve this (for purely client-side stuff)?

- How do you represent the state? (a list of key=value pair after the hash?)

- How do you make sure it stays in sync?

-- do you parse the hash part in JS to restore some stuff on page load and when the URL changes?

- How do you manage previous / next?

- How do you manage server-side stuff that can be updated client side? (a checkbox that's by default checked and you uncheck it, for instance)
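One possible answer to the questions above, sketched with the standard URLSearchParams API (the key names and the glue code are illustrative, not a prescription):

```javascript
// Sketch: client-side state kept as key=value pairs after the hash.
function readHashState(hash) {
  // hash is e.g. "#tab=settings&open=1"
  return Object.fromEntries(new URLSearchParams(hash.replace(/^#/, "")));
}

function writeHashState(state) {
  return "#" + new URLSearchParams(state).toString();
}

// In the browser, the glue would look roughly like:
//   const restore = () => render(readHashState(location.hash));
//   restore();                                      // on page load
//   window.addEventListener("hashchange", restore); // back/forward & manual edits
// To update: setting location.hash = writeHashState(next) creates a history
// entry (so previous/next walk through states); use history.replaceState
// with the same string to update without one.
```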

MPSimmons

One example I think is super interesting is the NWS Radar site, https://radar.weather.gov/

If you go there, that's the URL you get. However, if you do anything with the map, your URL changes to something like

https://radar.weather.gov/?settings=v1_eyJhZ2VuZGEiOnsiaWQiO...

Which, if you take the base64 encoded string, strip off the control characters, pad it out to a valid base64 string, you get

"eyJhZ2VuZGEiOnsiaWQiOm51bGwsImNlbnRlciI6Wy0xMTUuOTI1LDM2LjAwNl0sImxvY2F0aW9uIjpudWxsLCJ6b29tIjo2LjM1MzMzMzMzMzMzMzMzMzV9LCJhbmltYXRpbmciOmZhbHNlLCJiYXNlIjoic3RhbmRhcmQiLCJhcnRjYyI6ZmFsc2UsImNvdW50eSI6ZmFsc2UsImN3YSI6ZmFsc2UsInJmYyI6ZmFsc2UsInN0YXRlIjpmYWxzZSwibWVudSI6dHJ1ZSwic2hvcnRGdXNlZE9ubHkiOmZhbHNlLCJvcGFjaXR5Ijp7ImFsZXJ0cyI6MC44LCJsb2NhbCI6MC42LCJsb2NhbFN0YXRpb25zIjowLjgsIm5hdGlvbmFsIjowLjZ9fQ==", which decodes into:

{"agenda":{"id":null,"center":[-115.925,36.006],"location":null,"zoom":6.3533333333333335},"animating":false,"base":"standard","artcc":false,"county":false,"cwa":false,"rfc":false,"state":false,"menu":true,"shortFusedOnly":false,"opacity":{"alerts":0.8,"local":0.6,"localStations":0.8,"national":0.6}}

I only know this because I've spent a ton of time working with the NWS data - I'm founding a company that's working on bringing live local weather news to every community that needs it - https://www.lwnn.news/
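The general pattern (not NWS's exact code) is just JSON serialized to URL-safe base64 in a query param. A round-trip sketch, using Node's Buffer in place of the browser's btoa/atob:

```javascript
// Sketch: JSON state <-> URL-safe base64, as seen in the ?settings= param
// above. "base64url" avoids "+" and "/", which would need percent-encoding.
function encodeState(state) {
  return Buffer.from(JSON.stringify(state)).toString("base64url");
}

function decodeState(encoded) {
  return JSON.parse(Buffer.from(encoded, "base64url").toString("utf8"));
}
```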

asielen

In this case, why encode the string instead of just having the options as plain text parameters?

toxik

Sorry, but this is legitimately a terrible way to encode this data. The number 0.8 is encoded as base64-encoded ASCII decimals, and the bits 1 and 0 similarly. URLs should not be long, for many reasons: sharing, and keeping them from being cut off, among others.

linked_list

The URL spec already takes care of a lot of this, for example /shopping/shirts?color=blue&size=M&page=3 or /articles/my-article-title#preface
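The standard URL API already decomposes that example for you, with no custom parsing (the example URL itself is hypothetical):

```javascript
// Sketch: path segments for hierarchy, query params for filters, hash for anchors.
const url = new URL("https://example.com/shopping/shirts?color=blue&size=M&page=3#reviews");

const segments = url.pathname.split("/").filter(Boolean); // ["shopping", "shirts"]
const filters = Object.fromEntries(url.searchParams);     // { color: "blue", size: "M", page: "3" }
const anchor = url.hash;                                  // "#reviews"
```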

Waterluvian

The URL is a public facing interface. If anything goes into the URL, it should already be detailed in the design that the PR’d code is implementing.

notepad0x90

Deeplinking is awesome! The Azure portal is my favorite example. You could be many layers deep in some configuration "blade" and the URL will retain the exact location you are in the UI.

padolsey

I agree, and this reminds me: I really wish there was better URL (and DNS) literacy amongst the mainstream 'digitally literate'. It would help reduce risk of phishing attacks, allow people to observe and control state meaningful to their experience (e.g. knowing what the '?t=_' does in youtube), trimming of personal info like tracking params (e.g. utm_) before sharing, understanding https/padlock doesn't mean trusted. Etc. Generally, even the most internet-savvy age group, are vastly ill-equipped.

weikju

> Generally, even the most internet-savvy age group, are vastly ill-equipped.

It's a losing battle when even the tools (web browsers hiding URLs by default; heck, even Firefox on iOS does it now!) and companies (making posters with nothing more than QR codes or search terms) are what people are up against…

Lord-Jobo

And with commercial software like Outlook being so ubiquitous and absolutely HORRENDOUS with url obfuscation, formatting, “in network” contacts, and seemingly random spam filtering.

Our company does phishing tests like most, and their checklist of suspicious behavior is 1 to 1 useless. Every item on the list is either 1: something that our company actually does with its real emails or 2: useless because outlook sucks a huge wang. So I basically never open emails and report almost everything I get. I’m sure the IT department enjoys the 80% false report rate.


dzhar11

Recommendation:

https://github.com/Nanonid/rison

Super old but still a very functional library for saving state as JSON in the URL, but without all the usual JSON clutter. I first saw it used in Elastic's Kibana. I used it on a fancy internal React dashboard project around 2016, and it worked like a charm.

Sample: http://example.com/service?query=q:'*',start:10,count:10
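To show the flavor of the format, here is a much-simplified encoder in the spirit of rison. This is not the real library: real rison also handles arrays, booleans (`!t`/`!f`), escaping, and more.

```javascript
// Sketch of a rison-like encoding: objects become (k:v,...), strings are
// single-quoted unless they are plain identifiers, numbers stay bare.
// Simplified for illustration; use the actual rison library in practice.
function encodeRisonish(obj) {
  const val = (v) => {
    if (typeof v === "number") return String(v);
    if (typeof v === "object" && v !== null) return encodeRisonish(v);
    return /^[a-zA-Z_][a-zA-Z0-9_]*$/.test(v) ? v : `'${v}'`;
  };
  return "(" + Object.entries(obj).map(([k, v]) => `${k}:${val(v)}`).join(",") + ")";
}
```

The sample URL above uses rison's bare "o-rison" variant, which drops the outer parentheses.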

chaboud

If the URL is your state container, it also becomes a leakage mechanism of internals that, at the very least, turns into a versioning requirement (so an old bookmark won’t break things). That also means that there’s some degree of implicit assumption with browsers and multi-browser passing. At some point, things might not hold up (Authentication workflows, for example).

That said, I agree with the point and expose as much as possible in the URL, in the same way that I expose as much as possible as command line arguments in command line utilities.

But there are costs and trade offs with that sort of accommodation. I understand that folks can make different design decisions intentionally, rather than from ignorance/inexperience.

vbezhenar

When a system evolves, you need to change things. The state structure also evolves, and you will refactor and rework it: you'll rename things and move fields around.

A URL is considered a permanent string. You can break it, but that's a bad thing.

So keeping state in the URL will constrain you from evolving your system.

I think it's more appropriate to treat the URL like a protocol: you can encode some state parameters into it and decode the URL into state on page load. You could probably even version it, if necessary.

For very simple pages, storing entire state in the URL might work.
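The "URL as a versioned protocol" idea can be sketched as a chain of migrations applied on decode. The version numbers and field names here are invented for illustration:

```javascript
// Sketch: decoded URL state carries a version tag; old versions are
// migrated forward step by step until they reach the current shape.
const MIGRATIONS = {
  // v1 used "q"; v2 renamed it to "query" and made "page" explicit
  1: (s) => ({ version: 2, query: s.q, page: s.page ?? 1 }),
};

function decodeVersioned(state) {
  let s = state;
  while (MIGRATIONS[s.version]) s = MIGRATIONS[s.version](s);
  return s;
}
```

Each release only has to add one migration from the previous version, and arbitrarily old URLs still decode.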

oceanplexian

I think it depends on the permanence of the thing you’re keeping state for. For example for a blog post, you might want to keep it around for a long time.

But sometimes it's less obvious how to keep state encoded in a URL or otherwise (e.g., for the convenience of your users, do you want refreshing a feed to return the user to the marker point they were viewing? Or do you want to return to the latest point, since users expect a refresh action to give them a fresh feed?).

tomtomistaken

You can always do versioning.


alansaber

This is something you learn to appreciate when you do web scraping. I do overlook it for frontend webdev though

azangru

> Browsers and servers impose practical limits on URL length (usually between 2,000 and 8,000 characters) but the reality is more nuanced. As this detailed Stack Overflow answer explains, limits come from a mix of browser behavior, server configurations, CDNs, and even search engine constraints. If you’re bumping against them, it’s a sign you need to rethink your approach.

So what is the reality? The linked StackOverflow answer claims that, as of 2023, it is "under 2000 characters". How much state can you fit into under 2000 characters without resorting to tricks for reducing the number of characters for different parameters? And what would a rethought approach look like?

djoldman

Each of those characters (aside from the domain) could be any of 66 unique ones:

   Uppercase letters: A through Z (26 characters)

   Lowercase letters: a through z (26 characters)

   Digits: 0 through 9 (10 characters)

   Special: - . _ ~ (4 characters)

So you'd get a lot of bang for your buck if you really wanted to encode a lot of information.
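As an illustration of that density, boolean options can be packed into a bitmask and written in a compact base instead of spelling each flag out. The flag names below echo the NWS example upthread but the scheme itself is invented:

```javascript
// Sketch: pack booleans into a bitmask, then render it in base 36.
// Six flags fit in at most two URL-safe characters, versus dozens of
// characters for key=true&key2=false style params.
const FLAGS = ["county", "state", "menu", "animating", "rfc", "artcc"];

function packFlags(opts) {
  let bits = 0;
  FLAGS.forEach((name, i) => { if (opts[name]) bits |= 1 << i; });
  return bits.toString(36);
}

function unpackFlags(s) {
  const bits = parseInt(s, 36);
  return Object.fromEntries(FLAGS.map((name, i) => [name, Boolean(bits & (1 << i))]));
}
```

The trade-off, as noted elsewhere in the thread, is that the URL stops being human-readable and the flag order becomes part of your public interface.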

croes

Unless you have some kind of mapping to encode different states with different character blocks, your possibilities are much more limited. Think storing product IDs or EANs plus the number of items. Just hope the user isn't on a shopping spree.

mrbonner

I believe draw.io achieves complete state persistence solely through the URL. This allows you to effortlessly share your diagrams with others by simply providing a link that contains an embedded Base64-encoded string representing the diagram’s data. However, I’m uncertain whether this approach would qualify as a “state container” according to the definition presented in the article.

jakegmaths

The latest version of Microsoft Teams is absolutely terrible at this... just one URL for everything. No way to bookmark even a particular team.

nonethewiser

>If you need to base64-encode a massive JSON object, the URL probably isn’t the right place for that state.

Why?

I get it if we're talking about a size that flirts with browser limitations. But other than that I see absolutely no problem with this. In fact it makes me think the author is actually underrating the use-case of URL's as state containers.