
Datastar: Web Framework for the Future?

andersmurphy

If you want a solid demo of what you can do with datastar, check out this naive multiplayer game of life I wrote earlier in the week. It sends down 2500 divs every 200ms to all connected clients via compressed SSE.

https://example.andersmurphy.com/

CharlesW

Is sending 12,500 divs/sec the right solution for this problem, or is this an "everything looks like a nail" solution?

andersmurphy

It's deliberately naive. But brotli and a tuned compression window over SSE give it a 150-250:1 compression ratio; combined with Datastar's rendering speed, you can get away with it.

The reason it's naive is that although you can use datastar to drive SVG, a canvas, or even a game engine, the minute you do, people think you are doing magic game-dev sorcery and dismiss your demo. I wanted to show that your average CRUD app with a bunch of divs is going to do just fine.

I break it down in this post.

https://andersmurphy.com/2025/04/07/clojure-realtime-collabo...

wavemode

This is a Wirth's Law solution - the reasoning goes: "computers are fast enough to deal with it, so why not?"

andersmurphy

But, you can learn a lot doing dumb stuff. I learnt a lot about compression.

If you open Chrome's dev tools and throttle the site to 3G, it will still run fine.

Rendering on the server like this will be faster for low-end devices than rendering on the client (as the client doesn't have to run or simulate the game). It just gets raw HTML that it has to render.

Effectively, the bulk of the work on the client is done by the browser's native rendering and decompression code.

The other thing that might not be obvious is that brotli compression is not set to 11, it's set to 5, so the CPU cost is similar to gzip. The compression advantage comes from compressing the SSE stream as a whole. Tuning the shared window size costs memory on client and server but gives you a compression ratio of 150-250:1 (vs 30:1), at the cost of 263kb on both server and client (for context, gzip has a fixed window of 32kb). This not only saves bandwidth and makes the game run smoothly on 3G, it also massively reduces CPU cost on both client and server. So it can run on lower-end devices than a client-heavy browser app.
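A minimal sketch of that setup in Go (the demo itself is Clojure, so this is illustrative only), using the github.com/andybalholm/brotli package: quality 5 keeps CPU cost close to gzip, and LGWin 18 gives a window of roughly 256kb on both ends, so each new frame compresses against earlier frames in the same SSE stream. The route and payload here are hypothetical.

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/andybalholm/brotli"
)

func sseHandler(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	// A real handler would check r.Header.Get("Accept-Encoding") before doing this.
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Content-Encoding", "br")
	w.Header().Set("Cache-Control", "no-cache")

	// Quality 5 ≈ gzip-level CPU; LGWin 18 is a ~256kb shared window, so
	// near-identical frames compress to a handful of bytes each.
	bw := brotli.NewWriterOptions(w, brotli.WriterOptions{Quality: 5, LGWin: 18})

	for frame := 0; ; frame++ {
		// Illustrative payload: the real demo sends the full grid of divs.
		fmt.Fprintf(bw, "data: <div id=\"board\">frame %d</div>\n\n", frame)
		bw.Flush()      // push buffered brotli output to the HTTP writer
		flusher.Flush() // push it over the wire immediately

		select {
		case <-r.Context().Done():
			return
		case <-time.After(200 * time.Millisecond):
		}
	}
}

func main() {
	http.HandleFunc("/game", sseHandler)
	http.ListenAndServe(":8080", nil)
}
```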

So server-driven web apps are better for low-end devices, the same way you can watch YouTube on a low-end phone but not play some games.

sudodevnull

I'm not of that opinion at all. I'm all about optimization, but then people will say "that's not how web pages are made". Can't win opinions, but I can say Datastar is not the bottleneck. You send as much, as often, as you choose.

danesparza

"Sends down 2500 divs every 200ms to all connected cliends via compressed SSE."

If I didn't know better, I'd say this was an April Fool's joke.

sudodevnull

It's a DOM render stress test. It's trying to show the network is not the bottleneck. TL;DR: try this in React or any other framework, compare it to a one-time tiny shim, and see if you get better results.

kaycebasques

Wow, I've never done multiplayer GoL. Simple yet addictively fun. LONG LIVE THE ORANGE CIVILIZATION!!

edit: damn, purple civilization got hands

sudodevnull

May the Yellows never forget

jgalt212

your server logs are going to be an unintelligible mess. This framework will be a yuge money maker for AWS CloudWatch.

andersmurphy

Tell me more!

dalmo3

Reading tfa I kept wondering "is this yet another framework where every click is a server round trip?" Judging by the demos¹, the answer is yes?

If this is "the Future", I'm branching off to the timeline where local-first wins.

¹. https://data-star.dev/examples/click_to_edit

tauroid

Counterexample with just local signals: https://data-star.dev/guide/getting_started#data-on

tipiirai

A JavaScript framework built by a person who hates JavaScript doesn't sound right

hsbauauvhabzb

With additional swipes at ecosystems and 'must be written in Go', with no real justification as to _why_ beyond the developer's preference

throwaway519

Robust performance, error handling that's not stuck in 1982, and cross-platform support would be my guesses, but I agree the OP could be more specific as there are more benefits.

mdhb

I am not in any way "pro Go", but it's also very clear that JS is not the future. I know it's where a LOT of people are right now, but it's been artificially pumped up to such a massive degree by being literally the only viable choice for the web for the entirety of its existence… and that's starting to change. From a technical, performance, and developer-experience standpoint, it is going to lose when that advantage goes away.

sudodevnull

Use whatever backend language you want. No justification needed when it's agnostic, even, shocker, JS

fbn79

Every time I read "Web Framework" I run.

Ripley: These techs are here to protect you. They're frameworks.

Newt: It won't make any difference.

andersmurphy

Here's the thing: datastar isn't really a framework in the traditional sense (Ruby on Rails); you can bring your own backend and use it in a variety of ways. I use it in a push-based CQRS style, but you can just as easily do request/response, hell, even polling if that's your thing.
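As a rough illustration of that push-based style (a hypothetical Go sketch, not the author's setup or Datastar's SDK): a hub owns the connected clients, every state change is rendered once and fanned out over SSE, and a plain request/response handler can sit alongside it untouched.

```go
package main

import (
	"net/http"
	"sync"
)

// hub fans rendered HTML fragments out to every connected SSE client.
type hub struct {
	mu      sync.Mutex
	clients map[chan string]struct{}
}

func newHub() *hub { return &hub{clients: make(map[chan string]struct{})} }

// broadcast is called once per state change, regardless of client count.
func (h *hub) broadcast(html string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	for ch := range h.clients {
		select {
		case ch <- html:
		default: // drop updates for slow clients instead of blocking everyone
		}
	}
}

func (h *hub) subscribe() chan string {
	ch := make(chan string, 8)
	h.mu.Lock()
	h.clients[ch] = struct{}{}
	h.mu.Unlock()
	return ch
}

func (h *hub) unsubscribe(ch chan string) {
	h.mu.Lock()
	delete(h.clients, ch)
	h.mu.Unlock()
}

// serveSSE streams every broadcast to one client as an SSE event.
func (h *hub) serveSSE(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	ch := h.subscribe()
	defer h.unsubscribe(ch)
	for {
		select {
		case <-r.Context().Done():
			return
		case html := <-ch:
			w.Write([]byte("data: " + html + "\n\n"))
			if f, ok := w.(http.Flusher); ok {
				f.Flush()
			}
		}
	}
}

func main() {
	h := newHub()
	http.HandleFunc("/updates", h.serveSSE)
	// A regular request/response handler can trigger pushes to everyone else.
	http.HandleFunc("/click", func(w http.ResponseWriter, r *http.Request) {
		h.broadcast(`<div id="counter">clicked</div>`)
	})
	http.ListenAndServe(":8080", nil)
}
```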

sudodevnull

Our free shared fly.io box was not built to handle Hacker News. We are looking into alternatives, but in the meantime check out https://andersmurphy.com/2025/04/07/clojure-realtime-collabo... as it's the same tech but on a slightly better machine.

zamalek

I think the happy place is somewhere in-between. Use JS to allow the user to build up a request/form (basically DHTML circa 2000), but use one of these hypermedia frameworks when interacting with the server. I think that these are successfully showing that BFFs were a mistake.

infamia

idk if I'd put it quite that strongly. https://data-star.dev/examples/dbmon

Also, multiplayer for free on every page due to SSE (if you want it).

sudodevnull

Datastar author here... AMA, but know that Datastar is pure yak shaving for me to do real work stuff so I have no golden calves, just approaches I've seen work at scale.

buangakun

Hello, I've heard of Datastar before but didn't really pay attention to it since all the air in the room was sucked up by HTMX.

I tried HTMX and I found that it is really, really hard to manage complexity once the codebase gets big.

Is there an example of Datastar being used with Go in a highly interactive application that goes beyond just a TODO app so I could see how the project structure should be organized?

theboywho

What do you think about the Hotwire stack (Stimulus, Turbo) as compared to Datastar?

sudodevnull

Not a fan (andersmurphy is actually better at explaining this than me). However! I've been working with Micah from the Turbo.js team on idiomorph ideas. We are already multiples faster than v0.7.3 right now and getting faster each day. Together we can solve the core ideas even if we disagree on the top-level API.

andersmurphy

So I've used Turbo 8 to make a multiplayer app with a non-Rails backend. It was a struggle; the docs are incomplete. I mostly pieced things together from what others have done and written about, which is ironic considering your point about having a massive community behind it. The nice thing about Turbo 8 is morph (it uses idiomorph, like datastar). It's also got a pretty simple refresh model.

However, you quickly realise the limitations. You can even see this in the Turbo 8 demo (see this issue https://github.com/basecamp/turbo-8-morphing-demo/issues/9). You can try to fix it with `data-turbo-permanent`, but you'll then run into another issue: you can't clear that field without resorting to JavaScript. Which brings me to the next thing: I found I was still writing quite a bit of JavaScript with Turbo. Just as HTMX pushes you to use alpine.js/hyperscript, Turbo pushes you to use Stimulus.js.

Turbo.js is not push based, it's mostly polling based. Even when you push a refresh event, it prompts the client to re-fetch the data. Sure, this is elegant in that you re-use your regular handlers, but it's a performance nightmare as you stampede your own server. It also prohibits you from sharing renders between clients (which is what opens up some of the really cool stuff you can do with datastar).

I was using Turbo.js with SSE, so no complaints there. But most Turbo implementations use websockets (which, if you have any experience with websockets, is just a bad time: messages can be dropped, no auto reconnect, not regular HTTP, proxies and firewalls can block it, etc.).

Finally, according to the docs Turbo Native doesn't let you use stream events (which is what gives you access to refresh and other multiplayer features).

I like Turbo; I'd use it over React if I were using Rails. I use it for my static blog to make the navigation feel snappy (Turbo Drive). It gives you a lot without you having to do anything. But the minute you start working on day-2 problems and you are not using Rails, the shine fades pretty quickly. There are three ways to do things: frames, streams, and morph. None of them are enough to stop you having to import Stimulus or Alpine, and honestly it's just a bit of a mess.

If you need help with Turbo, the best blog posts are from Radan Skoric (https://radanskoric.com/archives/).

Specifically these:

https://radanskoric.com/articles/turbo-morphing-deep-dive-id...

https://radanskoric.com/articles/turbo-morphing-deep-dive

I think he's also got a book on Turbo he's releasing soon (if you go with Turbo, it's probably worth getting).

Those posts helped me grok Turbo 8 morph and are ultimately what sold me on datastar. Morph, signals, and SSE are all you need.

As for mobile, I'll just wrap it in a webview (as an ex native mobile dev, I can tell you it will lead to a lower-maintenance app than native or React Native).

TLDR: datastar solves all the problems I ran into with Turbo and more. It's faster, smaller, and simpler, with more examples and better docs, and it's easier to use.

vb-8448

Doesn't it make stateful the whole stack?

postepowanieadm

So how are your server bills? Does Datastar support caching/prerendering?

andersmurphy

So, being on the front page of Hacker News twice in 24 hours: the multiplayer game of life is running on a $15.59/month 4-core AMD, 8GB RAM shared VPS (Hetzner) at only about 30% load. That's with a Clojure backend running very naive code.

mattgreenrocks

I believe it. Part of the issue, I suspect, is that generation JS truly does not understand how fast the old-school GC'd runtimes (e.g. JVM/CLR) are at this point.

CharlesW

The TODOS mini application at data-star.dev is slow and doesn't work correctly for me (checking/unchecking items isn't reliable). To me, this highlights one common problem I've seen with frameworks that insist on doing everything on the server.

sudodevnull

UPDATE: I have no idea why fly.io hates the TODO, but https://example.andersmurphy.com/ is a decent example (that's way more fun) that's running now. I'm commenting out that demo until I have more time to investigate. If y'all find other ones that are acting up, please let me know. Looks like it might be time to actually host this thing on a real server.

tevon

Agreed, I have gig internet and a hardwired connection and still get more lag than I'd want from a web app.

Potentially could be solved with some client side cache but still..

sudodevnull

Yeah, something is DEFINITELY up. This is not the norm; we haven't seen this before. Fly.io's free tier is not happy and I'm not sure why (we've been on it for years at this point). I'm gonna disable it until I can dig deeper. Have day job stuff to attend to; this is not my ideal Friday afternoon :P

smallerfish

If you're on shared CPU you probably got throttled. Dig into the grafana dashboard and you'll see it somewhere...it's not nearly prominent enough in their UI.

sudodevnull

Yeah, I'm seeing that too. We're getting ready for V1 and I probably missed a test around the Todo. My fault, didn't think we'd get hit by Hacker News on a free shared fly.io server. I'll look into it now.

tasqyn

I have the fastest internet in the whole country and I couldn't add a new todo; deleting a todo item is also very slow.

macmac

Link?

CharlesW

> data-star.dev

macmac

Oh, I see what happened: the todo app was removed from the front page.

nz3000

I've been working with datastar for a bit now and have really been enjoying it. If you are looking to try it out, I created a boilerplate template that distills some of the examples from the datastar site to get up and running with:

https://github.com/zangster300/northstar/tree/main

dpc_01234

This matches 100% my experience and thoughts.

I really enjoy HTMX and it's a blessing for my small-scale reactive web interfaces, but I can immediately tell: "Well, this is hard to organize in a way that will scale well with complexity. It works great now, but I can tell where the limits are." And when I had to add alpine.js to do client-side reactivity, it immediately became obvious that I'd love to have both sides (backend and frontend) unified.

Still need more time and opportunities to roll some stuff with datastar, but ATM I'm convinced datastar is the way to go.

For reference, my typical "web tech stack": Rust, axum, maud, datastar, redb.

naasking

> And when I had to add alpine.js to do client-side reactivity, it immediately became obvious that I'd love to have both sides (backend and frontend) unified.

https://alpine-ajax.js.org/

resonious

Nitpicking but

> SSE enables microsecond updates, challenging the limitations of polling in HTMX.

How is this true? SSE is just the server sending a message to the client. If server and client are on opposite sides of the world, it will not be a matter of microseconds...

ivanjermakov

Reminds me of the joke "hey, check out the website I just made: localhost:8080"

andersmurphy

You can have microsecond updates: once the connection is established you can stream, regardless of your latency.

Say your ping is 100 (units are irrelevant here). It will take 100 before you see your first byte, but if the server is sending updates down that connection you will have data at whatever rate the server can send it. Say the server sends every 10.

Then you will have updates on the client at 100, 110, 120, 130, etc.

xyzzy_plugh

That's still 100 irrelevant units later than the server sent the update. This is like saying the first byte of the packet takes 100ms to arrive but the subsequent bytes in the packet are instant!

It's not quite right. You'll never have updates in microseconds even if your ping is, say, 7ms.

At best you can be ~2-4x as fast as long polling on HTTP/1 -- an order of magnitude is a ridiculous statement.

sudodevnull

You're right! 200-400% faster is so useless.

sudodevnull

Well obviously there's a difference between latency and throughput. Of course it's going to be microseconds plus your RTT/2. Sorry, we can't beat physics.

zdragnar

> Sorry, we can't beat physics.

In a way, you can with optimistic updates. That requires having a full front end stack, though, and probably making the app local-first if you really wanted to hammer that nail.

There's always the cost of the round trip to verify, which means planning a solid roll-back user experience, but it can be done.

recursive

Everyone knows no one can beat physics. That doesn't excuse claiming you can beat physics.

andersmurphy

Latency doesn't affect server update rate; it affects time to first data. I can have a ping of 500ms and still get an update from a stock ticker every 5 milliseconds. They will arrive at 500, 505, 510, etc.

throwaway519

Can't beat physics but can write better copy.

andersmurphy

Latency doesn't affect server update rate; it affects time to first data.

65

https://www.youtube.com/watch?v=0K71AyAF6E4

I found this talk really interesting. It's a cool framework for very interactive applications.

thanhnguyen2187

Really well-written and well-structured post! I'll seriously evaluate Datastar in my next toy project because of the author's praise!

For people who are looking for HTMX alternatives, I think Alpine AJAX is another choice if you are already using AlpineJS

sudodevnull

Ian is great, if you want progressive enhancement it would be my go-to every time!

Mister_Snuggles

The section on the author's background could have almost been written by me. I'm also a PeopleSoft developer, and the ability to build fully-functional CRUD apps without needing to know about HTML, JavaScript, Browsers, etc, is severely underappreciated. For very simple CRUD pages, no code is required. For developing line-of-business apps it's actually an incredible toolset.

rodolphoarruda

I just wanted to say I love this entire "ecosystem", if I may call it. Hypermedia is cool. HTMX looks like the natural evolution of HTML we all were expecting from the 90s. The simplicity of hx tags and the fact they get the work done is really refreshing. Datastar looks promising as well. It is already on my radar for a hobby project. Kudos to the dev team!

j13n

This is the second post I’ve seen praising Datastar in the last 24 hours, and once again no mention of the requirement to punch a gaping hole in one’s Content-Security-Policy.

If this is the framework of the future, cyber criminals are going to have a bright future!

sudodevnull

That's the nature of anything that does this kind of work. React, Svelte, Solid. Alpine has a CSP version but it does so little that I recommend you just accept being a Web1 MPA basic site.

I have ideas for ways around this, but it would be per-language template middleware.

jazoom

Alpine CSP version works fine. You just can't write JS code in strings, which one may wish to avoid anyway.

I also didn't have a problem with CSP and HTMX.

Nor with SvelteKit.

I'm not sure why you think these are all equivalent to DataStar's hard requirement on unsafe-eval.

FYI, this is the reason I didn't try out DataStar.

pie_flavor

Svelte only requires a CSP hole in its default config as a standalone library; SvelteKit does proper CSP by default, and if you're not using SvelteKit you can build CSP handling into whatever you are using instead. I assume the others are the same way.

tauroid

Could you avoid eval by having a CSP mode that forces reactive expressions to only allow functions users have registered with datastar in a lookup table?

dpc_01234

Is there anything I could read for a detailed explanation of the issue, in particular w.r.t. datastar?

andersmurphy

Please don't cargo cult CSP without understanding it.

unsafe-eval constrained to function constructors, without inline scripts, is only a concern if you are rendering user-submitted HTML (the most common case I see is markdown). Regardless of your CSP configuration, you should be sanitizing that user-submitted HTML anyway.
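For readers who haven't touched CSP, here is a hedged Go sketch of the sort of policy being described (the exact directives are illustrative, not a recommendation): the Function constructor is allowed via 'unsafe-eval', while inline scripts and third-party script sources remain blocked.

```go
package main

import "net/http"

// withCSP wraps a handler and attaches an illustrative Content-Security-Policy:
// 'unsafe-eval' lets a library use the Function constructor, while inline
// <script> tags and scripts from other origins are still refused by the browser.
func withCSP(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Security-Policy",
			"default-src 'self'; script-src 'self' 'unsafe-eval'; object-src 'none'")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/", http.FileServer(http.Dir("./public")))
	http.ListenAndServe(":8080", withCSP(mux))
}
```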

max_

How does this compare to HTMX (security wise)?

sudodevnull

Same: you control your signals and fragments, so you are responsible for proper escaping and thoughtful design.
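As a hedged illustration of that responsibility (the fragment and helper names here are hypothetical, not a Datastar API): rendering fragments through Go's html/template escapes user-supplied text before it ever goes into an SSE event.

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// A fragment template: {{.Text}} is contextually HTML-escaped by html/template,
// so user input can't inject markup or script into the fragment.
var todoFragment = template.Must(template.New("todo").Parse(
	`<li id="todo-{{.ID}}">{{.Text}}</li>`))

func renderTodo(id int, userText string) (string, error) {
	var buf bytes.Buffer
	err := todoFragment.Execute(&buf, struct {
		ID   int
		Text string
	}{id, userText})
	return buf.String(), err
}

func main() {
	html, _ := renderTodo(1, `<img src=x onerror=alert(1)>`)
	fmt.Println(html) // the payload comes out as inert, escaped text
}
```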

j13n

You can disable all use of eval with htmx. The tradeoff is one has to write a bit more JavaScript.

https://news.ycombinator.com/item?id=43650921

sudodevnull

I have thoughts about a fully compliant CSP middleware; the problem is it's per-language, so I'd probably only make it for Go (maybe PHP & TS).

nchmy

could you please elaborate on this?