
Triptych Proposals


80 comments

January 6, 2025

alexpetros

Co-author here! I'll let the proposal mostly speak for itself but one recurring question it doesn't answer is: "how likely is any of this to happen?"

My answer is: I'm pretty optimistic! The people on WHATWG have been responsive and offered great feedback. These things take a long time but we're making steady progress so far, and the webpage linked here will have all the status updates. So, stay tuned.

ksec

Thank you for the work. It is tedious and takes a long time. I know we are getting some traction at WHATWG.

But do we know if Google or Apple have shown any interest? In the end you could still have WHATWG's blessing with Chrome / Safari not supporting it.

theptip

How much would HTMX internals change if these proposals were accepted? Is this a big simplification or a small amount of what HTMX covers?

Similarly, any interesting ways you could see other libraries adopting these new options?

recursivedoubts

i don't think it would change htmx at all, we'd probably keep the attribute namespaces separate just to avoid accidentally stomping on behavior

i do think it would reduce and/or eliminate the need for htmx in many cases, which is a good thing: the big idea w/htmx is to push the idea of hypermedia and hypermedia controls further, and if those ideas make it into the web platform so much the better

paulddraper

This covers a lot of the common stuff.

This is native HTMX, or at least a good chunk of the basics.

philosopher1234

Is it possible to see their feedback? Is it published somewhere public?

divbzero

When I was reading “The future of htmx” blog post which is also being discussed on HN [1], the “htmx is the new jQuery” idea jumped out at me. Given that jQuery has been gradually replaced by native JavaScript [2], I wondered what web development could look like if htmx is gradually replaced by native HTML.

Triptych could be it, and it’s particularly interesting that it’s being championed by the htmx developers.

[1]: https://news.ycombinator.com/item?id=42613221

[2]: https://youmightnotneedjquery.com/

tomashm

> I wondered what web development could look like if htmx is gradually replaced by native HTML

This perspective seems to align closely with how the creator of htmx views the relationship between htmx and browser capabilities.

1. https://www.youtube.com/watch?v=WuipZMUch18&t=1036s

2. https://www.youtube.com/watch?v=WuipZMUch18&t=4995s

recursivedoubts

this is a set of proposals by Alex Petros, on the htmx team, to move some of the ideas of htmx into the HTML spec. He has begun work on the first proposal, allowing HTML to access PUT, DELETE, etc.

https://alexanderpetros.com/triptych/form-http-methods

This is going to be a long term effort, but Alex has the stubbornness to see it through.

croemer

Congrats, you seem to be a co-author of the proposal as well, right?

recursivedoubts

i help alex out a bit, but he's the main author

emmanueloga_

In the meantime, I found that enabling page transitions is a progressive-enhancement tweak that can go a long way toward making HTML replacement unnecessary in a lot of cases.

1) Add this to your css:

    @view-transition { navigation: auto; }
2) Profit.

Well, not so fast haha. There are a few details that you should know [1].

* Firefox has not implemented this yet but it seems likely they are working on it.

* All your static assets need to be properly cached to make the best use of the browser cache.

Also, prefetching some links on hover, like those on a navbar, is helpful.

Add a css class "prefetch" to the links you want to prefetch, then use something like this:

    document.addEventListener("mouseover", ({ target }) => {
      // The hovered node may be a child of the link, so walk up to the anchor.
      const link = target.closest?.("a.prefetch");
      if (!link) return;
      link.classList.remove("prefetch"); // only prefetch each link once

      const linkElement = document.createElement("link");
      linkElement.rel = "prefetch";
      linkElement.href = link.href;
      document.head.appendChild(linkElement);
    });
There's more work on prefetching/prerendering going on but it is a lil green (experimental) at the moment [2].

--

1: https://developer.mozilla.org/en-US/docs/Web/CSS/@view-trans...

2: https://developer.mozilla.org/en-US/docs/Web/API/Speculation...

alexpetros

In many cases, browsers will also automatically perform a "smooth" transition between pages if your caching settings are done well, as described above. It's called paint holding. [0]

One of the driving ideas behind Triptych is that, while HTML is insufficient in a couple key ways, it's a way better foundation for your website than JavaScript, and it gets better without any effort from you all the time. In the long run, that really matters. [1]

[0] https://developer.chrome.com/blog/paint-holding [1] https://unplannedobsolescence.com/blog/hard-page-load/

Dan42

No, please, just no.

The idea of using PUT, DELETE, or PATCH here is entirely misguided. Maybe it was a good idea, but history has gone in a different direction so now it's irrelevant. About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems. This inconsistency led to unpredictable failures, varying by website, network, and the specific proxy or caching software in use.

The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.

Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing. The entire internet infrastructure operates on these semantics, with little to no consideration for other HTTP verbs. Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.

Please let's just use what already works. GET for reading, POST for writing. That’s all we need to define transport behavior. Any further differentiation—like what kind of read or write—is application-specific and should be decided by the endpoints themselves.

Even the <form> element’s "action" attribute is built for this simplicity. For example, if your resource is /tea/genmaicha/, you could use <form method="post" action="brew">. Voilà, relative URLs in action! This approach is powerful, practical, and aligned with the infrastructure we already rely on.
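The relative-URL resolution described above can be checked directly with the `URL` constructor (the tea paths here are just the comment's hypothetical resource):

```javascript
// "brew" resolved against a directory-style resource path stays under that resource.
const brewed = new URL("brew", "https://example.com/tea/genmaicha/");
console.log(brewed.pathname); // "/tea/genmaicha/brew"

// The trailing slash matters: without it, the last path segment is replaced.
const replaced = new URL("brew", "https://example.com/tea/genmaicha");
console.log(replaced.pathname); // "/tea/brew"
```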

Let’s not overcomplicate things for the sake of theoretical perfection. KISS.

alexpetros

> About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems.

This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the rollback to be temporary. [0]

> The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.

This is also incorrect. The organic evolution we actually have is that servers widely support the standardized method semantics in spite of the incomplete browser support. [1] When provided with the opportunity to take advantage of additional methods in the client (via libraries), developers use them, because they are useful. [2][3]

> Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing.

What you're describing isn't the de facto standard, it is the actual standard. GET is for reading and POST is for writing. The actual standard also includes additional methods, namely PUT, PATCH, and DELETE, which describe useful subsets of writing, and our proposal adds them to the hypertext.

> Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.

You're not making an actual argument here, just asserting that it takes time—I agree—and that it has no value—I disagree, and wrote a really long document about why.

[0] https://alexanderpetros.com/triptych/form-http-methods#ref-6

[1] https://alexanderpetros.com/triptych/form-http-methods#rest-...

[2] https://alexanderpetros.com/triptych/form-http-methods#usage...

[3] https://alexanderpetros.com/triptych/form-http-methods#appli...

tinthedev

It looks wonderful, but adoption will be a thoroughly uphill battle, be it from browsers or from existing designs and implementations on the web.

I'll be first in line to try it out if it ever materializes, though!

ttymck

Looks really pragmatic and I'd be glad to see this succeed.

Is anyone able to credibly comment on the likelihood that these make it into the standard, and what the timeline might look like?

recursivedoubts

Alex is working on it now and we have contacts in the browser teams. I’m optimistic but it will be a long term (decades) project.

KronisLV

Good luck!

The partial page replacement in particular sounds like it might be really interesting and useful to have as a feature of HTML, though ofc more details will emerge with time.

Unless it ended up like PrimeFaces/JSF where more often than not you have to finagle some reference to a particular table row in a larger component tree, inside of an update attribute for some AJAX action and still spend an hour or two debugging why nothing works.

mg

What is the upside of

    <button action="/users/354" method="DELETE"></button>
over

    <button action="/users/delete?id=354"></button>

?

bryanrasmussen

Everybody has already pointed out the problem with GETTING a deletable resource, but I figured I would add this (and maybe someone will remember extra specifics).

Around 2007 or so there was a case where a site was using GET to delete user accounts. Of course, you had to be logged in to the site to do it, so what was the harm, the devs thought. However, a popular extension made by Google started prefetching GET requests for users, so coming to the account page where you could theoretically delete your account ended up deleting the account.

It was pretty funny, because I wasn't involved in either side of the fight that ensued.

I would provide more detail than that, but I'm finding it difficult to search for it, I guess Google has screwed up a lot of other stuff since then.

on edit: my memory must be playing tricks on me; I think it was more around 2010 or 2011 that this happened. At first I thought it happened before I started working at Thomson Reuters, but now I think it must have happened within my first couple of years there.

JadeNB

JimDabell earlier recalled 37Signals and the Google Web Accelerator that sounds like what you mean: https://news.ycombinator.com/item?id=42619712

bryanrasmussen

yes, that's it. Thanks.

thayne

A GET request to `/users/delete?id=354` is dangerous. In particular, it is more vulnerable to a CSRF attack, since a form on another domain can just make a request to that endpoint, using the user's cookies.

It's possible to protect against this using various techniques, but they all add some complexity.

Also, the former is more semantically correct in terms of HTTP and REST.

hnbad

An important consideration is also that browsers may prefetch GET requests.

alexpetros

Hey there, good question! Probably worth reading both sections 6 and 7 for context, but I answer this question specifically in section 7.2: https://alexanderpetros.com/triptych/form-http-methods#ad-ho...

croemer

HTTP/1.1 spec, section 9.1.1 Safe Methods:

> Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.

> In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval.

See the "GET scenario" section of https://owasp.org/www-community/attacks/csrf to learn why ignoring the HTTP spec can be dangerous.

Or this blog post: https://knasmueller.net/why-using-http-get-for-delete-action...

AndrewHampton

What HTTP method would you expect the second example to use? `GET /users/delete?id=354`?

The first has the advantage of being a little clearer at the HTTP level with `DELETE /users/354`.

mg

GET, because that is also the default for all other elements, I think: form, a, img, iframe, video...

Ok, but what is the advantage of being "clear at the HTTP level"?

necubi

GET shouldn't be used for a delete action, because it's specified as a safe method[0], which means essentially read-only. On a practical level, clients (like browsers) are free to cache and retry GET requests, which could lead to deletes not occurring or occurring when not desired.

[0] https://datatracker.ietf.org/doc/html/rfc7231#section-4.2.1

JimDabell

That means I can make you delete things by embedding that delete URL as the source of an image on a page you visit.

GET is defined to be safe by HTTP. There have been decades of software development that have happened with the understanding that GETs can take place without user approval. To abuse GET for unsafe actions like deleting things is a huge problem.

This has already happened before in big ways. 37Signals built a bunch of things this way and then the Google Web Accelerator came along, prefetching links, and their customers suffered data loss.

When they were told they were abusing HTTP, they ignored it and tried to detect GWA instead of fixing their bug. Same thing happened again, more things deleted because GET was misused.

GET is safe by definition. Don’t abuse it for unsafe actions.

lionkor

Well, it's correct, so it's likely to be optimized correctly, to aid in debugging, to make testing easier and clearer, and generally just to be correct.

Correctness is very rarely a bad goal to have.

Also, of course, different methods have different rules, which you know as an SE. For example, PUT, PATCH, and DELETE have very different semantics in terms of repeatability of requests.

recursive

GETs have no side effects, by specification. DELETEs can have side effects.

recursivedoubts

implied idempotence

LegionMammal978

I'd say deleting a user is pretty idempotent: deleting twice is the same as deleting once, as long as you aren't reusing IDs or something silly like that. It's more that GET requests shouldn't have major side effects in the first place.
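The distinction can be made concrete with a toy sketch (the store and handlers are hypothetical, not from the proposal):

```javascript
// DELETE is idempotent: repeating it converges on the same final state.
const users = new Map([[354, "genmaicha-fan"]]);

function deleteUser(id) {
  users.delete(id); // second call is a no-op
  return users.size;
}

console.log(deleteUser(354)); // 0
console.log(deleteUser(354)); // 0  (deleting twice == deleting once)

// By contrast, an operation like this is NOT idempotent: every call moves state.
let visits = 0;
function logVisit() {
  return ++visits;
}

console.log(logVisit()); // 1
console.log(logVisit()); // 2
```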

andrewflnr

> giving buttons to ability

Might want to fix that. :)

tln

I haven't seen the proposal, but buttons can already set the form method (and action, and more). So I guess the "Button HTTP Requests" will just save the need to nest one tag?

    <form><button type="submit" formaction="/session" formmethod="DELETE"></button></form>
    <form action="/session" method="DELETE"><button type="submit"></button></form>

andrewflnr

To be clear, I was referring to the minor typo.

jjcm

This proposal also includes the ability to update a target DOM element with the response from that delete action.


Devasta

It's genuinely incredible that we are more than 20 years since the takeover of HTML from the W3C and there isn't anything in the browser approaching even one tenth of the capability of XForms.

I wish the people behind this initiative luck and hope they succeed but I don't think it'll go anywhere; the browser devs gave up on HTML years ago, JavaScript is the primary language of the web.