
30 years of <br> tags

41 comments

· December 13, 2025

1718627440

> Every page on your site needed the same header, the same navigation, the same footer. But there was no way to share these elements. No includes, no components.

That's not completely true. Web servers have Server Side Includes (SSI) [0]. And if you don't want to rely on that, 'cat header body > file' isn't really that hard.

[0] https://web.archive.org/web/19970303194503/http://hoohoo.ncs...
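For reference, the classic SSI form looked roughly like this (a sketch; the fragment path is a placeholder, and whether you need a .shtml extension depends on server configuration):

```html
<!-- with SSI enabled (e.g. Apache's mod_include), the server expands
     this directive at request time; "/header.html" is hypothetical -->
<!--#include virtual="/header.html" -->
```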

Gualdrapo

I think they meant that from a vanilla HTML standpoint

bigstrat2003

Sure, but later in the article it says that when PHP came out it solved the problem of not being able to do includes. Which again... server-side includes predate PHP. I think that this is just an error in the article any way you slice it. I assume it was just an oversight, as the author has been around long enough that he almost certainly knows about SSI.

1718627440

If they insist on only using vanilla HTML then the problem is unsolved to this day. I think it is actually less solved now, since back then HTML was an SGML application, so you could supply another DTD and have macro-expansion on the client.

alehlopeh

HTML frames let you do this way back in the day

pimlottc

The article mentions that in the very next sentence

> You either copied and pasted your header into every single HTML file (and god help you if you needed to change it), or you used <iframe> to embed shared elements. Neither option was great.

tannhaeuser

HTML was invented as an SGML vocabulary, and SGML and thus also XML has entities/text macros you can use to reference shared documents or fragments such as shared headers, footers, and site nav, among other things.
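As a sketch of that mechanism (file names hypothetical), this is what an external entity looks like in XML; an entity-resolving processor splices header.xml in wherever &header; appears:

```xml
<!DOCTYPE html [
  <!ENTITY header SYSTEM "header.xml">
]>
<html>
  <body>
    &header;
    <p>Page content...</p>
  </body>
</html>
```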

notatallshaw

> At one company I worked at, we had a system where each deploy got its own folder, and we'd update a symlink to point to the active one. It worked, but it was all manual, all custom, and all fragile.

The first time I saw this I thought it was one of the most elegant solutions I'd ever seen working in technology. Safe to deploy the files, atomic switch over per machine, and trivial to rollback.

It may have been manual, but I'd worked with deployment processes that involved manually copying files to dozens of boxes and following a 10-to-20-step sequence of manual commands on each box. Even when I first got to use automated deployment tooling at that company, it was fragile, opaque and a configuration nightmare, built primarily for OS installation of new servers and then forced to work with applications.

toast0

> It may have been manual

It's pretty easy to automate a system that pushes directories and changes symlinks. I've used and built automation around the basic pattern.
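A minimal sketch of that pattern (all paths are illustrative; `mv -T` is GNU coreutils, and the rename(2) underneath is what makes the switch atomic):

```shell
#!/bin/sh
set -eu
app=/tmp/app-demo                              # hypothetical app root
release="$app/releases/$(date +%Y%m%d%H%M%S)"  # one folder per deploy
mkdir -p "$release"
echo "v2" > "$release/index.html"              # stand-in for the new build

ln -s "$release" "$app/current.tmp"            # stage the new symlink
mv -T "$app/current.tmp" "$app/current"        # atomic switch via rename(2)
# rollback = repoint "current" at the previous release directory
```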

PaulDavisThe1st

I wanted this to end with something like:

"... and through it all, the humble <br> tag has continued playing its role ..."

1970-01-01

Very nice article.

However, some very heavy firepower was glossed over.. TLS/HTTPS gave us the power to actually buy things and share secrets. The WWW would not be anywhere near this level of commercialized if we didn't have that in place.

1718627440

> For that, you needed CGI scripts, which meant learning Perl or C. I tried learning C to write CGI scripts. It was too hard. Hundreds of lines just to grab a query parameter from a URL. The barrier to dynamic content was brutal.

That's folk wisdom, but is it actually true? "Hundreds of lines just to grab a query parameter from a URL."

    #include <stdlib.h>  /* getenv */
    #include <string.h>  /* strstr, strchr, strlen; strdup/strndup are POSIX */

    /*@null@*/
    /*@only@*/
    char *
    get_param (const char * param)
    {
        const char * query = getenv ("QUERY_STRING");
        if (NULL == query) return NULL;

        char * begin = strstr (query, param);
        if ((NULL == begin) || (begin[strlen (param)] != '=')) return NULL;
        begin += strlen (param) + 1;

        char * end = strchr (begin, '&');
        if (NULL == end) return strdup (begin);

        return strndup (begin, end-begin);
    }
In practice you would probably parse all parameters at once and maybe use a library.

I recently wrote a survey website in pure C. I considered Python first, but due to having written an HTML generation library earlier, it was quite a cakewalk in C. I also used the CGI library of my OS, which admittedly was some of the worst code I've ever refactored, but afterwards it was quite nice. Also, SQLite is awesome. In the end I statically linked it, so I got a single binary to upload anywhere. I don't even need to set up a database file; that is done by the program itself. It can also be tested without a webserver, because the CGI library supports passing variables over stdin. My program then outputs the webpage on stdout.

So my conclusion is: CRUD websites in C are easy and actually a breeze. Maybe that also has my previous conclusion as a prerequisite: HTML represents a tree and string interpolation is the wrong tool to generate a tree description.

flanfly

Good showcase. Your code will match the first parameter that has <param> as a suffix, not necessarily <param> exactly (username=blag&name=blub will return blag). It also doesn't handle any percent encoding.

null

[deleted]

stouset

Further, when retrieving multiple parameters, you have a Shlemiel-the-painter algorithm.

https://www.joelonsoftware.com/2001/12/11/back-to-basics/

1718627440

Thanks, he's a good author. I also like to read him. Honestly, not parsing the whole query string at once feels kind of dumb. To quote myself:

> In practice you would probably parse all parameters at once and maybe use a library.

1718627440

> Your code will match the first parameter that has <param> as a suffix, no necessarily <param> exactly

Depending on your requirements, that might be a feature.

> It also doesn't handle any percent encoding.

This does literal matches, so yes, you would need to pass the param already percent-encoded. This is a trade-off I made, not for that case, but for similar issues. I don't like non-ASCII in my source code, so I would want to encode it in some way anyway.

But you are right, you shouldn't put this into a generic library. Whether it suffices for your project or not, depends on your requirements.

stouset

This exact mindset is why so much software is irreparably broken and riddled with CVEs.

Written standard be damned; I’ll just bang out something that vaguely looks like it handles the main cases I can remember off the top of my head. What could go wrong?

recursive

Ampersands are ASCII, but also need to be encoded to be in a parameter value.

bryanlarsen

> HTML represents a tree and string interpolation is the wrong tool to generate a tree description.

Yet 30 years later it feels like string interpolation is the most common tool. It probably isn't, but still surprisingly common.

1718627440

Which is really sad. This is the actual reason why I preferred C over Python[*] for that project, so I could use my own library for HTML generation, which does exactly that. It also ameliorates the `goto cleanup;` thing, since now you can just tell the library to throw subtrees away. And the best thing is that you can MOVE and COPY them, which means you can generate code once, then fill it with the data, and still modify it later.

[*] I mean yeah, I could have written a wrapper, but that would have taken far more time.

ripe

What a comprehensive, well-written article. Well done!

The author traces the evolution of web technology from Notepad-edited HTML to today.

My biggest difference with the author is that he is optimistic about web development, while all I see is a shaky tower of workarounds upon workarounds.

My take is that the web technology tower is built on the quicksand of an out-of-control web standardization process that has been captured by a small cabal of browser vendors. Every single step of history that this article mentions is built to paper over some serious problems instead of solving them, creating an even bigger ball of wax. The latest step is generative AI tools that work around the crap by automatically generating code.

This tower is the very opposite of simple and it's bound to collapse. I cannot predict when or how.

rsync

"Virtual private servers changed this. You could spin up a server in minutes, resize it on demand, and throw it away when you were done. DigitalOcean launched in 2011 ..."

The first VPS provider, circa fall of 2001, was "JohnCompanies" handing out FreeBSD jails advertised on metafilter (and later, kuro5hin).

These VPS customers needed backup. They wanted the backup to be in a different location. They preferred to use rsync.

Four years later I registered the domain "rsync.net"[1].

[1] I asked permission of rsync/samba authors.

emilbratt

I'm only halfway through, but just wanted to share that I love this kind of write-up.

dansjots

What an incredible article. More than its impressive documented scope and detail, I love it foremost for conveying what the zeitgeist felt like at each point in history. This human element is something usually only passed on by oral tradition and is very difficult to capture in cold, academic settings.

It’s fashionable to dunk on “how did all this cloud cruft become the norm”, but seeing a continuous line in history of how circumstances developed upon one another, where each link is individually the most rational decision in its given context, makes them an understandable misfortune of human history.

outofmyshed

This is a great overview of web tech as I more or less recall it. Pre-PHP CGI wasn't a big deal, but it was more fiddly and you had to know and broadly understand Apache. mod_perl & FastCGI made it okay. Only masochists wrote CGI apps in compiled languages. PHP made building screwy web apps low-effort and fun.

I bugged out of front-end dev just before jquery took off.

GoatOfAplomb

Fantastic read. I did most of my web development between 1998 and 2012. Reading this gave me both a trip down memory lane and a very digestible summary of what I've missed since then.

Kuyawa

> All I needed was Notepad, some HTML, and an FTP client to upload my files

That's what I still do 30 years later

brianwawok

Does your site resemble the ux of hacker news and craigslist?