
The New Three-Tier Application


64 comments

·March 18, 2025

davedx

I think people in this industry make using complicated, powerful paradigms part of their identity. They don’t feel like they’re important unless they’re reaching for N-tier architecture or exotic databases or lambdas or whatever else it is.

Most apps I’ve worked on could have been a monolith on postgres but they never ever are as soon as I’m not the sole engineer.

ebiester

Architecture is a function of the number of people in the system. How do you manage 100 people in a monolith? 250? What if one group gets to a broken state but another group needs to release to escape a broken state?

Architecture is often solving a human problem. That said, too many teams break out way too early.

lucianbr

Often the way they attempt to manage 100 people is to split the monolith into a distributed monolith. Now you have all the same problems plus some new ones, but hey, we're "managing" the human problem.

And considering they somehow muddle along, with one person sometimes breaking everything for the other 99, and all the other problems, I think they could very well muddle along with a monolith. With 100 or however many programmers.

Yes, the distributed system with well-thought-out splits into services would be an improvement. But it's clearly not a necessity. So it remains that some places, at least, use it for some other reason - fad, cargo culting, whatever.

Architecture should be solving the human or other problems, definitely. But how often it does... I guess each with their own experience.

localghost3000

It took me a bit to realize the author is selling me something. I guess good job there sir.

I’ve built a bunch of distributed architectures. In every case I did, we would have been better served with a monolith architecture and a single relational DB like Postgres. In fact I’ve only worked on one system that had the kind of scale that would justify the additional complexity of a distributed architecture. Ironically that system was a monolith with Postgres.

anonzzzies

They are basically advocating that (postgres replaces everything else normally used) however they need to add in enterprisey stuff.

ambicapter

> In fact I’ve only worked on one system that had the kind of scale that would justify the additional complexity of a distributed architecture. Ironically that system was a monolith with Postgres.

This...doesn't seem to support your case at all? Maybe if you'd turned all those distributed architectures into monoliths you would've then thought the distributed architecture was justified (since you have a 1 of 1 case where that was the case).

I'm guessing the truth is somewhere in the middle, but unfortunately it's not very useful to the reader to say "well, some systems are better distributed, some systems are better as monolith". The interesting question is which is which.

localghost3000

> This...doesn't seem to support your case at all?

Hm ok well I am not sure what you mean but it's the internet so... <shrug>

What I am saying here is that you would be shocked at how far you can get with a simpler architecture. Distributed systems have massive trade-offs and are the kind of thing you shouldn't do unless you are FORCED to.

szundi

Your experience suggests that monoliths on Postgres might be the key to winning in the market.

geophile

Orchestration tier. Oy.

So something goes wrong, and you need to back out an update to one of your microservices. But that back-out attempt goes wrong. Or happens after real-world actions have been based on that update you need to back out. Or the problem that caused a backout was transient, everything turns out to be fine, but now your backout is making its way across the microservices. Back out the backout? What if that goes wrong? The "or"s never end.

Just use a centralized relational database, use transactions, and be done with it. People not understanding what can go wrong, and how RDB transactions can deal with a vast subset of those problems -- that's like the 21st century version of how to safely use memory in C.

Yes, of course, centralized RDBs with transactions are sometimes the wrong answer, due to scale, or genuinely non-atomic update requirements, or transactions spanning multiple existing systems. But I have the sense that they are often rejected for nonsensical reasons, or not even considered at all.
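The point about transactions can be made concrete with a small sketch. This is a hypothetical two-account transfer, using Python's built-in sqlite3 so the example is self-contained (in production this would be Postgres, as the commenter suggests); the table and function names are made up for illustration. Both updates commit together or neither does, which is exactly the guarantee the orchestration tier is trying to rebuild by hand:

```python
import sqlite3

# Hypothetical schema: move funds between two accounts atomically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    # One transaction: commits on success, rolls back on any exception.
    with conn:
        cur = conn.execute(
            "UPDATE accounts SET balance = balance - ? "
            "WHERE name = ? AND balance >= ?",
            (amount, src, amount))
        if cur.rowcount != 1:
            raise ValueError("insufficient funds")  # triggers rollback
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE name = ?",
            (amount, dst))

transfer(conn, "alice", "bob", 30)
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 70, 'bob': 30}
```

If the second update (or anything else inside the `with` block) fails, the debit is rolled back automatically; there is no backout, and no backout of the backout.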

shermantanktop

I mostly agree. But I work at a place where scale precludes that, and as a result relational concepts are sneered at. It turns out that pulling atomicity concerns into a pile of Java code leads to consistency problems…

mmastrac

Company hawking an orchestrating backend server says you should use an orchestrating backend server?

You still have four layers, it's just that one is hidden with annotations.

tomhallett

When they finally get to their implementation, I'm not even sure what it is. Like how is the following different from a library, like acts_as_state_machine (https://github.com/aasm/aasm). Are they auto running the retries - in a background job which they handle (like “serverless” background job)?

“Implementing orchestration in a library connected to a database means you can eliminate the orchestration tier, pushing its functionality into the application tier (the library instruments your program) and the database tier (your workflow state is persisted to Postgres).“

jedberg

> Are they auto running the retries - in a background job which they handle (like “serverless” background job)?

Yes. The library makes sure that your app retries failed workflows. Or if you use the commercial products, takes care of that for you.
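The mechanism being described can be illustrated with a toy (this is not the DBOS API; the step names and the in-memory `checkpoints` dict standing in for workflow state in Postgres are made up). Each step's result is checkpointed, so re-running a failed workflow skips completed steps and retries only the one that failed:

```python
import functools

checkpoints = {}  # stands in for workflow state persisted to Postgres

def step(fn):
    # Run a step at most once: if a saved result exists, return it instead.
    @functools.wraps(fn)
    def wrapper(*args):
        key = (fn.__name__, args)
        if key not in checkpoints:
            checkpoints[key] = fn(*args)
        return checkpoints[key]
    return wrapper

calls = []

@step
def charge_card(order_id):
    calls.append("charge")
    return f"charged-{order_id}"

@step
def send_email(order_id):
    calls.append("email")
    if len(calls) < 3:                 # simulate a failure on the first attempt
        raise RuntimeError("SMTP timeout")
    return f"emailed-{order_id}"

def workflow(order_id):
    charge_card(order_id)
    return send_email(order_id)

try:
    workflow(42)                        # first run: the email step fails
except RuntimeError:
    pass
result = workflow(42)                   # retry: charge is skipped, email retried
print(result, calls)  # emailed-42 ['charge', 'email', 'email']
```

The customer is charged exactly once even though the workflow ran twice; the retry itself is what the library (or the commercial service) automates for you.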

dventimi

"In the beginning (that is, the 90’s), developers created the three-tier application. Per Martin Fowler, these tiers were the data source tier, managing persistent data, the domain tier, implementing the application’s primary business logic, and the presentation tier, handling the interaction between the user and the software. The motivation for this separation is as relevant today as it was then: to improve modularity and allow different components of the system to be developed relatively independently."

Immediately, I see problems. Martin Fowler's "Patterns of Enterprise Application Architecture" was first published in 2002, a year that I think most people will agree was not in "the 90's." Also, was that the motivation? Are we sure? Who had that motivation? Were there any other motivations at play?

edoceo

Well, Martin's book came out after we were doing these patterns in the 90s. My teams had that motivation - data worked with logic; logic worked with UI teams. Separation of concerns and division of labour are, generally, good ideas.

ETA: one of the groups that was motivated was MS: use SQL Server + SP ; then COM in the Logic layer and then ASP in the UI.

jmull

Yes. I was happy when Fowler came out because we could all start using the same terminologies for the same things, and work from common concepts when solving the same problem.

(It didn't work out that way, though. It seemed like most people used Fowler as some kind of bible or ending point, when it should have been a starting point/ source of inspiration. Somehow it seemed to turn people's brains off, making them dumber and less insightful about the systems they were building.)

tomnipotent

> one of the groups that was motivated was MS

I remember Microsoft being a huge marketing proponent of the 3-tier architecture in the late 90's, particularly after the release of ASP. The model was promoted everywhere - MSDN, books, blogs, conferences. At this point COM was out of the picture and ASP served as both front-end (serving HTML) and back-end (handling server responses).

recursivedoubts

Every day we stray further from God’s light.

falcor84

Pray tell, what software architecture does God's light shine brightest on?

infinitezest

The one you're not currently using, of course.

pphysch

The GP created HTMX, so possibly he would argue for an even simpler 2-tier or 2.5-tier architecture that addresses frontend complexity

baq

To err is human

falcor84

My favorite take on this is:

To err is human; to bring down the whole cluster with one bad command is DevOps.

mtillman

This takes me back. http://errtheblog.com/

floathub

To iterate is human.

bryanrasmussen

to recurse, devilish.

chrisweekly

To forgive, divine

ptx

> In the beginning (that is, the 90’s), developers created the three-tier application. [...] Of course, application architecture has evolved greatly since the 90's. [...] This complexity has created a new problem for application developers: how to coordinate operations in a distributed backend? For example: How to atomically perform a set of operations in multiple services, so that all happen or none do?

This doesn't seem like a correct description of events. Distributed systems existed in the 90s and there was e.g. Microsoft Transaction Server [0] which was intended to do exactly this. It's not a new problem.

And the article concludes:

> This manages the complexity of a distributed world, bringing the complexity of a microservice RPC call or third-party API call closer to that of a regular function call.

Ah, just like DCOM [1] then, just like in the 90s.

[0] https://en.wikipedia.org/wiki/Microsoft_Transaction_Server

[1] https://en.wikipedia.org/wiki/Distributed_Component_Object_M...

smithkl42

I only ever played with DCOM and Transaction Server, and never in production, but I do wonder what about that tech stack made it so absolutely unworkable, and such a technological dead-end? Did anyone ever manage to make it work?

politelemon

I haven't noticed the same trend or evolution of application tiers; perhaps we live in different echo chambers. Teams using microservices need to evaluate whether it's still a good fit considering the inherent overhead it brings. Applying a bandaid solution on top of it, if it isn't a good fit, only makes the problem worse.

bazizbaziz

Workflows/orchestration/reconciliation-loops are basically table stakes for any service that is solving significant problems for customers. You might think you don't need this, but when you start needing to run async jobs in response to customer requests, you will always eventually implement one of the above solutions.

IMO the next big improvement in this space is improving the authoring experience. In short, when it comes to workflows, we are basically still writing assembly code.

Writing workflows today is done in either a totally separate language (Step Functions), function-level annotations (Temporal, DBOS, etc.), or event/reconciliation loops that read state from the DB/queue. In all cases, devs must manually determine when state should be written back to the persistence layer. This adds a level of complexity most devs aren't used to and shouldn't have to reason about.

Personally, I think the ideal here is writing code in any structure the language supports, and having the language runtime automatically persist program state at appropriate times. The runtime should understand when persistence is needed (i.e. which API calls are idempotent and for how long) and commit the intermediate state accordingly.
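A toy sketch of that ideal (not any shipping product): write the workflow as a plain generator, and let a driver replay persisted step results so the developer never decides when state is written back. This is roughly the replay technique that engines like Temporal use internally; the step names and the in-memory `log` standing in for a Postgres table are made up for illustration:

```python
def order_workflow(order_id):
    # Each yield marks a durable step; the code reads like ordinary logic.
    receipt = yield ("charge_card", order_id)
    label = yield ("print_label", order_id)
    return (receipt, label)

def run(workflow_fn, arg, log, handlers):
    # Replay completed step results from the log; execute (and persist)
    # only the first step that has no recorded result yet.
    gen = workflow_fn(arg)
    try:
        request = next(gen)
        i = 0
        while True:
            if i < len(log):
                result = log[i]               # replay a completed step
            else:
                name, a = request
                result = handlers[name](a)    # first execution of this step
                log.append(result)            # "persisted" automatically
            i += 1
            request = gen.send(result)
    except StopIteration as stop:
        return stop.value

attempts = []

def flaky_label(oid):
    attempts.append(oid)
    if len(attempts) == 1:
        raise RuntimeError("printer offline")  # fail the first try
    return f"label-{oid}"

handlers = {"charge_card": lambda o: f"receipt-{o}", "print_label": flaky_label}
log = []                                       # stands in for a Postgres table

try:
    run(order_workflow, 7, log, handlers)      # first run: label step fails
except RuntimeError:
    pass                                       # the charge result survived in log

result = run(order_workflow, 7, log, handlers)
print(result, log)  # ('receipt-7', 'label-7') ['receipt-7', 'label-7']
```

On the second run the card is not charged again: its result is replayed from the log, and only the failed step re-executes. The author's point is that the runtime, not the developer, chose the persistence points.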

recroad

I like building one-tier applications in Elixir.

whilenot-dev

Following the Getting Started[0] section it seems like DBOS requires the configuration of a Postgres-compatible database[1] (NOTE: DBOS currently only supports Postgres-compatible databases.). Then, after decorating your application functions as workflow steps[2], you'll basically run those workflows by spawning a bunch of worker threads[3] next to your application process.

Isn't that a bit... unoptimized? The orchestrator domain doesn't seem to be demanding on compute, so why aren't they making proper use of asyncio here in the first place? And why aren't they outsourcing their runtime to an independent process?

EDIT:

So "To manage this complexity, we believe that any good solution to the orchestration problem should combine the orchestration and application tiers." (from the article) means that your application runtime will also become the orchestrator for its own workflow steps. Is that a good solution?

EDIT2:

Are they effectively just shifting any uptime responsibility (delivery guarantees included) to the application process?

[0]: https://github.com/dbos-inc/dbos-transact-py/tree/a3bb7cb6dd...

[1]: https://docs.dbos.dev/python/reference/configuration#databas...

[2]: https://github.com/dbos-inc/dbos-transact-py/blob/a3bb7cb6dd...

[3]: https://github.com/dbos-inc/dbos-transact-py/blob/a3bb7cb6dd...

jedberg

The point is that your application already has uptime responsibilities, so why not build the orchestration right into it instead of adding another service that will have its own uptime responsibilities?

whilenot-dev

Well, my application servers are usually designed stateless to provide sub-second responses, whereas orchestration workflows can take up to hours. I usually scale my workers differently than my REST APIs, as their failure scenario looks quite different: an unresponsive orchestration engine might just delay its jobs (inconsistent, outdated data), whereas an unavailable API won't provide any responses at all (no data).

How would that work in a microservice architecture anyway? Does each service have some part of the orchestration logic defined? Or will I end up writing a separate orchestration engine as one service anyway? Wouldn't that then contradict the promise of the article?

ape4

Just mentioning MVC (Model-View-Controller) https://developer.mozilla.org/en-US/docs/Glossary/MVC