
Replacing a $3000/mo Heroku bill with a $55/mo server

speedgoose

Looking at the htop screenshot, I notice the lack of swap. You may want to enable earlyoom, so your whole server doesn't go down when a service goes bananas. The Linux Kernel OOM killer is often a bit too late to trigger.

You can also enable zram to compress RAM, so you can over-provision like the pros do. A lot of long-running software leaks memory that compresses pretty well.

Here is how I do it on my Hetzner bare-metal servers using Ansible: https://gist.github.com/fungiboletus/794a265cc186e79cd5eb2fe... It also works on VMs.
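For anyone who doesn't want to pull in Ansible, a minimal sketch of the same setup on Debian/Ubuntu (the package names and the 50% cap are my assumptions, not from the gist; run as root):

```shell
# Hypothetical minimal version of the setup above (Debian/Ubuntu, run as root)
apt-get install -y earlyoom zram-tools

# earlyoom starts killing the largest process before the kernel OOM killer stalls the box
systemctl enable --now earlyoom

# zram-tools reads /etc/default/zramswap; cap the compressed swap at ~50% of RAM
sed -i 's/^#\?PERCENT=.*/PERCENT=50/' /etc/default/zramswap
systemctl restart zramswap
```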

levkk

Yeah, no way. As soon as you hit swap, _most_ apps are going to have a bad, bad time. This is well known, so much so that all EC2 instances in AWS disable it by default. Sure, they want to sell you more RAM, but it's also just true that swap doesn't work for today's expectations.

Maybe back in the 90s, it was okay to wait 2-3 seconds for a button click, but today we just assume the thing is dead and reboot.

LaurensBER

The beauty of ZRAM is that on any modern-ish CPU it's surprisingly fast. We're talking 2-3 ms instead of 2-3 seconds ;)

I regularly use it on my Snapdragon 870 tablet (not exactly a top-of-the-line CPU) to prevent OOM crashes when running a load of tabs in Brave and a Linux environment (through Termux) at the same time; it's running an ancient kernel, and the Android OOM killer basically crashes the whole thing.

ZRAM won't save you if you do actually need to store and actively use more than the physical memory but if 60% of your physical memory is not actively used (think background tabs or servers that are running but not taking requests) it absolutely does wonders!

On most (web) app servers I happily leave it enabled to handle temporary spikes, memory leaks or applications that load a whole bunch of resources that they never ever use.

I'm also running it on my Kubernetes cluster. It allows me to set reasonable strict memory limits while still having the certainty that Pods can handle (short) spikes above my limit.

bayindirh

This is a mistaken belief, because a) SSDs make swap almost invisible, so you can keep that escape ramp in case something goes wrong, and b) swap space is no longer just an escape ramp that RAM overflows into.

In the age of microservices and cattle servers, reboot/reinstall might look cheap, but in the long run it is not. A long-running server, albeit cattle, is always the better solution because, especially with some excess RAM, the server "warms up" with all the hot data cached and becomes a low-latency unit in your fleet, given you pay the required attention to your software development and service configuration.

Secondly, the kernel swaps unused pages out to swap, relieving pressure on RAM. So swap is often used even if you're filling only 1% of your RAM. This leaves more room for hot data to be cached, allowing better resource utilization and performance in the long run.

So "eff it, we ball" is never a good system administration strategy, even if everything is ephemeral and can be rebooted in three seconds.

Sure, some things like Kubernetes force a "no swap, period" policy because pods get killed when pressure exceeds some value, but for more traditional setups it's still valuable.

gchamonlive

How programs use ram also changed from the 90s. Back then they were written targeting machines that they knew would have a hard time fitting all their data in memory, so hitting swap wouldn't hurt perceived performance too drastically since many operations were already optimized to balance data load between memory and disk.

Nowadays when a program hits swap, it's not going to fall back to a different memory usage profile that prioritises disk access. It's going to use swap as if it were actual RAM, so you get to see the program choking the entire system.

winrid

Exactly. Nowadays, most web services are run in a GC'ed runtime. That VM will walk pointers all over the place and reach into swap all the time.

henryfjordan

Does HDD vs SSD matter at all these days? I can think of certain caching use cases where swapping to an SSD might make sense, if the access patterns were "bursty" for certain keys in the cache.

winrid

It's still extremely slow and can cause very unpredictable performance. I have swap setup with swappiness=1 on some boxes, but I wouldn't generally recommend it.
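For reference, setting that knob looks like this (a sketch; needs root):

```shell
# Tell the kernel to swap only under severe memory pressure
sysctl -w vm.swappiness=1

# Persist the setting across reboots
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
```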

01HNNWZ0MV43FF

It's not just 3 seconds for a button click, every time I've run out of RAM on a Linux system, everything locks up and it thrashes. It feels like 100x slowdown. I've had better experiences when my CPU was underclocked to 20% speed. I enable swap and install earlyoom. Let processes die, as long as I can move the mouse and operate a terminal.

cactusplant7374

What's the performance hit from compressing ram?

YouAreWRONGtoo

It's sometimes not a hit, because CPUs have caches and memory bandwidth is the limiting factor.

aidenn0

Depends on the algorithm (and how much CPU is in use); if you have a spare CPU, the faster algorithms can more-or-less keep up with your memory bandwidth, making the overhead negligible.

And of course the overhead is zero when you don't page-out to swap.

speedgoose

I haven’t scientifically measured, but you don’t compress the whole ram. It is more about reserving a part of the ram to have very fast swap.

For an algorithm using the whole memory, that’s a terrible idea.

sokoloff

> It is more about reserving a part of the ram to have very fast swap.

I understand all of those words, but none of the meaning. Why would I reserve RAM in order to put fast swap on it?

waynesonfire

> zram, formerly called compcache, is a Linux kernel module for creating a compressed block device in RAM, i.e. a RAM disk with on-the-fly disk compression. The block device created with zram can then be used for swap or as a general-purpose RAM disk

To clarify OP's representation of the tool: it compresses swap space, not resident RAM. Outside of niche use cases, compressing swap has little overall utility.
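Concretely, the block-device plumbing the quoted text describes is only a few lines (a sketch; needs root and zram support in the kernel, and the 4G size and zstd choice are arbitrary picks of mine):

```shell
# Create a compressed swap device backed by RAM (sketch; run as root)
modprobe zram num_devices=1
echo zstd > /sys/block/zram0/comp_algorithm  # fast compressor; lz4 is another common pick
echo 4G > /sys/block/zram0/disksize          # uncompressed capacity the device exposes
mkswap /dev/zram0
swapon -p 100 /dev/zram0                     # higher priority than any disk-backed swap
```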

jdprgm

Just saw Nate Berkopec, who does a lot of Rails performance stuff, posting about the same idea yesterday, saying Heroku is 25-50x on price for performance, which is insane. They clearly have zero interest in competing on price.

It's a shame they don't just license their whole software stack at a reasonable price, with a model similar to Sidekiq's, and let you sort out actually decent hardware. It's insane to consider that Heroku has, if anything, gotten more expensive and worse compared to a decade ago, while similarly priced server hardware has gotten WAY better over that decade. $50 for a dyno with 1 GB of RAM in 2025 is robbery. It's even worse considering that running a standard Rails app hasn't changed dramatically from a resources perspective and, if anything, has become more efficient. It's comical to consider how many developers are shipping apps on Heroku for hundreds of dollars a month on machines with worse performance/resources than the MacBook they're developing on.

It's the standard playbook that damn near everything in society follows now, though: jacking up prices and targeting the wealthiest, least price-sensitive percentiles instead of making good products at fair prices for the masses.

czhu12

> It's a shame they don't just license all their software stack at a reasonable price with a similar model like Sidekiq and let you sort out actually decent hardware

We built and open sourced https://canine.sh for exactly that reason. There’s no reason PaaS providers should be charging such a giant markup over already marked up cloud providers.

nicoburns

This looks decent for what it is. I feel like there are umpteen solutions for easy self-hosted compute (and tbh even a plain Linux VM isn't too bad to manage). The main reason to use a PAAS provider is a managed database with built-in backups.

gregsadetsky

Fully agreed - our recommendation is to /not/ run your prod Postgres db yourself, but to use one of the many great dedicated options out there - Crunchy Data, Neon, Supabase, or AWS RDS..!

teiferer

It's insane how much my local shop charges for an oil change, I can do it much cheaper myself!

It's insane how much a restaurant charges for a decent steak, I can do it much cheaper myself!

...!

jdprgm

I know you mean this sarcastically, but I actually 100% agree on the steak point in particular. Especially with beef prices at all-time record highs and restaurant inflation being out of control post-pandemic. It takes so much of the enjoyment out of things for me if I feel I'm being ripped off left and right.

xmprt

Not the best comment, but I agree with the sentiment. Far too often, people complain about price when there are competitors/other cheaper options that could be used with a little more effort. If people cared so much, they would just use the alternative.

No one gets hurt if someone else chooses to waste their money on Heroku, so why are people complaining? Of course it applies in cases where there aren't a lot of competitors, but there are literally hundreds of different options for deploying applications, and at least a dozen of them are just as reliable as Heroku and cheaper.

raincole

It's just trendy to bash cloud and praise on-premises in 2025. In a few years that will turn around. Then in another few years it will turn around again.

g8oz

The price value proposition here seems similar to that of a stadium hot dog.

andrewstuart2

This argument doesn't work with such commoditized software. It's more like comparing an oil change for $100 plus an hour of research and a short drive against a convenient oil change right next door for $2,500.

Tiberium

The situation is interesting, and self-hosting is indeed often a very nice solution. However, I wanted to comment on the article itself - it seems to be very heavily AI-edited. Anyone who has spent time with LLMs will easily see it. But even that isn't the issue; the main issue is that the article is basically a marketing piece.

For example, the "Bridging the Gap: Why Not Just Docker Compose?" section is a 1:1 copy of the points in the "Powerful simplicity" on the landing page - https://disco.cloud/

And this blog post is the (only) case study that they showcase on their main page.

gregsadetsky

You're absolutely right! Here are some three points why:

- ...

I'm kidding :-)

Our library is open source, and we're very happy and proud that Idealist is using us to save a bit of cash. Is it marketing if you're proud of your work? :-) Cheers

colechristensen

There's a tone issue.

Marketing should be marketing, and clearly so. Tech blogs are about sharing information with the community (the Netflix Tech Blog is a good example), NOT selling something. Marketing masquerading as a tech blog is off-putting to a lot of people. People don't like being fooled with embedded advertising, and putting ad copy into such pieces is at best annoying.

https://netflixtechblog.com/

fragmede

Nah, people are stupid. Including me. It's all marketing. Netflix's tech blog is marketing to make engineers want to go work there and to promote their product. If you see things through the lens that all advertising is bad, you'll make your life miserable, because it's all advertising in one way or another.

tasuki

> But even that's not the issue; the main issue is that the article is basically a marketing piece.

Why is that an issue? Is it forbidden by HN guidelines? Or would you like all marketing to be marked as such? Which articles _aren't_ marketing, one way or another?

jdprgm

It's funny that they have this marketing blog post based on competing on price, yet don't disclose any of their pricing on their site, only a "schedule a meeting" link, which is just about the biggest RED FLAG on pricing there is.

gregsadetsky

Our library is open source, the price is 0!! :-) Haha

We're actually mostly talking to people (that "schedule a meeting") to see how we can help them migrate their stuff away (from Heroku, Vercel, etc.)

But we're not sure of the pricing model yet - probably Enterprise features like GitLab does, while remaining open source. It's a tough(er) balance than running a hosted service where you can "just" (over)charge people.

cirrus3

This isn't the first time an article is also marketing. Besides, what is wrong with marketing something via a use case article? This is a fairly tame example of it and I found it an interesting and useful read, knowing full well it was also marketing.

AstroBen

heh my first instinct was to go see how they're making money. Guess that's coming soon

tempest_

The cloud has made people forget how far you can get with a single machine.

Hosting staging envs in pricey cloud envs seems crazy to me but I understand why you would want to because modern clouds can have a lot of moving parts.

jeroenhd

Teaching a whole bunch of developers some cloud basics and having a few cloud people around is relatively cheap for quite a while. Plus, having test/staging/prod on similar configurations will help catch mistakes earlier. None of that "localstack runs just fine but it turns out Amazon SES isn't available in region antarctica-east-1". Then, eventually, you pay a couple of people's wages extra in cloud bills, and leaving the cloud becomes profitable.

Cloud isn't worth it until suddenly it is because you can't deploy your own servers fast enough, and then it's worth it until it exceeds the price of a solid infrastructure team and hardware. There's a curve to how much you're saving by throwing everything in the cloud.

nine_k

Deploying to your private cloud requires basically the same skills. Containers, k8s or whatnot, S3, etc. Operating a large DB on bare metal is different from using a managed DB like Aurora, but for developers, the difference is hardly visible.

rikafurude21

The cloud has made people afraid of Linux servers. The markup is essentially just the price business has to pay for developer insecurity. The irony is that self-hosting is relatively simple, and a lot of fun. Personally I never got the appeal of Heroku, Vercel and similar, because there's nothing better than spinning up a server and setting it up from scratch. Every developer should try it.

jampekka

> The irony is that self-hosting is relatively simple, and a lot of fun. Personally I never got the appeal of Heroku, Vercel and similar, because there's nothing better than spinning up a server and setting it up from scratch.

It's fun the first time, but becomes an annoying faff when it has to be repeated constantly.

In Heroku, Vercel and similar you git push and you're running. On a linux server you set up the OS, the server authentication, the application itself, the systemctl jobs, the reverse proxy, the code deployment, the ssl key management, the monitoring etc etc.
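As a rough illustration of how small that checklist can get on a plain server, here's a sketch covering the systemd, reverse-proxy and SSL items above; `myapp`, `example.com` and the port are placeholders, and Caddy is my stand-in for the proxy/TLS steps:

```shell
# Sketch: a systemd unit plus Caddy covering the restart/proxy/TLS items
# ('myapp' and 'example.com' are hypothetical names; run as root on a Debian-ish box)
cat > /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=myapp
After=network.target

[Service]
ExecStart=/srv/myapp/bin/server
Restart=always

[Install]
WantedBy=multi-user.target
EOF

cat > /etc/caddy/Caddyfile <<'EOF'
example.com {
    reverse_proxy localhost:3000  # Caddy obtains and renews TLS certs automatically
}
EOF

systemctl enable --now myapp
systemctl reload caddy
```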

I still do prefer a linux server due to the flexibility, but the UX could be a lot better.

teekert

I use NixOS and a lot of it is in a single file. I just saw some Ansible come by here, and although I have no experience with it, it looked a lot simpler than Nix (for someone from the old Linux world, like me… even though Nix is, looking through your eyelashes, just a pile of key/value pairs).

tbrownaw

And all of that takes, what, a week? As a one time thing?

YouAreWRONGtoo

Any idiot can automate all of that.

That means that everyone using those services is an even greater idiot.

daemonologist

I dunno, the cloud has mostly made me afraid of the cloud. You can bury yourself in towering complexity so easily on AWS. (The highly managed stuff like Vercel I don't have much experience with, so maybe it's different.)

ygouzerh

I'd recommend trying GCP or Azure; the complexity is lower there! AWS is great for big corporations that need a lot of Lego pieces to do their own custom setup. On the contrary, GCP and Azure solutions are often more bundled.

tempest_

It is way more than that though.

It offloads things like:

- Power usage
- Colo costs
- Networking (a big one)
- Storage (SSD wear / HDD pools)
- etc.

It's a long list, but what it doesn't let you do is make trade-offs, like spending way less but accepting downtime if your switch dies.

For a staging env these are things you might want to do.

sokoloff

> the price business has to pay because of developer insecurity

Is it mostly developer insecurity, or mostly tech leadership insecurity?

agumonkey

My take is that it's fun up until there's just enough brittleness and chaos: too many instances of the same thing, too many env variables set up by hand, and then fuzzy bugs start to pile up.

fragmede

Never got the appeal of having someone else do something for you, and giving them money, in exchange for goods and services? Vercel is easy. You pay them to make it easy. When you're just getting started, you start on easy mode before you jump into the deep end of the pool. Everybody's got a different cup of tea, and some like it hot and others like it cold.

rikafurude21

Sure I love having someone else do work for me and paying them for that, the question is if that work is worth a 50x markup.

odie5533

Fully replicating prod is helpful. Saves time since deployment is similar and does a better test of what prod will be.

teaearlgraycold

Completely agree. It’s not a staging server if it’s hosted on a different platform.

odie5533

I think OP is using these less as staging and more as dev environments for individual developers. That seems like a great use of a single server to me.

I'd still like a staging + prod, but keeping the dev environments on a separate beefy server seems smart.

hamdingers

The "platform" software runs on is just other software. If your prod environment is managed kubernetes then you don't lose much if your staging environment is self-hosted kubernetes.

jamestimmins

This could be the premise for a fun project based infra learning site.

You get X resources in the cloud and know that a certain request/load profile will run against it. You have to configure things to handle that load, and are scored against other people.

YouAreWRONGtoo

All it means is that the cloud doesn't work like a power socket, which was the whole point of it.

Things like Lambda do fit in this model, but they are too inefficient to model every workload.

Amazon lacks vision.

noosphr

The cloud was a good deal in 2006, when the smallest AWS machine was about the size of an OK dev desktop, and it took over two years of renting to justify buying the physical machine outright.

Today the smallest, and even the large, AWS machines are a joke: comparable to a mobile phone from 15 years ago, or to a terrible laptop today, and about three to six months of rent now equals buying the hardware outright.

If you're on the cloud without getting a 75% discount, you will save money and headcount by doing everything on-prem.

MangoCoffee

The cloud has made people forget that the internet is decentralized.

altcognito

The weird thing is the relationship between developer costs and operations costs. For startups that pay salaries, $3000 a month is a pittance!*

* The big caveat: If you don't incur the exact same devops costs that would have happened with a linux instance.

Many tools (containers in particular) have cropped up that have made things like quick, redundant deployment pretty straightforward and cheap.

andersa

The best part is when you start with a $3000/month cloud bill during development and finally realize that hosting the production instance this way would actually cost $300k/month, but now it's too late to change it quickly.

j45

Cloud often has everyone thinking it's still 2008.

tempest_

With some obvious exceptions, there isn't much you can't run on a 200-core machine wrt web services.

gregsadetsky

Heya, Disco is the open source PaaS I've been working on with my friend Antoine Leclair.

Lots of conversation & discussion about self-hosting / cloud exits these days (pros, cons, etc.) Happy to engage :-)

Cheers!

martinald

Just to be aware when you say "Even with all 6 environments and other projects running, the server's resource usage remained low. The average CPU load stayed under 10%, and memory usage sat at just ~14 GB of the available 32 GB."

The load average in htop counts runnable tasks across all cores, so you divide by the core count to get overall utilization. With 8 CPU cores like in your screenshot, a load average of 0.1 is actually 1.25% (10% / 8) of total CPU capacity - even better :).
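The conversion is simple enough to sanity-check (the load and core values below are hypothetical):

```shell
# Normalize a load average to % of total CPU capacity: load / cores * 100
awk -v load=0.8 -v cores=8 'BEGIN { printf "%.1f%% of total capacity\n", load / cores * 100 }'
# prints: 10.0% of total capacity
```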

Cool blog! I've been having so much success with this type of pattern!

gregsadetsky

Sharp eye! Thanks. Fixed

bstsb

what does this service offer over an established tool like Coolify? currently hosting most of my services on a cheap Hetzner VPS so i'm interested what Disco has to offer

alberth

Or Dokku, Dokploy or CapRover

Would be great to have a comparison on the main page of Disco

gregsadetsky

Coolify and other self-hosting options such as Kamal are great. We're all in the same boat!

I'd say the main differences are that we 1) offer a more streamlined CLI and UI rather than extensive app/installation options, and 2) have an API-key-based system that lets team members collaborate without having to manage SSH access/keys.

Generally speaking, I'd say our approach and tooling/UX tends to be more functional/pragmatic (like Heroku) than one with every possible option.

odie5533

There's quite a few now. Coolify, Dokku, CapRover, Kamal.

null

[deleted]

ksajadi

It is clear that Heroku is not interested in reducing their prices. But I don't think this is a Heroku problem. Vercel is the same, which makes me think there is a fundamental issue with the PaaS business model that stops it from competing on price, even while the commoditised part of the business (data centers) keeps reducing its prices.

The challenge I always face with homebrew PaaS solutions is that you always end up moving from managing your app to managing your PaaS.

This might not be true right now but as complexity of your app grows it’s almost always the eventual outcome.

IshKebab

On the other hand for $3k/month you can just hire someone to do it for you (part time at least, but I doubt it's remotely a full-time job).

zachlatta

We've had a similar experience at Hack Club, the nonprofit I run that helps high schoolers get into coding and electronics.

We used to be on Heroku and the cost wasn't just the high monthly bill - it was asking "is this little utility app I just wrote really worth paying $15/month to host?" before working on it.

This year we moved to a self-hosted setup on Coolify and have about 300 services running on a single server for $300/month on Hetzner. For the most part, it's been great and let us ship a lot more code!

My biggest realization is that for an organization like ours, we really only need 99% uptime on most of our services (not 99.99%). Most developer tools are built around helping you reach 99.99% uptime. When you realize you only need 99%, the world opens up.
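The gap between those targets is worth spelling out: the downtime budget is (100 − target)/100 of the period, so over a 30-day month:

```shell
# Downtime budget per 30-day month (43200 minutes) for each availability target
for target in 99 99.9 99.99; do
  awk -v t="$target" 'BEGIN { printf "%s%% uptime allows %.1f minutes down per month\n", t, (100 - t) / 100 * 30 * 24 * 60 }'
done
```

That works out to roughly 7 hours of allowed downtime per month at 99%, versus about 4 minutes at 99.99%, which is why the tooling for the latter is so much heavier.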

Disco looks really cool and I'm excited to check it out!

gregsadetsky

Cheers, let me know if you do / hop onto our Discord for any questions.

We know of two similar cases: a bootcamp/dev school in Puerto Rico that lets its students deploy all of their final projects to a single VPS, and a Raspberry Pi that we've set up at the Recurse Center [0] which is used to host (double checking now) ~75 web projects. On a single Pi!

[0] https://www.recurse.com/

IshKebab

300 services?? What do they all do?

swanson

I guess I'm not quite understanding why you need six staging servers provisioned at $500 a pop? And if you need that because you have a large team...what percentage of your engineering spend is $3000 vs $100k+/yr salaries?

Especially when I go look at the site in question (idealist.org) and it seems to be a pretty boring job board product.

gregsadetsky

6 staging servers: main, dev, and any branches that you want to let other (non tech people) QA.

As for the staging servers, each deployment was a mix of Performance-M dynos, multiple Standard dynos, RabbitMQ, a large enough database, etc. - it adds up quickly.

Finally, Idealist serves ~100k users per day - behind the product is a lot of boring tech that makes it reliable & fast. :-)

odie5533

From what I read, they're using them as dev environments. Like running many services at once for a single developer to tie into. That's why they wanted multiple ones, one for each dev.

ygouzerh

Yes, everyone forgets to include man-days in the cost calculation.

merelysounds

The article's title seems inaccurate - as far as I understood there never was a $3000/mo bill; there was a $500/(mo,instance) staging setup that has been rightly optimized to $55/mo before running six instances.

> Critically, all staging environments would share a single "good enough" Postgres instance directly on the server, eliminating the need for expensive managed database add-ons that, on Heroku, often cost more than the dynos themselves.

Heroku also has cheaper managed database add-ons, why not use something like that for staging? The move to self hosting might still make sense, my point is that perhaps the original staging costs of $500/mo could have been lower from the start.

gregsadetsky

I answered elsewhere with the list of dynos, but the short version is that between the list of services that each deployment required, and the size of the database, it truly (and unfortunately) did end up costing $500 per staging.

afro88

Doesn't staging need to be a (downsized) replica of prod, infra-wise, to give confidence that changes will be stable and working in prod?

Genuine question.

pentacent_hq

Cool project!

From looking at your docs, it appears like using and connecting GitHub is a necessary prerequisite for using Disco. Is that correct? Can disco also deploy an existing Docker image in a registry of my choosing without a build step? (Something like this with Kamal: `kamal --skip-push --version latest`)

gregsadetsky

Correct, GitHub is necessary at this point to deploy code.

However, yes, you can ask Disco to fetch an existing Docker image (we use that to self-host RabbitMQ). An example of deploying Meilisearch's image is here [0] with the tutorial here [1].

Do you typically build your Docker images and push them to a registry? Curious to learn more about your deployment process.

[0] https://github.com/letsdiscodev/sample-meilisearch/blob/main...

[1] https://disco.cloud/docs/deployment-guides/meilisearch

codyb

Amazing to see this article in 2025. Feel like it's 2015 all over again!