Escaping surprise bills and over-engineered messes: Why I left AWS
159 comments
February 4, 2025
xyzzy123
It's like, the complexity has to live somewhere.
I've seen successful serverless designs but the complexity gets pushed out of code and into service configuration / integration (it becomes arch spaghetti). These systems are difficult to properly test until deployed to cloud. Also, yeah, total vendor lock in. It works for some teams but is not my preference.
irjustin
If you have low traffic or a low percentage of server utilization, such as with B2B applications, "Full Container on Serverless" can be insanely cheap. Running FastAPI, Django, Rails, etc. on Lambda when you've only got a few hits during the day and almost none at night is very cost effective.
jhot
We do this at my current job for most of our internal tools. Not a setup I would choose on my own, but serviceable. Using a simple handler function that uses mangum [0] to translate the event into a request compatible with FastAPI, it mostly Just Works TM the same in AWS as it does locally. The trade-off is somewhat harder troubleshooting, and there are some cases where it can be difficult to reproduce a bug locally because of the different server architectures.
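For reference, the wrapper described above is tiny. A minimal sketch (module and route names are placeholders), assuming FastAPI and mangum are packaged with the function:

    # app.py - runs the same app under uvicorn locally and under Lambda via Mangum
    from fastapi import FastAPI
    from mangum import Mangum

    app = FastAPI()

    @app.get("/health")
    def health():
        # Plain FastAPI route; nothing Lambda-specific here
        return {"status": "ok"}

    # Mangum translates API Gateway / ALB events into ASGI requests,
    # so the Lambda handler setting just points at "app.handler".
    handler = Mangum(app)

Locally you run the same module with uvicorn; the troubleshooting gap mostly shows up around event shapes, timeouts, and cold starts rather than application code.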
bigfatkitten
It can be, but a whole lot of people fail to do this analysis properly to see whether it will be for their use case, and they get burned.
liendolucas
What's also surprising is people getting excited and "certified" on AWS (and attending AWS conferences, lol), and job postings requiring you to "know" AWS for a developer position. Why on earth do I have to know AWS to develop software? Isn't that supposed to be covered by DevOps or sysadmins? If one word could define AWS it would be: overengineered. The industry definitely does not need all that machinery to build things, probably a fraction of what is offered and way, way simpler.
larusso
Because if you hire a DevOps engineer in the original sense, they need to know AWS (assuming that's the cloud vendor the company posting the job is using). DevOps means develop and operate; that was the raging new concept. Since the actual sysadmin work of setting up hardware is no longer needed when hosting on AWS, the developer takes over hosting and operation. But now that cloud infrastructure has become so damn complicated, most DevOps roles define the "dev" as developing and maintaining the zoo of tools and configurations. No time for actual development of the product; that is handled by another team. And we are back full circle to the times before DevOps. Our company still runs the old style of the definition and it is manageable.
[Edit typo]
tietjens
Because the roles are increasingly blurring, and require both the dev and the ops knowledge. AWS gives you a lot of power if you buy into it, but of course that comes with a whole set of tradeoffs. There won't be less cloud in the future, no matter one's personal feelings about it.
qwertycrackers
Our team kinda thinks the same thing about serverless, but despite that we have some things built with it. And the paradoxical thing is that these issues have just never materialized; the serverless stuff is overwhelmingly the most stable part of our application. It's kinda weird and we don't fully trust it, but empirically serverless works as advertised.
franktankbank
How long have you been running on serverless?
kikimora
My experience: the system went down every time we had significant load. The reasons varied, but all were triggered by load. We switched to ECS + Aurora; the problem is gone and the bill has increased slightly.
alias_neo
As with many of these things, I've seen it time and time again: the initial setup can be simple, and perhaps that's a good thing for an org that needs to get moving quickly, but very soon you start to see the limitations, the more advanced stuff just isn't possible, and now you're stuck and locked in, so you have to rebuild from the ground up.
Sometimes you have to do the (over) engineering part yourself, so that someone else isn't making the decisions on your behalf.
moltar
Just a counterpoint. But my experience has been the opposite.
Taking a legacy application that runs on a single server and is struggling to scale with spaghetti of config files all over the box and security issues that have gone unpatched because no one wants to touch the box as it’s too dangerous to change anything, as everything is intertwined. No backups or unverified backups. No redundancy. No reproducible environments so things work on local but not on remote. Takes devs days to pin the issues.
Then break it down and replace with individual fully managed AWS services that can scale independently and I don’t need to worry about upgrades.
Yeah, the total bill for services is usually higher. But that’s because now there’s actually someone (AWS) maintaining each service, rather than completely neglecting it.
The key to all this is to use IaC such as AWS CDK to have code be the single source of truth. No ClickOps!
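To make the IaC point concrete, here's a minimal CDK sketch in Python (the stack and bucket names are made up); the whole environment ends up reviewable and reproducible from code like this:

    # app.py - `cdk deploy` replaces clicking around the console
    from aws_cdk import App, Stack, RemovalPolicy
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class StorageStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # A versioned, encrypted bucket defined in code rather than ClickOps
            s3.Bucket(
                self,
                "AppDataBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                removal_policy=RemovalPolicy.RETAIN,
            )

    app = App()
    StorageStack(app, "StorageStack")
    app.synth()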
scrose
I mostly understand what you're saying and I hope I didn't come across as saying serverless is *never* a good idea. In my experience it's just very often not the best choice. I will always keep an open mind about solutions, but serverless often comes with many shortcomings that teams don't notice until it's too late. I've seen it happen enough times that I feel I must keep a healthy dose of skepticism when anyone proposes changing a small part of the stack to serverless options.
I’ve been working with AWS for about 8 years now and have worked on supporting dozens of projects across 5 different orgs ranging from small 5-person shops to 1000+ people. I have only seen a handful of cases where serverless makes sense.
> Taking a legacy application that runs on a single server and is struggling to scale with spaghetti of config files all over the box and security issues that have gone unpatched because no one wants to touch the box as it’s too dangerous to change anything, as everything is intertwined. No backups or unverified backups. No redundancy. No reproducible environments so things work on local but not on remote. Takes devs days to pin the issues.
All of these issues are easily possible with many serverless options. What it really sounds like is that your org went from a legacy architecture that had built up a ton of tech debt over the years, with no one owning the basics (i.e. backups), to a new one with less tech debt, since it was built specifically around the current issues you were facing. In 3-5 years I wouldn't be surprised to see the pendulum swing back in the other direction as staffing and resources change hands and new requirements emerge.
moltar
Hey thank you for the thoughtful response.
I didn’t mean to say that serverless is the way. I was just talking about AWS in general.
Although serverless is a broad category that includes Aurora Serverless, which does have trade-offs but is a good choice in some cases.
I also like Step Functions a lot.
EventBridge for an event bus.
I’d even put ECS into sort of serverless category. I don’t manage the underlying servers and only care about the abstraction at the container level.
irjustin
I see these articles on HN often enough and almost always end up agreeing. My gross oversimplification of this problem is:
The sweet spot for AWS seems to be US$300-$3k/mth.
At the low end, you can get a mix of basic-but-okay DB, web servers, caching, CDN and maybe a few Lambdas, all with logging/X-Ray. That's all you'll need to run a good site/service with low-medium traffic and solid reliability. At around $3k/mth, you likely know whether you're in AWS to stay (using many services) or are eyeing the self-hosting route. Side projects really need to optimize under $5-50/mth, and that's just too low for AWS. The expensive foot-guns are just too risky.
archerx
That's still way too much. In my previous job I ran the site on a $10-a-month dual-core server. It had decent traffic, live status updates, a live stream during 2 hours of the day, and a successful e-commerce shop. The site was getting thousands of visits an hour and I was always surprised at how much bandwidth we would chew. It even survived a couple of DDoS attacks without issues. I also cached the hell out of it with memcached.
My past two projects have been on $5 VPSes without issue. Someone really tried to bring one of them down with a DDoS but gave up when it didn't work.
A $5 VPS can take you very far if your project is well optimized.
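The caching is most of the trick on a box that small. A rough sketch of the read-through pattern (key names and the TTL are arbitrary), assuming memcached runs on the same host:

    # cache.py - serve hot data from memcached, fall back to the database
    import json
    from pymemcache.client.base import Client

    cache = Client(("127.0.0.1", 11211))

    def get_homepage_stats(load_from_db):
        cached = cache.get("homepage_stats")
        if cached is not None:
            return json.loads(cached)
        # Miss: hit the database once, then cache the result for 60 seconds
        stats = load_from_db()
        cache.set("homepage_stats", json.dumps(stats), expire=60)
        return stats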
Salgat
Our biggest expense isn't CPU/bandwidth, it's data: S3 (including snapshots/backups), databases, and memory. So your experience aligns with mine for AWS, even though we spend orders of magnitude more.
archerx
We had a lot of data too. Every day we produced at least two hours of video; it would get encoded to the various formats we needed (broadcast and web), we stored the archive on our local server (which did a lot of backend tasks as well), and we hosted the videos on YouTube/Facebook for the audience. We also had a 3rd-party partner for the livestream on the site. All of that took a massive load off our web server.
If we had to have all of that on the cloud it would have been a lot more expensive.
Styn
I've heard quite a few people mention this. Do you have any resources you could refer me to? Or is it something you have to learn as you go?
archerx
I've been doing web dev for at least 15 years now so I kind of grew up with the technologies. The thing that helped me the most was building stuff from scratch in my personal projects to learn what is actually happening behind the scenes and see if there is a better way of doing it.
The problem with frameworks/cms/etc is they are trying to solve everybody's problem but aren't really optimized to solve your specific problem if you have one.
cess11
At that lower end you get two of these machines, https://www.hetzner.com/sb/#price_from=100, plus another hundred bucks per month for fun: a CDN, some DDoS protection if that's your thing, and a backup solution. Pick a 256 GB RAM machine and one with 3-4 TB SSD.
It'll reliably cost exactly that and you'll be able to handle quite a lot of traffic. Probably enough that the same load would land you around 3k/month on AWS or GCP or whatever.
For 3k/month you can get hundreds of cores and 1-2 TB RAM on a couple of machines physically separated in a redundant data center setup. That's with markup from the data center selling it to you and the credit cost. After two-three years you own the hardware and can sell it if you want. A rig like this makes it rather easy to build HA-systems, have your devs experiment a bit with Ansible and keepalived and ProxySQL and they'll figure it out.
The beauty of Unix-like systems is that they're trivially discoverable and well documented. The clown platforms are not, and if your systems go down for whatever reason it's typically much harder and more labour intensive to get them back up again when they're clowned. This is especially the case if they get transferred to new management, e.g. due to bankruptcy proceedings. Paying a premium to get locked in is a bad deal, unless you're the kind of organisation that is likely to also have mainframes in production.
8fingerlouie
There's a "sweet spot" between startup and enterprise where the cloud makes zero sense.
Startups will benefit from the cloud, as it allows them to easily scale, and maintain infrastructure that would otherwise be very costly.
Then at some point, around 50-100 employees (depending on business), it makes more sense to have a server park sitting at home, and you probably have the revenue to hire a couple of system administrators to handle stuff.
Then you have enterprise customers with 500+ employees, who have compliance and governance to follow, and who maybe have huge spikes during a month (banks, for one, experience a lot more load during payday), which you would have to pay for in your local server farm but can scale dynamically for in a cloud data center.
That's not to say you can just "throw everything in the cloud"; it still requires planning for the best cost optimization, but once you reach enterprise level you can probably afford a couple of cloud architects or similar to facilitate this.
theshrike79
Owning your servers works best if your load is stable and predictable.
If you make a mobile game that becomes an overnight success and your DAU goes from 10k to 1M, you need the capacity TODAY. Not in a week or two, when you can get more servers to the colocation facility your stuff is in and get them configured and added to the cluster.
If your stuff is built on AWS, it'll just scale up automatically and most likely nobody notices except for your AWS bill - but the revenue from the game will more than make up for it.
cyberax
I'd say about $10k.
For that cost, you can get multiple production regions (for failover), dev/alpha/beta environments, a reasonable amount of CDN traffic, a bunch of databases and some halo services (like X-Ray/CloudWatch).
zaphirplane
You have to be joking. There are a massive number of household-name companies spending orders of magnitude more on cloud, companies that can afford dedicated FinOps people.
bschwindHN
Truth is, most web projects made today can run on a raspberry pi or mini PC and be just fine. If you have enough users that you need to scale to more machines, you'll be in a position to know what to do to handle it, or hire someone who does.
ZYbCRq22HbJ2y7
Seems like a pointless thing to talk about, "most web projects today". You don't know what specific requirements people have, and people have been making "web projects" with limited resources since the web started.
Yet, for some reason, I see this repeated everywhere, always. Does it make the people who repeat it feel better? Are they actually informing anyone of anything? Who knows.
addicted
I don’t see a refutation or even disagreement with the claim in your comment. Are you saying that this is too obvious to restate?
ziddoap
The implied refutation of "You don't know what specific requirements people have" is that there will be people with specific requirements that a Pi/Mini PC obviously doesn't meet.
taurknaut
> You don't know what specific requirements people have
This is actually pretty apparent if you camp a web forum. People seem to either need a raspberry pi or a data center: this is the typical web-app vs mmorpg divide amongst possible computational fantasies people have. Questioning what motives people have is very sensible.
bschwindHN
I might be biased since I posted the comment, but I don't think it's pointless :)
It's worth repeating, if only to raise awareness among devs that modern computers are _extremely_ fast. You can start a new project on commodity hardware and it will probably serve your needs for the lifetime of the project. If your project gets so popular/successful that you need more than a single machine, then I would hope by that point you know how to move it onto a cluster of machines, or into the cloud if the economics make sense.
I say "most web projects" because it's true - most web projects are simple CRUD apps in slightly different forms. Sure, there are some up-and-comers using AI to burn half a data center to produce fake comments and images for internet forums. Those won't run on a Pi or single mini PC. But people are making _so_ many web apps that will never see more than a few hundred or thousand users. Those don't need multi-instance load-balanced highly-available infinite-scaling systems to serve their users needs. That usually just adds cost and complexity until you're at a point where you _really_ need it.
YetAnotherNick
Not only that, people always assume that you need complicated dependencies if you use AWS, and compare other companies' stacks with their Raspberry Pi. They can use AWS Lightsail just fine for $5 if they just need a VM.
darkwater
If you just need a VM, why not a plain EC2 instance? Why are you already adding lock-in to the secret sauce?
bschwindHN
That's fair too, those $5 a month machines can also serve you well as long as you can limit bandwidth charges if you suddenly end up with a lot of traffic.
valenterry
It depends. If you want a service that doesn't have downtime because of a hardware issue, then you should have at least 2 machines. And then, you probably want to do deployments in a way where you spin up 2 new machines and then, only once confirmed that the deployment works, shut down the old 2.
That is super easy to do on AWS (and many other providers) but not so easy (or even impossible) anymore when you do things on-prem.
EDIT: due to the replies, let me clarify; this is not an argument for AWS specifically but for services that make it easy to spin up machines and that handle deployments and failover for you.
Also, there are more reasons to have more than one machine, e.g. being able to deploy with zero downtime and therefore without having to think when to deploy and what to do if a deployment fails etc.
snackbroken
Most web projects can tolerate an hour of scheduled downtime a week and the occasional (< 1/year) unscheduled hour of downtime to replace broken hardware. Running on a single machine with 2 disks in RAID1, right up until the point where the ~50 hours of downtime a year equal one engineer's salary in lost revenue, makes sense if you pick a DC that will sell/rent you replacement hardware and do replacements for you. Just make sure you have offsite backups in case the DC burns down. If your office/home is near a well-stocked computer store, most web projects can skip the DC altogether and just put the server in the closet under the stairs or whatever. The company will likely die before the server does anyway.
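The offsite backup part doesn't have to be elaborate either. A rough sketch (paths, bucket name, and endpoint are placeholders), assuming a nightly cron job and any S3-compatible object store:

    # backup.py - tar the data directory and push it offsite nightly
    import tarfile
    from datetime import datetime, timezone

    import boto3

    def backup(data_dir="/var/lib/myapp", bucket="myapp-offsite-backups"):
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
        archive = f"/tmp/myapp-{stamp}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(data_dir, arcname="myapp")
        # endpoint_url lets this target any S3-compatible store, not just AWS
        s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")
        s3.upload_file(archive, bucket, f"nightly/myapp-{stamp}.tar.gz")

    if __name__ == "__main__":
        backup()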
valenterry
> Most web projects can tolerate an hour of scheduled downtime a week and the occasional (< 1/year) unscheduled hour of downtime to replace broken hardware.
An hour? What happens if it breaks while I'm on vacation and no one is there to take care of it?
Nah, that argument might work sometimes but not in general.
theshrike79
Most but not all.
If you're running an online shop or reservation system, an hour of downtime can be tens of thousands in lost revenue.
esquire_900
Super easy? Getting that setup for the first time from zero knowledge will probably take a few days. And that's before understanding all the intricacies like your AWS bill, the hidden costs of cloud, properly securing IAM and setting up your VPC etc etc
wontondisregard
Not once in my entire career have I seen people successfully avail themselves of the cloud's purported benefits. I have, however, seen a lot of happy account managers in Las Vegas during Re:Invent, oh yes.
valenterry
Comparably easy, yes. I'm not talking specifically about AWS btw, but even there it is easy.
If someone has zero knowledge, then everything will take them a long time, including hosting on-prem or so.
pinoy420
I got downvoted for saying something similar for standing up a web app from scratch and getting it deployed securely.
You are absolutely correct
freeone3000
It is exactly as easy, or easier, to do this in Proxmox as it is in the AWS console, and it would have a price breakeven point of less than a year (you only need two physical nodes).
valenterry
Proxmox is just a way to virtualize things. You need hardware to run it. So what hardware are you talking about?
panja
Does proxmox have 24/7 enterprise support yet?
sgarland
> not so easy (or even impossible) on-prem
It’s as if an entire generation of tech workers are utterly unaware that these have been solved problems for decades.
You put two machines behind HAProxy with keepalived. If you want to do a new deployment, you can spin up new machines in the same manner, and cut them over (or any of the many other ways to accomplish this).
AWS isn’t inventing any of the HA stuff they offer; they’re taking existing technologies and packaging them.
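For anyone who hasn't seen it, the load-balancing half is a handful of lines (the IPs and ports below are made up); keepalived then just floats a virtual IP between two boxes running HAProxy:

    # /etc/haproxy/haproxy.cfg (excerpt)
    frontend www
        bind *:80
        default_backend app_servers

    backend app_servers
        balance roundrobin
        option httpchk GET /health
        # Health-checked app nodes; deploys can drain and replace one at a time
        server app1 10.0.0.11:8080 check
        server app2 10.0.0.12:8080 check

keepalived's role is just VRRP: if the primary load balancer dies, the standby takes over the shared IP.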
valenterry
So how do you do all of that with "a raspberry pi or mini PC"? You would need at least 4 of them.
bschwindHN
Most web projects can afford to be down for a bit. Even the big guys have outages fairly often. It's not like AWS is always running smoothly either, and there are so many "_ is down" posts on HN alone.
I do agree that your projects should be designed to be well contained and easy to create a new instance of it, though.
znpy
> That is super easy to do on AWS (and many other providers) but not so easy (or even impossible) anymore when you do things on-prem.
False. You can run kubernetes and/or some virtualisation engine as well and do the same on prem.
And overprovisioning physical hardware is much cheaper.
valenterry
Well, yes. I would categorize "running kubernetes on prem" as "not so easy".
theshrike79
And you can easily have a separate clean environment for every pull request.
Can't do that with your own hardware unless it's also a Kubernetes cluster, which is a whole new set of crap you need to manage.
wontondisregard
I'm sorry, but the ten billionth CRUD web app really doesn't need nine nines.
valenterry
I agree. It doesn't change my argument though.
2030ai
Just do a blue-green! But I agree somewhat. A single machine running Docker Compose is pretty decent. Pets! Not cattle. (I'd rather care for a dog than 100 sheep.)
lvturner
I just went down the rabbit hole of trying to host a small app for free(ish) on cloud solutions, only to remember that I have a 2 Gbps fiber line and a Raspberry Pi sitting idle - if it ever gets so popular that I outgrow this, it'll be a nice problem to have.
fm2606
Yeah I did the same thing about a year ago only on GCP. I tried to stay in the free tier but "hidden" networking costs got me at $20/month.
The site was a blog with traffic of 1, using a load balancer, Cloud Run, and a storage bucket.
I shut it down. It was a nice exercise but not worth it to me long term.
bongodongobob
You're absolutely right. A little off-topic, but I've seen a similar problem on the on-prem side of things. Their server setups are a lot of the time completely overblown "because this is what the nice vendor recommended, they are great partners!"
Awesome, your 5 node Nutanix setup with 15 whole VMs is peaking at 5% of its total compute, 5% of its RAM, and 8% of its disk. Granted, I do work with medium sized businesses in the manufacturing sector, but so many get tricked into thinking they need 5 9s, when 3 is enough, and their setup isn't even providing 5 anyway because nothing else is downstream. Then 5 years later their warranty is up and they scrap it all and replace. It's crazy.
Kills me how many businesses contract this shit out and get tricked into spending a million on infra when they could just run 2 servers and pay a mid level engineer to manage it. "Not in the budget." Yeah, I wonder why.
puchatek
Isn't that a security risk? Having to open a port for the pi and then always keeping an eye on it in case the next heartbleed, etc. is discovered?
post-it
You could set up a DMZ if you're concerned. But even if someone gets on my LAN, it's not the end of the world. They could send goatse to my Chromecast, I guess.
bongodongobob
No, it's literally what firewalls and DMZs are for.
baq
Yeah, but OTOH I've just set up a VM host + guest on an N100 mini PC for Home Assistant (migrating from a bare-metal Pi 4), and while I'm happy with the result, I've spent more hours than I'd like or planned to get it working reliably, vs clicking 'make a VM please' and having it waiting for an SSH connection after a coffee break.
(Started with proxmox but the installer’s kernel doesn’t have the Ethernet driver and they don’t provide WiFi because it’s stupid for the installed system - but they allow upgrading to a kernel which supports my adapter, so got a bit catch 22’d. Moved to Ubuntu server that would stop booting after creating a bridge interface because ‘waiting for network. no timeout’ and then a few more interesting issues.)
2030ai
If you have enough users that you need to scale to more machines ... run the rpis in a 3d printed 1u rack adaptor as a kubernetes cluster.
sieve
> One app gave me trouble - a Python Flask app. It had several complicated dependencies including OpenCV, Numpy, and Matplotlib. My belief is that this had more to do with the complex nature of the libraries and less to do with NFS.
It has to do with the all-round lunacy that surrounds the Python ecosystem the moment you step outside the stdlib nursery. I picked up Python because I wanted to contribute to a project. I was immediately met with ** of the first order.
After decades of C-like languages, I like (some of) the syntax. But I hate the ecosystem and having to ride a bullock cart after cruising in a plane. The language is slow. So all the critical work is done in C/Rust. And then you write python bindings for your library. Once that is available, you can "program in python."
The dependencies are a nightmare, particularly if you do anything LLM-related. Some libraries simply won't work with newer versions of the interpreter. After decades of Java, this came as a big surprise.
If it were not for uv, I might have given up on the language altogether.
faizshah
Personally I use lightsail on AWS and cloudflare cause there is always an off ramp to try some of the fancy stuff but then you can always go back to just using cheap VMs behind cloudflare. You can also put it all behind a VPC and you can use CDK/CloudFormation so that’s also nice.
I gave up on using GCP even though the products like BigQuery are way better just because I got burned too many times like with the Google Domains -> Squarespace transition.
I’m thinking of switching back to a bare metal provider now like Vultr or DO (would love to know what people are using these days I haven’t used bare metal providers since ~2012).
Also, completely unrelated does anyone know what the best scraping proxy is these days for side projects (data journalism, archiving etc.)?
smatija
Hetzner is good to me, but I am EU based.
Loving that German logic where even emergency maintenance gets a 2-week notification.
bingo-bongo
But.. that’s just maintenance?
smatija
I think they define emergency a bit more widely than we are used to with other providers. For urgent change of router I was notified almost 2 months in advance.
For "real" unplaned emergencies I had in total like 5min of downtime last year, when some other router died.
choilive
Been using OVH for bare metal for a few years now and no major hiccups other than scheduled maintenance.
wahnfrieden
OVH VPS in Canada is also a great deal
deathanatos
I've used NearlyFreeSpeech for years (as registrar & DNS), and I've loved their service. Their site is plain, and you just trade money for a plain, simple product, with basically 0 bullshit between you and that exchange. Their site is so refreshing in today's landscape of upsells and other corporate dark patterns.
The article implicates AWS, but AFAICT the other major cloud, GCP, behaves similarly. The docs for "budget alerts"[1] do call it out directly,
> Setting a budget does not automatically cap Google Cloud or Google Maps Platform usage or spending. Budgets trigger alerts to inform you of how your usage costs are trending over time. Budget alert emails might prompt you to take action to control your costs, but they don't automatically prevent the use or billing of your services when the budget amount or threshold rules are met or exceeded.
But still. But wait, you say, those docs go on to suggest,
> One option to automatically control spending is to use budget notifications to programmatically disable Cloud Billing on a project.
And the linked page states,
> Following the steps in this capping example doesn't guarantee that you won't spend more than your budget.
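For the record, that capping example boils down to something like the sketch below (the project ID is a placeholder): a budget notification lands on Pub/Sub and a Cloud Function detaches the billing account, which disables most paid services on the project. Roughly the documented pattern, and still best-effort:

    # main.py - Cloud Function triggered by a Pub/Sub budget notification
    import base64
    import json

    from googleapiclient import discovery

    PROJECT_NAME = "projects/my-project-id"  # placeholder

    def stop_billing(event, context):
        data = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        # Ignore alerts until actual spend exceeds the configured budget
        if data.get("costAmount", 0) <= data.get("budgetAmount", 0):
            return
        billing = discovery.build("cloudbilling", "v1", cache_discovery=False)
        projects = billing.projects()
        info = projects.getBillingInfo(name=PROJECT_NAME).execute()
        if info.get("billingEnabled"):
            # Detaching the billing account disables paid services on the project
            projects.updateBillingInfo(
                name=PROJECT_NAME, body={"billingAccountName": ""}
            ).execute()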
sigh "Over-Engineered Messes", TFA hits it on the nose.
There's also limiting API usage, but that's on requests … not on cost.
I avoid it all for personal stuff.
At work, we pipe all these cloud bills into a BigQuery account which then pipes into graphs in Grafana, which all tells us that engineers have no idea what the actual horsepower of 3 GHz * 32 cores is when they request a bajillion more cores.
It's probably also reasonably categorized as an "Over-Engineered Mess".
(We also import Azure's billing data, and boy do they make that obnoxious. CSV files dumping into a bucket, and they might go back & edit CSVs, or drop new ones, and if there is a schema for those CSV files … I've yet to find it. Some columns you might think were guaranteed non-"" are not. Dates in American format. Severely denormalized. Etc.)
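The BigQuery side of that pipeline is at least easy to query; a sketch of the kind of per-service rollup that might feed those Grafana panels (the dataset/table name is a placeholder for the standard billing export):

    # billing_report.py - monthly cost per service from a GCP billing export
    from google.cloud import bigquery

    client = bigquery.Client()

    QUERY = """
    SELECT
      service.description AS service,
      SUM(cost) AS total_cost
    FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`  -- placeholder table
    WHERE usage_start_time >= TIMESTAMP('2025-01-01')
    GROUP BY service
    ORDER BY total_cost DESC
    """

    for row in client.query(QUERY).result():
        print(f"{row.service}: ${row.total_cost:,.2f}")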
rectang
Enshittification at cloud providers guarantees that in order for some department to hit the numbers ownership demands, an ever increasing number of customers must get screwed with overage bills.
wordofx
Enshittification of software that people create guarantees that they will get overage bills and blame cloud providers.
rectang
The cloud products could be created with hard stops, but the vendors choose not to do so. It's a constant, exhausting fight to engineer projects to avoid the risk of business-destroying overages when those projects don't need to scale infinitely. For example, it wouldn't be great if some misbehaving, malicious AI bot took down the site, but that's still better than the alternative of serving some insane number of requests and paying for the traffic.
bambax
I am self-hosting on a NAS at home, with Cloudflare in front (which does most of the work) and Cloudflare Tunnels to avoid exposing anything directly. The tunnel communicates with various Docker instances depending on the service.
It works flawlessly for now, and costs almost zero, since the NAS is always on in any case, and Cloudflare is free.
These are all small projects of course, but two of them stayed on HN frontpage for a day and didn't break a sweat.
abrookewood
Talk about burying the lede: "My bill has increased from about $1 to $7 a month."
I agree with much of the sentiment, but I don't see how complex the things you're making could possibly be if you're paying $1 a month ...
inemesitaffia
It's a change. It might not be much to you, but there are lots of people for whom an extra $6 will make their day.
B-Con
The point was the size of the app. That was the cost of switching all the apps from AWS to LFS, not a surprise AWS bill.
abrookewood
Their bill increased ...
fulafel
In AWS you could have quite complex things, since stuff like Lambda and EC2 have free tiers.
But it's also not really relevant to the main point of the article which is the risk of large surprise bills from eg runaway feedback loops and DoS attacks.
Arnavion
Funny. I'm planning to move from Azure to AWS because Azure is planning to raise my ~$10 bill to ~$50 later this year for no good reason.
Insanity
That is pretty cool. I tend to default to AWS, luckily not too expensive for my side projects (about $15/month) - nothing accessible to the public though so my cost is relatively predictable.
That said, I do wish you could hard shutdown at a certain budget limit.. but I guess that is not in AWS's best interest.
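You can approximate one yourself, with the caveat that billing data lags by hours, so it's damage control rather than a true cap. A rough sketch (the tag name and the SNS wiring are assumptions): an AWS Budgets alert publishes to an SNS topic, and a subscribed Lambda stops anything explicitly opted in:

    # kill_switch.py - Lambda subscribed to the SNS topic an AWS Budgets alert notifies
    import boto3

    ec2 = boto3.client("ec2")

    def handler(event, context):
        # Only touch running instances explicitly tagged for the kill switch
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:budget-kill-switch", "Values": ["true"]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        instance_ids = [
            i["InstanceId"] for r in reservations for i in r["Instances"]
        ]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return {"stopped": instance_ids}

It still isn't a hard cap, since S3 storage, data transfer, and managed services keep accruing.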
rectang
The worst case for a traffic spike on some hard-limited service is the service becoming unavailable. The worst case for a traffic spike on an unlimited service is financial calamity.
grav
It is possible, since spending has an API. I’ve recently implemented it on Azure for a company, to ensure our storage costs didn’t run wild.
Couldn't find an out-of-the-box solution; maybe, as you say, because it's not in their interest.
darthrupert
Would contacting one's credit card union to deny the billing work as a kind of a hard shutdown?
cmeacham98
No - most cloud providers are postpaid, you use their services and then get a bill at the end of the month.
Making your payment method invalid doesn't absolve you of this debt and will just result in being sent to collections and/or sued if the total is large enough.
rendaw
I was using Azure for a cloud Windows desktop which I rarely need, for using something like iTunes or Kobo to download books. It cost ~$5 a month.
One day Windows Update bricked the system (hanging while trying to revert some security patch), and over a couple of months I occasionally tried random janky official Azure recovery tools and voodoo from user forums full of people who don't really know what they're doing either.
Then I noticed my bill had crept up to several hundred dollars a month. Each of the recovery tools was cloning the system + disks, and I ended up with a bunch of disks that chewed up my bill.
I raised a support ticket and they refunded part of it with a bit of "you're a bad person", but wow... although the primary lesson I got here is that I never want to use windows again.
baq
A perfectly workable N100 box runs you like $150 (that's on the expensive side I'd say). If you can find a cheap Windows key or maybe get a box with an OEM license it'll be an even better deal.
> I never want to use windows again.
Can't blame you :) I'm nearing the point of never wanting to use any computer again.
skydhash
I’m using FreeBSD on an oldish (2019) office laptop. While the desktop is duct tape (bspwm) it’s a better experience than my M1 MBA for the usual workflow.
What I don't like about macOS is the opacity of system components, the low UX customization (animations, window management), and the fact that UI elements are scaled as a whole with the desktop (you can't change fonts to sensible values unless the monitor is Retina). It's the console experience for a desktop computer.
rendaw
Ah I wanted to try cloud gaming with it too, from time to time (stuff arbitrarily blocked here from geforce now etc).
minorshrinkage
AWS definitely has its place, but for personal projects, the complexity and cost risk can be overkill. The horror stories of surprise bills are real—misconfigured services, forgotten instances, and unexpected data transfer fees can add up fast. Even with alerts, by the time you notice, it’s often too late.
For those who need AWS but want to avoid these surprises, there are ways to keep costs in check. Cost allocation tags, savings plans, and budget alerts help, but they require ongoing effort. Tools like SpendShrink.com can automate cost analysis and highlight savings opportunities before they become an issue.
It’s great to see more people looking for simpler hosting solutions, but for those who do need AWS, better cost visibility is a must.
andrewstuart
You can get a 1 Gbps unlimited-traffic VPS on IONOS with 12 vCores, 24 GB RAM, and 640 GB storage for $50/month.
No need to pay 9 cents per GB egress to the big clouds.
scrose
Over the years I've spent a lot of time talking engineers and managers out of using serverless AWS options for various reasons. I've found that most non-infra-focused engineers and managers see serverless marketed as "simpler" and "cheaper".
It's often the opposite, but most people don't see that until after they've built their infrastructure around it and gotten locked in, and then the surprise bills, difficult-to-diagnose system failures, and hard limitations start rolling in.
A bit of early skepticism, and alternative solutions with a long-term perspective in mind, often go a long way.