I could not convince my k8s team to go AWS serverless
50 comments
·July 2, 2025
moltar
I can do all of these stacks well: serverless as described, pure ECS Fargate, or Kubernetes.
From my experience, Kubernetes is the most complex, with the most foot guns and the most churn.
cybrox
Is it? If you're comparing to serverless, the fair comparison is AWS EKS on Fargate, and with that there's a lot less operational overhead. You still have to learn ingress, logging, networking, etc., but you'd have to do that with serverless as well.
I'd argue that between AWS serverless and AWS EKS Fargate, the initial complexity is about the same. But serverless is a lot harder to scale cost-efficiently without accidentally going wild with function or SNS loops.
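To make the loop risk concrete, here is a minimal sketch of the usual guard (the topic ARN and attribute name are made up): a Lambda subscribed to an SNS topic that republishes to the same topic, but stamps a hop counter into the message attributes and drops anything past a threshold instead of fanning out forever.

```python
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:orders"  # hypothetical
MAX_HOPS = 5  # past this depth, assume we're in a loop and bail

def handler(event, context):
    for record in event["Records"]:
        msg = record["Sns"]
        attrs = msg.get("MessageAttributes") or {}
        hops = int(attrs.get("hop_count", {}).get("Value", "0"))
        if hops >= MAX_HOPS:
            # Break the cycle: log and drop instead of republishing again.
            print(f"dropping message {msg['MessageId']} after {hops} hops")
            continue
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=msg["Message"],
            MessageAttributes={
                "hop_count": {"DataType": "Number", "StringValue": str(hops + 1)},
            },
        )
```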
dontlaugh
ECS Fargate is simple to set up and scales just fine.
mnahkies
I don't think the author has seen k8s done well. They imply that serverless is necessary to achieve a "you build it you run it" setup, but that's false.
We operate in a self-serve fashion predominantly on kubernetes, and the product teams are perfectly capable of standing up new services and associated infrastructure.
This is enabled through a collection of opinionated terraform modules and helm charts that pave a golden path for our typical use cases (http server, queue processor, etc). If they want to try something different/new they're free to, and if successful we'll incorporate it back into the golden path.
As the author somewhat acknowledges, the answer isn't k8s or serverless, but both. Each has its place, but as a general rule of thumb, if it's going to run more than about 30% of the time, it's probably more suitable for k8s, assuming your org has that capability.
I think it's also worth noting that k8s isn't the esoteric beast it was ~5-8 years ago - the managed offerings from GCP/AWS and projects like ArgoCD make it trivial to operate and maintain reliable, secure clusters.
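The ~30% rule of thumb can be sanity-checked with back-of-envelope arithmetic. This sketch uses assumed, illustrative prices (not current list prices) and a made-up workload shape; the point is where the crossover falls, not the exact numbers.

```python
# Back-of-envelope break-even: always-on node vs. Lambda, at a given duty cycle.
# All prices below are illustrative assumptions; check current AWS pricing.
NODE_PER_HOUR = 0.0928          # assumed on-demand price, 2 vCPU / 8 GB class
LAMBDA_PER_GB_SECOND = 0.0000166667
LAMBDA_PER_MILLION_REQS = 0.20
NODE_MEM_GB = 8.0               # Lambda memory needed to match the node
HOURS_PER_MONTH = 730

def monthly_costs(duty_cycle, reqs_per_sec=50.0):
    node = NODE_PER_HOUR * HOURS_PER_MONTH  # paid whether busy or idle
    busy_s = duty_cycle * HOURS_PER_MONTH * 3600
    lam = (busy_s * NODE_MEM_GB * LAMBDA_PER_GB_SECOND
           + busy_s * reqs_per_sec / 1e6 * LAMBDA_PER_MILLION_REQS)
    return node, lam

for duty in (0.05, 0.30, 0.90):
    node, lam = monthly_costs(duty)
    cheaper = "lambda" if lam < node else "node"
    print(f"duty {duty:4.0%}: node ${node:6.2f}  lambda ${lam:7.2f}  -> {cheaper}")
```

Under these assumptions the crossover lands a bit under 30% utilization, broadly consistent with the rule of thumb; real numbers move with instance type, reserved or spot pricing, and how much Lambda memory the workload actually needs.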
kryptn
> To a k8s engineer, serverless means “no servers”!
I'd assume the majority of people working with k8s know what serverless is and where Functions as a Service fit more generally.
The rest of the post just seems to be full of strawman arguments.
Who is this Kubernetes engineer villain? It sounds like a bad coworker at a company with a toxic culture, or a serverless advocate complaining at a bar after a bad meeting.
> k8s is great for container orchestration and complex workloads, while serverless shines for event-driven, auto-scaling applications.
> But will a k8s engineer ever admit that?
Of course. I manage k8s clusters in AWS with EKS. We use Karpenter for autoscaling. A lot of our system is Argo Workflows, but we've also got a dozen or so services running.
We also have some large Step Functions written by a team that chose to use Lambda, because AWS can handle that kind of scaling much better than we would have wanted to in k8s.
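For illustration, here's roughly what that pattern looks like (names and ARNs are hypothetical): a Map state fans items out to a Lambda task, and AWS absorbs the burst concurrency instead of a cluster autoscaler.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Amazon States Language: a Map state fans items out to a Lambda task,
# so AWS handles the burst concurrency rather than a k8s autoscaler.
definition = {
    "StartAt": "FanOut",
    "States": {
        "FanOut": {
            "Type": "Map",
            "ItemsPath": "$.items",
            "MaxConcurrency": 100,
            "Iterator": {
                "StartAt": "Process",
                "States": {
                    "Process": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-item",
                        "End": True,
                    },
                },
            },
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="burst-fanout",  # hypothetical
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-exec",  # hypothetical
)
```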
aduwah
I think what happened is: "ChatGPT, generate me some ragebait for HN about serverless and a k8s engineer"
js4ever
Here it is:
Title: Serverless is eating Kubernetes—and that's a good thing
After 6 years of watching K8s engineers over-engineer themselves into a corner with YAML spaghetti, I finally moved a production workload to AWS Lambda + EventBridge + DynamoDB. You know what happened?
Nothing broke. It just worked.
No Helm charts. No Ingress hell. No cluster upgrades at 3am because of CVEs in some sidecar no one uses anymore. The whole app is now ~300 lines of infra code and it scales from 0 to 10k RPS without me touching anything.
Meanwhile, the K8s crowd is still debating which operator to use to restart a pod that keeps crashing because someone misconfigured a liveness probe. Or building internal platforms to abstract away the abstraction they just built last quarter.
Remind me again what the value-add of Kubernetes is in 2025? Other than keeping a whole cottage industry of "DevOps" engineers employed?
Serverless isn’t the future. It’s the present. And K8s? It’s the next OpenStack—just slower to die because it has better branding.
renatovico
Haha, busted, trying out a new shiny thing :)
trynumber9
>As long as you keep the cost down, you will never need to move away.
Yes, as long as the $2 trillion American corporation, beholden to shareholders to maximize profits, doesn't try to milk its captive customers, you'll be fine. Shouldn't be a problem.
onli
You shouldn't repeat the shareholder value myth; it is not true. See https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?arti... for example.
Whether that means that Amazon won't try to squeeze profits is a different question.
trynumber9
I didn't, as far as I'm aware. They are indeed beholden to their shareholders, and I said nothing about "value". Investors desire shares of profitable companies with consistent growth. As an AWS customer, you are the consistent growth. First as a new customer, a statistic paraded to investors. Later through price increases bringing real revenue.
Brushing off lock-in is a short term luxury.
mrkeen
I can't quite follow the article. Is it trying to argue that it's bad when it happens, or that it doesn't happen, or both?
onli
I understood the parent as repeating the claim that companies are beholden to their shareholders to maximize (short-term) profit. The article I linked discusses from several angles that this is a myth: companies are not forced (for example by law, as myth repeaters often claim) to maximize short-term profit for shareholders. They can aim for different values and strategies.
OrderlyTiamat
If society at large, and judges in particular, think it's true, then it's true. A partially socially constructed world is like that.
sa-code
While that's an interesting read, reality would disagree
KronisLV
> If you’re arguing with a k8s purist, you’ll never convince them.
I feel like the whole article very much sounded like constructing a strawman and arguing against that. The way I see it, there can be advantages and disadvantages to either approach.
If you really find a good use case for serverless, then try it out, summarize the good and the bad and go from there. Maybe it's a good fit for the problem but not for the team, or vice versa. Maybe it's neither. Or maybe it works and then you can implement it more. Or maybe you need to value consistency over an otherwise optimal solution so you just stick with EC2.
Most of the deployments I've seen don't really need serverless, nor do they need Kubernetes. More often than not, Docker Swarm is more than enough from a utilitarian perspective and often something like Docker/Compose with some light Ansible server configuration is also enough. Kubernetes seems more like the right solution when you have strong familiarity and organizational support for it, much like with orgs that try to run as much of their infra as possible on a specific Linux distro.
It's good when you can pick tech that's suited for the job (that you have now and in the near future, vs the scale you might need to be at in N years), the problems seem to start when multiple options that are good enough meet strong opinions.
I will admit that I do quite like containers for packaging and running software, especially since they're the opposite of vendor lock-in (OCI).
snicker7
We literally had a major us-east-1 incident on AWS today. The only thing we can do is sit on our butts and wait for it to end so that we can clean up. This happens every few months. I am unimpressed with the "thousands of engineers" argument.
swiftcoder
Even if you had deployed Kubernetes into us-east-1, you'd likely still be down during the incident
bob1029
Serverless is such a trap. The vendor's need to standardize the execution model is poorly aligned with the developer's need for control and stability over time. I gave Azure Functions a genuine try and was treated to piles of deprecation notices about the in-process execution model just a few months in. Perhaps AWS is better (I suspect they are), but the concern remains. I don't know how anyone is driving meaningful business value with the amount of distraction these ecosystems bring.
I also don't see the scalability argument. Being able to own a whole CPU indefinitely means I can take better advantage of its memory architecture over time. Caches actually have meaning. Latency becomes something you can control. Sixty seconds of full load on a t2.large running proper software could cost $10-20 if the same work were handled in AWS Lambda. The difference is truly absurd.
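That ballpark is reproducible with assumed numbers. A sketch, using illustrative prices and a made-up workload shape chosen to land in that range (whether one t2.large could really serve this load is its own assumption):

```python
# Rough check of that gap, using assumed (not current) prices.
T2_LARGE_PER_HOUR = 0.0928
LAMBDA_PER_GB_SECOND = 0.0000166667
LAMBDA_PER_MILLION_REQS = 0.20

# Assumed workload: 10k req/s for 60s, each invocation 1s at 1 GB.
reqs = 10_000 * 60
gb_seconds = reqs * 1.0  # one GB-second per request

ec2_cost = T2_LARGE_PER_HOUR * 60 / 3600
lambda_cost = (gb_seconds * LAMBDA_PER_GB_SECOND
               + reqs / 1e6 * LAMBDA_PER_MILLION_REQS)

print(f"t2.large, 60s: ${ec2_cost:.4f}")    # ~$0.0015
print(f"lambda,   60s: ${lambda_cost:.2f}") # ~$10.12
```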
TCO-wise, serverless is probably the biggest liability in any cloud portfolio, just short of the alternative database engines and "lakes".
solatic
The promise of both Kubernetes and serverless was to abstract away the infrastructure from the developer, who can stick to writing line-of-business code. In both cases, companies end up needing to hire infrastructure teams to manage the underlying infrastructure.
Author is making a moot argument that doesn't resonate. The real struggle is about steady-state load versus spiky load. The best place to run steady-state load is on-prem (it's cheapest). The best place to run spiky workloads is in the cloud (cheapest way of eliminating exhausted capacity risk). Then you have crazy cloud egress networking costs throwing a wrench into things. Then you have C-suite folks concerned about appearances and trading off stability (functional teams) versus agility (feature teams) with very strong arguments for treating infrastructure teams not as feature teams ("platform teams") but as functional teams (the "Kubernetes team" or the "serverless team").
And yes, there would be a "serverless" team, because somebody has to debug why DynamoDB is so expensive (why is there a table scan here...?!) and cost-optimize provisioned throughput, and somebody has to look at those ECS Fargate steady-state costs and wonder if managing something like auto-patching Amazon Linux is really that hard, considering the cost savings. At the end of the day, infrastructure teams are cost centers, and knowing how to reduce costs while supporting developer agility is the whole game.
biot
> Serverless Advocate: Yes, but instead of paying for infrastructure overhead and hiring 5–10 highly specialized k8s engineers, you pay AWS to manage it for you.
This argument lost me. If you’re running your own k8s install on top of servers, you’re doing it wrong. You don’t need highly specialized k8s engineers. Use your cloud provider’s k8s infrastructure, configure it once, put together a deploy script, and you never have to touch yaml files for typical deploys. You don’t need Lambda and the like to get the same benefits. And as a bonus, you avoid the premium costs of Lambda if you’re doing serious traffic (like a billion incoming API requests/day).
Every developer should be able to deploy at any time by running a single command to deploy the latest CI build. Here’s how: https://engineering.streak.com/p/implementing-bluegreen-depl...
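For what it's worth, that single command can be as small as the sketch below. The registry, deployment, and container names are made up, and unlike the linked post this shows a plain rolling update rather than blue/green:

```python
#!/usr/bin/env python3
"""Deploy the latest CI build: point the deployment at it, wait for rollout."""
import subprocess
import sys

def sh(*args):
    subprocess.run(args, check=True)

def deploy(tag):
    image = f"123456789012.dkr.ecr.us-east-1.amazonaws.com/api:{tag}"  # hypothetical registry
    # Swap the container image on the deployment...
    sh("kubectl", "set", "image", "deployment/api", f"api={image}")
    # ...and block until the new pods are healthy, failing loudly for CI.
    sh("kubectl", "rollout", "status", "deployment/api", "--timeout=5m")

if __name__ == "__main__":
    deploy(sys.argv[1])  # e.g. ./deploy.py $GIT_SHA
```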
cybrox
Also: as if you didn't need "5-10 highly specialized engineers" (neither approach needs that many, but alas) to get all the AWS serverless services to coexist and to scale cost- and compute-efficiently, with proper monitoring, logging, permissions, tracing, etc.
hn_throw2025
> But the real question is, why will you migrate? It is not like AWS is like Orkut, which can be shutdown overnight. As long as you keep the cost down, you will never need to move away.
Seems like a shallow take. Prices could rise and reliability fall, but you’d still be married to them.
dovys
You are trying to convince the team that they don't need to exist, while their livelihood depends on the opposite.
thrwaway55
This is an issue of comp, no? I'd delete my own team if it made sense, given the chance, because we are all shareholders as well.
dovys
If your company is big enough to have a dedicated k8s team, chances are deleting an entire team won't directly boost your comp. Better to sell the entire endeavor as change of responsibilities - from a team that manages k8s to one that's responsible for uptime. Set constraints and let the team find the best tool for the job.
hnarayanan
I like how, in this context, k8s is considered the raw metal thing. :)
madduci
The assumption is that you can always install k8s on bare metal if cloud providers aren't good anymore.
ezrast
Another article that, by the third sentence, namedrops seven different AWS services they want to build their app on and then spends the rest of the argument pretending like that ecosystem has zero in-built complexity. My friend, each one of those services has its own security model, limitations, footguns, and interoperability issues that you have to learn about independently. And you don't even mention any of the operational services like CloudWatch, CloudTrail, VPCs (even serverless, you'll need them if you want your lambdas to hit certain other services efficiently), and so on. Those are not remotely free. Your "real developers" can't figure out how to write a YAML document, but you trust them to manage infrastructure-as-code for motherloving API Gateway? Absolutely wild.
Kubernetes and AWS are both complex, but one of them frontloads all the complexity because it's free software written by infrastructure dorks, and one of them backloads all of it because it's a business whose model involves minimizing barriers to entry so that they can spring all the real costs on you once you're locked in. That doesn't mean either one is a better or worse technical solution to whatever specific problem you have, but it does make it really easy to make the wrong choice if you don't know what you're getting into.
As for the last point, I don't discourage serverless solutions because they make less work for me, I do it because they make more. The moment the developers decide they want any kind of consistency across deployments, I'm stuck writing or rewriting a bunch of Terraform and CI/CD pipelines for people who didn't think very hard about what they were doing the first time. They got a PoC working in half an hour clicking around the AWS console, fell in love, and then handed it to someone else to figure out esoterica like "TLS termination" and "logs" and "not making all your S3 buckets public by accident."
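The last footgun in that list, at least, has a one-call fix that the console-clicking PoC invariably skips. A sketch, with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Turn off every flavour of accidental public access on a bucket.
s3.put_public_access_block(
    Bucket="my-poc-bucket",  # hypothetical
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```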