
Ask HN: Best way to simultaneously run multiple projects locally?

60 comments

March 7, 2025

Hey HN, I often work on a couple of projects at the same time and am trying to figure out the easiest way to run all of them at once.

For example, let’s say that I want to run a frontend and two backend services (each in its own repo).

My current approach is to run each on a different port, but I actually wonder whether setting up a small Kubernetes cluster with Traefik as proxy wouldn’t serve me better as I could then use Skaffold or something similar to deploy apps independently to the cluster.

Basically looking for tried and tested solutions. Thanks.

ruuda

Woah, so much overcomplication in the comments!

If you want to run multiple applications, how about ... just running them? It sounds like you already do that, so what is the real problem that you are trying to solve?

If it's annoying to start them by hand one by one, you could use Foreman or Overmind to start them with a single command, and interleave their output in a terminal.
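Both of those read a plain Procfile; a minimal sketch (the process names and commands here are made up) looks something like:

  frontend: npm run dev
  api: ./backend-a/bin/server
  worker: ./backend-b/bin/server

Then `foreman start` or `overmind start` brings the whole set up in one terminal.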

zwnow

I also don't know why OP wants to jump into kubernetes directly instead of just using docker...

tbrownaw

> jump into kubernetes directly instead of just using docker

Option A: learn k8s, use it for anything containerized. Don't use the complicated parts if you don't need them.

Option B: learn docker (docker compose I guess?), use that... then also learn k8s because you'll eventually have to anyway

zwnow

I'd argue most companies would never have to touch k8s but Docker is a valuable skill to have.

suralind

It's not that I don't want to run them locally. It's more like I would like to

a) use HTTPS when developing services, b) give each service a unique hostname, and c) run all my services on the same port

and there's simply no easy way to do that that I know of. Hence my Ask HN :)

ruuda

Even Docker ... does that solve a problem OP has?

zwnow

Yes, just write a small docker compose file and you can start all of the services with 1 command. Either isolated or connected, however OP wants to use them.
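For example, something along these lines (build paths and ports are placeholders):

  # docker-compose.yml
  services:
    frontend:
      build: ./frontend
      ports: ["3000:3000"]
    backend-a:
      build: ./backend-a
      ports: ["8080:8080"]
    backend-b:
      build: ./backend-b
      ports: ["8081:8081"]

`docker compose up` starts everything, `docker compose down` tears it back down.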

jinnko

I hadn't come across overmind before and searching turned up several very different projects.

I think the parent comment is referring to this one: https://github.com/DarthSim/overmind

ruuda

Yes, that’s the one I meant. One thing that annoyed me with Foreman is that sometimes it doesn’t terminate all processes when one of them crashes. Since I switched to Overmind/Hivemind I never had that problem again.


lelanthran

What a bunch of over-engineered solutions.

I run multiple side projects on my Linux desktop using both PostgreSQL and MySQL, and hosts entries work well enough.

For HTTP, entries in the local hosts file let all the clients connect to port 80 on the local machine using different domain names, and nginx can proxy each connection to whatever port the backend is running on.
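A minimal sketch of that setup (the hostnames and upstream ports are made up):

  # /etc/hosts
  127.0.0.1  project-a.test  project-b.test

  # nginx: route by hostname to whatever port each backend listens on
  server {
    listen 80;
    server_name project-a.test;
    location / { proxy_pass http://127.0.0.1:3000; }
  }
  server {
    listen 80;
    server_name project-b.test;
    location / { proxy_pass http://127.0.0.1:4000; }
  }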

jasonkester

Everybody loves their random port numbers these days, but I still prefer custom hostnames.

Just chuck an entry into your hosts file and tell your web server to sniff for it and you’re done. Run your stuff on port 80 like nature intended and never have to teach either end of your app how to deal with colons and numbers in domain names.

And you get to skip the six paragraphs of pain the other commenters are describing, orchestrating their kubernetes and whatnot.

e.g.: http://dev.whatever/

necovek

I like to combine both locally — bind to port 0 to get an arbitrary unprivileged port, and use domains in /etc/hosts for easier configuration ;-)

For those not familiar with what you are talking about, you can add things like the following to your /etc/hosts file:

  127.0.1.1  foo.local
  127.0.2.4  foo.bar.local
  ...
They will all resolve to the local host, so make sure to bind your services to those IP addresses (e.g. `nc -l 127.0.2.4 80` will bind only to 127.0.2.4 on port 80).

But running on privileged ports like 80 means you've got to run the server as root.

parasti

At least on macOS you don't even need to add to /etc/hosts, just use .localhost as the TLD.

necovek

How does that work with binding the same port to it? Does it automatically assign a new IP for every new domain the resolver call gets (and then cache it)?

pacifika

This is the right answer. But use the .test TLD, as it is reserved for exactly this use case.

SkyPuncher

Different ports.

Don't add unnecessary complexity unless it's strictly necessary.

vaylian

Or, if you want to get a bit more fancy, use different IP addresses.

For IPv4 you have the entire 127.0.0.0/8 loopback range (127.0.0.1 through 127.255.255.254) available to you.

For IPv6 you can add additional addresses to your existing network interface.
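For example (the addresses and interface name are just examples; on Linux the whole 127.0.0.0/8 range already routes to the loopback device, other OSes may need an alias added first):

  # two services, same port, different loopback addresses
  nc -l 127.0.0.2 8080 &
  nc -l 127.0.0.3 8080 &

  # add an extra IPv6 address to an existing interface
  sudo ip -6 addr add fd00::10/64 dev eth0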

unsnap_biceps

I use 127.x.x.x addresses extensively for local development and add entries in /etc/hosts so I get DNS resolution as well. It works great.

KronisLV

I wonder why there aren't popular solutions for managing hosts entries in an automated way for development. If you need to share entries with other people across a bunch of projects, adding them manually is cumbersome, running a separate DNS server just for development is not much better, and using public DNS records that point at local addresses seems a bit dirty.

In other words, where's:

  hostman apply Hostfile
  hostman clear Hostfile
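Lacking such a tool, a crude way to get the same effect is to keep each project's entries between markers in /etc/hosts, something like (the marker name is made up):

  # "clear": drop any previous block for this project
  sudo sed -i '/# BEGIN myproject/,/# END myproject/d' /etc/hosts

  # "apply": append the current Hostfile between fresh markers
  { echo '# BEGIN myproject'; cat Hostfile; echo '# END myproject'; } | sudo tee -a /etc/hosts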

xlii

That’s a huge trap.

Yes, it’s simple at the beginning, but it takes a lot of effort to move to a non-port-based solution for anything.

The cuts are small at the beginning (oh, this service should use a different PostgreSQL, so let’s have two; oh, but my code doesn’t specify the port in a config file, so let me add direnv; oops, the IDE didn’t pick up the env file), but they grow quickly.

Containers are standard nowadays and leave the door open to Kubernetes if one wants it. With solutions like Justfile or Taskfile it’s reasonably ergonomic.

SkyPuncher

It’s a problem future me will always, happily solve.

If my system is large enough and complex enough, it _likely_ means the business behind it is successful. I will always rather solve “dumb problems from past me” for a successful business than face the alternative of not having a business.

xlii

I agree with the principle in general, but a non-reproducible execution environment with free-for-all internet access is an unimaginable time sink.

Businesses tend to topple over unreliable promises, or over annoying customers with bad tech, and I’ve seen a few that are gone because of it.

Unicorns won’t fall over such problems, but a semi-good Average Joe product might. No one is going to hear about it, even though such products make up the bulk of software by global code volume.

Containers (on low technical level) are simple, performant and scale well.

armarr

It depends on what kind of scale you are running at. It's not worth over-engineering a project for a scale you hope to achieve some day; it takes engineering resources away from building something that provides value today.

neilv

Using Kubernetes can be good for your resume.

What I usually do is use different ports on my workstation, so I get the fastest iteration by keeping things simple. Be careful to keep the ports straight, though.

You can put the port numbers and other installation-specific values in a `.env` file, an application-specific config file, or a supplemental build system config file, none of which are checked into Git.

One way I did this was to have a very powerful `Makefile` that could not only build a complicated system and perform many administrative functions for a tricky deployment, but also work in both development and production. That `Makefile` pulled in `Makefile-options`, which had all the installation-specific variables, including ports. Other development config files, including HTTPS certificates, were generated automatically based on that. Change `Makefile-options` and everything depending on any of those variables was rebuilt. When you ran `make` without a `Makefile-options` file, it generated one, with a template of the variables you could set.
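A stripped-down sketch of that pattern (the variable names and the helper script are invented here):

  # Pull in installation-specific variables; generate a template on first run.
  -include Makefile-options

  Makefile-options:
  	printf 'FRONTEND_PORT = 8080\nBACKEND_PORT = 9090\n' > $@

  # Anything that depends on Makefile-options is rebuilt when it changes.
  dev-config.json: Makefile-options
  	./scripts/render-config $(FRONTEND_PORT) $(BACKEND_PORT) > $@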

Today I'd consider using Kubernetes for that last example, but we were crazy-productive without it, and production worked fine.

thierrydamiba

What is your preferred method to deploy a Python API to a frontend?

aprdm

You can use Ansible (or ssh) and copy a tarball (or a Python package, or pull a Docker container).
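The ssh/tarball flavour of that, stripped to the bone (the host, paths and service name are placeholders):

  tar czf app.tar.gz -C ./dist .
  scp app.tar.gz deploy@example-host:/opt/myapp/
  ssh deploy@example-host 'cd /opt/myapp && tar xzf app.tar.gz && sudo systemctl restart myapp'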

xlii

Containers.

I’ll also recommend the commercial OrbStack for Mac, because it simplifies some configuration and is performant.

This has been my focus over the last couple of months (right now, for a customer solution, I’m running tens of isolated clusters of heterogeneous services with custom network configurations).

I’ve tried nix, local Caddy/Haproxy, Direnvs, Devenvs, dozens of task file runners, DAG runners etc.

Kubernetes is fine, but it’s a domain in its own right and you’ll end up troubleshooting plenty of things instead of doing the work itself.

The simplest solution I would recommend is a task runner (Justfile/Taskfile) with containers (either built or with volumes attached - this prevents secrets leakage). Pay special attention to artifact leakage and clone volumes instead of mutating them.
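As a rough sketch of that kind of Justfile (image names, the port and the volume are all placeholders):

  up:
      docker volume create scratch-data
      docker run -d --name backend -v scratch-data:/data backend:dev
      docker run -d --name frontend -p 3000:3000 frontend:dev

  down:
      docker rm -f backend frontend
      docker volume rm scratch-data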

I don’t recommend Docker Compose because it has a low barrier to entry but a high ceiling, and it takes a long time to back out of it.

For simple clusters (5-8 containers) this works well. If you need to level up, my personal recommendations would be:

- Go for a pure programmatic experience (I’ve tested a bunch of API clients and IMO it’s less time learning Go than troubleshooting/backfilling missing control-flow features) - there’s also Magefile for simplified flows

- Full Kubernetes with a templating language (avoid YAML like the plague)

- Cuelang if you want to go for full reliability (but it’s difficult to understand and the documentation is some of the worst I have ever read through).

gabesullice

https://ddev.com/ has become standard in the circles I run in (most are web devs working in agencies touching multiple projects each week). You don't have to use DDEV specifically, but it works like a dream and may provide some inspiration.

Each project gets its own Docker Compose file. These allow you to set up whatever project-specific services and configuration you need.

None of your projects need to expose a port. Instead, each project gets a unique name like `fooproject` or `barproject`, and the container listening on port 80 is named {project-name}-web.

It all gets tied together by a single global NGINX/Traefik/Caddy container (you choose) that exposes ports 80 and 443 and reverse proxies to each project's web container using Docker's internal hostnames. In pseudo-code:

  https://fooproject.example.site 
  {
    reverse_proxy fooproject-web:80
  }

  https://barproject.example.site 
  {
    reverse_proxy barproject-web:80
  }
The final piece of the puzzle is that the maintainer of DDEV set up a DNS A record for

  127.0.0.1 *.ddev.site
You could do something similar yourself using your own domain or locally with DNSMasq.
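The dnsmasq side of that is a one-liner (using the .test TLD here; any domain works):

  # dnsmasq.conf: answer every *.test query with the loopback address
  address=/test/127.0.0.1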

It may seem overcomplicated (and it is complicated). But since it's a de-facto standard and well-maintained, that maintenance burden is amortized over all the users and projects. (To the naysayers, consider: PopOS/Ubuntu are quite complicated, but they're far easier to use for most people than a simpler hand-rolled OS with only the essentials.)

necovek

I prefer setting up services that bind to port 0 ("get me an unprivileged port"), report that back, and use that to auto-configure dependent services.

This allows local development and debugging for fast iterations and quick feedback loop. But, it also allows for multiple branches of the same project to be tested locally at the same time!
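A throwaway one-liner to see what the kernel hands back when you bind to port 0:

  python3 -c 'import socket; s = socket.socket(); s.bind(("127.0.0.1", 0)); print(s.getsockname()[1])'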

Yeah, git does not make that seamless, so I'd have multiple shallow clones to allow me to review a branch without stopping work on my own branch(es).

tcoff91

Git worktrees make working with multiple branches a breeze
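E.g. checking a review branch out into a sibling directory without a second clone (the path and branch name are placeholders):

  git worktree add ../myrepo-review some-branch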

necovek

Nice, I wasn't aware of git worktrees at all!

PaulHoule

Before there was Docker and Kubernetes I used to run hundreds of web sites on a single server by being disciplined about where files go, how database connections are configured, etc. I still do.

suralind

I like to run my code in containers (also during development), although your approach ain’t broken.

djood

I would say docker-compose with traefik is definitely the easiest! You can even set dependencies between services to ensure that they load in the right order, do networking, etc.

If you're interested in running locally, a solution like kubernetes seems slightly overkill, but it can be fun to mess with for sure!

renewiltord

Kube slows down the iteration cycle. I use a bash script that encodes the port information. It's no big deal if you repeat this boilerplate code; an LLM can apply a change to all of it simultaneously.

Simple `bin/restart`
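Something along these lines (every name, path and port here is made up):

  #!/usr/bin/env bash
  set -euo pipefail

  # stop whatever is already running
  pkill -f backend-server || true
  pkill -f frontend-server || true

  # bring each service back up on its well-known local port
  (cd backend && PORT=8080 ./backend-server &)
  (cd frontend && PORT=3000 ./frontend-server &)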

K3s is good. Kube is also good. But for local development you want to isolate to the code and have a rapid cycle time on features. Use `mise` with a simple run script. If deploying to k3s, use Docker (with OrbStack if on a Mac) and a simple run script.

LLMs being bad at auto-debugging your environment means you spend even more time on a low-leverage task. Avoid that at all costs. Small problem => small solution.

victorNicollet

We use the following steps:

- Each service listens on a different, fixed port (as others have recommended).

- Have a single command (incrementally) build and then run each service, completely equivalent to running it from your IDE. In our case, `dotnet run` does this out of the box.

- The above step is much easier if services load their configuration from files, as opposed to environment variables. The main configuration files are in source control; they never contain secrets, instead they contain secret identifiers that are used to load secrets from a secret store. In our case, those are `appsettings.json` files and the secret store is Azure KeyVault (see the sketch at the end of this comment).

- An additional optional configuration file for each application is outside source control, in a standard location that is the same on every development machine (such as /etc/companyname/). This lets us have "personal" configuration that applies regardless of whether the service is launched from the IDE or the command-line. In particular, when services need to communicate with each other, it lets us configure whether service A should use a localhost address for service B, or a testing cluster address, or a production cluster address.

- We have a simple GUI application that lists all services. For each service it has a "Run" button that launches it with the command-line script, and a checkbox that means "other local services should expect this one to be running on localhost". This makes it very simple to, say, check three boxes, run two of them from the GUI, and run the third service from the IDE (to have debugging enabled).
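To illustrate the configuration point above, a sketch of what such a file might contain (the keys are invented; the point is that it names a secret, it never contains one):

  {
    "ServiceB": { "BaseUrl": "http://localhost:5002" },
    "Database": { "PasswordSecretName": "orders-db-password" }
  }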