What if we treated Postgres like SQLite?
37 comments
· September 22, 2025 · JodieBenitez
omarqureshi
Literally, prior to the cloud being a thing, this was the way for medium-sized web apps.
eirikbakke
The PostgreSQL data directory format is not very stable or portable. You can't just ZIP it up and move it to a different machine, unless the new machine has the same architecture and "sufficiently" similar PostgreSQL binaries.
In theory the data directory works with any PostgreSQL binaries from the same major version of PostgreSQL, but I have seen cases where this fails e.g. because the binaries were from the same major version but compiled with different build options.
tracker1
Yeah, I was going to mention that just upgrading between PG versions can be a bit of a pain. Dump/restore really seems like a less-than-stellar option if you have a LOT of data. I mean, you can stream through gzip/bzip2 to save space, but still.
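The streamed dump/restore being described might look something like this (database names are placeholders):

```shell
# Stream a plain-format dump through gzip so no huge intermediate
# SQL file ever lands on disk (db names are placeholders):
pg_dump mydb | gzip > mydb.sql.gz

# ...and restore it the same way on the other side:
gunzip -c mydb.sql.gz | psql mydb_new
```

(`pg_dump -Fc` also compresses on its own, but piping the plain format through gzip is the approach mentioned here.)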
I often wish that Firebird had a license that people found friendlier to use, as it always felt like a perfect technical option for many use cases, from embedded to a standalone server. PG has clearly eclipsed it at this point, though.
oulipo2
So the alternative would be to just pg_dump / pg_restore the content? Is it an issue?
freedomben
I've been hit by this too, so it's definitely a risk. I've also had permissions on the files get broken, which is a bitch to debug.
mutagen
Some apps do. The most widely used one I know of is Blackmagic's DaVinci Resolve, the video editor with a relatively full-featured free edition. I think this has more to do with its roots in a high-end networked environment, but still, the local desktop version installs Postgres.
srameshc
When I want to treat my Postgres like SQLite, I use DuckDB with its PostgreSQL extension: https://duckdb.org/docs/stable/core_extensions/postgres.html. That's just one of many extensions; there are more, and I love DuckDB for that.
oulipo2
In what kind of scenarios are you using DuckDB from Postgres? And does it improve over, say, having a ClickHouse instance nearby?
emschwartz
I think this is a neat direction to explore. I've wondered in the past whether you could use https://pglite.dev/ as if it were SQLite.
tensor
Someone is working on a libpglite built around pglite. I think this is what will provide an actual SQLite alternative:
https://github.com/electric-sql/pglite-bindings
It would still be missing the standardized disk format aspect of SQLite, but otherwise it will be neat.
thruflo
This is very much the point of https://pglite.dev
It's an embeddable Postgres you can run in process as a local client DB, just like SQLite but it's actually Postgres.
emschwartz
For sure. I'm curious if anyone is using it in production as an alternative to SQLite and, if so, what the performance and experience are like.
8organicbits
I do this using the Docker approach, especially for low-scale web apps that run on a single VM. I like that it's full Postgres versus the sometimes odd limits of SQLite. My usual setup is a Traefik container for SSL, a Django+gunicorn web app, and a Postgres container, all running as containers on one VM. Postgres uses a volume, which I back up regularly. For testing I use `eatmydata`, which turns off sync and speeds up test cycles by a couple percent.
I haven't tried the unix socket approach; I suppose I should, but it's plenty performant as is. One project I built using this model hit the HN front page. Importantly, the "marketing page" was static content on a CDN, so the web app only saw users who signed up.
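The single-VM stack described above can be sketched as a compose file. This is a hypothetical minimal sketch: the service names, image tags, and placeholder project name are illustrative, not from the comment.

```yaml
# Illustrative sketch only; names, images, and settings are placeholders.
services:
  traefik:
    image: traefik:v3
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  web:
    build: .
    command: gunicorn myproject.wsgi   # "myproject" is a placeholder
    depends_on: [db]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data  # the volume that gets backed up

volumes:
  pgdata:
```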
bob1029
I think this is a great idea for testing. MSSQL has LocalDB which is used a lot throughout the .NET ecosystem:
https://learn.microsoft.com/en-us/sql/database-engine/config...
For heavy-duty production use (i.e., pushing the actual HW limits), I would feel more comfortable with the SQLite vertical. Unix sockets are fast, but you are still going across process boundaries. All of SQLite can be run on the same thread as the rest of the application. This can dramatically reduce consumption of memory bandwidth, etc.
bluGill
Memory bandwidth I don't worry about much; most of the time you should set up a small database with just enough data for the test, which hopefully is fast. However, sockets and processes are a worry, since they introduce places where things can go wrong that are unrelated to your test, and then you have flaky tests nobody trusts.
nu11ptr
You can; just embed it in your Go app:
tptacek
This just runs Postgres as a process, right?
nu11ptr
Yes, but it embeds it in your executable so it is transparent to the end user.
UPDATE: Actually, I see it is downloaded (and I think cached?). I can't recall if embedding it is an option or not.
OutOfHere
Does this use an external binary or CGO or Wazero (Wasm) or is it rewritten in Go?
With SQLite, although all approaches are available, my fav is to use https://github.com/ncruces/go-sqlite3 which uses Wazero.
I try to avoid CGO if I can because it adds compile-time complexity, making it unfriendly for a user to compile.
nu11ptr
> Does this use an external binary or CGO or Wazero (Wasm) or is it rewritten in Go?
Since Postgres is always a network connection, I don't believe any CGo is required.
> I try to avoid CGO if I can because it adds compile-time complexity, making it unfriendly for a user to compile.
Using Zig as your C compiler mostly fixes this. You can't 100% get rid of the complexity, but I've cross-compiled to Windows/Mac/Linux pretty easily via CGo using `zig cc`.
marcobambini
Instead of sqlite-vec you can take a look at the new sqlite-vector extension: https://substack.com/home/post/p-172465902
munchlax
From what I've read about it, DuckDB comes close: regular files, like SQLite, but PG functionality.
nullzzz
No, it's not "pg functionality". It's close to SQL standard compliance, but not close to what Postgres has to offer. Also: only one writing transaction at a time, in-process, etc.
OutOfHere
If I am not mistaken, DuckDB is suitable for columnar analytics queries, less so for multi-column row extractions. Which PG-like functionality does it offer on top?
datadrivenangel
DuckDB does aim to be Postgres compatible from a SQL syntax perspective, but you are 100% correct that it is not optimized for individual transactions. I'm a huge advocate of DuckDB, but strongly consider your life choices if you want to use it as a transactional database.
whartung
I don’t recall the mechanics but I do know that folks have bundled starting a local instance of PG solely for unit tests.
There's a pre-install step to get PG on the machine, but once it's there, the testing framework stands it up and shuts it down. As I recall, it was pretty fast.
nullzzz
This is IMO best done running pg in a container using docker-compose or similar
kissgyorgy
The huge advantage of SQLite is not that it's on the same machine, but that it's in-process, which makes deployment and everything else just simpler.
> You can just install Postgres on your single big and beefy application server (because there’s just the one when you use SQLite, scaled vertically), and run your application right next to it.
Am I getting old? Seems obvious to me.
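The in-process point is easy to see with, for example, Python's bundled sqlite3 module: the entire database engine runs inside the application process, so there is no server to start and nothing to connect to. (A minimal sketch, not from the thread.)

```python
import sqlite3

# The whole engine lives in this process: no daemon, no socket,
# and "deployment" is just shipping one file (or using memory).
conn = sqlite3.connect(":memory:")  # or a path on disk
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("ada",))
rows = conn.execute("SELECT id, name FROM users").fetchall()
print(rows)  # [(1, 'ada')]
```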