Twake Drive – An open-source alternative to Google Drive
109 comments
October 24, 2025
Vipsy
CaptainOfCoit
I'd add a fourth; "Make it easy to do backups and verify they're correct".
I don't think I've ever considered a data store without that being one of my top concerns. This anxiety comes from real-life experience where the business I worked at had backups enabled for the primary data store for years, but when something finally happened and we lost some production data, we quickly discovered that the backups weren't actually possible to restore from, and had been corrupted this whole time.
navigate8310
Schrödinger's backup. Testing that the backup actually works involves even more engineering and non-creative work.
CaptainOfCoit
Depends. Even something basic like "Check if the produced artifact is a valid .zip/.tar.gz" can be enough in the beginning, probably would have prevented the issue I shared before.
Then once you grow/need higher reliability, you can start adding more advanced checks, like it has the tables/data structures you expect and so on.
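The cheap check described above can be sketched in a few lines. This is a minimal sketch, not a full restore test; the function name `artifact_looks_valid` is hypothetical, and it only verifies that the archive can be opened and fully read, which is enough to catch the silently truncated or corrupted artifacts mentioned upthread.

```python
import tarfile
import zipfile

def artifact_looks_valid(path: str) -> bool:
    """Cheap sanity check: can the backup archive be opened and walked?

    Not a substitute for a real restore drill, but it catches truncated
    or corrupted artifacts before they sit unnoticed for years.
    """
    try:
        if path.endswith(".zip"):
            with zipfile.ZipFile(path) as zf:
                # testzip() returns the first bad member, or None if every CRC checks out.
                return zf.testzip() is None
        if path.endswith((".tar.gz", ".tgz")):
            with tarfile.open(path, "r:gz") as tf:
                # Iterating the members forces the whole stream to be decompressed.
                for _ in tf:
                    pass
                return True
    except (zipfile.BadZipFile, tarfile.TarError, OSError, EOFError):
        return False
    return False  # unknown extension: fail closed
```

A cron job that runs this against last night's artifact and alerts on `False` is a reasonable first rung before graduating to restore-into-staging tests.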
6510
I had a funny one where I somewhat regularly test an SQL backup, and one day it didn't work. It worked the second time, the third, and the fourth. I have no idea why it failed that once. It turned into a permanent background process in the back of my head. The endless what-if loop.
PanoptesYC
I'd like a manual "sync now" option. Sometimes I put stuff in Google Drive using Windows Explorer and it's not immediately obvious whether it is syncing, why it is or isn't, or what I need to do to make it sync.
The_President
I've got a theory that progress bars for main functionality tasks, and the manual triggers that go with them, are out of favor in modern software: they create a stage for an error to be displayed and create expectations the customer can lean on. Less detail in the errors displayed to the customer removes their ability to identify a software problem as unique, or as shared among others.
"Something went wrong!"
Vipsy
Syncing should be under the user's control: the user should be able to trigger or abort a sync. It should also provide some sort of progress indicator.
gwbas1c
I built something similar years ago. These are terribly hard to build, so I did a bit of digging.
1: This appears to be backed by a French company called Linagora. I don't know much about the company, but they've been around for a bit.
2: I experimented with MongoDB for a similar product, and it turned out to be very unreliable. A lot may have changed since I used MongoDB, but in general, I'm wary of any product that uses it unless there's an expectation that data is lossy.
(Which was the problem MongoDB had at the time: their CTO only wanted to target lossy data use cases, but the people interested in using MongoDB wanted a database that was easier to use than SQL.)
evolve2k
I’ve had similar warnings from multiple very senior devs to never go near mongo. So better explain that choice if you’re wanting adoption. Reliability was the concern.
orliesaurus
Lots of talk about must‑have features and backups here...
BUT there's another piece that makes or breaks these tools... whether they can build a community around them and stick around for years...
Open‑source cloud storage projects come and go when maintainers burn out... a sustainable business model or strong contributor base matters as much as technical checklists...
ALSO interoperability is underrated... if your drive can speak WebDAV or S3 and plug into existing identity systems, teams are more likely to try it...
In the end people want something that won't vanish after the honeymoon... that's harder than adding a progress bar...
cheema33
As others have asked, how does it compare with Nextcloud/ownCloud? And does it have native clients for the usual suspects? Windows/Mac/mobile...
kimos
I desperately want to be a fan of ownCloud, because it offers clients natively across Mac/Linux/mobile, but it’s such a mess. Every platform has small bugs and reliability problems that make the whole thing useless.
ponooqjoqo
I tried to install nextcloud once, and it was an exercise in misery.
vachina
If you just need a web interface to your filesystem, there’s this single Go executable (https://github.com/filebrowser/filebrowser) that supports sharing and minimal user management.
3idet
58.9% TypeScript and 32.6% JavaScript wouldn't be my first choice for implementing such a high-performance, throughput-demanding application. Why is that?
tantalor
> 58.9% TypeScript and 32.6% JavaScript
Isn't that just 91.5% JavaScript?
TypeScript is not real.
awwaiid
Almost, but not entirely, unlike birds
ActionHank
Maybe ask all the startups looking to scale their TS/JS microservices "stack" using event-driven architecture.
SilverSlash
Why not use Deno instead of Node.js for the backend? For a product like this could the extra security that Deno's sandbox provides help?
edweis
Do you really need a database for this? On a unix system, you should be able to: CRUD users, CRUD files and directories, grant permissions to files or directories
Is there a decade-old software that provides a UI or an API wrapper around these features for a "Google Drive" alternative? Maybe over the SAMBA protocol?
ramses0
Take a look at "cockpit", because if there were, that's where it "should" be.
https://cockpit-project.org/applications
--
With no command line use needed, you can:
Navigate the entire filesystem,
Create, delete, and rename files,
Edit file contents,
Edit file ownership and permissions,
Create symbolic links to files and directories,
Reorganize files through cut, copy, and paste,
Upload files by dragging and dropping,
Download files and directories.
motorest
> Do you really need a database for this?
I have no idea how this project was designed, but a) it's to be expected that disk operations can and should be cached, and b) syncing file shares across multiple nodes can easily involve storing metadata.
For either case, once you realize you need to persist data then you'd be hard pressed to justify not using a database.
MontyCarloHall
How would you implement things like version history or shareable URLs to files without a database?
Another issue would be permissions: if I wanted to restrict access to a file to a subset of users, I’d have to make a group for that subset. Linux supports a maximum of 65536 groups, which could quickly be exhausted for a nontrivial number of users.
skydhash
Back up files the way Emacs, Vim, etc. do it: a consistent scheme for naming the copies. As for shareable URLs, they could be links.
The file system is already a database.
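The Emacs-style naming scheme is simple enough to sketch. This is a minimal illustration (the helper name `make_numbered_backup` is mine, not from any project discussed here): each save copies the file to `name.~N~`, where N is one higher than any existing backup, so the directory listing itself is the version index.

```python
import os
import re
import shutil

def make_numbered_backup(path: str) -> str:
    """Copy `path` to the next free Emacs-style backup name: file.~1~, file.~2~, ...

    The highest existing number plus one is used, so versions sort
    naturally and the filesystem itself acts as the version index.
    """
    directory = os.path.dirname(path) or "."
    base = os.path.basename(path)
    pattern = re.compile(re.escape(base) + r"\.~(\d+)~$")
    existing = [int(m.group(1)) for name in os.listdir(directory)
                if (m := pattern.match(name))]
    backup = os.path.join(directory, f"{base}.~{max(existing, default=0) + 1}~")
    shutil.copy2(path, backup)  # copy2 preserves mtime and permissions
    return backup
```

The obvious trade-off versus a database is that every version is a full copy, with no delta storage and no metadata beyond what the filesystem records.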
Wicher
As for the permissions, using ACLs would work better here. Then you don't need a separate group for every grouping.
MontyCarloHall
TIL about ACLs! I think that would nicely solve the group permission issue.
edweis
OK, so this product will be for projects with fewer than 65k users.
For naming, just name the directory the same way on your file system.
Shareable URLs can be a hash of the path with some kind of HMAC to prevent scraping.
Yes, if you move a file, you can create a symlink to preserve it.
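The HMAC idea above can be sketched with the standard library. This is a minimal illustration under stated assumptions: the secret key and function names are hypothetical, and the token is derived purely from the path, so it survives server restarts without any stored state.

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice loaded from config, never sent to clients.
SECRET = b"server-side-secret-key"

def share_token(path: str) -> str:
    """Derive an unguessable share token for `path` from the server secret."""
    return hmac.new(SECRET, path.encode(), hashlib.sha256).hexdigest()

def verify_share(path: str, token: str) -> bool:
    """Constant-time check that `token` was minted for `path`."""
    return hmac.compare_digest(share_token(path), token)
```

Because the token is a pure function of the path, renaming or moving the file silently invalidates every issued URL, and there is nowhere to hang per-link state like an expiry date, which is exactly the objection raised elsewhere in the thread.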
conception
Encode paths by algorithm/encryption?
MontyCarloHall
This wouldn’t be robust to moving/renaming files. It also would preclude features like having an expiration date for the URL.
ajross
> How would you implement things like version history
Filesystem or LVM snapshots immediately come to mind
> or shareable URLs to files without a database?
Uh... is the path to the file not already an URL? URLs are literally an abstraction of a filesystem hierarchy already.
QuantumNomad_
> Filesystem or LVM snapshots immediately come to mind
I use ZFS snapshots and like them a lot for many reasons. But I don’t have any way to quickly see individual versions of a file without having to wade through a lot of snapshots where the file is the same, because snapshots are at the filesystem level (or more specifically in ZFS, at the “dataset” level, which is somewhat like a partition).
And also, because I snapshot at set intervals, there might be a version of a file that I wanted to go back to but which I don’t have a snapshot of at that exact moment. So I only have history of what the file was a bit earlier or a bit later than some specific moment.
I used to have snapshots automatically trigger every 2 minutes and snapshot cleanup automatically trigger hourly, daily, weekly and monthly. In that setup there was a fairly high chance that if I made some mistake editing a file, I also had a version that kept the edits from right before, as long as I discovered the mistake right away.
These days I snapshot automatically a couple of times per day and cleanup every few months with a few keystrokes. Mainly because at the moment the files I store on the servers don’t need that fine-grained snapshots.
Anyway, the point is that even if you snapshot frequently, it’s not going to be particularly ergonomic to find the version you want. So maybe the “Google Drive” UI would also have to check each revision to see if it was actually modified, and only show those that were. And even then it might not be the greatest experience.
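The "check each revision and only show the ones that changed" step can be sketched generically. This is a minimal illustration, not ZFS-specific: it assumes snapshots are exposed as ordered directories (as ZFS does under `.zfs/snapshot/`, names here are illustrative) and collapses consecutive snapshots whose content hash is identical.

```python
import hashlib
import os

def distinct_versions(snapshot_dirs, rel_path):
    """Walk snapshots in chronological order and keep only those where the
    file's content hash differs from the previously seen version.

    `snapshot_dirs` is an ordered list of snapshot root directories,
    e.g. ['.zfs/snapshot/2025-10-01', '.zfs/snapshot/2025-10-02', ...].
    Returns a list of (snapshot_dir, sha256_hexdigest) pairs.
    """
    versions = []
    last_digest = None
    for snap in snapshot_dirs:
        candidate = os.path.join(snap, rel_path)
        if not os.path.exists(candidate):
            continue  # file didn't exist yet (or was deleted) in this snapshot
        with open(candidate, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != last_digest:
            versions.append((snap, digest))
            last_digest = digest
    return versions
```

Even with this filtering, the intermediate edits between snapshot intervals are simply gone, which is the gap the commenter describes: snapshots give you history at sample points, not per-save versions.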
benrutter
I don't know of one. I've thought about this before, but with Python and fsspec. Having a Google Drive-style interface that can run on local files, or any filesystem of your choice (SSH, S3, etc.), would be really great.
pas
... well, it makes sense to be able to do a "join" with the `users` and `documents` collections, use the full expressive range of an aggregation pipeline (and it's easy to add additional indices to MongoDB collections, and have transactions, and even add replication - not easy with a generic filesystem)
put all kinds of versioned metadata on docs without coming up with strange encodings, and even though POSIX (and NodeJS) offers a lot of FS related features it probably makes sense to keep things reeeeally simple
and it's easy to hack on this even on Windows
nodesocket
Perhaps they are using MongoDB GridFS instead of storing files on disk.
jedimastert
An SCP or FTP client maybe?
edweis
Definitely. Though Samba supports authentication natively. With SCP and SFTP you'll need another admin server to create users.
GiorgioG
You expose SAMBA shares outside your home network?
edweis
I do, password-protected of course. It is the only "native" way I found to get access to server files from my iPhone (via Files) without downloading a third-party app.
vlovich123
I really hope you lock it down to something like Tailscale so that you have a private area network and your Samba share isn’t open to the entire world.
Samba is a complicated piece of software built around protocols from the 90s. It’s designed around the old idea of physical network security where it’s isolated on a LAN and has a long long history of serious critical security vulnerabilities (eg here’s an RCE from this month https://cybersecuritynews.com/critical-samba-rce-vulnerabili...).
dns_snek
I think you should figure out how to quit while you're ahead. I wouldn't expose Samba to most of the devices on my LAN, never mind the internet.
operon
Search for wannacry. You may rethink your setup.
rambambram
USB sticks, the alternative to the cloud.
netdevphoenix
Until you lose it, break it, or damage it accidentally (via high humidity, high heat, etc.). Arguably, if you run Twake on some VPS, you have additional layers of redundancy by default.
tfe__
You mean, like the dns of AWS in us-east-1? #OhWait
cheschire
USB sticks can fulfill part of the "2" in the 3-2-1 backup rule (3 copies, on 2 different media, 1 of them offsite).
Tepix
Not sure how I can collaboratively edit documents thanks to a USB stick.
cheema33
Surely you jest. I love USB sticks. But they are not a proper alternative to cloud storage. For example, how do I share select files/folders with select people in other countries?
maxlin
Given how integrated Drive and Docs are, if this doesn't have docs-like collaborative realtime document editing, for many people this is like "30% of Google Drive"
For people whose UX is dragging and dropping stuff to browser, and/or using a desktop sync client only, sure why not, the UI looks clean and familiar. But as someone who has used and still uses like 3 different similar things concurrently, the only real reason I use drive is because of the seamless zero-dependency office-like web software being part of the product.
(yes I know it's a curse too, I ended up writing a piece of software just to migrate company drive stuff to my personal drive when a company I was a cofounder in went bust to have a record ... those google docs can really only exist in Drive natively, any export is an immediate downgrade)
Gigachad
Is this a fork of something? Or recently open sourced? Looks like there is a single commit where a majority of the code came from.
CaptainOfCoit
> Looks like there is a single commit where a majority of the code came from.
I do this all the time, right before open sourcing a project. Basically while it's private, commit quality can be a bit rough, and if I want to open source it, I'll remove .git, make a new init commit then open source it. No one needs to see what I do in my private abode :)
Elizer0x0309
Ha! 100% agree! Lots of my commits have personal info even. Months or years of changes, I'd rather squash and then push publicly.
g-b-r
The history of the development since its beginning can help a lot in studying the code, so I encourage people to avoid the single commit as much as possible.
It's much better to refactor (rebase) the messy commits, removing the personal or embarrassing stuff; although that might result in a "false" history, a series of smaller-sized commits will usually be much easier to follow than reading a whole code base all at once.
Really, I see a ton of open-source projects that do this, and it results in a lot more opacity and friction than necessary.
It results in fewer people being able to check the code and contribute to the project.
CaptainOfCoit
I promise you're not missing much, except some commits that are implementing something, reverting it, implementing it again slightly differently, fixing typos, replacing 80% of the codebase in one swoop, and similar stupid and unneeded stuff.
If the project is from the get-go supposed to be a long-lived project (like professional development for a business) then I agree, don't smoke the entire history no matter how embarrassing it is.
But for my personal projects, I can let you know that having access to the git history before I made it FOSS will make you dumber rather than being helpful for anything, compared to one clean starting commit.
javatuts
+1
pgt
If you want to increase adoption, change the name: https://www.paulgraham.com/name.html
TDrive would work
CaptainOfCoit
> If you want to increase adoption, change the name: https://www.paulgraham.com/name.html
> If you have a US startup called X and you don't have x.com, you should probably change your name.
But they do own https://twake-drive.com/ already? What exactly is your point here? Either you misunderstand the linked article, or I do. But it seems people would be able to find it just fine: twake-drive.com comes up as the first result when I search for "Twake Drive".
Besides, Graham's articles are almost always geared towards startups in one way or another. This doesn't seem to be that, so not sure I'd even try to read it if I was the owner of Twake Drive.
VWWHFSfQ
I don't think that advice has been relevant for a while now.
Open-source drive tools live or die on three things: 1) simple sync that never surprises, 2) clean conflict handling you can explain to a non-tech friend, and 3) zero-drama upgrades.
If Twake nails those and keeps a sane on-prem story with S3 and LDAP, it has a shot. The harder part is trust and docs: a clear threat model, crisp migration guides from Drive and Dropbox, and a tiny CLI that just works on a headless box. Do these and teams will try it for real work, not just weekend tests.