
Infinite Git repos on Cloudflare workers

koolba

> We’re building Gitlip - the collaborative devtool for the AI era. An all-in-one combination of Git-powered version control, collaborative coding and 1-click deployments.

Did they get a waiver from the git team to name it as such?

Per the trademark policy, new “git${SUFFIX}” names aren’t allowed: https://git-scm.com/about/trademark

>> In addition, you may not use any of the Marks as a syllable in a new word or as part of a portmanteau (e.g., "Gitalicious", "Gitpedia") used as a mark for a third-party product or service without Conservancy's written permission. For the avoidance of doubt, this provision applies even to third-party marks that use the Marks as a syllable or as part of a portmanteau to refer to a product or service's use of Git code.

WorkerBee28474

You don't need their permission to make a portmanteau; you just need to follow trademark law (which may or may not allow it). The policy page can go kick sand.

saurik

While true, using someone else's trademark as a prefix of your name when you are actively intending it to reference the protected use seems egregious.

fumplethumb

What about… GitHub, Gitlab, Gitkraken, GitButler (featured on HN recently)? The list goes on forever!

afiori

Supposedly they got written permission.

rzzzt

What about an old word? Agitator, legitimate, cogitate?

plesiv

OP here. Oops, thank you for pointing that out! We weren’t aware of it. We will investigate ASAP. In the worst case, we’ll change our name.

benatkin

Doesn't sound like a worst case to me. It could use a better name anyway.

ecshafer

Github doesn't stop me from making an infinite number of git repos. Or maybe they do, but I have never hit the limit. And if I am hitting that limit, and become a large enterprise customer, I am sure they would work with me on getting around that limit.

Where does this fit into a product? Maybe I am blind, but while this is cool, I don't really see where I would want this.

aftbit

Github would definitely reach out if you tried to make 100k+ Github repos. We once automatically opened issues in response to exceptions (sort of a ghetto Bugsnag / Sentry) and received a nice email from an engineer asking us if we really needed to do that when we hit around the 200k mark.

no_wizard

Oh here’s an interesting idea.

What if these bug reporting platforms could create a branch and tag it for each issue.

This would be particularly useful for point-in-time things where you have an immutable deployment branch. It could create a branch off that immutable deployment branch and tag it, so you always have a point-in-time code reference for bugs.

Would that be useful? I feel like what you’re doing here isn’t that different if I get what’s going on (basically creating one repository per bug?)

justincormack

Github weren't terribly happy with the number of branches we created for this type of use case at one point.

aphantastic

Why not just keep the sha of the release in the bug report?

foota

In some ways, you could imagine repos might be more scalable than issues within a repo, since you could reasonably assume a bound on the number of issues in a single repo.

plesiv

OP here. We’re building a new kind of Git platform. "Infinity" is more beneficial for us as platform builders (simplifying infrastructure) but less relevant to our customers as users.

shivasaxena

Imagine every Notion doc or every Airtable base being a git repo. Imagine the PR workflow that we developers love being available to everyone.

yjftsjthsd-h

> It allows us to easily host an infinite number of repositories

I like this system in general, but I don't understand why scaling the number of repos is treated as a pinch point? Are there git hosts that struggle with the number of repos hosted in particular? (I don't think the "Motivation" section answers this, either.)

plesiv

OP here.

It’s unlikely any Git providers struggle with the number of repos they're hosting, but most are larger companies.

Currently, we're a bootstrapped team of 2. I think our approach changes the kind of product we can build as a small team.

rad_gruchalski

How? What makes it so much more powerful than gitea hosted on a cheap vps with some backup in s3?

Unless, of course, your product is infinite git repos with cf workers.

icambron

Seems like it enables you to do things like use git repos as per-customer or per-business-object storage, which you otherwise wouldn't even consider. Like imagine you were setting up a blogging site where each blog was backed by a repo.

abraae

Or perhaps a SaaS product where individual customers had their own fork of the code.

There are many reasons not to do this, perhaps this scratches away at one of them.

bhl

Serverless git repos would be useful if you wanted to make a product like real-time collaborative code editing in the browser with offline support.

You can still sync to a platform like GitHub or BitBucket after all users close their tabs.

A long time ago, I looked into using isomorphic-git with lightning-fs to build a light note-taking app in the browser: pull your markdown files in, edit them in a rich-text editor a la Notion, then stage and commit the changes back using git.
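For reference, a minimal sketch of that flow using isomorphic-git with lightning-fs (the repo URL, file name, and author are placeholders, and pushing back out would additionally need auth):

    import LightningFS from "@isomorphic-git/lightning-fs";
    import * as git from "isomorphic-git";
    import http from "isomorphic-git/http/web";

    // In-browser filesystem backed by IndexedDB.
    const fs = new LightningFS("notes");
    const dir = "/notes";

    async function pullEditCommit() {
      // Shallow-clone the notes repo into the in-browser filesystem.
      await git.clone({ fs, http, dir, url: "https://example.com/notes.git", singleBranch: true, depth: 1 });

      // Edit a markdown file (in a real app the content would come from the editor).
      await fs.promises.writeFile(`${dir}/todo.md`, "# edited in the browser\n", "utf8");

      // Stage and commit locally; push whenever the user is back online.
      await git.add({ fs, dir, filepath: "todo.md" });
      await git.commit({
        fs, dir,
        message: "Edit todo.md in the browser",
        author: { name: "Note Taker", email: "notes@example.com" },
      });
    }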

aphantastic

That’s essentially what github.dev and vscode.dev do FWIW.

sluongng

@plesiv could you please elaborate on how repack/gc is handled with a libgit2 backend? I know that Alibaba has done something similar in the past based on libgit2, but I have yet to see another implementation in the wild like this.

Very cool project. I hope Cloudflare Workers can support more protocols like SSH and gRPC. It's one of the reasons why I prefer Fly.io over Cloudflare Workers for special servers like this.

plesiv

Great question! By default, with libgit2 each write to a repo (e.g. push) will create a new pack file. We have written a simple packing algorithm that runs after each write. It works like this:

Choose these values:

* P, pack "Planck" size, e.g. 100kB

* N, branching factor, e.g. 8

After each write:

1. iterate over each pack (pack size is S) and assign each pack a class C which is the smallest integer that satisfies P * N^C > S

2. iterate variable c from 0 up to the maximum value of C found in step 1

* if there are N packs of class c, repack them into a new pack; the new pack will be at most of class c+1
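A toy model of that policy (bookkeeping only; the real merge step would rewrite pack files via libgit2). With the example values, packs smaller than 100kB get class 0, up to 800kB class 1, up to 6.4MB class 2, and so on:

    // Example values from above: P = pack "Planck" size, N = branching factor.
    const P = 100 * 1024;
    const N = 8;

    // Class C of a pack of size S: the smallest integer with P * N^C > S.
    function packClass(size: number): number {
      let c = 0;
      while (P * Math.pow(N, c) <= size) c++;
      return c;
    }

    // After each write: while some class holds N packs, merge N of them into
    // one pack, which ends up in class c+1 at most. Pack sizes stand in for
    // real pack files here.
    function repack(sizes: number[]): number[] {
      for (;;) {
        const byClass = new Map<number, number[]>();
        sizes.forEach((s, i) => {
          const c = packClass(s);
          byClass.set(c, [...(byClass.get(c) ?? []), i]);
        });
        const full = [...byClass.values()].find((idxs) => idxs.length >= N);
        if (!full) return sizes; // no class holds N packs, done
        const group = full.slice(0, N);
        const merged = group.reduce((sum, i) => sum + sizes[i], 0);
        sizes = sizes.filter((_, i) => !group.includes(i)).concat(merged);
      }
    }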

skybrian

Not having a technical limit is nice, because then it’s a matter of spending money. But whenever I see “infinite,” I ask what it will cost. How expensive is it to host git repos this way?

As a hobbyist, “free” is pretty appealing. I’m pretty sure my repos on GitHub won’t cost me anything, and that’s unlikely to change anytime soon. Not sure about the new stuff.

jsheard

With Cloudflare, at least, when you overstay your welcome on the free plan they just nag you to start paying, and possibly kick you out if you don't, rather than sending you a surprise bill for $10,000 like AWS or Azure or GCP might do.

betaby

Somewhat related question. Assume I have ~1k ~200MB XML files that get ~20% of their content changed. What are my best options to store them? While using vanilla git on an SSD RAID10 works, it's quite slow at retrieving historical data dating back ~3-6 months. Are there other options for a quicker back-end? I'm fine with it being not that storage efficient, to a degree.

o11c

I'm not sure if this quite fits your workload, but a lot of times people use `git` when `casync` would be more appropriate.

adobrawy

I don't know what your "best" criterion is (implementation costs, implementation time, maintainability, performance, compression ratio, etc.). Still, the easiest way to start is to delegate it to the file system, so zfs + compression. Access time should be decent. No application-level changes are required to enable that.

betaby

It is already on ZFS with compression.

nomel

If you can share, I'd be curious to know what that large of an XML file might be used for, and what benefits it might have over other formats. My personal and professional use of XML has been pretty limited, but XSD was super powerful, and the reason we chose it when we did.

betaby

Juniper router configs, something like below.

    adamc@router> show arp | display xml
    <rpc-reply xmlns:JUNOS="http://xml.juniper.net/JUNOS/15.1F6/JUNOS">
      <arp-table-information xmlns="http://xml.juniper.net/JUNOS/15.1F6/JUNOS-arp" JUNOS:style="normal">
        <arp-table-entry>
          <mac-address>0a:00:27:00:00:00</mac-address>
          <ip-address>10.0.201.1</ip-address>
          <hostname>adamc-mac</hostname>
          <interface-name>em0.0</interface-name>
          <arp-table-entry-flags>
            <none/>
          </arp-table-entry-flags>
        </arp-table-entry>
      </arp-table-information>
      <cli>
        <banner></banner>
      </cli>
    </rpc-reply>

hobs

It's a good question, because my answer for a system like this, which had very little schema change, was to just dump it into a database and add historical tracking per object that way: hash, compare, insert, and add a historical record.
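A sketch of what that could look like with better-sqlite3 (the table and column names are made up): hash the incoming file, compare against the most recent stored hash, and only insert a new history row when it changed.

    import Database from "better-sqlite3";
    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";

    const db = new Database("configs.db");
    db.exec(`CREATE TABLE IF NOT EXISTS config_history
             (device TEXT, sha256 TEXT, captured_at TEXT, body BLOB)`);

    // Hash today's file, compare with the last stored hash for this device,
    // and only insert a new historical record when the content changed.
    function track(device: string, path: string): void {
      const body = readFileSync(path);
      const sha256 = createHash("sha256").update(body).digest("hex");
      const last = db
        .prepare("SELECT sha256 FROM config_history WHERE device = ? ORDER BY captured_at DESC LIMIT 1")
        .get(device) as { sha256: string } | undefined;
      if (last?.sha256 !== sha256) {
        db.prepare("INSERT INTO config_history VALUES (?, ?, ?, ?)")
          .run(device, sha256, new Date().toISOString(), body);
      }
    }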

betaby

I do have the current state in the DB. However, I sometimes need to compare today's file with the one from 6 months ago.

hokkos

You can compress it with EXI, a binary format for XML; if the encoding is informed by the schema, it can give a big boost in compression.

tln

> get ~20% of their content changed

...daily? monthly? how many versions do you have to keep around?

I'd look at a simple zstd dictionary-based scheme first. Put your history/metadata into a database. Put the XML data into the file system / S3 / Backblaze B2, zstd-compressed against a dictionary.

Create the dictionary: zstd --train PathToTrainingSet/* -o dictionaryName
Compress with the dictionary: zstd FILE -D dictionaryName
Decompress with the dictionary: zstd --decompress FILE.zst -D dictionaryName

Although you say you're fine with it being not that storage efficient to a degree, I think if you were OK with storing every version of every XML file uncompressed, you wouldn't have to ask, right?

betaby

If one stores whole versions of the files, that defeats the idea of git and would consume too much space. I suppose I don't even need zstd if I have ZFS with compression, although the compression levels won't be as good.

tln

You're relying on compression either way... my hunch is that controlling the compression yourself may get you a better result.

Git does not store diffs; it stores every version. These get compressed into packfiles: https://git-scm.com/book/en/v2/Git-Internals-Packfiles. It looks like it uses zlib.

tln

Congrats, you've done a lot of interesting work to get here.

This could be a fantastic building block for headless CMS and the like.

plesiv

OP here. Thank you and good catch! :-) We have a blog post planned on that topic.

seanvelasco

This leverages Durable Objects, but as I remember from two years ago, the way DOs guarantee uniqueness is that there can only be one instance of a given DO in the world.

What if two users want to access the same DO repo at the same time, one in the US and the other in Singapore? The DO must live either on US servers or on SG servers, but not on both at once, so one of the two users must have high latency then?

Then, after some time, a user in Australia accesses this DO repo - the DO bounces to AU servers - and the US and SG users will have high latency?

But please correct me if I'm wrong.

skybrian

Last I heard, durable objects don’t move while running. It doesn’t seem worse than hosting in US-East, though.

VoidWhisperer

Not the main purpose of the article but they mention they were working on a notetaking app oriented towards developers - did anything ever come of that? If not, does anyone know products that might fit this niche? (I currently use obsidian)

plesiv

OP here. Not yet - it's about 50% complete. I plan to open-source it in the future.

nbbaier

Definitely interested in seeing this as well. What are the key features?

ericyd

Engaging read! For me, just the right balance of technical detail and narrative content. It's a hard balance to strike and I'm sure preferences vary widely which makes it an impossible target for every audience.

iampims

Some serious engineering here. Kudos!

bagels

Infinite sounds like a bug happened. It's obviously not infinite; some resource will eventually be exhausted - in this case, memory.