
Going down the rabbit hole of Git's new bundle-URI

jakub_g

This is super interesting, as I maintain a 1M-commit / 10GB repo at work, and I'm researching ways to have users clone it faster. Basically, for now I do a very similar thing manually, storing a "seed" repo in S3 and having a custom script fetch from S3 instead of doing `git clone`. (It's faster than cloning from GitHub: apart from not having to enumerate millions of objects, S3 doesn't throttle the download, while GitHub seems to throttle at 16 MiB/s.)
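For context, the bundle-URI mechanism from the article can automate roughly this workflow. A minimal sketch, assuming the seed bundle is published somewhere world-readable (the bucket and repo names here are placeholders):

  # publisher side: build a seed bundle of the whole repo and upload it
  git bundle create seed.bundle --all
  aws s3 cp seed.bundle s3://my-seed-bucket/seed.bundle

  # client side (Git 2.38+): fetch the bulk of the objects from the bundle
  # first, then top up from the real remote
  git clone --bundle-uri="https://my-seed-bucket.s3.amazonaws.com/seed.bundle" \
      https://github.com/org/repo.git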

Semi-related: I always wondered, but never got time to dig into, what exactly the server and client exchange. I sometimes notice that when creating a new branch off main (still talking about the 1M-commit repo) with just one tiny new commit, the client sends way more data than I expected (tens of MBs). I always assumed the client somehow establishes with the server that it already has a certain SHA and only uploads the missing commit, but it seems that's not exactly the case when creating a new branch.
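For anyone curious, the raw pkt-line traffic of that negotiation can be dumped with Git's packet tracing; a quick sketch (the branch name is illustrative):

  # print the ref advertisement and have/want negotiation to stderr
  GIT_TRACE_PACKET=1 git push origin my-new-branch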

maccard

Funny you say this. At my last job I managed a 1.5TB Perforce depot with hundreds of thousands of files and had the problem of "how can we speed up CI?". We were on AWS, so I synced the repo, created an EBS snapshot, and used that to make a volume, with the intention of reusing it (as we could shove build intermediates in there too).

It was faster to just sync the workspace over the internet than it was to create the volume from the snapshot, and a clean build was quicker from the just-synced workspace than from the snapshotted one, presumably because EBS volumes created from snapshots load their blocks lazily on first access.

We just moved our build machines to the same VPC as the server and our download speeds were no longer an issue.

dijit

I used to use FUSE and OverlayFS for this. I'm not sure it still works well, as I'm not a build engineer and I set it up just for myself.

It's a lot faster in my case (a little over 3TiB for the latest revision only).

jclarkcom

VMware?

captn3m0

The Linux kernel project does the same thing, publishing bundle files over a CDN[0] for CI systems, using a script called linux-bundle-clone[1].

[0]: https://www.kernel.org/best-way-to-do-linux-clones-for-your-...

[1]: https://web.git.kernel.org/pub/scm/linux/kernel/git/mricon/k...
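The overall pattern such scripts follow is plain git: clone from the bundle, then re-point origin at the live repository and catch up. A rough sketch (the bundle URL is illustrative, not the script's actual path):

  # fetch a pre-built bundle from the CDN
  curl -L -o linux.bundle https://cdn.kernel.org/pub/linux.bundle
  # clone from the local bundle file
  git clone linux.bundle linux
  cd linux
  # re-point origin at the live repo and fetch whatever the bundle lacks
  git remote set-url origin https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  git fetch origin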

miyuru

If I read the script correctly, it still points to git.kernel.org; however, it seems to use the git bundle technique mentioned in the article.

djfivyvusn

Have you tried downloading the .zip archive of the repo? Or does that run into similar throttling?

autarch

> This has resulted in a contender for the world's smallest open source patch:

Hah, got you beat: https://github.com/eki3z/mise.el/pull/12/files

It's one ASCII character, so a one-byte patch. I don't think you can get smaller than that.

yangman

There is a cursor rendering fix in xf86-video-radeonhd (or perhaps -radeon) that flips a single bit.

It took the group several years to home in on.


ZeWaka

That's a line modification, so presumably you'd count just an insertion or just a deletion as 'smaller'.

autarch

Yes, but so is the PR shown in the article. You're not going to get a diff that's less than one line unless you are using something besides the typical diff and patch tools.

san1t1

My smallest PR was adding a missing executable file permission.

timdorr

falcor84

What's the story behind that? Did you just deploy a blank commit to trigger a hook?

nine_k

Only accepted and merged commits count!

ks2048

How much bandwidth and time is wasted cloning the entire history of large projects, when people only need a single snapshot of a single branch?

According to Stack Overflow, newer versions of git can do:

  git init
  git remote add origin <url>
  # fetch only the single commit, with no history behind it
  git fetch --depth 1 origin <sha1>
  # check out the fetched commit directly (as a detached HEAD)
  git checkout FETCH_HEAD

jes5199

I have a vague recollection that GitHub is optimized for whole-repo cloning, and that they were asking projects not to do shallow fetches automatically, for performance reasons.

nyanpasu64

As I understand it, this issue affected Homebrew and CocoaPods: https://github.com/CocoaPods/CocoaPods/issues/4989#issuecomm...

> Apparently, most of the initial clones are shallow, meaning that not the whole history is fetched, but just the top commit. But then subsequent fetches don't use the --depth=1 option. Ironically, this practice can be much more expensive than full fetches/clones, especially over the long term. It is usually preferable to pay the price of a full clone once, then incrementally fetch into the repository, because then Git is better able to negotiate the minimum set of changes that have to be transferred to bring the clone up to date.
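If a repo has already started out shallow, it can be converted to a full clone so that later fetches negotiate normally:

  # deepen an existing shallow clone into a full one
  git fetch --unshallow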

sureIy

I don't know if that still applies, or if it just doesn't apply on GitHub Actions, but shallow clones are the default there. See `actions/checkout`.

acheong08

git clone --depth 1 works as well. If you're just cloning to build and not contributing, it makes much more sense.

mikepurvis

GitHub can also just serve you a tarball of a snapshot, which is faster and smaller than a shallow clone (and is therefore the preferred option for a lot of source package managers, like Nix, Homebrew, etc.).

It's frustrating that tarball URLs are a proprietary thing and not something that was ever standardized in the git protocol.
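For reference, GitHub's tarball endpoint looks roughly like this (owner, repo, and ref are placeholders):

  # grab a snapshot of a single ref, with no git history at all
  curl -L -o snapshot.tar.gz https://github.com/<owner>/<repo>/archive/<ref>.tar.gz
  tar -xzf snapshot.tar.gz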

skissane

> It's frustrating that tarball URLs are a proprietary thing and not something that was ever standardized in the git protocol.

I think there's a lot of stuff common to the major Git hosters (GitHub, GitLab, etc.) - PRs/MRs, issues, status checks, and so on - which I wish we had a common interoperable protocol for. Every forge has its own REST API that provides many of the same operations and fields, just in an incompatible way. There really should be standardisation in this area, but I suppose that isn't really in the interests of the major incumbents (especially GitHub), since it would reduce the lock-in created by switching costs.

bobbylarrybobby

I believe there is a bit of a footgun here, because if you don't `git clone`, you don't fetch all branches, just the default one. That can be very confusing and annoying if you know a branch exists on the remote but don't have it locally (the first time you hit it, at least).
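If you've ended up in that state, restoring the conventional refspec makes later fetches pick up every remote branch again; a small sketch:

  # track all of origin's branches from now on, not just the configured one
  git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
  git fetch origin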

geenat

git needs built-in handling of large binary files without a ton of hassle; it's all I ask. It'd make git universally applicable to all projects.

Mercurial had it for ages.

SVN had it for ages.

Perforce had it for ages.

Just keep the latest binary, or the last x versions, and let us purge the rest easily.

mbac32768

One consequence of git clone is that if you have mega-repos, it kind of ejects everything else from your cache for no win.

You'd actually rather special-case full clones and instruct the storage layer not to cache the clone's reads. But this isn't always possible to do.

Git bundles seem like a good way to improve the performance of other requests, since they punt the bulk download off to a CDN and protect the cache.

jedimastert

This actually might solve a massive CI problem we've been having... will report back tomorrow.

jwpapi

!remind me

andrewshadura

Interestingly, Mercurial solved bundles more than ten years ago, and back then they already worked better than Git's do today.

capitainenemo

Not the only Mercurial feature where that's the case... sad. I keep rooting for the project to implement a Mercurial frontend over a git DB, but they seem to be limited by missing git features.

kps

Jujutsu (jj) is heavily inspired by Mercurial (though with some significant differences) and can operate with git as a storage backend. https://github.com/jj-vcs/jj

capitainenemo

Yeah, that sounds about right. But from what I read, they were limited in what they could implement due to missing git features, for example phases.

https://ahal.ca/blog/2024/jujutsu-mercurial-haven/ was a post on that.

But it looks like they are trying, and at least they imposed some sanity, like in the base commit ID. I wonder if they have anything like hg grep --all, hg absorb, and hg fa --deleted yet.

They do have revsets ♥

nine_k

But branches were more problematic.

capitainenemo

Mercurial has had git-like "lightweight branches" (bookmarks), without the revision record of Mercurial's named branches, for over 15 years. There are good reasons to use the traditional branches, though.

https://mercurial.aragost.com/kick-start/en/bookmarks/
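For the unfamiliar, bookmarks behave much like git's movable branch pointers; a tiny sketch (the names are illustrative):

  # create a bookmark pointing at the current changeset
  hg bookmark my-feature
  # committing while the bookmark is active advances it
  hg commit -m "work on my-feature"
  # switching away leaves the bookmark where it was
  hg update default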

DrinkyBird

The topics[0] feature in the evolution extension is probably even closer to Git branches, since they are completely mutable and needn't be a permanent part of your repo. Bookmarks are just pointers to changesets, and although that's technically how Git branches work, it's not how they work in practice in Mercurial because of its focus on immutability (and because hg and git work differently).

[0]: https://www.mercurial-scm.org/doc/evolution/tutorials/topic-...

dgfitz

Someone once put together an LLM-backed list of things people on HN post about a lot; mine was about this "other" DVCS system.

It is superior, and it’s not even much of a comparison.

Already__Taken

I used Mercurial in anger for about 9 months or so, with a GitLab fork too. When git goes wrong, there are forums, blogs, books, and manuals. When hg does, it's a Python stack trace; good luck.

capitainenemo

When I've had Mercurial issues, I went to the Mercurial channel on Libera, or to their manual. But then, I haven't ended up with a stack trace yet.