ArchiveTeam has finished archiving all goo.gl short links

dkh

Excellent! ArchiveTeam have always been impressive this way. Some years ago, I was working at a video platform that had just announced it would be shutting down fairly soon. I forget how, but one way or another I got connected with someone at ArchiveTeam who expressed their interest in archiving it all before it was too late. Believing this to be a good idea, I gave them a couple of tips about where some of our device-sniffing server endpoints were likely to give them a little trouble, and temporarily "donated" a couple of EC2 instances to them to put towards their archiving tasks.

Since the servers were mine, I could see what was happening, and I was very impressed. Within I want to say two minutes, the instances had been fully provisioned and were actively archiving videos as fast as was possible, fully saturating the connection, with each instance knowing to only grab videos the other instances had not already gotten. Basically they have always struck me as not only having a solid mission, but also being ultra-efficient in how they carry it out.

zdimension

The title is imprecise: it's ArchiveTeam.org, not Archive.org. The Internet Archive is providing free hosting, but the archival work was done by ArchiveTeam members.

im3w1l

What exactly is archiveteam's contribution? I don't fully understand.

Edit: Like they kinda seem like an unnecessary middle-man between the archive and archivee, but maybe I'm missing something.

creatonez

What ArchiveTeam mainly does is provide hand-made scripts to aggressively archive specific websites that are about to die, with a prioritization for things the community deems most endangered and most important. They provide a bot you can run to grab these scripts automatically and run them on your own hardware, to join the volunteer effort.

This is in contrast to the Wayback Machine's built-in crawler, which is just a broad-spectrum internet crawler without any specific rules, prioritizations, or supplementary link lists.

For example, one ArchiveTeam project set out to save as many obscure wikis as possible, using the MediaWiki export feature rather than just grabbing page contents directly. This came in handy for the thousands of wikis affected by Miraheze's disk failure that happened to have backups created by this project. Thanks to the domain-specific technique, the backups were high-fidelity enough that many users could immediately restart their wiki on another provider as if nothing had happened.
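
For the curious, a MediaWiki export can be requested through the standard API's export option; here's a minimal sketch (the wiki URL and page title are placeholders, and API paths vary between installs):

```python
import requests

# Hypothetical wiki; real installs expose the API at /w/api.php, /api.php, etc.
WIKI_API = "https://example-wiki.org/w/api.php"

# Ask MediaWiki for an XML export of a page: wikitext plus revision metadata,
# which is far easier to re-import elsewhere than scraped HTML.
resp = requests.get(WIKI_API, params={
    "action": "query",
    "titles": "Main_Page",
    "export": 1,
    "exportnowrap": 1,  # return the bare XML dump instead of wrapping it
    "format": "json",
})
resp.raise_for_status()

with open("Main_Page.xml", "w", encoding="utf-8") as f:
    f.write(resp.text)
```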

They also try to "graze the rate limit" when a website announces a shutdown date and there isn't enough time to capture everything. They actively monitor for error responses and adjust the archiving rate accordingly, to get as much as possible as fast as possible, hopefully without crashing the backend or inadvertently archiving a bunch of useless error messages.
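
As a rough illustration of that "graze the rate limit" idea (not ArchiveTeam's actual code; the thresholds and status codes here are just plausible choices), an adaptive fetcher backs off on errors and creeps back toward full speed on success:

```python
import time
import requests

def fetch_politely(urls, min_delay=0.05, max_delay=30.0):
    """Yield (url, response) pairs while adapting the request rate to server health."""
    delay = min_delay
    for url in urls:
        while True:
            resp = requests.get(url, timeout=30)
            if resp.status_code in (429, 500, 502, 503):
                # The backend is struggling: back off exponentially and retry.
                delay = min(delay * 2, max_delay)
                time.sleep(delay)
                continue
            # Healthy response: keep it, then speed back up a little.
            yield url, resp
            delay = max(delay / 1.5, min_delay)
            time.sleep(delay)
            break
```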

dkh

I just made a root comment with my experience seeing their process at work, but yeah, it really cannot be overstated how efficient and effective their archiving process is.

iamacyborg

Their MediaWiki tool was also invaluable in helping us fork the Path of Exile wiki from Fandom.

wongarsu

> Like they kinda seem like an unnecessary middle-man between the archive and archivee

They are the middleman that collects the data to be archived.

In this example the archivee (goo.gl/Alphabet) is simply shutting the service down and has no interest in archiving it. Archive.org is willing to host the data, but only if somebody brings it to them. ArchiveTeam writes and organises crawlers to collect the data and send it to Archive.org.

1gn15

ArchiveTeam delegates tasks to volunteers (and to themselves) running the Warrior VM, which does the actual archiving. The resulting archives are then centralized by ArchiveTeam and uploaded to the Internet Archive.

(Source: ran a Warrior)

notpushkin

Sidenote, but you can also run a Warrior in Docker, which is sometimes easier to set up (e.g. if you already have a server with other apps in containers).

diggan

> What exactly is archiveteam's contribution? I don't fully understand.

If the Internet Archive is a library, ArchiveTeam is the people who run around collecting stuff and giving it to the library for safekeeping. They tend to focus on stuff that is estimated or announced to be disappearing or getting removed soon.

debesyla

They gathered up the links for processing, because Google doesn't publish a list of the short links in use. So the links have to be gathered by brute force first.

horseradish7k

liability shield

dang

Related. Others?

Enlisting in the Fight Against Link Rot - https://news.ycombinator.com/item?id=44877021 - Aug 2025 (107 comments)

Google shifts goo.gl policy: Inactive links deactivated, active links preserved - https://news.ycombinator.com/item?id=44759918 - Aug 2025 (190 comments)

Google's shortened goo.gl links will stop working next month - https://news.ycombinator.com/item?id=44683481 - July 2025 (222 comments)

Google URL Shortener links will no longer be available - https://news.ycombinator.com/item?id=40998549 - July 2024 (49 comments)

Ask HN: Google is sunsetting goo.gl on 3/30. What will be your URL shortener? - https://news.ycombinator.com/item?id=19385433 - March 2019 (14 comments)

Tell HN: Goo.gl (Google link Shortener) is shutting down - https://news.ycombinator.com/item?id=16902752 - April 2018 (45 comments)

Google is shutting down its goo.gl URL shortening service - https://news.ycombinator.com/item?id=16722817 - March 2018 (56 comments)

Transitioning Google URL Shortener to Firebase Dynamic Links - https://news.ycombinator.com/item?id=16719272 - March 2018 (53 comments)

SilverElfin

Is there anyone archiving all of reddit? Or twitter? I mean even if their terms have changed to not allow it.

DaSHacka

> reddit

There used to be one such project (Pushshift), before the Reddit API change. You can download all the data and see all the info on the-eye, another datahoarder/preservationist group:

https://the-eye.eu/redarcs/

> twitter

Not that I know of, and you haven't even been able to archive tweets on the Wayback Machine for YEARS.

stuffoverflow

Academictorrents has monthly dumps of all reddit submissions and comments even after the API restrictions.

9dev

Ask OpenAI maybe?

Ayesh

shaky-carrousel

Yeah, I'll take that "update" as the extremely unreliable info from an extremely unreliable company that it is.

OJFord

This leaves me wondering what the point is. What could it possibly cost to keep redirecting the existing shortlinks they already consider unused/low-activity anyway?

(In addition to the higher-activity ones the parent link says they'll now continue to redirect.)

manquer

[delayed]

toomuchtodo

To save face.

RicoElectrico

In another submission someone speculated the reason might be the unending churn of the Google tech stack that just makes low-maintenance stuff impossible.

nocoiner

I have a question about this.

Per Google, shortened links “won't work after August 25 and we recommend transitioning to another URL shortener if you haven’t already.”

Am I missing something, or doesn’t this basically obviate the entire gesture of keeping some links active? If your shortened link is embedded in a document somewhere and can’t be updated, Google is about to break it, no?

OJFord

About to break it if it didn't seem 'actively used' in late 2024, yes. But if your document was being frequently read and the link actively clicked, it'll (now) keep working.

But as I said in a sibling comment to yours, I don't see the point of the distinction. Why not just continue them all? Surely the mostly unused ones are even cheaper to serve.

fortran77

I don't really understand this. Is it really that costly to keep the entire database if they're going to keep part of it?

tombert

I built a URL shortener years ago for fun. I don't have the resources that Google has, but I just hacked it together in Erlang using Riak KV, and it did horizontally scale across at least three computers (I didn't have more at the time).

Unless I'm just super smart (I'm not), it's pretty easy to write a URL shortener as a key-value system, and pure key-value stuff is pretty easy to scale. I cannot imagine Google isn't doing something as efficient as, or more efficient than, what I did.
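
To illustrate the point, here's a toy key-value shortener (an in-memory sketch, nothing like Google's actual stack). Because every operation is a single key lookup, swapping the dict for a distributed KV store like Riak is straightforward:

```python
import secrets
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

ALPHABET = string.ascii_letters + string.digits
STORE = {}  # in-memory stand-in for a real KV store (Riak, Bigtable, ...)

def shorten(long_url, length=6):
    """Pick a random key and map it to the long URL."""
    key = "".join(secrets.choice(ALPHABET) for _ in range(length))
    STORE[key] = long_url
    return key

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        # The read path is a single key lookup followed by an HTTP redirect.
        target = STORE.get(self.path.lstrip("/"))
        if target:
            self.send_response(301)
            self.send_header("Location", target)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    print("try http://localhost:8000/" + shorten("https://example.com/some/long/path"))
    HTTPServer(("localhost", 8000), Redirector).serve_forever()
```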

wtallis

Google also has the advantages that they now only need a read-only key-value store, and they know the frequency distribution for lookups. This is now the kind of problem many programmers would be happy to spend a weekend optimizing to get an average lookup time down to tens of nanoseconds.

benoau

I don't understand the data on ArchiveTeam's page, but it seems like they have 35 terabytes of data (286.56 TiB)? It's a lot larger than I'd have thought.

wtallis

FYI, "TiB" means terabytes with a base of 1024, i.e. the units you'd typically use for measuring memory rather than the units you'd typically see drive vendors using. The factor of 8 you divided by only applies to units based on bits rather than bytes; those units use "b" rather than "B", and are only used for capacity measurements when talking about individual memory dies (though they're normal for talking about interconnect speeds).

Either way, we're talking about a dataset that fits easily in a 1U server with at most half of its SSD slots filled.

Aardwolf

I don't understand the page; it shows a list of data sets (I think?) up to 91 TiB in size.

The list of short links and their target URLs can't be 91 TiB in size, can it? Does anyone know how this works?

jdiff

I did some ridiculous napkin math. A random URL I pulled from a Google search was 705 bytes. A goo.gl link is 22 bytes, but if you only store the ID, it'd be 6 bytes. Some URLs are going to be shorter, some longer, but just ballparking it all, that lands us in the neighborhood of hundreds of billions of URLs, up to trillions of URLs.
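
The rough arithmetic behind that estimate (assuming the ~91 TiB figure upthread and these per-entry byte counts; compression and archive overhead would shift it substantially):

```python
# Napkin math: how many URL mappings fit in ~91 TiB at ~711 bytes each?
dataset_bytes = 91 * 2**40              # 91 TiB
bytes_per_entry = 705 + 6               # one long URL plus a 6-byte short ID
print(dataset_bytes / bytes_per_entry)  # ~1.4e11, i.e. low hundreds of billions
```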

makeworld

Glad I contributed to this in some small way.

Klathmon

Same, it's nice to see my username on the leaderboards.

Even though all I did was set up the Docker container one day and forget about it.

yreg

I wonder how many of them lead to private YouTube videos, Google documents, etc.

mdaniel

I was going to be cheeky and say "well, now you can download them and search" but it seems it's "Access-restricted-item: true" for some reason, above and beyond being 10G a pop <https://archive.org/details/archiveteam_googl_20250228144231...>

horseradish7k

you'd have to rescrape them all from https://web.archive.org/cdx/search?url=goo.gl/* - they don't publish the whole dataset
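
A minimal sketch of querying the Wayback CDX index for that prefix (the parameters come from the public CDX server docs; a full goo.gl/* crawl would need pagination on top of this):

```python
import requests

# Ask the Wayback Machine's CDX index for captures under the goo.gl prefix.
resp = requests.get("https://web.archive.org/cdx/search/cdx", params={
    "url": "goo.gl/*",
    "output": "json",
    "limit": 20,  # keep the example tiny; the real prefix is enormous
    "fl": "original,timestamp,statuscode",
})
resp.raise_for_status()

rows = resp.json()
header, captures = rows[0], rows[1:]  # first row is the column header
for original, timestamp, status in captures:
    print(timestamp, status, original)
```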

do_not_redeem

Does "all" mean all the URLs publicly known, or did they exhaustively iterate the entire URL namespace?

jedberg

They iterated the entire URL namespace by having volunteers run a client so they didn't get IP banned.

Imustaskforhelp

Are we sure that the entire URL namespace has been mapped?

How would that even function, I mean, did they loop through every single permutation and see the result, or what exactly/ how would that work?

jedberg

> did they loop through every single permutation and see the result, or what exactly/ how would that work?

In short, yes. Since no one can make new links, it's a pre-defined space to search. They just requested every possible key, recorded the answer, and uploaded it to a shared database.
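
Conceptually it looks something like this sketch (illustrative only; the real project split the keyspace into work items handed out by a central tracker, and handled retries, rate limits, and WARC output):

```python
import itertools
import string
import requests

ALPHABET = string.ascii_letters + string.digits  # goo.gl keys are base-62

def resolve(key):
    """Request one short link without following it and capture the redirect."""
    resp = requests.get(f"https://goo.gl/{key}", allow_redirects=False, timeout=10)
    return resp.headers.get("Location") if resp.status_code in (301, 302) else None

# Walk every 6-character key in order (62**6 is roughly 5.7e10 keys, which is
# why the work was spread across many volunteer Warriors rather than one box).
for chars in itertools.product(ALPHABET, repeat=6):
    key = "".join(chars)
    target = resolve(key)
    if target:
        print(key, "->", target)
```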

toomuchtodo

The pipeline code is available if you follow the ArchiveTeam wiki links, so you can review the mechanics of the HTTP requests made.

barbazoo

Beautiful. I wish I had seen this and could have helped.

brokensegue

They are still archiving other URL shorteners (https://tracker.archiveteam.org:1338/); you can participate in that.

ccgreg

The goo.gl URLs that are publicly known are already in the Internet Archive and Common Crawl crawls.

null

[deleted]