Google's shortened goo.gl links will stop working next month
195 comments
July 25, 2025 · edent
toomuchtodo
https://wiki.archiveteam.org/index.php/Goo.gl
https://tracker.archiveteam.org/goo-gl/ (1.66B work items remaining as of this comment)
How to run an ArchiveTeam warrior: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior
(edit: I see jaydenmilne commented about this further down the thread, mea culpa)
progbits
They appear to be doing ~37k items per minute; with 1.6B remaining, that is roughly 30 days left, so it's just barely enough to finish in time.
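A back-of-the-envelope check of that estimate, using the figures quoted above:

    # Rough time-to-completion from the quoted figures.
    items_remaining = 1.6e9      # work items left in the tracker
    rate_per_minute = 37_000     # approximate processing rate

    minutes = items_remaining / rate_per_minute
    days = minutes / (60 * 24)
    print(f"~{days:.0f} days at the current rate")   # ~30 days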
Going to run the warrior over the weekend to help out a bit.
pentagrama
Thank you for that information!
I wanted to help and did that using VMware.
For curious people, here is what the UI looks like: you get a list of projects to choose from (I chose the goo.gl project) and a "Current project" tab that shows the project activity.
Project list: https://imgur.com/a/peTVzyw
Current project: https://imgur.com/a/QVuWWIj
jlarocco
IMO it's less Google's fault and more a crappy tech education problem.
It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.
And really it's not much different than anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages any more?
And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.
justin66
> It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.
It's a great idea, and today in 2025, papers are pretty much the only place where using these shortened URLs makes a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.
Their specific choice of shortened URL provider was obviously unfortunate. The real failure is that of DOI to provide an alternative to goo.gl or tinyurl or whatever that is easy to reach for. It's a big failure, since preserving references to things like academic papers is part of their stated purpose.
dingnuts
Even normal HTTP URLs aren't great. If there was ever a case for content-addressable networks like IPFS it's this. Universities should be able to host this data in a decentralized way.
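For readers unfamiliar with the term, content addressing means the identifier is derived from the bytes themselves rather than from a server name, so any mirror holding the same bytes can serve the citation. A minimal sketch of the concept, using a plain SHA-256 digest rather than IPFS's actual CID/multihash format:

    import hashlib

    def content_address(data: bytes) -> str:
        # The "address" is a digest of the content itself, so it stays valid
        # no matter which server (or university mirror) stores the bytes.
        return hashlib.sha256(data).hexdigest()

    page = b"<html>...the cited document...</html>"
    print(content_address(page))
    # Any identical copy yields the identical address; any tampering changes it.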
gmerc
Ahh classic free market cop out.
FallCheeta7373
If the smartest among us, publishing for academia, cannot figure this out, then who will?
kazinator
Nope! There have in fact been education campaigns about the evils of URL shorteners for years: how they pose security risks (used for shortening malicious URLs), and how they stop working when their domain is temporarily or permanently down.
The authors just had their heads too far up their academic asses to have heard of this.
epolanski
Jm2c, but if your reference is a link to an online resource, that's borderline already (at any point the content can be changed or disappear).
Even worse if your reference is a shortened link from some other service: you've just added yet another layer of unreliable indirection.
zffr
For people wanting to include URL references in things like books, what’s the right approach to take today?
I'm genuinely asking. It seems like it's hard to trust that any service will remain running for decades.
toomuchtodo
Perma.cc (https://perma.cc/) is built for the task, and assuming the worst-case scenario of a sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of the page).
(https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for ad hoc DOI registration, although I have not had time to research further)
ruined
perma.cc is an interesting project, thanks for sharing.
Other readers may be specifically interested in their contingency plan.
Hyperlisk
perma.cc is great. Also check out their tools if you want to get your hands dirty with your own archival process: https://tools.perma.cc/
whoahwio
While Perma is a solution specifically for this problem, and a good one at that, citing the might of the backing company is a bit ironic here.
edent
The full URL to the original page.
You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.
A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com?), look at the path to hazard a guess at the metadata and contents, and, finally, look it up in an archive.
firefax
> The full URL to the original page.
I thought that was the standard in academia? I've had reviewers chastise me when I did not use wayback machine to archive a citation and link to that since listing a "date retrieved" doesn't do jack if there's no IA copy.
Short links were usually in addition to full URLS, and more in conference presentations than the papers themselves.
grapesodaaaaa
I think this is the only real answer. Shorteners might work for things like old Twitter where characters were a premium, but I would rather see the whole URL.
We’ve learned over the years that they can be unreliable, security risks, etc.
I just don’t see a major use-case for them anymore.
danelski
Real URL and save the website in the Internet Archive as it was on the date of access?
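For the second half of that, the Wayback Machine exposes a Save Page Now endpoint; a rough sketch of triggering a capture at access time (the exact /save/ URL form and redirect behaviour here are assumptions, check the Internet Archive's documentation before relying on them):

    import requests

    def archive_now(url: str) -> str:
        # Ask the Wayback Machine to capture the page; the final URL after
        # redirects should be the snapshot to cite alongside the original link.
        resp = requests.get("https://web.archive.org/save/" + url, timeout=120)
        resp.raise_for_status()
        return resp.url

    print(archive_now("https://example.com/some/cited/page"))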
eviks
> And for what? The cost of keeping a few TB online and a little bit of CPU power?
For the immeasurable benefits of educating the public.
kazinator
The act of vandalism occurs when someone creates a shortened URL, not when they stop working.
djfivyvusn
The vandalism was relying on Google.
toomuchtodo
You'd think people would learn. Ah, well. Hopefully we can do better from lessons learned.
api
The web is a crap architecture for permanent references anyway. A link points to a server, not e.g. a content hash.
The simplicity of the web is one of its virtues but also leaves a lot on the table.
QuantumGood
When they began offering this, their rep for ending services was already so bad that I refused to consider goo.gl. It's amazing how many years now they have been introducing and then ending services with large user bases. Gmail being in "beta" for five years was, weirdly, to me, a sign they might stick with it.
mrcslws
From the blog post: "more than 99% of them had no activity in the last month" https://developers.googleblog.com/en/google-url-shortener-li...
This is a classic product data decision-making fallacy. The right question is "how much total value do all of the links provide", not "what percent are used".
bayindirh
> The right question is "how much total value do all of the links provide", not "what percent are used".
Yes, but it doesn't bring the sweet promotion home, unfortunately. Ironically, if 99% of them don't see any traffic, you can scale back the infra, run it on 2 VMs, and make sure a single person can keep it up as a side quest, just for fun (but, of course, pay them for their work).
This beancounting really makes me sad.
quesera
Configuring a static set of redirects would take a couple of hours, and literally zero maintenance forever.
Amazon should volunteer a free-tier EC2 instance to help Google in their time of economic struggles.
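As a rough illustration of how small that job is, here is a minimal sketch of a static redirector (standard library only; the mapping-file name and format are invented for illustration):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical dump of short code -> destination URL.
    with open("googl_map.json") as f:
        REDIRECTS = json.load(f)   # e.g. {"fbsS": "https://example.com/long/page"}

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            target = REDIRECTS.get(self.path.lstrip("/"))
            if target:
                self.send_response(301)
                self.send_header("Location", target)
            else:
                self.send_response(404)
            self.end_headers()

    HTTPServer(("", 8080), Redirector).serve_forever()

The mapping is frozen, so once it's behind a CDN there is essentially nothing left to maintain.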
bayindirh
This is what I mean, actually.
If they're so inclined, Oracle has an always-free tier with ample resources. They could use that one, too.
socalgal2
If they wanted the sweet promotion they could add an interstitial. Yes, people would complain, but at least the old links would not stop working.
ahstilde
> just for fun (but, of course, pay them for their work).
Doing things for fun isn't in Google's remit
kevindamm
Alas, it was, once upon a time.
morkalork
Then they shouldn't have offered it as a free service in the first place. It's like that discussion about how Google, in all its 2-ton ADHD gorilla glory, will enter an industry, offer a (near) free service or product, decimate all competition, then decide it's not worth it and shut down, leaving behind a desolate crater of ruined businesses and angry, abandoned users.
ceejayoz
It used to be. AdSense came from 20% time!
HPsquared
Indeed. I've probably looked at less than 1% of my family photos this month but I still want to keep them.
sltkr
I bet 99% of URLs that exist on the public web had no activity last month. Might as well delete the entire WWW because it's obviously worthless.
firefax
> "more than 99% of them had no activity in the last month"
Better to have a short URL and not need it, than need a short URL and not have it IMO.
fizx
Don't be confused! That's not how they made the decision; it's how they're selling it.
SoftTalker
From Google's perspective, the question is "How many ads are we selling on these links" and if it's near zero, that's the value to them.
esafak
What fraction of indexed Google sites, Youtube videos, or Google Photos were retrieved in the last month? Think of the cost savings!
nomel
YouTube already does this, to some extent, by slowly reducing the quality of your videos if they're not accessed frequently enough.
Many videos I uploaded in 4k are now only available in 480p, after about a decade.
handsclean
I don’t think they’re actually that dumb. I think the dirty secret behind “data driven decision making” is managers don’t want data to tell them what to do, they want “data” to make even the idea of disagreeing with them look objectively wrong and stupid.
HPsquared
It's a bit like the difference between "rule of law" and "rule by law" (aka legalism).
It's less "data-driven decisions", more "how to lie with statistics".
JimDabell
Cloudflare offered to keep it running and were turned away:
https://x.com/elithrar/status/1948451254780526609
Remember this next time you are thinking of depending upon a Google service. They could have kept this going easily but are intentionally breaking it.
fourseventy
Google killing their domains service was the last straw for me. I started moving all of my stuff off of Google since then.
nomel
I'm still shocked that my Google Voice number still functions after all these years. It makes me assume its main purpose is actually to be a honeypot of some sort, maybe for spam call detection.
joshstrange
Because IIRC it's essentially completely run by another company (I want to say Bandwidth?) and, again, my memory might be fuzzy, originally came from an acquisition of a company called Grand Central.
My guess is it just keeps chugging along with little maintenance needed by Google itself. The UI hasn’t changed in a while from what I’ve seen.
hnfong
Another shocking story to share.
I have a tiny service built on top of Google App Engine that (only) I use personally. I made it 15+ years ago, and the last time I deployed changes was 10+ years ago.
It's still running. I have no idea why.
throwyawayyyy
Pretty sure you can thank the FCC for that :)
mrj
Shhh don't remind them
kevin_thibedeau
Mass surveillance pipeline to the successor of room 641A.
thebruce87m
> Remember this next time you are thinking of depending upon a Google service.
Next time? I guess there's a wave of new people that haven't learned that lesson yet.
jaydenmilne
ArchiveTeam is trying to brute force the entire URL space before it's too late. You can run a VirtualBox VM/Docker image (ArchiveTeam Warrior) to help (unique IPs are needed). I've been running it for a couple of months and found a million.
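"Brute forcing the URL space" here just means enumerating candidate short codes and checking where each one redirects. A toy illustration of the enumeration side only, not ArchiveTeam's actual pipeline (which hands out ranges of codes as work items via their tracker):

    from itertools import product
    from string import ascii_letters, digits

    ALPHABET = ascii_letters + digits   # 62 characters, as used in goo.gl codes

    def candidate_codes(length: int):
        # Yield every possible short code of the given length.
        for combo in product(ALPHABET, repeat=length):
            yield "".join(combo)

    # 62**6 is already ~5.7e10 codes, which is why the work is split
    # across many volunteers with distinct IPs.
    print(sum(1 for _ in candidate_codes(2)))   # 3,844 two-character codes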
pimlottc
Looks like they have saved 8000+ volumes of data to the Internet Archive so far [0]. The project page for this effort is here [1].
localtoast
Docker container FTW. Thanks for the heads-up - this is a project I will happily throw a Hetzner server at.
wobfan
Same here. I am genuinely asking myself what for, though. I mean, they'll receive a list of the linked domains, but what will they do with that?
localtoast
It's not only goo.gl links they are actively archiving. Take a look at their current tasks.
fragmede
save it, forever*.
* as long as humanly possible, as is archive.org's mission.
hadrien01
After a while I started to get "Google asks for a login" errors. Should I just keep going? There's no indication on the ArchiveTeam wiki of what I should do.
ojo-rojo
Thanks for sharing this. I've often felt that the ease with which we can erase digital content makes our time period look like a digital dark age to archaeologists studying history a few thousand years from now.
Preserving digital archives is a good step. I guess making hard copies would be the next step.
AstroBen
Just started, super easy to set up
cpeterso
Google's own services generate goo.gl short URLs (Google Maps generates https://maps.app.goo.gl/ URLs for sharing links to map locations), so I assume this shutdown only affects user-generated short URLs. Google's original announcement doesn't say so explicitly, but it is carefully worded to specify that short URLs of the "https://goo.gl/* format" will be shut down.
Google’s probably trying to stop goo.gl URLs from being used for phishing, but doesn’t want to admit that publicly.
growthwtf
This actually makes the most logical sense to me, thank you for the idea. I don't agree with the way they're doing it of course but this probably is risk mitigation for them.
jedberg
I have only given this a moment's thought, but why not just publish the URL map as a text file or SQLite DB? So at least we know where they went? I don't think it would be a privacy issue since the links are all public?
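The published artifact could be as small as a two-column table; a sketch of what that might look like (schema and file name are invented for illustration):

    import sqlite3

    con = sqlite3.connect("googl_map.db")
    con.execute("CREATE TABLE IF NOT EXISTS links (code TEXT PRIMARY KEY, target TEXT)")
    con.execute("INSERT OR REPLACE INTO links VALUES (?, ?)",
                ("fbsS", "https://example.com/some/long/url"))
    con.commit()

    # Anyone holding the file can resolve a dead short link offline:
    row = con.execute("SELECT target FROM links WHERE code = ?", ("fbsS",)).fetchone()
    print(row[0])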
DominikPeters
It will include many URLs that are semi-private, like Google Docs that are shared via link.
ryandrake
If some URL is accessible via the open web, without authentication, then it is not really private.
bo1024
What do you mean by accessible without authentication? My server will serve example.com/64-byte-random-code if you request it, but if you don’t know the code, I won’t serve it.
charcircuit
Then use something like argon2 on the keys, so brute-forcing them all takes a long time, similar to how it is today.
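In other words, publish hash(code) -> URL instead of code -> URL, using a deliberately slow hash: anyone who already knows a short code can still resolve it, but enumerating the whole keyspace costs roughly what crawling goo.gl costs today. A minimal sketch using hashlib.scrypt from the standard library as a stand-in for argon2 (parameters and salt are illustrative only):

    import hashlib

    # A fixed, published salt: the goal is slowness, not per-entry salting,
    # because lookups must be reproducible by anyone who knows a code.
    SALT = b"googl-dump-2025"

    def key_for(code: str) -> str:
        # A memory-hard KDF makes each guess expensive, so brute-forcing the
        # keyspace is costly while a single known-code lookup stays cheap.
        digest = hashlib.scrypt(code.encode(), salt=SALT, n=2**14, r=8, p=1, dklen=32)
        return digest.hex()

    published = {key_for("fbsS"): "https://example.com/some/long/url"}
    print(published[key_for("fbsS")])   # resolves only if you already know the code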
high_na_euv
So exclude them
ceejayoz
How?
How will they know a short link to a random PDF on S3 is potentially sensitive info?
Nifty3929
I'd rather see it as a searchable database, which I would think is super cheap and zero maintenance for Google, and avoids these privacy issues. You can input a known goo.gl link and get its real URL, but you can't just list everything out.
growt
And then output the search result as a 302 redirect, and it would just be continuing the service.
devrandoom
Are they all public? Where can I see them?
jedberg
You can brute force them. They don't have passwords. The point is the only "security" is knowing the short URL.
Alifatisk
I don't think so, but you can find the indexed URLs here: https://www.google.com/search?q=site%3A"goo.gl" It's about 9.6 million links. And those are just what got indexed; there should be way more out there.
sltkr
I'm surprised Google indexes these short links. I expected them to resolve them to their canonical URL and index that instead, which is what they usually do when multiple URLs point to the same resource.
ElijahLynn
OMFG - Google should keep these up forever. What a hit to trust. Trust in Google was already bad because of everything they've killed; this is another dagger.
phyzix5761
People still trust Google?
spankalee
As an ex-Googler, the problem here is clear and common, and it's not the infrastructure cost: it's ownership.
No one wants to own this product.
- The code could be partially frozen, but large scale changes are constantly being made throughout the google3 codebase, and someone needs to be on the hook for approving certain changes or helping core teams when something goes wrong. If a service it uses is deprecated, then lots of work might need to be done.
- Every production service needs someone responsible for keeping it running. Maybe an SRE, though many smaller teams don't have their own SREs, so they manage the service themselves.
So you'd need some team, some full reporting chain all the way up, to take responsibility for this. No SWE is going to want to work on a dead product where no changes are happening, no manager is going to care about it. No director is going to want to put staff there rather than a project that's alive. No VP sees any benefit here - there's only costs and risks.
This is kind of the Reader situation all over again (except for the fact that a PM with decent vision could have drastically improved and grown Reader, IMO).
This is obviously bad for the internet as a whole, and I personally think that Google has a moral obligation to not rug pull infrastructure like this. Someone there knows that critical links will be broken, but it's in no one's advantage to stop that from happening.
I think Google needs some kind of "attic" or archive team that can take on projects like this and make them as efficiently maintainable in read-only mode as possible. Count it as good-will marketing, or spin it off to google.org and claim it's a non-profit and write it off.
Side note: a similar, but even worse situation for the company is the Google Domains situation. Apparently what happened was that a new VP came into the org that owned it and just didn't understand the product. There wasn't enough direct revenue for them, even though the imputed revenue to Workspace and Cloud was significant. They proposed selling it off and no other VPs showed up to the meeting about it with Sundar so this VP got to make their case to Sundar unchallenged. The contract to sell to Squarespace was signed before other VPs who might have objected realized what happened, and Google had to buy back parts of it for Cloud.
gsnedders
To some extent, it's cases like this which show the real fragility of everything existing as a unified whole in google3.
While clearly maintenance and ownership are still a major problem, one could easily imagine that deploying something similar, especially read-only, using GCP's Cloud Run and Bigtable products would be less work to maintain, as you're not chasing anywhere near such a moving target.
rs186
Many good points, but if you don't mind me asking: if you were at Google, would you be willing to be the lead of that archive team, knowing that you'd be stuck in this position for the next 10 years, with the possibility of your team being downsized/eliminated when the wind blows slightly in the other direction?
spankalee
Definitely a valid question!
Myself, no, for a few reasons: I mainly work on developer tools, I'm too senior for that, and I'm not that interested.
But some people are motivated to work on internet infrastructure and would be interested. First, you wouldn't be stuck for 10 years. That's not how Google works (and you could, of course, quit): you're supposed to stay with a team for a minimum of 18 months, and after that you can transfer away. A lot of junior devs don't care that much where they land, and the archive team would have to be responsible for more than just the link shortener, so it might be interesting to care for several services from top to bottom. SWEs could be compensated for rotating onto the archive team, and/or it could be part-time.
I think the harder thing is getting management buy-in, even from the front-line managers.
romaniv
URL shorteners were always a bad idea. At the rate things are going, I'm not sure people in a decade or two won't say the same thing about URLs and the Web as a whole. The fact that there is no protocol-level support for archiving, versioning, or even client-side replication means that everything you see on the Web right now has an overwhelming probability of permanently disappearing in the near future. This is an astounding engineering oversight for something that's basically the most popular communication system and medium in the world and in history.
Also, it's quite conspicuous that 30+ years into this thing browsers still have no built-in capacity to store pages locally in a reasonable manner. We still rely on "bookmarks".
davidczech
I don't really get it, it must cost peanuts to leave a static map like this up for the rest of Google's existence as a company.
nikanj
There are two things that are real torture to Google dev teams: 1) being told a product is complete and needs no new features or changes, and 2) being made to work on legacy code.
hinkley
What’s their body count now? Seems like they’ve slowed down the killing spree, but maybe it’s just that we got tired of talking about them.
cyp0633
The runner of Compiler Explorer tried to collect the public shortlinks and do the redirection themselves:
Compiler Explorer and the Promise of URLs That Last Forever (May 2025, 357 points, 189 comments)
About 60k academic citations about to die - https://scholar.google.com/scholar?start=90&q=%22https://goo...
Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...
And for what? The cost of keeping a few TB online and a little bit of CPU power?
An absolute act of cultural vandalism.