
Perma.cc – Permanent Link Service

77 comments

·February 7, 2025

JackC

Hi! Perma is made by the Harvard Library Innovation Lab, which I direct, and I wrote a bunch of the early code for it back in 2015 or so.

For HN readers, I'd suggest checking out https://tools.perma.cc/, where we post a bunch of the open source work that backs this. Due to the shift from WARC to WACZ (a zipped web-archive format developed by Webrecorder), it's now possible to pass around fully interactive, high-fidelity web archives as simple files and host them with client-side JavaScript, which opens up a bunch of new possibilities for web archive designs. You can see some tech demos of that at our page https://warcembed-demo.lil.tools/ , where each page is just a static file on the server and some client-side JavaScript.
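To illustrate how little server machinery "just a static file plus client-side JavaScript" needs: browser-based replayers typically fetch byte ranges of the archive rather than the whole file, so any static host that honors HTTP Range requests will do. A minimal sketch in Python (this is my own toy handler, not Perma's code, and it does no path sanitization):

```python
import mimetypes
import os
import re
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class RangeHandler(BaseHTTPRequestHandler):
    """Static file handler with single-range `Range: bytes=a-b` support.

    A sketch only: no path sanitization, no multi-range support.
    """
    root = "."  # directory holding the .wacz files

    def do_GET(self):
        path = os.path.join(self.root, self.path.lstrip("/"))
        if not os.path.isfile(path):
            self.send_error(404)
            return
        size = os.path.getsize(path)
        start, end = 0, size - 1
        m = re.match(r"bytes=(\d+)-(\d*)$", self.headers.get("Range", ""))
        if m:
            start = int(m.group(1))
            if m.group(2):
                end = min(int(m.group(2)), size - 1)
            self.send_response(206)  # Partial Content
            self.send_header("Content-Range", f"bytes {start}-{end}/{size}")
        else:
            self.send_response(200)
        ctype = mimetypes.guess_type(path)[0] or "application/octet-stream"
        self.send_header("Content-Type", ctype)
        self.send_header("Accept-Ranges", "bytes")
        self.send_header("Content-Length", str(end - start + 1))
        self.end_headers()
        with open(path, "rb") as f:
            f.seek(start)
            self.wfile.write(f.read(end - start + 1))
```

Run it with `ThreadingHTTPServer(("", 8000), RangeHandler).serve_forever()`; in production you'd just use any static host or CDN that already supports range requests.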

It's best to think of Perma.cc itself, the service, as some UX and user support wrapping to help solve linkrot, primarily for law journals, courts, and journalists (for example, dashboards for a law journal to collaborate on the links they're archiving for their authors), and of our work on this as building from that use case to try to make it easier for everyone to build similar things.

I saw some mentions of the Internet Archive, which is great, and is also kind enough to keep a copy of our archives and expose them through the Wayback Machine. One thing I've been thinking about recently in archiving is that there's a risk to overstandardizing -- you don't want things too much captured with the same software platforms, funded through the same models, governed by the same people, exposed through the same interfaces, etc. There's supposed to be thousands of libraries, not one library. Unlike "don't roll your own crypto," I'd honestly love to see more people roll their own archives.

Happy to answer any questions!

jedberg

My first question was "If this is a free service, how do I know it will still be around in even a few years?". This was answered by your comment that it is (or at least appears to be?) funded by Harvard.

In which case, why isn't this prominently displayed on the main page? Or why not use a Harvard library URL, which would significantly boost the trust level? Especially vs. a ccTLD, which are known to be problematic?

JackC

It is on core Harvard funds, and we also have paid accounts used by law firms and journalists.

As an innovation lab we often minimize Harvard branding with project websites because it's more instructive to win or lose on our own merits than based on how people feel about Harvard, in either direction.

stanislavb

Yeah, but the very success of a service like perma.cc relies on trust. How does someone trust that you will be here in 10, 20, etc. years?

Harvard has been around for hundreds of years, Harvard has inbuilt trust, Harvard has funding. You should negotiate and arrange to operate behind its brand.

gabcoh

I guess it’s not sufficiently prominent (given that you didn’t see it) but this is discussed in detail in the FAQ section

A4ET8a8uTh0_v2

I think the main question is:

- Why is it better than internet archive?

I personally see the benefit as potentially having internet archive stopping being the only game in town, but even that comes with certain costs ( which may not be great to the community as a whole -- depending on who you ask ).

I would love to hear your perspective on where you stand as related to other providers of similar services.

JackC

I think the biggest distinction is between archiving platforms made primarily for authors and primarily for web crawlers.

If you're an author (say, of a court decision) and you archive example.com/foo, Perma makes a fresh copy of example.com/foo as its own wacz file, with a CPU-intensive headless browser, gives it a unique short URL, and puts it in a folder tree for you. So you get a higher quality capture than most crawls can afford, including a screenshot and pdf; you get a URL that's easy to cite in print; you can find your copy later; you get "temporal integrity" (it's not possible for replays to pull in assets from other crawls, which can result in frankenstein playbacks); and you can independently respond to things like DMCA takedowns. It's all tuned to offer a great experience for that author.

IA is primarily tuned for preserving everything regardless of whether the author cared to preserve it or not, through massive web crawls. Which is often the better strategy -- most authors don't care as much as judges about the long-term integrity of their citations.

This is what I'm getting at about the specific benefits of having multiple archives. It's not just redundancy, it's that you can do better for different users that way.

bArray

> - Why is it better than internet archive?

With the Internet Archive, the purpose seems to be public archiving. One could imagine a use case where you want non-public archives, and are therefore not subject to any take-down requests, especially if they are considered court evidence, for example.

By paying directly for your links to be archived, it directly helps fund the service and therefore keep it going. You would want to see some guarantees in the contract about pricing if you were to long-term rely on the service.

rakoo

Irrelevant. The point is that there shouldn't be a single archive for anything, because then it has the longevity of its operators. Who can say whether Harvard or the IA will close its service first? Why choose?

lrvick

Is there any concept of signing data at time of archive, and verification at time of access, to prove it is not later tampered with, say by a bribed sysadmin?

Similarly are there any general supply chain integrity measures in place, such as code review of dependencies, reproducible builds, or creating archives reproducibly in independently administrated enclaves?

You note archives could be used for instances like Supreme Court decisions, so anyone with the power to tamper with content would certainly be targeted.

JackC

We're coauthors on the wacz-auth spec, which is designed to solve this sort of thing by signing archives with the domain cert of the archive that created them. If you cross-sign with a private cert you can do pretty well with this approach against various threat models, though it has to be part of a whole PKI security design.

I think the best approach for high stakes archiving is to have a standard for "witness APIs" so that you could fetch archives from independent archiving institutions. That also solves for the web looking different from different places. That hasn't gelled yet, though.

makeworld

WACZ files created by WebRecorder software like archiveweb.page are signed (by you) and timestamped (by a third party using RFC 3161).
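Independent of the signature and timestamp layer, the container itself is checkable: a WACZ is a zip whose `datapackage.json` records a content hash for each stored resource, so anyone can re-hash the bytes and detect tampering with the payload (the signature, per my reading of the spec, lives separately in `datapackage-digest.json`). A sketch of the hash check only -- verify the exact field names against the WACZ spec before relying on this:

```python
import hashlib
import json
import zipfile

def verify_wacz_hashes(wacz_file):
    """Re-hash each resource listed in datapackage.json and compare it
    to the recorded digest. Returns {path: bool} per resource.

    Assumes the WACZ convention of "hash" values like "sha256:<hex>".
    """
    results = {}
    with zipfile.ZipFile(wacz_file) as z:
        pkg = json.loads(z.read("datapackage.json"))
        for res in pkg.get("resources", []):
            algo, _, want = res["hash"].partition(":")
            got = hashlib.new(algo, z.read(res["path"])).hexdigest()
            results[res["path"]] = (got == want)
    return results
```

This only proves the archive is internally consistent; proving *who* made it and *when* is what the domain-cert signature and the RFC 3161 timestamp add on top.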

pbhjpbhj

And put the signatures on a blockchain so that the perma.cc holders, or the US government, can't easily alter things either.

russellbeattie

Since you own the "perma.link" domain name (I just looked it up) why don't you use that instead of .cc which has issues?

husam212

It's really annoying that that domain is not the main one; it's so much better!

Onavo

What happens if you get a lawsuit or injunction demanding information removal or alteration? What if somebody archives a born secret or something sensitive?

lolinder

Heads up that the .cc TLD is frequently used for malicious purposes and will likely get blocked by a lot of networks.

When I've worked on spam prevention in the past, that TLD always comes up disproportionately often. I've never personally built a filter that blocks the entire TLD, but I'm sure from looking at the data that people with stricter compliance requirements have.

The Anti-Phishing Working Group ranked the TLD the second-worst in the ratio of phishing domains to total registrations, with the highest total volume of phishing (page 13):

https://docs.apwg.org//reports/APWG_Global_Phishing_Report_2...

basch

This isn't a new product. Harvard made perma.cc over a decade ago.

https://en.wikipedia.org/wiki/Perma.cc

It's unique in that, if you opt out of the paid account route, you need someone like a library to sponsor your access, and then when you archive something, it is akin to giving it to your library to store.

lolinder

Right, but new product or not if you use this as a solution for permalinks you are running the risk that in certain types of networks—especially those that the target audience for academic writing often operates in—people will not be able to access your links.

That might be worth the trade-off, and it might well be that the service is well-known enough that even networks that block the entire TLD make an exception for Perma.cc. But I wouldn't assume that to be the case without validating it first.

I also think it's worth just calling out bad TLDs when we see them so that people don't think it's okay to copy. Even if Perma.cc is well known enough to avoid the problem, your new app won't be.

jbullock35

Prospective users are understandably concerned that perma.cc will go out of business. No institution can guarantee that it will exist in perpetuity. But perma.cc has at least published a contingency plan: https://perma.cc/contingency-plan.

dylan604

"Please note that this is a statement of Perma.cc’s present intent in the event the project winds down. Perma.cc may revise or amend this page at any time. Nothing on this page is intended to, nor may it be read to, create a legal or contractual right for users or obligation for Perma.cc, under the Perma.cc Terms of Use or otherwise."

So, yeah, nothing is different from anyone else, other than that they have a "cunning plan" that can easily get shitcanned at anyone's whim.

true_religion

You can’t actually create a contractual right without consideration and they appear to be a free service.

They can only promise to do their best.

ZeWaka

They're not an entirely free service, no: https://perma.cc/sign-up

> New users are able to create ten free links on a trial basis. After using the trial, individuals must either be affiliated with a registrar or sign up for a paid subscription.

chaorace

Not what people are asking for. What you're ruling out is the equivalent of expecting cryostasis subscribers to sue if there's ever a service interruption.

Conventional business models as currently implemented are fundamentally misaligned to the timescales associated with this product category. Products like these need a level of stability that can only be accomplished at the charter level of the corporation -- it needs to be fundamentally incapable of reneging on promises made.

Without that kind of reassurance, why should anyone trust this service with their links? The exchange is incredibly unequal. They receive full, permanent control of the content, access, and monetization of all things which I cite. I receive... a promise that my links will do what they already do, but maybe last longer.

dylan604

Right, which makes this whole contingency plan worth less than the ink and paper it's written on. It's their weasel words for saying they know that their entire marketing plan of "permanent" anything is outlandish. However, this is the exact type of marketing that attracts VCs. Might as well add "making the world a better place" in there too.

pbhjpbhj

You can create an obligation for yourself or make a binding statement of intent.

Indeed memoranda in the UK, created when registering a company, require it. You state the intended services. Companies weasel around it by making broad milquetoast claims.

A statement binding the organisation to release their data and cede all copyright should the site be terminated, for example, would demonstrate good faith and go a long way to reassuring people that it wasn't wasted effort.

LorenDB

Permanent, until they go out of business. We should just standardize on archive.org and figure out a way to distribute redundant copies of its data around in such a way that it can survive even if the original Internet Archive goes down.

I hate to push blockchain stuff, but something like IPFS might actually be a good idea here.

didgeoridoo

It’s run by the Harvard Law Library (i.e. backed by a multibillion-dollar university that is substantially older than the country it’s located in) and operated as a decentralized network across multiple public and private library systems.

Like any service, it might shut down due to lack of interest, but I doubt Harvard Law is at risk of “going out of business”.

Andrew6rant

There honestly might be a greater chance of something happening to the .cc domain (like what's happening now with .io)

Harvard has a much larger population than the Cocos Islands; I don't know why this project decided to rely on a country-code TLD.

basch

Then the DNS providers of the world step in and override it and redirect. Hopefully Cloudflare can save us.

aaroninsf

Any discussion in this domain should include an overview of what happened to PURL and purl.org

https://en.wikipedia.org/wiki/Persistent_uniform_resource_lo...

Regardless of institutional gravitas, projects without wide uptake are mostly doomed on a 20-year horizon.

weinzierl

I am glad I am not the only one who remembers that. Most of the time, people mean Package URL when they say "purl".

palata

> I doubt Harvard Law is at risk of “going out of business”.

Wait until they publicly criticise Musk.

pogue

Archive.org has had its fair share of problems recently as well. I'm still mad Google dropped their own cache and just expected the IA to pick up the slack.

Having said that, there are a variety of different archives that Wikipedia uses for backups - Perma.cc is included on that list. https://en.m.wikipedia.org/wiki/Wikipedia:List_of_web_archiv...

There are also projects like ArchiveBox that allow for self-hosted backups of websites: https://archivebox.io/

immibis

IPFS is basically content-addressed HTTP, and it's really slow, and there's no way to discover all the stuff that needs to be redundantly archived (which makes sense because anyone can host anything).

emddudley

PURL (https://purl.archive.org/) is a similar permanent URL service but you choose the URL.

It used to be hosted at purl.org and run by the OCLC but in 2016 it was transferred to the Internet Archive.

https://web.archive.org/web/20161002094639/https://www.oclc....

smarx007

PURL is in the same space as w3id.org, not perma.cc. PURL and w3id work by creating stable URLs that can redirect to a (potentially changing) origin; perma.cc/archive.org/ArchiveBox create WARC archives of the content at a given instant.

DanAtC

Using a country code TLD is a bold choice https://www.theregister.com/2024/10/10/io_domain_uk_mauritiu...

itscrush

Is the best counter here to acquire a brand tld to operate themselves (setting aside all the linkrot it generates)? They've certainly got the resources when you compare against other brand tlds that this could have been an option.

jklinger410

It's a stupid choice

NewJazz

My first thought too. Permanent... As long as checks notes the territory of Cocos Islands doesn't change governance.

hombre_fatal

The diagram of link rot is depressing.

I used the same unique online alias from age ten to eighteen.

I used to be able to google it and see hundreds of results. Dozens of forums I posted on. Dozens of games I played. In my twenties, I'd do this for the nostalgia of reading posts I'd written in my preteen era.

Now, there are just seven results.

zoezoezoezoe

If the world has taught me anything, it's that nothing is permanent, and nothing is perfect. Forums from days of yore are littered with Photobucket 404 pictures, and anonymous Imgur images are gone. We like to imagine that the internet will stay the way it is forever, but I don't believe it. Free internet services like email, file uploads, etc. won't last forever. The idea is amazing, and exactly what we need for a constantly changing internet, but the only thing that is forever is nothingness.

jmuguy

Why use the cc tld? I'm guessing that choice was made a while back? Unfortunately due to its association with phishing etc, that immediately gave me the impression that the service isn't legit.

Pikamander2

Does this solve any of the problems that other link shorteners have, like eventually breaking when the site goes bankrupt or getting blocked by sites like Reddit due to their ability to conceal spam?

NohatCoder

It is not a link shortener; it is an archive tool. I thought the same thing when I first saw the headline. Their description is really bad and confusing.

Pikamander2

Oh wow, you're right. I guess the example link on their home page makes that more clear, but at a glance I thought it was supposed to be an attempt at creating a googl/bitly/tinyurl clone.

https://perma.cc/63AP-6EHJ

portaouflop

No it doesn’t, it will also go down eventually, most likely due to irrelevance or whenever someone at Harvard grows tired of maintaining this pet project or there are problems with the .cc domain.

I would be surprised if this will survive past 2035.

bArray

Personally, I started building tooling around data URIs to provide permanent archives of external resources. You can only really store text or single images this way, and you have to get the size below 16 KB [1] (64 KB at a push) to be reliable. You can do some form of compression, but it would be nicer to have support for longer data URIs.

I've literally embedded these into PDFs and all sorts, all of which still open today. It doesn't really increase the document size notably, but does make them more robust.

[1] https://stackoverflow.com/a/695167
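The encoding itself is a one-liner; the part worth getting right is the size guard, since base64 inflates the payload by about 4/3 before you compare against any length limit. A sketch (the helper names and the 16 KB default are illustrative, not from any particular tool):

```python
import base64
import mimetypes

def to_data_uri(data: bytes, mime: str = "application/octet-stream") -> str:
    """Encode raw bytes as a data: URI (base64 grows the size by ~4/3)."""
    return f"data:{mime};base64," + base64.b64encode(data).decode("ascii")

def fits_inline(data: bytes, limit: int = 16 * 1024) -> bool:
    """Check the length of the *encoded* URI, not the raw bytes,
    against a conservative compatibility limit."""
    return len(to_data_uri(data)) <= limit

def file_to_data_uri(path: str) -> str:
    """Convenience wrapper: guess the MIME type from the filename."""
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        return to_data_uri(f.read(), mime)
```

So a ~12 KB image is already borderline after encoding, which is why compressing first (or picking smaller assets) matters for this approach.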

nikisweeting

This is how SingleFile and some other archiving tools work; they just embed all remote assets as data URLs within the page. It works really well. The only inconvenience is that if you need to parse the HTML later on, the massive attribute lengths can crash some HTML parsers like jsdom or cheerio.
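The inlining step can be sketched in a few lines. This toy version only rewrites local `<img>` tags with a regex, whereas real tools like SingleFile fetch remote assets over HTTP and also handle CSS, fonts, and srcset:

```python
import base64
import mimetypes
import pathlib
import re

def inline_local_images(html: str, base_dir: str) -> str:
    """Rewrite local <img src="..."> references into data: URIs.

    A toy sketch of the asset-inlining idea; remote (http/https) and
    already-inlined (data:) sources are left untouched.
    """
    def repl(m: re.Match) -> str:
        src = m.group(2)
        path = pathlib.Path(base_dir, src)
        if src.startswith(("http:", "https:", "data:")) or not path.is_file():
            return m.group(0)  # leave remote, inlined, or missing assets alone
        mime = mimetypes.guess_type(src)[0] or "application/octet-stream"
        b64 = base64.b64encode(path.read_bytes()).decode("ascii")
        return f"{m.group(1)}data:{mime};base64,{b64}{m.group(3)}"

    return re.sub(r'(<img\b[^>]*\bsrc=")([^"]+)(")', repl, html)
```

This also makes the parser complaint above concrete: after inlining, a single `src` attribute can carry megabytes of base64, which is exactly what trips up DOM libraries with per-attribute limits.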

pinoy420

So who maintains the database? Or is it hashed? Or what?