
Are hard drives getting better?

36 comments · October 15, 2025

eqvinox

I'm curious what this data would look like collated by drive birth date rather than (or, in 3D, in addition to) age. I wouldn't use that as the "primary" way to look at things, but it could pop some interesting bits. Maybe one of the manufacturers had a shipload of subpar grease? Slightly shittier magnets? Poor quality silicon? There's all kinds of things that could cause a few months of hard drive manufacture to be slightly less reliable…

(Also: "Accumulated power on time, hours:minutes 37451*:12, Manufactured in week 27 of year 2014" — I might want to replace these :D — * pretty sure that overflowed at 16 bit, they were powered on almost continuously & adding 65536 makes it 11.7 years.)

dylan604

Over the past couple of years, I've been side hustling a project that requires buying ingredients from multiple vendors. The quantities never work out 1:1, so some ingredients from the first order get used with some from a new order from a different vendor. Each item has its own batch number, which, when used together for the final product, yields a batch number on my end. I log my batch number alongside the batch number of each ingredient in my product. As a solo person, it is a mountain of work, but nerdy me goes to that effort.

I'd assume that a drive manufacturer does something similar, knowing which batch from which vendor the magnets, grease, or silicon come from. You hope you never need to use these records for any kind of forensic research, but the one time you do need them, it makes a huge difference. So many people making products similar to mine look at me with a tilted head, eyes going wide and glazing over, as if I'm speaking an alien language when I discuss lineage tracking.

sdenton4

I think it's helpful to put on our statistics hats when looking at data like this... We have some observed values and a number of available covariates, which, perhaps, help explain the observed variability. Some legitimate sources of variation (e.g., proximity to cooling in the NFS box, whether the hard drive was dropped as a child, stray cosmic rays) will remain obscured to us - we cannot fully explain all the variation. But when we average over more instances, those unexplainable sources of variation are captured as a residual to the explanations we can make, given the available covariates. The averaging acts as a kind of low-pass filter over the data, which helps reveal meaningful trends.

Meanwhile, if we slice the data up three ways to hell and back, /all/ we see is unexplainable variation - every point is unique.

This is where PCA is helpful - given our set of covariates, which combination of variables best explains the variation, and how much of the residual remains? If there's a lot of residual, we should look for other covariates. If it's a tiny residual, we don't care, and can work on optimizing the known major axes.
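A minimal sketch of that idea with scikit-learn - the covariates here are synthetic stand-ins, not Backblaze's actual columns:

  import numpy as np
  from sklearn.decomposition import PCA
  from sklearn.preprocessing import StandardScaler

  # Hypothetical per-drive covariates (e.g. age, avg temp, workload, vibration).
  rng = np.random.default_rng(0)
  X = rng.normal(size=(1000, 4))

  # Standardize, then ask how much variation each principal axis explains.
  pca = PCA().fit(StandardScaler().fit_transform(X))
  print(pca.explained_variance_ratio_)

  # Whatever the leading components fail to explain is the residual worth
  # hunting for additional covariates to cover.
  print("residual after 2 components:", 1 - pca.explained_variance_ratio_[:2].sum())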

IgorPartola

Exactly. I used to pore over the Backblaze data but so much of it is in the form of “we got 1,200 drives four months ago and so far none have failed”. That is a relatively small number over a small amount of time.

On top of that it seems like by the time there is a clear winner for reliability, the manufacturer no longer makes that particular model and the newer models are just not a part of the dataset yet. Basically, you can’t just go “Hitachi good, Seagate bad”. You have to look at specific models and there are what? Hundreds? Thousands?

tanvach

I find it more straightforward to just model the failure rate with the variables directly, and look at metrics like AUC on out-of-sample data.
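Something like this, sketched with scikit-learn on synthetic data (real inputs would be the per-drive covariates plus a failed/survived label):

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.metrics import roc_auc_score
  from sklearn.model_selection import train_test_split

  # Synthetic stand-in: rows = drives, columns = covariates (age, model dummies, ...).
  rng = np.random.default_rng(1)
  X = rng.normal(size=(5000, 6))
  y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 2))))   # fake failure labels

  X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
  model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

  # Out-of-sample discrimination of the failure model.
  print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))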

mnw21cam

I personally am looking forward to BackBlaze inventing error bars and statistical tests.

burnished

Well said, and made me want to go review my stats text.

tanvach

Agreed, these types of analyses benefit from grouping by cohort years. Standard practice in analytics.

Animats

Right. Does the trouble at year 8 reflect bad manufacturing 8 years ago?

dylan604

Honestly, at 8 years, I'd be leaning towards dirty power on the user's end. For a company like BackBlaze, I'd assume a data center would have conditioned power. Someone at home running a NAS with drives connected straight to mains may not get the same lifespan out of a drive from the same batch. Undervolting when the power dips is gnarly on equipment. It's amazing to me that UPS use isn't more ubiquitous at home.

thisislife2

Personal anecdote - I would say (a cautious) yes. I bought 3 WD hard drives (1 external, 2 internal, at different times over the last 10+ years) for personal use, and 2 failed almost exactly when the 5-year warranty period ended (within a month or so). One failed just a few weeks before the warranty period ended, so WD had to replace it (and I got a replacement HDD that I could use for another 5 years). That's good engineering! (I also have an old 500GB external Seagate drive that has now lasted 10+ years and still works perfectly - probably an outlier.)

That said, one thing that I do find very attractive in Seagate HDDs now is that they are also offering free data recovery within the warranty period, with some models. Anybody who has lost data (i.e. idiots like me who didn't care about backups) and had to use such services knows how expensive they can be.

kvemkon

> replacement HDD that I could use for another 5 years

But the warranty lasts only 5 years since the purchase of the drive, doesn't it?

thisislife2

Yes, but the warranty is "irrelevant" when the drive actually lasts the whole 5 years (in other words, I am hoping the replacement drive is as well-engineered as its predecessor and lasts the whole 5 years - and it has so far, 3+ years in).

realityfactchex

Per charts in TFA, it looks like some disks are failing less overall, and failing after a longer period of time.

I'm still not sure how to confidently store decent amounts of (personal) data for over 5 years without

  1- giving to cloud,
  2- burning to M-disk, or
  3- replacing multiple HDD every 5 years on average
All whilst regularly checking for bitrot and not overwriting good files with bad corrupted files.

Who has the easy, self-service, cost-effective solution for basic, durable file storage? Synology? TrueNAS? Debian? UGreen?

(1) and (2) both have their annoyances, so (3) seems "best" still, but seems "too complex" for most? I'd consider myself pretty technical, and I'd say (3) presents real challenges if I don't want it to become a somewhat significant hobby.
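(For what it's worth, the "regularly checking for bitrot" part doesn't need special hardware - a checksum manifest that gets re-verified on a schedule covers a lot of it. A rough sketch, with made-up paths:)

  import hashlib, json, pathlib

  MANIFEST = pathlib.Path("checksums.json")   # hypothetical manifest location

  def sha256(path):
      h = hashlib.sha256()
      with path.open("rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      return h.hexdigest()

  def build_manifest(root):
      files = {str(p): sha256(p) for p in pathlib.Path(root).rglob("*") if p.is_file()}
      MANIFEST.write_text(json.dumps(files, indent=2))

  def verify():
      for name, digest in json.loads(MANIFEST.read_text()).items():
          p = pathlib.Path(name)
          if not p.exists():
              print("missing:", name)
          elif sha256(p) != digest:
              print("changed or rotted:", name)   # investigate before overwriting backups

  # build_manifest("/mnt/archive")   # first run
  # verify()                         # periodic re-check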

IgorPartola

Get yourself a Xeon-powered workstation that supports at least 4 drives. One will be your boot/system drive and three or more will be a ZFS mirror. You will use ECC RAM (hence Xeon). I bought a Lenovo workstation like this for $35 on eBay.

ZFS with a three-way mirror will be incredibly unlikely to fail. You only need one drive for your data to survive.

Then get a second setup exactly like this for your backup server. I use rsnapshot for that.

For your third copy you can use S3 like a block device, which means you can use an encrypted file system. Use FreeBSD for your base OS.
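(A rough sketch of what the mirror setup boils down to - the pool name and device paths are made up, it needs root, and zpool create is destructive, so adapt rather than run blindly:)

  import subprocess

  POOL = "tank"                                    # hypothetical pool name
  DISKS = ["/dev/ada1", "/dev/ada2", "/dev/ada3"]  # hypothetical device paths

  def run(cmd):
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  # Three-way mirror: data survives as long as any one of the three drives does.
  run(["zpool", "create", POOL, "mirror", *DISKS])

  # Periodic integrity pass; schedule from cron (e.g. monthly).
  run(["zpool", "scrub", POOL])
  run(["zpool", "status", POOL])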

toast0

If you don't have too much stuff, you could probably do ok with mirroring across N+1 (distributed) disks, where N is enough that you're comfortable. Monitor for failure/pre-failure indicators and replace promptly.

When building up initially, make a point of trying to stagger purchases and service entry dates. After that, chances are failures will be staggered as well, so you naturally get staggered service entry dates. You can likely hit better than 5 year time in service if you run until failure, and don't accumulate much additional storage.

But I just did a 5 year replacement, so I dunno. Not a whole lot of work to replace disks that work.

ssl-3

One method that seems appealing:

1. Use ZFS with raidz

2. Scrub regularly to catch the bitrot

3. Park a small reasonably low-power computer at a friend's house across town or somewhere a little further out -- it can be single-disk or raidz1. Send ZFS snapshots to it using Tailscale or whatever. (And scrub that regularly, too.)

4. Bring over pizza or something from time to time.

As to brands: This method is independent of brand or distro.
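(Step 3 roughly amounts to an incremental zfs send piped over SSH to the box across town; a sketch with made-up pool, snapshot, and host names:)

  import subprocess
  from datetime import date

  # Hypothetical names: local pool "tank", remote (Tailscale) host "backupbox",
  # receiving dataset "backup/tank". Assumes SSH keys and ZFS permissions exist.
  SNAP = f"tank@auto-{date.today():%Y%m%d}"
  PREV = "tank@auto-previous"   # last snapshot the remote already has (illustrative)

  subprocess.run(["zfs", "snapshot", "-r", SNAP], check=True)

  # Incremental replication: zfs send | ssh remote zfs receive.
  send = subprocess.Popen(["zfs", "send", "-R", "-i", PREV, SNAP],
                          stdout=subprocess.PIPE)
  subprocess.run(["ssh", "backupbox", "zfs", "receive", "-F", "backup/tank"],
                 stdin=send.stdout, check=True)
  send.stdout.close()
  send.wait()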

bigiain

I have a simpler approach that I've used at home for about 2 decades now, pretty much unchanged.

I have two raid1 pairs - "the old one" and "the new one" - plus a third drive the same size as the old pair. The new pair is always larger than the old pair; in the early days it was usually well over twice as big, but drive growth rates have slowed since then. About every three years I buy a new "new pair" plus third drive, and downgrade the current "new pair" to be the "old pair". The old pair is my primary storage, and gets rsynced to a partition of the same size on the new pair. The remainder of the new pair is used for data I'm OK with not being backed up (umm, all my BitTorrented Linux ISOs...). The third drive is on a switched powerpoint and spins up late Sunday night, rsyncs the data copy on the new pair, then powers back down for the week.
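(The weekly sync step is basically one rsync invocation; the mount points here are made up:)

  import subprocess

  OLD_PAIR = "/mnt/old-pair/"        # primary storage (the older raid1 mirror)
  BACKUP = "/mnt/new-pair/backup/"   # same-sized partition on the newer mirror

  # Exact mirror of the primary onto the backup partition. --delete makes it a
  # faithful copy, so only run it once you trust the source isn't corrupted.
  subprocess.run(["rsync", "--archive", "--hard-links", "--delete", OLD_PAIR, BACKUP],
                 check=True)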

gruez

>3. Park a small reasonably low-power computer at a friend's house across town or somewhere a little further out -- it can be single-disk or raidz1. Send ZFS snapshots to it using Tailscale or whatever. (And scrub that regularly, too.)

Unless you're storing terabyte levels of data, surely it's more straightforward and more reliable to store on backblaze or aws glacier? The only advantage of the DIY solution is if you value your time at zero and/or want to "homelab".

ssl-3

A chief advantage of storing backup data across town is that a person can just head over and get it (or ideally, a copy of it) in the unlikely event that it becomes necessary to recover from a local disaster that wasn't handled by raidz and local snapshots.

The time required to set this stuff up is...not very big.

Things like ZFS and Tailscale may sound daunting, but they're very light processes on even the most garbage-tier levels of vaguely-modern PC hardware and are simple to get working.

ghaff

I'd much rather just have a Backblaze solution and maybe redundant local backups with Time Machine or your local backup of choice (which work fine for terabytes at this point). Maybe create a clone data drive and drop it off with a friend every now and then, which should capture the most important archive stuff.

fun444555

This works great although I should really do step 4 :)

Gigachad

Hard drive failure seems like more of a cost and annoyance problem than a data preservation issue. Even with incredible reliability you still need backups if your house burns down. And if you have a backup system then drive failure matters little.

palmotea

> 2- burning to M-disk, or

You can't buy those anymore. I've tried.

IIRC, the things currently marketed as MDisc are just regular BD-R discs (perhaps made to a higher standard, and maybe with a slower write speed programmed into them, but still regular BD-Rs).

LunaSea

Would tapes not be an option?

Not great for easy read access but other than that it might be decent storage.

gruez

>Would tapes not be an option?

AFAIK someone on reddit did the math and the break-even for tapes is somewhere between 50TB and 100TB. Any less and it's cheaper to get a bunch of hard drives.
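(The break-even logic is easy to redo yourself - the numbers below are placeholders, not current prices, so plug in real quotes:)

  # Tape vs. hard drives: tape has a big fixed cost (the drive) and cheap media.
  # All prices are illustrative placeholders, not quotes.
  tape_drive_cost = 1500.0   # one-time LTO drive cost (assumed)
  tape_per_tb = 8.0          # tape media per TB (assumed)
  hdd_per_tb = 25.0          # bare hard drive per TB (assumed)

  # Tape wins once the media savings pay off the drive:
  break_even_tb = tape_drive_cost / (hdd_per_tb - tape_per_tb)
  print(f"break-even ≈ {break_even_tb:.0f} TB")   # ~88 TB with these placeholder numbers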

ghaff

Unless you're basically a serious data hoarder or otherwise have unusual storage requirements, an 18TB drive (or maybe 2) gets you a lot of the way to covering most normal home needs.

thisislife2

Tapes would be great for backups - but the tape drive market's all "enterprise-y", and the pricing reflects that. There really isn't any affordable retail consumer option (which is surprising as there definitely is a market for it).

dmoy

I looked at tape a little while ago and decided it wasn't gonna work out for me reliability-wise at home without a more controlled environment (especially humidity).

yalogin

Ah, I haven't seen the yearly Backblaze post in some time now - glad it's back.

fooker

Hard drives are not getting better.

Hard drives you can conveniently buy as a consumer - yes. There's a difference.

hatmatrix

Do we have enough rare earth metals to provide storage for the AI boom?

jpitz

The question is, do we have enough capacity to mine and refine them at a reasonable price? They're there, in the dirt for the taking.

asdff

Future generations will blame us for burning through the rare earths to build yet another cellphone. It's like how we look back today at whale populations severely diminished just so Victorians could read the Bible for another two hours a night. Was it worth it? Most would say no, save for the people who made a fortune off of it, I'm sure.

aftergibson

Not from the prices I'm seeing.

gfody

A pleasant contradiction to Betteridge's law.