
Synology Lost the Plot with Hard Drive Locking Move

Renaud

Synology isn't really about the NAS hardware and OS. Once set up, the hardware doesn't matter much as long as your config is reliable and fast, so there are many competitive options to move to.

The killer feature for me is the app ecosystem. I have a very old 8-bay Synology NAS and had it set up in just a few clicks to back up my Dropbox, my MS365 accounts, and my Google business accounts, do redundant backup to an external drive, back up important folders to the cloud, and it was also doing automated torrent downloads of TV series.

These apps, and more (like family photos, video server, etc), make the NAS a true hub for everything data-related, not just for storing local files.

I can understand Synology going this way, it puts more money in their pocket, and as a customer in a professional environment, I'm ok to pay a premium for their approved drives if it gives me an additional level of warranty and (perceived) safety.

But enforcing this across models used by home or SOHO users is dumb and will affect the goodwill of so many like me, who both used to buy Synology for home and were also recommending/purchasing the brand at work.

This is a tech product, don't destroy your tech fanbase.

I would rather Synology kept a list of drives to avoid based on user experience, and offered their Synology-specific drives with a generous warranty for pro environments. Hell, I would be ok with sharing stats about drive performance so they could build a useful database for all.

The way they reduce the performance of their system to penalise non-Synology-branded drives is basically a slap in the face of their customers. Make it a setting and let users choose to use the NAS they bought to its full capabilities.

j45

Other manufacturers like Qnap also have this app ecosystem.

It doesn't address the mandatory nature of the drives, though; at most, Dell and HP have put their part numbers on drives.

PeterStuer

I currently run two Synology NASes in my setup. I am very satisfied with their performance, but nevertheless I will be phasing them out, because their offerings are not evolving in line with customer satisfaction but with profit maximization through segmentation and vertical lock-in.

joshstrange

Do you have a plan on what you’re going to move to?

I've used UnRaid before (and still do) but switched to Synology for my data a while back, due both to the plug-and-play nature of their systems (it's been rock solid for me) and the easily accessible hard drive trays.

I’ve built 3 UnRaid servers and while I like the software, hardware was always an issue for me. I’d love a 12-bay-style Synology hardware device that I could install whatever I wanted on. I’m just not interested in having to halfway deconstruct a tower to get at 1 hard drive. Hotswap bays are all I want to deal with now.

disambiguation

Not OP but TrueNAS is a good alternative - both the software and their all in one NAS builds.

I have an Unraid install on a USB stick somewhere in my rack, but over time it started feeling limited, and when they began changing their license structure I decided it was time to switch. I run TrueNAS on a Dell R720xd instead of one of their builds (my only complaint is the fan noise - I think the R730 and up are better in this regard).

Proxmox was also on my short list for hypervisors if you don't want TrueNAS.

chme

I also have a TrueNAS, but because of its limitations (read-only root file system), I came to the conclusion that, if I ever need to reinstall it, I would switch to Proxmox and install TrueNAS as one virtual guest, next to the other guests for my home lab.

I have found workarounds for the read-only root file system. But they aren't great. I have installed Gentoo with a prefix inside the home directory, which provides me with a working compiler and I can install and update packages. This sort of works.

For running services, I installed jailmaker, which starts an LXC Debian container with docker-compose. But I am not so happy about that, because I would rather have an atomic system there. I couldn't figure out how to install Fedora CoreOS inside an LXC container, or whether that is even possible. Maybe NixOS would be another option.

But, as I said, for those services I would rather just run them in Proxmox and only use the TrueNAS for the NAS/ZFS management. That provides more flexibility and better system utilization.

Marsymars

I use a qnap TL-D800S for 8 bays connected to my home server. You could use as many as you have available PCIe ports.

PeterStuer

No plans yet. My current NAS setup should be fine for another 2 years.

If it were now, I'd probably look deeper into Asus, QNap or a DIY TrueNAS.

rpdillon

I'm in a similar position. I'm on my second NAS in the last 12 years. I've been very satisfied with their performance, but this kind of behavior is just completely unacceptable. I guess I'll need to look into QNAP or some other brand. Also, I think my four-disk setup is in a RAID 5, but it might be Synology's proprietary version, so I'll need to figure out how to migrate off of that. I don't think I'll be able to just insert the drives in a different NAS and have it work.

kalleboo

Even Synology's "proprietary" RAID is just Linux mdadm, and they have instructions on their website on how to mount it under Linux. One of the reasons I preferred Synology in the first place was their openness about stuff like that!
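
As a rough sketch of what that openness buys you (device names and the package manager here are examples, and the right mount options depend on whether the volume was ext4 or btrfs; Synology's knowledge base documents the exact procedure):

```shell
# Illustrative recovery of a Synology mdadm array on a generic Linux box.
# /dev/md2 is an example name -- check /proc/mdstat for the real one.
sudo apt-get install -y mdadm lvm2    # tools the DSM volume layout uses
sudo mdadm --assemble --scan          # detect and assemble the array(s)
cat /proc/mdstat                      # confirm which md device came up
sudo mkdir -p /mnt/syno
sudo mount -o ro /dev/md2 /mnt/syno   # mount read-only to stay safe
```

The read-only mount is the important part: it lets you copy data off without risking the array while you decide where to migrate.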

rpdillon

Awesome to know! I'll read up on mdadm, appreciate the pointer!

1oooqooq

it's migrating to btrfs raid 1 now. and their docs just say to wipe the drives in case of issues lol.

chrisandchris

That was my first thought too. I am currently a very happy Synology customer and am selling them to B2B customers for storage.

I have yet to come across something like Hyper Backup for backup and Drive for storage that works (mostly) smoothly. I would be happy to self-host, but the No Work Needed (tm) products of Synology are just great.

Sad to see them taking this road.

dostick

Synology became so bad, they measure disk space in percent, and thresholds cannot be configured lower than 5%. This may have been okay when volume sizes were in gigabytes, but now with multi-TB drives, 5% is a lot of space. The result is a NAS in a permanent alarm state because less than 5% of space is free, which makes it less likely for the user to notice when an actual alarm happens, because they are desensitised to warnings. I submitted this to them at least four times, and they reply that this is fine, it's already decided to be like that, so they will not change it.

Another stupid thing: notifications about low disk space are sent to you via email and push until about 30 GB is free. Then free space drops below 30 GB toward zero, yet notifications are not sent anymore. My multiple reports about this issue were always answered along the lines of "it's already done like that, so we will not change it".

Most modern companies, especially software companies, choose not to fix relatively small but critical problems, yet they actively employ sometimes hundreds of customer-support yes-people whose job seems to be defusing customer complaints. Nothing is ever fixed anymore.

pixelesque

Not defending them in any way, but my Infrant (later Netgear, who unfortunately killed the products last year) ReadyNAS boxes also used mdadm to configure btrfs with RAID 5, in a similar way to Synology and QNAP, and the recommendation there was the same: you don't want your btrfs filesystem to run low on space, because then it runs out of metadata space, at which point it becomes read-only and can become unstable.

Basically, the recommendation was to always have 5% free space, so this isn't just Synology saying this.

kbolino

I think preventing alarm fatigue is a very good reason to fix issues.

But 5% free is very low. You may want to use every single byte you feel you paid for, but allocation algorithms really break down when free space gets so low. Remember that there's not just a solid chunk of that 5% sitting around at the end of the space. That's added up over all the holes across the volume. At 20-25% free, you should already be looking at whether to get more disks and/or deciding what stuff you don't actually need to store on this volume. So a hard alarm at 5% is not unreasonable, though there should also be a way to set a soft alarm before then.

kimixa

Also, Synology use btrfs, a copy-on-write filesystem - that means there are operations that require allocating new blocks where you might not expect it: any write does, even one overwriting an existing file's data.

And "unexpected" failure paths like that are often poorly tested in apps.

j1elo

5% of my 500 GB is 25 GB, which is already a lot of space but understandable. Not many things would fit in there nowadays.

But 5% of a 5 TB volume is 250 GB, that's the size of my whole system disk! Probably not so understandable by the lay person.
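
The arithmetic scales linearly, which makes the pain easy to eyeball:

```shell
# 5% reservation in GB for a few volume sizes (decimal units, 1 TB = 1000 GB).
for vol_gb in 500 5000 20000; do
  echo "${vol_gb} GB volume -> $(( vol_gb * 5 / 100 )) GB reserved at 5%"
done
# prints 25 GB, 250 GB, and 1000 GB respectively
```

At 20 TB, a fixed 5% floor is a full terabyte you can never use without tripping the alarm.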

kbolino

This is partly why SSDs just lie nowadays and tell you they only have 75-90% of the capacity that is actually built into them. You can't directly access that excess capacity but the drive controller can when it needs to (primarily to extend the life of the drive).

Some filesystems do stake out a reservation but I don't think any claim one as large as 5% (not counting the effect of fixed-size reservations on very small volumes). Maybe they ought to, as a way of managing expectations better.

For people who used computers when the disks were a lot smaller, or who primarily deal in files much much smaller than the volumes they're stored on, the absolute size of a percentage reservation can seem quite large. And, in certain cases, for certain workloads, the absolute size may actually be more important than the relative size.

But most file systems are designed for general use and, across a variety of different workloads, spare capacity and the impact of (not) keeping it open is more about relative than absolute sizes. Besides fragmentation, there are also bookkeeping issues, like adding one more file to a directory cascading into a complete rearrangement of the internal data structures.

sitkack

You have no idea what you are talking about.

runamok

100%. Those disks are likely working much harder, moving the head all over the place to find those empty spaces when they write.

gambiting

....do you think the drive doesn't know where the empty space actually is?

kotaKat

I'm going to buck the nerds and say I wish Drobo was back. I love my 5N, but had to retire it as it began to develop Type B Sudden Drobo Death Syndrome* and switch out to QNAP.

It was simple, it just worked, and I didn't have to think about it.

* TB SDDS - a multi-type phenomenon of Drobo units suddenly failing. There were three 'types' of SDDS I and a colleague discovered - "Type A" power management IC failures, "Type B" unexplainable lockups and catatonia, and "Type C" failed batteries. Type B units' SoCs have power and clock going in and nothing coming out.

romanhn

My 2nd generation Drobo that I got back in 2008 is still chugging along. Haven't had to replace a hard drive in 10-12 years either. I love it even though it's super slow by today's standards. Been meaning to retire it for years, but it's been so rock solid I rarely have to think about it.

mig39

I still have two Drobo 5N2 NAS boxes going strong. One is the backup for the other. I really wish someone would take up the Drobo-like simplicity and run with it.

bob1029

Storing encrypted blobs in S3 is my new strategy for bulk media storage. You'll never beat the QoS and resilience of the cloud storage product with something at home. I have completely lost patience with maintaining local hardware like this. If no one has a clue what is inside your blobs, they might as well not exist from their perspective. This feels like smuggling cargo on a federation starship, which is way cooler to me than filling up a bunch of local disks.
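
A minimal sketch of the idea that the provider only ever sees an opaque blob (openssl stands in here; in practice age/gpg plus `aws s3 cp` or an rclone "crypt" remote fill the same roles, and the bucket name and passphrase below are made up for the example):

```shell
# Encrypt client-side, upload only the ciphertext, verify the round trip.
tmp=$(mktemp -d)
printf 'irreplaceable media bytes\n' > "$tmp/media.bin"
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -pass pass:example-passphrase \
  -in "$tmp/media.bin" -out "$tmp/media.bin.enc"
# upload step (not run here): aws s3 cp "$tmp/media.bin.enc" s3://my-bucket/blobs/
openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass pass:example-passphrase \
  -in "$tmp/media.bin.enc" -out "$tmp/roundtrip.bin"
cmp "$tmp/media.bin" "$tmp/roundtrip.bin" && echo "round trip OK"
```

The decrypt-and-compare step matters: test your restores, since an encrypted backup you can't decrypt might as well not exist from your perspective either.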

I don't need 100% of my bytes to be instantly available to me on my network. The most important stuff is already available. I can wait a day for arbitrary media to thaw out for use. Local caching and pre-loading of read-only blobs is an extremely obvious path for smoothing over remote storage.

Other advantages should be obvious. There are no limits to the scale of storage and unless you are a top 1% hoarder, the cost will almost certainly be more than amortized by the capex you would have otherwise spent on all that hardware.

xyzzy123

S3 or glacier? Glacier is cost competitive with local disk but not very practical for the sorts of things people usually need lots of local disk for (media & disk images). Interested in how you use this!

20 TB, which you can keep in a cute little 2-bay NAS, will cost you $4k USD/year on the S3 Infrequent Access tier in APAC (where I am). So the "payback time" of local hardware is just ~6 months vs S3 IA. That's before you pay for any data transfers.
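
As a back-of-the-envelope check, assuming roughly $0.018/GB-month for S3 IA in an APAC region and roughly $2,000 for a 2-bay NAS plus drives (both figures are illustrative; check current price lists):

```shell
awk 'BEGIN {
  gb      = 20 * 1000          # 20 TB in decimal GB
  monthly = gb * 0.018         # assumed S3 IA price per GB-month
  printf "monthly: $%.0f  yearly: $%.0f\n", monthly, monthly * 12
  printf "payback vs ~$2000 of hardware: %.1f months\n", 2000 / monthly
}'
# prints monthly: $360  yearly: $4320
#        payback vs ~$2000 of hardware: 5.6 months
```

Under those assumptions the numbers land close to the figures above, and storage-out transfer fees only widen the gap in local hardware's favor.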

bob1029

> S3 or glacier

This is the same product.

> 20TB

I think we might be pushing the 1% case here.

Just because we can shove 20TB of data into a cute little nas does not mean we should.

For me, knowledge that the data will definitely be there is way more important than having "free" access to a large pool of bytes.

Espressosaurus

20 TB isn't that out of reach when you're running your media server and taking high resolution photos or video (modern cameras push a LOT of bits).

I'm the last person I know who buys DVDs, and they're two-thirds of the reason I need more space. The last third is photography. 45.7 megapixels x 20 FPS adds up quickly.

S3's cost is extreme when you're talking in the tens of terabytes range. I don't have the upstream to seed the backup, and if I'm going outside of my internal network it's too slow to use as primary storage. Just the NAS on gigabit ethernet is barely adequate to the task.

SteveNuts

> knowledge that the data will definitely be there is way more important than having "free" access to a large pool of bytes

Until Amazon inexplicably deletes your AWS account because your Amazon.com account had an expired credit card and was trying and failing to renew a subscription.

Ask me how I know

squigz

20TB isn't all that much anymore, especially if you do anything like filming, streaming, photography, etc. Even a handful of HQ TV shows can reach several TB rather quickly.

lurking_swe

hardly 1%, i’m sure anyone that works in the film industry or media in general has terabytes of video footage. Maybe even professional photographers who have many clients.

Hamuko

>This is the same product.

Confusingly, "Glacier" is both its own product, which stores data in "vaults", and a family of storage tiers on Amazon S3, which stores data in "buckets". I think Glacier the product is deprecated though, since accessing the Glacier dashboard immediately recommends using the Glacier S3 storage tiers instead.

fodkodrasz

Did you factor in the resilience and redundancy S3 gives you, which you cannot opt out of? I have my NAS, and it is cheaper than S3 if I ignore these, but having to run two offsite backups would make it much less compelling.

xyzzy123

Agree, they are not the same thing. Yes, S3 provides much better durability. I just can't afford it.

For my use-case I'm OK with un-hedged risk and dollars staying in my pocket.

viraptor

> If no one has a clue what is inside your blobs, they might as well not exist from their perspective.

This is not the perspective of actors working on longer timescales. For a number of agencies, preserving some encrypted data is beneficial, because it may become possible to recover it in N years, whether through classical cryptanalytic improvements, bugs found in key generators, or advances in quantum computing.

Very few people here will be that interesting, but... worth keeping in mind.

bob1029

The point of encryption in this context is to defeat content fingerprinting techniques, not the focused resources of a nation state.

3np

3-2-1 says you want both. Can be convenient to centralize backups on a local NAS and then publish to the cloud from there.

disambiguation

How much does S3 cost these days? I've been burned by their hidden surge pricing before and hesitate to rely on them for personal storage when self hosting is fairly cheap.

dmoy

Disk hosting cost isn't much.

Bandwidth to get all of that back down to your system is much pricier, depending on how much you use that data.

emmelaich

I have some sympathy for this. With the disasters of the WD 'Green' series and the recent revelations about used disks being sold as new, Synology doesn't want to be lumped in with other companies' problems.

They really have to sell it by minimising the price differential and reducing the lead time.

asmor

Slapping Synology stickers on Seagate drives doesn't magically make them immune to being refurbs mislabeled as new.

This is the same old tired argument Apple made about iPhone screens - complain about inferior aftermarket parts while doing everything in their power to not make the original parts available anywhere but AASPs. Except here we have the literal same parts with only a difference in the firmware vendor string.

romanhn

Any experiences with Ugreen NAS? They're a new player in the space, but with very compelling hardware offerings, way ahead of Synology. I've been meaning to replace my old Drobo setup for years, and Ugreen seems to finally be hitting the sweet spot of specs and pricing that I've been looking for.

mschild

I've been looking into a NAS myself.

I think self-built is the best bang for buck you're going to get and not have any annoying limitations.

There are plenty of motherboards with integrated CPUs (the N100, same as the cheaper Ugreen ones) for roughly 100 euros. Buy a decent PSU and get an affordable case. For my configuration with a separate AMD CPU I'm looking at right around 400 euros, but I get total control.

And as far as software is concerned, I find setting up a modern OS like TrueNAS about as difficult as an integrated one from Ugreen.

asmor

Just keep in mind that Intel is keeping the total PCIe bandwidth out of those CPUs very constrained on purpose.

Magma7404

That's the first time I have heard about them but it looks very interesting and pretty. Synology has become way too expensive for me as I only need a 4-bay NAS and Ugreen is cheaper than the Synology. My only concern would be the software itself, and if they can avoid all the security holes that plagued some brands like Qnap.

Last but not least, they seem to have Docker support which was restricted to more powerful Synology models, and it's a nice bonus for self-hosting nowadays.

kyrofa

> When a drive fails, one of the key factors in data security is how fast an array can be rebuilt into a healthy status. Of course, Amazon is just one vendor, but they have the distribution to do same-day and early morning overnight parts to a large portion of the US. Even overnighting a drive that arrives by noon from another vendor would be slower to arrive than two of the four other options at Amazon.

In a way this is a valid point, but it also feels a bit silly. Do people really make use of devices like this and then try to overnight a drive when something fails? You're building an array-- you're designing for failure-- but then you don't plan on it? You should have spare drives on hand. Replenishing those spares is rarely an emergency situation.

tiew9Vii

A lot of these are home power users.

They build the array to support a drive failure, but as home power users without unlimited funds, they don't have a hot spare or a storeroom they can run to. It's completely reasonable to order a spare on failure unless it's mission-critical data needing 24/7 uptime.

They completely planned for it. Their plan is that if there is a failure, they can get a new drive within 24 hours, which for home power users is generally enough, especially since you'll likely get a warning before complete failure.

gambiting

>>You should have spare drives on hand.

I've never heard of anyone doing that for a home nas. I have one and I don't keep spare drives purely because it's hard to justify the expense.

kyrofa

Heh, I suppose you've heard of one now. Fair enough, I could be in the minority here.

Hamuko

I had a hot spare in the form of a backup drive. It was a 12 TB external WD that I'd already burned in and had as a backup target for the NAS. Then when one of the drives in the NAS failed, I broke the HDD out of the enclosure and used it to replace the broken drive. It hadn't been in use for many months, and I'd rather sacrifice some backups than the array. I also technically had offsite backups that I could restore in an emergency.

1oooqooq

always run the previous drive generation's capacity.

i budget 300usd each, for 2 or 3 drives. that has always been the sweet spot. get the largest enterprise model for exactly that price.

that was 2tb 10yrs ago. 10tb 5yrs ago.

so 5yrs ago i rebuilt storage on those 10tb drives but only using 2tb volumes (could have been 5, but i was still keeping the last gen size as the data hadn't grown). now my old drives are spares/monthly off-machine copies. i used one when getting a warranty replacement for a failed new 10tb one btw.

now i can get 20tb drives for that price. i will probably still only increase the volumes to 10tb at most and have two spares.

nichos

I wish one or more HD manufacturers would get together and sell a NAS that runs TrueNAS. Or even an existing NAS manufacturer (Ugreen, etc.).

All these NAS manufacturers are spending time developing their own OSes, when TrueNAS is well established.

Kirby64

TrueNAS isn’t nearly friendly enough for the average user. HexOS may fit that bill, although it seems rather immature. It runs on top of TrueNAS.

ryao

My doctor was able to switch from Synology to TrueNAS after I advised him to replace his failing Synology NAS with a TrueNAS box and I gave him a link to the TrueNAS documentation. He is fairly average in my opinion.

dmoy

I bought a Synology last year and then had to return it because it didn't support two of the enterprise drive model revisions I have (but worked with the others, even one that's the same make).

For the same hardware cost I got a random mATX box that can hold 2.5x more hard drives, a much, much beefier CPU, 10x the RAM, and an NVMe drive. And yeah, it took an hour to set up TrueNAS in a Docker image, but w/e.

Same exact hard drives working perfectly fine in fedora. If it weren't for hard drive locking I'd have stuck with the Synology box out of laziness.

codecraze

I have an 8-bay NAS from Synology and I'm now considering a move when I have to replace it.

Is there something with 6-8 drive slots on which I could install whatever OS I want? Ideally with a small form factor. I don't want a giant desktop tower again for my NAS purposes.

QuiEgo

Terramaster F6-424. Most of the non-Synology NASes let you install whatever OS you want (but, don't provide any support other than "here's how you install it"). Unraid, TrueNAS Scale, and Open Media Vault are popular OS choices.

mgsouth

I've no experience with Synology and have no opinion regarding their motivations, execution, or handling of customers.

However...

Long long ago I worked for a major NAS vendor. We had customers with huge NAS farms [1] and extremely valuable data. We were, I imagine, very exposed from a reputation or even legal standpoint. Drive testing and certification was A Very Big Deal. Our test suites frequently found fatal firmware bugs, and we had to very closely track the fw versions in customer installations. From a purely technical viewpoint there's no way we wanted customers to bring their own drives.

[1] Some monster servers had triple-digit GBs of storage, or even a TB! (#getoffmylawn)

gh02t

Synology is consumer- and SMB-focused, though. For high-end storage that level of integration makes sense, but for Synology it's just not something most of their customers care about or want.

thomasjudge

For an entertaining/terrifying perspective on firmware, obligatory Bryan Cantrill talk "Zebras All the Way Down" https://www.youtube.com/watch?v=fE2KDzZaxvE