Mini NASes marry NVMe to Intel's efficient chip
122 comments
July 4, 2025

transpute
Intel N150 is the first consumer Atom [1] CPU (in 15 years!) to include TXT/DRTM for measured system launch with owner-managed keys. At every system boot, this can confirm that immutable components (anything from BIOS+config to the kernel to immutable partitions) have the expected binary hash/tree.
TXT/DRTM can enable AEM (Anti Evil Maid) with Qubes, SystemGuard with Windows IoT and hopefully future support from other operating systems. It would be a valuable feature addition to Proxmox, FreeNAS and OPNsense.
Some (many?) N150 devices from Topton (China) ship without Bootguard fused, which _may_ enable coreboot to be ported to those platforms. Hopefully ODROID (Korea) will ship N150 devices. Then we could have fanless N150 devices with coreboot and DRTM for less-insecure [2] routers and storage.
[1] Gracemont (E-core): https://chipsandcheese.com/p/gracemont-revenge-of-the-atom-c... | https://youtu.be/agUwkj1qTCs (Intel Austin architect, 2021)
[2] "Xfinity using WiFi signals in your house to detect motion", 400 comments, https://news.ycombinator.com/item?id=44426726#44427986
reanimus
Where are you seeing devices without Bootguard fused? I'd be very curious to get my hands on some of those...
sandreas
While it may be tempting to go "mini" and NVMe, for a normal use case I think this is hardly cost-effective.
You give up so much by using an all-in-one mini device...
No upgrades, no ECC, harder cooling, less I/O.
I have had a Proxmox server with a used Fujitsu D3417 and 64GB ECC RAM for roughly 5 years now, paid 350 bucks for the whole thing and upgraded the storage once from 1TB to 2TB. It draws 12-14W in normal day use and has 10 Docker containers and 1 Windows VM running.
So I would prefer an mATX board with ECC, IPMI, 4x NVMe and 2.5GbE over these toy boxes...
However, Jeff's content is awesome as always.
samhclark
I think you're right generally, but I wanna call out the ODROID H4 models as an exception to a lot of what you said. They are mostly upgradable (SODIMM RAM, SATA ports, M.2 2280 slots), and it does support in-band ECC which kinda checks the ECC box. They've got a Mini-ITX adapter for $15 so it can fit into existing cases too.
No IPMI and not many NVMe slots. So I think you're right that a good mATX board could be better.
sandreas
Well, if you would like to go mini (with ECC and 2.5G) you could take a look at this one:
https://www.aliexpress.com/item/1005006369887180.html
Not totally upgradable, but at least pretty low-cost and modern, with an optional SATA + NVMe combination for Proxmox. Shovel in an enterprise SATA drive and a consumer 8TB WD SN850X and this should work pretty well. Even Optane is supported.
IPMI could be replaced with NanoKVM or JetKVM...
geek_at
Not sure about the ODROID, but I got myself the NAS kit from FriendlyElec. With the largest RAM option it was about 150 bucks and comes with 2.5G Ethernet and 4 NVMe slots. No fan, and it keeps fairly cool even under load.
Running it with encrypted ZFS volumes, and even with a 5-bay 3.5-inch HDD dock attached via USB.
ndiddy
Another thing is that unless you have a very specific need for SSDs (such as heavily random-access-focused workloads, very tight space constraints, or working in a bumpy environment), mechanical hard drives are still way more cost-effective for storing lots of data than NVMe. You can get a manufacturer-refurbished 12TB hard drive with a multi-year warranty for ~$120, while even an 8TB NVMe drive goes for at least $500. Of course for general-purpose internal drives, NVMe is a far better experience than a mechanical HDD, but my NAS with 6 hard drives in RAIDZ2 still gets bottlenecked by my 2.5Gbit LAN, not the speeds of the drives.
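In cost-per-TB terms, those example prices work out to roughly a 6x gap; a quick sketch using the figures quoted above:

```python
# $/TB from the prices quoted above (refurb 12TB HDD vs 8TB NVMe).
hdd_price, hdd_tb = 120, 12
nvme_price, nvme_tb = 500, 8
hdd_per_tb, nvme_per_tb = hdd_price / hdd_tb, nvme_price / nvme_tb
print(f"HDD: ${hdd_per_tb:.2f}/TB, NVMe: ${nvme_per_tb:.2f}/TB, "
      f"ratio: {nvme_per_tb / hdd_per_tb:.2f}x")
# -> HDD: $10.00/TB, NVMe: $62.50/TB, ratio: 6.25x
```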
acranox
Don't forget about power. If you're trying to build a low-power NAS, those HDDs idle around 5 W each, while an SSD is closer to 5 mW. Once you've got a few disks, the HDDs can account for half the power or more. The cost penalty for 2TB or 4TB SSDs is still big, but not as bad as at the 8TB level.
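Rough math on what that difference means over a year, taking the figures above at face value (5 W per HDD, 5 mW per SSD) and a hypothetical $0.30/kWh electricity price:

```python
# Annual idle energy for a 6-drive pool, HDD vs SSD (assumed figures).
hdd_w, ssd_w, price_per_kwh, drives = 5.0, 0.005, 0.30, 6
for label, watts in (("HDD", hdd_w), ("SSD", ssd_w)):
    kwh = watts * drives * 24 * 365 / 1000
    print(f"{label}: ~{kwh:.1f} kWh/yr (~${kwh * price_per_kwh:.2f})")
# -> HDD: ~262.8 kWh/yr (~$78.84); SSD: ~0.3 kWh/yr (~$0.08)
```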
fnord77
these little boxes are perfect for my home
My use case is a backup server for my macs and cold storage for movies.
6x 2TB drives will give me a 9TB RAID-5 for $809 ($100 each for the drives, $209 for the NAS).
Very quiet so I can have it in my living room plugged into my TV. < 10W power.
I have no room for a big noisy server.
sandreas
While I get your point about size, I'd not use RAID-5 for my personal homelab. I'd also say that 6x 2TB drives are not the optimal solution for low power consumption. You're also missing out on server-quality BIOS, design/stability/x64 and remote management. However, not bad.
While my server is quite big compared to a "mini" device, it's silent. No CPU fan, only 120mm case fans spinning around 500rpm, maybe 900rpm under load - hardly noticeable. I also have a completely passive backup solution with a Streacom FC5, but I don't really trust it for the chipsets, so I installed a low-rpm 120mm fan there as well.
How did you fit 6 drives in a "mini" case? Using an Asus Flashstor or Beelink?
epistasis
I'm interested in learning more about your setup. What sort of system did you put together for $350? Is it a normal ATX case? I really like the idea of running Proxmox, but I don't know how to get something cheap!
cyanydeez
I've had a synology since 2015. Why, besides the drives themselves, would most home labs need to upgrade?
I don't really understand the general public, or even most use cases, requiring upgrade paths beyond getting a new device.
By the time the need to upgrade comes, the tech stack is likely faster and you're basically just talking about gutting the PC and doing everything over again, except maybe the power supply.
sandreas
Understandable... Well, the bottleneck for a Proxmox server often is RAM - sometimes CPU cores (shared between VMs). This might not be the case for a NAS-only device.
Another upgrade path is to keep the case, fans and cooling solution and only switch the mainboard, CPU and RAM.
I'm also not a huge fan of non-x64 devices, because they still often require jumping through hoops regarding boot order, booting from external devices, or recovering from power loss.
bee_rider
Should a mini-NAS be considered a new type of thing with a new design goal? He seems to be describing roughly a desktop's worth of storage (6TB), but always available on the network and less power-consuming than a desktop.
This seems useful. But it seems quite different from his previous (80TB) NAS.
What is the idle power draw of an SSD anyway? I guess they usually have a volatile ram cache of some sort built in (is that right?) so it must not be zero…
CharlesW
> Should a mini-NAS be considered a new type of thing with a new design goal?
Small/portable low-power SSD-based NASs have been commercialized since 2016 or so. Some people call them "NASbooks", although I don't think that term ever gained critical MAS (little joke there).
Examples: https://www.qnap.com/en/product/tbs-464, https://www.qnap.com/en/product/tbs-h574tx, https://www.asustor.com/en/product?p_id=80
privatelypublic
With APST (Autonomous Power State Transitions), the idle draw of an SSD is in the range of low tens of milliwatts.
layer8
HDD-based NASes are used for all kinds of storage amounts, from as low as 4TB to hundreds of TB. The SSD NASes aren’t really much different in use case, just limited in storage amount by available (and affordable) drive capacities, while needing less space, being quieter, but having a higher cost per TB.
transpute
> Should a mini-NAS be considered a new type of thing with a new design goal?
- Warm storage between mobile/tablet and cold NAS
- Sidecar server of functions disabled on other OSes
- Personal context cache for LLMs and agents
jeffbee
> less power consuming than a desktop
Not really seeing that in these minis. Either the devices under test haven't been optimized for low power, or their Linux installs have non-optimal configs for low power. My NUC 12 draws less than 4W, measured at the wall, when operating without an attached display and with Wi-Fi but no wired network link. All three of the boxes in the review use at least twice as much power at idle.
careful_ai
Geerling never misses. What I admire most is how these setups strike a balance: practical for homelabbers, inspiring for tinkerers, and grounded enough for pros. The pairing of low-power chips with NVMe speed really feels like a sweet spot for edge compute or local AI workloads. Feels like the future is homegrown again.
dwood_dev
I love reviews like these. I'm a fan of the N100 series for what they are in bringing low power x86 small PCs to a wide variety of applications.
One curiosity for @geerlingguy: does the Beelink work over USB-C PD? I doubt it, but would like to know for sure.
geerlingguy
That, I did not test. But as it's not listed in specs or shown in any of their documentation, I don't think so.
moondev
Looks like it only draws 45W, which could allow it to be powered over PoE++ with a splitter, but it has an integrated AC input and PSU. That's impressive regardless, considering how small it is, but it's not set up for PD or PoE.
Havoc
I've been running one of these quad-NVMe mini-NASes for a while. They're a good compromise if you can live with no ECC. With some DIY shenanigans they can even run fanless.
If you're running on consumer NVMe drives, then mirrors are probably a better idea than RAIDZ though. Write amplification can easily shred consumer drives.
turnsout
I’m a TrueNAS/FreeNAS user, currently running an ECC system. The traditional wisdom is that ECC is a must-have for ZFS. What do you think? Is this outdated?
magicalhippo
Been running without it for 15+ years on my NAS boxes, built using my previous desktop hardware fitted with NAS disks.
They're on 24/7 and run monthly scrubs, as well as monthly checksum verification of my backup images, and I've not noticed any issues so far.
I had some correctable errors which got fixed by changing SATA cables a few times, and some from a disk that after 7 years of 24/7 developed a small run of bad sectors.
That said, you've got ECC, so you should be able to monitor corrected memory errors.
Matt Ahrens himself (one of the creators of ZFS) has said there's nothing particular about ZFS:
> There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
> I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=...
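For reference, on Linux OpenZFS that flag can be toggled at runtime through the standard module-parameter interface; a minimal sketch (needs root; path as exposed by current OpenZFS releases):

```python
# Enable ZFS_DEBUG_MODIFY (zfs_flags=0x10) so in-memory buffers are
# checksummed while cached and verified before being written to disk.
with open("/sys/module/zfs/parameters/zfs_flags", "w") as f:
    f.write("0x10")  # the kernel accepts hex for integer params; 0x10 == 16
```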
matja
ECC is a must-have if you want to minimize the risk of corruption, but that is true for any filesystem.
Sun (and now Oracle) officially recommended using ECC ever since it was intended to be an enterprise product running on 24/7 servers, where it makes sense that anything that is going to be cached in RAM for long periods is protected by ECC.
In that sense it was a "must-have", as business-critical functions require that guarantee.
Now that you can use ZFS on a number of operating systems, on many different architectures, even a Raspberry Pi, the business-critical-only use-case is not as prevalent.
ZFS doesn't intrinsically require ECC but it does trust that the memory functions correctly which you have the best chance of achieving by using ECC.
stoltzmann
That traditional wisdom is wrong. ECC is a must-have for any computer. The only reason people think ECC is mandatory for ZFS is because it exposes errors due to inherent checksumming and most other filesystems don't, even if they suffer from the same problems.
HappMacDonald
I'm curious if it would make sense for write caches in RAM to just include a CRC32 on every block, to be verified as it gets written to disk.
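Something along those lines, as a toy sketch in Python (zlib.crc32 standing in for whatever checksum a real write cache would use):

```python
import zlib

class WriteCache:
    """Toy RAM write cache: every block is tagged with a CRC32 on
    insert and re-verified just before it is flushed to disk."""
    def __init__(self):
        self.blocks = {}  # offset -> (data, crc32)

    def put(self, offset: int, data: bytes):
        self.blocks[offset] = (data, zlib.crc32(data))

    def flush(self, dev):
        for offset, (data, crc) in sorted(self.blocks.items()):
            if zlib.crc32(data) != crc:
                # a bit flipped while the block sat in RAM
                raise IOError(f"cache corruption in block at {offset}")
            dev.seek(offset)
            dev.write(data)
        self.blocks.clear()
```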
seltzered_
https://danluu.com/why-ecc/ has an argument for it with an update from 2024.
evanjrowley
One way to look at it is ECC has recently become more affordable due to In-Band ECC (IBECC) providing ECC-like functionality for a lot of newer power efficient Intel CPUs.
https://www.phoronix.com/news/Intel-IGEN6-IBECC-Driver
Not every new CPU has it: the Intel N95, N97, N100, N200, i3-N300, and i3-N305 all have it, but the N150 doesn't!
It's kind of disappointing that, of the low-power NAS devices reviewed here, the only one with IBECC support had a limited BIOS that most likely was missing this option. The ODROID H4 series, CWWK NAS products, AOOSTAR, and various N100 ITX motherboards all support it.
Havoc
Ultimately comes down to how important the data is to you. It's not really a technical question but one of risk tolerance
monster_truck
These are cute, I'd really like to see the "serious" version.
Something like a Ryzen 7745, 128GB ECC DDR5-5200, no less than two 10GbE ports (though unrealistic given the size; if they were SFP+ that'd be incredible), drives split across two different NVMe RAID controllers. I don't care how expensive or loud it is or how much power it uses, I just want a coffee-cup-sized cube that can handle the kind of shit you'd typically bring a rack along for. It's 2025.
Palomides
the minisforum devices are probably the closest thing to that
unfortunately most people still consider ECC unnecessary, so options are slim
riobard
I've always been puzzled by the strange choice of RAIDing multiple small-capacity M.2 NVMe drives in these tiny low-end Intel boxes with severely limited PCIe lanes, using only one lane per SSD.
Why not a single large-capacity M.2 SSD using 4 full lanes, plus proper backup to a cheaper, larger-capacity and more reliable spinning disk?
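For scale, a back-of-envelope comparison, assuming PCIe 3.0 links as on these N100/N150-class boxes:

```python
# PCIe 3.0 moves ~0.985 GB/s per lane after 128b/130b encoding;
# 2.5GbE is 2.5 Gbit/s, i.e. ~0.31 GB/s before protocol overhead.
lane_gbs = 0.985
nic_gbs = 2.5 / 8
print(f"x1 NVMe: {lane_gbs:.2f} GB/s, x4 NVMe: {4 * lane_gbs:.2f} GB/s, "
      f"2.5GbE: {nic_gbs:.2f} GB/s")
# Even one lane per SSD outruns the NIC ~3x, so x1 links mainly hurt
# local work (scrubs, rebuilds), not serving files over the LAN.
```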
tiew9Vii
The latest small M.2 NASes make very good consumer-grade, small, quiet, power-efficient storage you can put in your living room, next to the TV, for media storage and light network-attached storage.
It’d be great if you could fully utilise the M.2 speed but they are not about that.
Why not a single large M.2? Price.
riobard
Would four 2TB SSDs be more or less expensive than one 8TB SSD? And what about power efficiency and RAID complexity?
herf
Which SSDs do people rely on? Considering PLP (power loss protection), write endurance/DWPD (no QLC), and other bugs that affect ZFS especially? It is hard to find options that do these things well for <$100/TB, with lower-end datacenter options (e.g., Samsung PM9A3) costing maybe double what you see in a lot of builds.
nightfly
ZFS isn't more affected by those, you're just more likely to notice them with ZFS. You'll probably never notice write endurance issues on a home NAS.
privatelypublic
QLC isn't an issue for a consumer NAS: are 'you' seriously going to write 160GB/day, every day?
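To put a number like that in context, a quick sketch against an assumed endurance rating (the ~900 TBW figure is illustrative, not from any specific drive's datasheet):

```python
# How far 160 GB/day gets through a hypothetical 900 TBW rating.
tbw, years, gb_per_day = 900, 5, 160
written_tb = gb_per_day * 365 * years / 1000
print(f"{gb_per_day} GB/day for {years} years = {written_tb:.0f} TB "
      f"= {written_tb / tbw:.0%} of {tbw} TBW")
# -> 160 GB/day for 5 years = 292 TB = 32% of 900 TBW
```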
magicalhippo
QLC have quite the write performance cliff though, which could be an issue during use or when rebuilding the array.
Just something to be aware of.
FloatArtifact
I think the N100 and N150 suffer the same weakness for this type of use case in the context of SSD storage and 10Gb networking. We need a next-generation chip that can drive more PCIe lanes with roughly the same power efficiency.
I would remove points for a built-in power supply that is neither modular nor standardized. It's not fixable, and it's not comparable to Apple in quality.
koeng
Are there any mini NASes with ECC RAM nowadays? I recall that being my personal limiting factor.
qwertox
The Minisforum N5 Pro NAS has up to 96 GB of ECC RAM.
https://www.minisforum.com/pages/n5_pro
https://store.minisforum.com/en-de/products/minisforum-n5-n5...
no RAM 1.399€
16GB RAM 1.459€
48GB RAM 1.749€
96GB RAM 2.119€
A 96GB DDR5 SO-DIMM kit costs around 200€ to 280€ in Germany: https://geizhals.de/?cat=ramddr3&xf=15903_DDR5~15903_SO-DIMM...
I wonder if that 128GB kit would work, as the CPU supports up to 256GB
https://www.amd.com/en/products/processors/laptop/ryzen-pro/...
I can't force the page to show USD prices.
wyager
Is this "full" ECC, or just the baseline improved ECC that all DDR5 has?
Either way, on my most recent NAS build, I didn't bother with a server-grade motherboard, figuring that the standard consumer DDR5 ECC was probably good enough.
qwertox
This is full ECC, the CPU supports it (AMD Pro variant).
DDR5's baseline on-die ECC is not good enough. What if you have faulty RAM and ECC is constantly correcting errors without you knowing it? There's no value in that. You need the OS to be informed so that you are aware of it. On-die ECC also does not protect against errors which occur between the RAM and the CPU.
This is similar to HDDs using ECC. Without SMART you'd have a problem, but part of SMART is that it allows you to get a count of ECC-corrected errors so that you can be aware of the state of the drive.
True ECC takes on the role of SMART in regard to RAM; it's just that it only reports that one thing: ECC-corrected errors.
On a NAS, where you likely store important data, true ECC does add value.
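On Linux, that OS-level reporting is what the EDAC subsystem provides; a minimal sketch of polling its counters (assumes an EDAC driver is loaded for your memory controller):

```python
# Print corrected (ce_count) and uncorrected (ue_count) memory error
# counts per memory controller, as exposed by the kernel's EDAC sysfs.
from pathlib import Path

for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
    ce = (mc / "ce_count").read_text().strip()
    ue = (mc / "ue_count").read_text().strip()
    print(f"{mc.name}: corrected={ce}, uncorrected={ue}")
```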
layer8
The DDR5 on-die ECC doesn’t report memory errors back to the CPU, which is why you would normally want ECC RAM in the first place. Unlike traditional side-band ECC, it also doesn’t protect the memory transfers between CPU and RAM. DDR5 requires the on-die ECC in order to still remain reliable in face of its chip density and speed.
brookst
The Aoostar WTR Max is pretty beefy, supports 5 NVMe and 6 hard drives, and up to 128GB of ECC RAM. But it's $700 bare-bones, much more than the devices in the article.
Takennickname
The Aoostar WTR series is one change away from being the PERFECT home server/NAS. Passing the storage controller's IOMMU group through to a VM is finicky at best. Still better than the vast majority of devices that don't allow it at all. But if they fix that, I'm in homelab heaven. Unfortunately, the current iteration can't, due to a hardware limitation in the AMD chipset they're using.
brookst
Good info! Is it the same limitation on the WTR Pro and Max? The Max is an 8845HS versus the 5825U in the Pro.
amluto
Yes, but not particularly cheap: https://www.asustor.com/en/product?p_id=89
MarkSweep
Asustor has some cheaper options that support ECC. Though not as cheap as those in the OP article.
FLASHSTOR 6 Gen2 (FS6806X) $1000 - https://www.asustor.com/en/product?p_id=90
LOCKERSTOR 4 Gen3 (AS6804T) $1300 - https://www.asustor.com/en/product?p_id=86
Havoc
One of the ARM ones is, yes. Can't for the life of me remember which though - sorry - either something in the Banana Pi or LattePanda part of the universe, I think.
sorenjan
Is it possible (and easy) to make a NAS with hard drives for storage and an SSD for cache? I don't have any data that I use daily or even weekly, so I don't want the drives spinning needlessly 24/7, and I think an SSD cache would avoid spinning them up most of the time.
For instance, most reads from a media NAS will probably be biased towards newly written files and sequential access (next episode). This is a use case CPU caches usually deal with transparently when reading from RAM.
QuiEgo
https://github.com/trapexit/mergerfs/blob/master/mkdocs/docs...
I do this. One mergerfs mount with an SSD and three HDDs made to look like one disk. mergerfs is set to write to the SSD if it's not full, and to read from the SSD first.
A cron job moves the oldest files from the SSD to the HDDs once per night (via a second mergerfs mount without the SSD) if the SSD is getting full.
I have a fourth HDD that uses SnapRAID to protect the SSD and the other HDDs.
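A hedged sketch of what the /etc/fstab entry for such a pool could look like (paths illustrative; mergerfs's `ff` create policy writes to the first listed branch with enough free space, so listing the SSD first sends new files there):

```
# SSD first: new writes land there until minfreespace is reached,
# then spill over to the HDDs; moveonenospc retries on a full branch.
/mnt/ssd:/mnt/hdd1:/mnt/hdd2:/mnt/hdd3 /mnt/pool fuse.mergerfs category.create=ff,minfreespace=50G,moveonenospc=true,allow_other 0 0
```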
QuiEgo
Also: https://github.com/bexem/PlexCache, which moves files between disks based on their state in a Plex DB.