
Hitting Peak File IO Performance with Zig

Veserv

7 GB/s at a 512 KB block size is only ~14,000 IO/s, which is a whopping ~70 us/IO. That is a trivial rate even for synchronous IO. You should only need one in-flight operation (prefetch 1) to overlap the memory copy with the IO (so the two don't serialize) and still get the full IO bandwidth.
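
Spelling the arithmetic out:

  7 GB/s / 512 KB per IO  ~= 7e9 B/s / 5.2e5 B  ~= 13,000-14,000 IO/s
  1 / ~13,500 IO/s        ~= 70-75 us per IO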

Their referenced previous post [1] demonstrates ~240,000 IO/s when using basic settings. Even that seems pretty low, but is still more than enough to completely trivialize this benchmark and saturate the hardware IO with zero tuning.

[1] https://steelcake.com/blog/comparing-io-uring/

laserbeam

Zig is currently undergoing lots of breaking changes in its IO API and implementation. Any post about IO in Zig should also mention the Zig version used.

I see it's 0.15.1 in the .zon file, but that should also be stated somewhere in the post itself.

database64128

I see you use a hard-coded constant ALIGN = 512. Many NVMe drives actually allow you to raise the logical block size to 4096 by re-formatting (nvme-format(1)) the drive.
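
Either way, rather than hard-coding it, you can ask the kernel at runtime: the BLKSSZGET and BLKPBSZGET ioctls report a block device's logical and physical block sizes. A minimal C sketch (the device path is whatever you're benchmarking against):

  /* query_bs.c -- print a block device's logical and physical block size,
     e.g. ./query_bs /dev/nvme0n1 (needs read permission on the device) */
  #include <fcntl.h>
  #include <linux/fs.h>      /* BLKSSZGET, BLKPBSZGET */
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      if (argc != 2) {
          fprintf(stderr, "usage: %s /dev/<blockdev>\n", argv[0]);
          return 1;
      }
      int fd = open(argv[1], O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      int logical = 0;
      unsigned int physical = 0;
      if (ioctl(fd, BLKSSZGET, &logical) < 0) { perror("BLKSSZGET"); return 1; }
      if (ioctl(fd, BLKPBSZGET, &physical) < 0) { perror("BLKPBSZGET"); return 1; }

      /* O_DIRECT buffers, offsets and lengths must be multiples of the logical size. */
      printf("logical block size:  %d\n", logical);
      printf("physical block size: %u\n", physical);
      close(fd);
      return 0;
  }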

HippoBaro

It’s really the hardware block size that matters in this case (direct I/O). That value is a property of the hardware and can’t be changed.

In some situations, the “logical” block size can differ. For example, buffered writes use the page cache, which operates in PAGE_SIZE blocks (usually 4K). Or your RAID stripe size might be misconfigured, stuff like that. Otherwise they should be equal for best outcomes.

In general, we want it to be as small as possible!
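
For reference, direct I/O is where the alignment rules actually bite: with O_DIRECT the buffer address, file offset, and transfer length all have to be multiples of the logical block size. A minimal C sketch (the 4096 and the device path are placeholders; query the real value rather than guessing):

  /* direct_read.c -- a single aligned O_DIRECT read (sketch, not tuned) */
  #define _GNU_SOURCE                     /* for O_DIRECT */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      const size_t block = 4096;          /* logical block size */
      const size_t len   = 64 * block;    /* length: multiple of block */

      int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
      if (fd < 0) { perror("open"); return 1; }

      void *buf = NULL;                   /* address: multiple of block */
      if (posix_memalign(&buf, block, len) != 0) {
          fprintf(stderr, "posix_memalign failed\n");
          return 1;
      }

      ssize_t n = pread(fd, buf, len, 0); /* offset: multiple of block */
      if (n < 0) { perror("pread"); return 1; }
      printf("read %zd bytes\n", n);

      free(buf);
      close(fd);
      return 0;
  }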

wtallis

> It’s really the hardware block size that matters in this case (direct I/O). That value is a property of the hardware and can’t be changed.

NVMe drives have at least four "hardware block sizes":

There's the LBA size that determines what size IO transfers the OS must exchange with the drive; that can be re-configured on some drives, usually with 512B and 4kB as the options.

There's the underlying page size of the NAND flash, which is more or less the granularity of individual read and write operations, and is usually something like 16kB or more.

There's the underlying erase block size of the NAND flash that comes into play when overwriting data or doing wear leveling, and is usually several MB.

And there's the granularity of the SSD controller's Flash Translation Layer, which determines the smallest write the SSD can handle without doing a read-modify-write cycle: usually 4kB regardless of the LBA format selected, but on some special-purpose drives it can be 32kB or more.

And then there's an assortment of hints the drive can provide to the OS about preferred granularity and alignment for best performance, or requirements for atomic operations. These values will generally be a consequence of the above values, and possibly also influenced by the stripe and parity choices the SSD vendor made.
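
For what it's worth, the kernel exposes its view of those hints in sysfs, so you can see what it derived without parsing NVMe identify structures yourself. A rough C sketch (the device name is a placeholder, and not every field is meaningful on every drive):

  /* io_hints.c -- dump the block-layer geometry hints for one device */
  #include <stdio.h>

  static void show(const char *name)
  {
      char path[256], line[64];
      snprintf(path, sizeof path, "/sys/block/nvme0n1/queue/%s", name);
      FILE *f = fopen(path, "r");
      if (f && fgets(line, sizeof line, f))
          printf("%-22s %s", name, line);  /* values are newline-terminated */
      if (f)
          fclose(f);
  }

  int main(void)
  {
      show("logical_block_size");   /* LBA size the OS must use */
      show("physical_block_size");  /* what the drive reports as its real sector */
      show("minimum_io_size");      /* preferred minimum granularity */
      show("optimal_io_size");      /* preferred I/O size hint (0 if none given) */
      show("discard_granularity");
      return 0;
  }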

loeg

I've run into (specialized) flash hardware with 512 kB for that 3rd size.

imtringued

Why would you want the block size to be as small as possible? You only benefit from that with very small files, so the sweet spot is somewhere between "as small as possible" and "a small multiple of the hardware block size".

If you have bigger files, then having bigger blocks means less fixed overhead from syscalls and NVMe/SATA requests.

If your native device block size is 4 KiB and you fetch 512-byte blocks, you need storage-side RAM to hold the smaller blocks and you have to address each block independently. Meanwhile, if you go bigger than the device block size, you end up with fewer requests and syscalls. If the requested block size turns out to be too large for the device, the OS can split your large request into smaller, device-appropriate requests, since the OS knows the hardware characteristics.

The most difficult case to optimize is issuing many parallel requests to the storage device using asynchronous file IO for latency hiding. In that case, knowing the device's exact block size is important: you are IOPS-bottlenecked, and a block size closer to what the device supports natively means fewer device-level operations per request.
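
To make that last part concrete, "many parallel requests for latency hiding" mostly just means keeping a queue depth of N in flight instead of one. A rough liburing sketch in C (QD and BLOCK are made-up values, most error handling omitted, link with -luring):

  /* qd_read.c -- keep QD reads in flight at once */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <liburing.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  #define QD    32
  #define BLOCK (128 * 1024)

  int main(int argc, char **argv)
  {
      if (argc != 2) return 1;
      int fd = open(argv[1], O_RDONLY | O_DIRECT);
      if (fd < 0) { perror("open"); return 1; }

      struct io_uring ring;
      io_uring_queue_init(QD, &ring, 0);

      void *bufs[QD];
      for (int i = 0; i < QD; i++)
          if (posix_memalign(&bufs[i], 4096, BLOCK) != 0) return 1;

      /* Queue QD reads at consecutive offsets, submit them with one syscall. */
      for (int i = 0; i < QD; i++) {
          struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
          io_uring_prep_read(sqe, fd, bufs[i], BLOCK, (unsigned long long)i * BLOCK);
      }
      io_uring_submit(&ring);

      /* Reap completions; a real loop would resubmit to keep the queue full. */
      for (int i = 0; i < QD; i++) {
          struct io_uring_cqe *cqe;
          io_uring_wait_cqe(&ring, &cqe);
          if (cqe->res < 0)
              fprintf(stderr, "read failed: %d\n", cqe->res);
          io_uring_cqe_seen(&ring, cqe);
      }

      io_uring_queue_exit(&ring);
      close(fd);
      return 0;
  }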

nesarkvechnep

It would be interesting to see how an implementation using FreeBSD's AIO would compare.
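
For reference, FreeBSD's AIO is the POSIX aio(4) interface implemented in the kernel (glibc's version on Linux is a userspace thread pool), so the comparison code would look roughly like this. Single-read sketch, most error handling trimmed:

  /* aio_read_one.c -- one asynchronous read via POSIX AIO
     (link with -lrt on Linux; not needed on FreeBSD) */
  #include <aio.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      if (argc != 2) return 1;
      int fd = open(argv[1], O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      char *buf = malloc(128 * 1024);
      if (!buf) return 1;

      struct aiocb cb;
      memset(&cb, 0, sizeof cb);
      cb.aio_fildes = fd;
      cb.aio_buf    = buf;
      cb.aio_nbytes = 128 * 1024;
      cb.aio_offset = 0;

      if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

      /* Do other work here, then wait for this one request to complete. */
      const struct aiocb *list[1] = { &cb };
      aio_suspend(list, 1, NULL);

      ssize_t n = aio_return(&cb);
      printf("read %zd bytes\n", n);

      free(buf);
      close(fd);
      return 0;
  }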

marginalia_nu

I'm not very familiar with Zig and was kind of struggling to follow the code, so maybe that's why I couldn't find where this was being set up. But in case it isn't: be sure to also use registered file descriptors with io_uring, as they make a fairly big difference.
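
For reference, with liburing this is a small delta on a normal submission loop: register the fd table once up front, then per request pass the index into that table and flag the SQE as a fixed-file op. Minimal C sketch (whether the post's Zig wrapper exposes this, I can't say; link with -luring):

  /* fixed_file_read.c -- io_uring read through a registered file descriptor */
  #include <fcntl.h>
  #include <liburing.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      if (argc != 2) return 1;
      int fd = open(argv[1], O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      struct io_uring ring;
      io_uring_queue_init(8, &ring, 0);

      /* Register the fd table once; skips per-request fd lookup/refcounting. */
      int fds[1] = { fd };
      io_uring_register_files(&ring, fds, 1);

      char *buf = malloc(64 * 1024);
      if (!buf) return 1;

      struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
      /* Pass the index into the registered table (0), not the raw fd,
         and mark the SQE as using a fixed file. */
      io_uring_prep_read(sqe, 0, buf, 64 * 1024, 0);
      sqe->flags |= IOSQE_FIXED_FILE;
      io_uring_submit(&ring);

      struct io_uring_cqe *cqe;
      io_uring_wait_cqe(&ring, &cqe);
      printf("res = %d\n", cqe->res);
      io_uring_cqe_seen(&ring, cqe);

      io_uring_queue_exit(&ring);
      close(fd);
      free(buf);
      return 0;
  }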

throwawaymaths

Why not use the page allocator to get aligned memory instead of overallocating?
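
To spell out what I mean: as far as I can tell the page allocator is mmap-backed, so it already hands back page-aligned memory, which covers any 512/4096 O_DIRECT alignment without the usual overallocate-and-round-up trick. In C terms the two approaches are roughly this (ALIGN and the size are arbitrary):

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define ALIGN 4096

  int main(void)
  {
      size_t len = 1 << 20;

      /* Option 1: overallocate and round the pointer up to the alignment.
         Wastes up to ALIGN-1 bytes and you must keep `raw` around for free(). */
      void *raw = malloc(len + ALIGN - 1);
      if (!raw) return 1;
      void *aligned = (void *)(((uintptr_t)raw + ALIGN - 1) & ~(uintptr_t)(ALIGN - 1));

      /* Option 2: ask the allocator for aligned memory directly. */
      void *direct = NULL;
      if (posix_memalign(&direct, ALIGN, len) != 0) return 1;

      printf("%p vs %p\n", aligned, direct);

      free(raw);       /* not free(aligned)! */
      free(direct);
      return 0;
  }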