
Working with Files Is Hard (2019)


122 comments · January 23, 2025

continuational

> Pillai et al., OSDI’14 looked at a bunch of software that writes to files, including things we'd hope write to files safely, like databases and version control systems: Leveldb, LMDB, GDBM, HSQLDB, Sqlite, PostgreSQL, Git, Mercurial, HDFS, Zookeeper. They then wrote a static analysis tool that can find incorrect usage of the file API, things like incorrectly assuming that operations that aren't atomic are actually atomic, incorrectly assuming that operations that can be re-ordered will execute in program order, etc.

> When they did this, they found that every single piece of software they tested except for SQLite in one particular mode had at least one bug. This isn't a knock on the developers of this software or the software -- the programmers who work on things like Leveldb, LMDB, etc., know more about filesystems than the vast majority of programmers and the software has more rigorous tests than most software. But they still can't use files safely every time! A natural follow-up to this is the question: why is the file API so hard to use that even experts make mistakes?

Retr0id

> why is the file API so hard to use that even experts make mistakes?

I think the short answer is that the APIs are bad. The POSIX fs APIs and associated semantics are so deeply entrenched in the software ecosystem (both at the OS level, and at the application level) that it's hard to move away from them.

huntaub

I take a different view on this. IMO the tricks that existing file systems play to get more performance (specifically around ordering and atomicity) make it extra hard for developers to reason about. Obviously, you can't do anything about fsync dropping error codes, but some of these failure modes just aren't possible over file systems like NFS due to protocol semantics.

IgorPartola

Not only that, but the POSIX file API also assumes that NFS is a thing but NFS breaks half the important guarantees of a file system. I don’t know if it’s a baby and bath water situation, but NFS just seems like a whole bunch of problems. It’s like having eval in a programming language.

AutistiCoder

The whole software ecosystem is built on bubblegum, tape, and prayers.

huntaub

What aspects of NFS do you think break half of the important guarantees of a file system?

__loam

POSIX is also so old and essential that it's hard to imagine an alternative.

jcranmer

Not really, there have been lots of APIs that have improved on the POSIX model.

The kind of model I prefer is something based on atomicity. Most applications can get by with file-level atomicity: make whole-file reads and writes atomic with a copy-on-write model, and you can eliminate whole classes of filesystem bugs pretty quickly. (Note that something like writeFileAtomic is already a common primitive in many high-level filesystem APIs, and it's something that's easily buildable with regular POSIX APIs; see the sketch below.) For cases like logging, you can extend the model slightly with atomic appends, where the only kind of write allowed is to atomically append a chunk of data to the file (so readers can only possibly see either no new data or the entire chunk of data at once).
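
For reference, here's a minimal sketch of the replace-via-rename pattern that writeFileAtomic-style helpers typically build out of plain POSIX calls. Error handling is abbreviated, and for simplicity it assumes `path` lives in the current working directory (so "." is the directory to fsync):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: write a full new copy, flush it, then atomically swap it
   into place. Partial-write handling is omitted for brevity. */
int write_file_atomic(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);  /* temp file in same dir */

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;

    if (write(fd, data, len) != (ssize_t)len) { close(fd); return -1; }
    if (fsync(fd) != 0) { close(fd); return -1; } /* data durable first */
    if (close(fd) != 0) return -1;

    if (rename(tmp, path) != 0) return -1;        /* the atomic swap */

    /* fsync the containing directory so the rename itself is durable */
    int dfd = open(".", O_RDONLY);                /* assumes path is in cwd */
    if (dfd < 0) return -1;
    int r = fsync(dfd);
    close(dfd);
    return r;
}
```

Readers see either the old file or the new one in full; the directory fsync at the end is the step most hand-rolled versions forget.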

I'm less knowledgeable about the way DBs interact with the filesystem, but there the solution is probably ditching the concept of the file stream entirely and just treating files as a sparse map of offsets to blocks, which can be atomically updated. (My understanding is that DBs basically do this already, except that "atomically updated" is difficult with the current APIs).

emmelaich

Some of the problems transcend POSIX. Someone I know maintains a non-relational db on IBM mainframes. When diving into a data issue, he was gob-smacked to find out that sync'd writes did not necessarily make it to the disk. They were cached in the drive memory and (I think) the disk controller memory. If all failed, data was lost.

MisterTea

I use Plan 9 regularly and while its Unix heritage is there, it most certainly isn't Unix and completely does away with POSIX.

timewizard

> POSIX fs APIs and associated semantics

Well I think that's the actual problem. POSIX gives you an abstract interface but it essentially does not enforce any particular semantics on those interfaces.

dkarl

> why is the file API so hard to use that even experts make mistakes?

Sounds like Worse Is Better™: operating systems that tried to present safer abstractions were at a disadvantage compared to operating systems that shipped whatever was easiest to implement.

(I'm not an expert in the history, just observing the surface similarity and hoping someone with more knowledge can substantiate it.)

ncruces

POSIX file locking is clearly modeled around whatever was simplest to implement, although it makes no sense at all.

tjalfi

Jeremy Allison tracked down why POSIX standardized this behavior[0].

> The reason is historical and reflects a flaw in the POSIX standards process, in my opinion, one that hopefully won't be repeated in the future. I finally tracked down why this insane behavior was standardized by the POSIX committee by talking to long-time BSD hacker and POSIX standards committee member Kirk McKusick (he of the BSD daemon artwork). As he recalls, AT&T brought the current behavior to the standards committee as a proposal for byte-range locking, as this was how their current code implementation worked. The committee asked other ISVs if this was how locking should be done. The ISVs who cared about byte range locking were the large database vendors such as Oracle, Sybase and Informix (at the time). All of these companies did their own byte range locking within their own applications, none of them depended on or needed the underlying operating system to provide locking services for them. So their unanimous answer was "we don't care". In the absence of any strong negative feedback on a proposal, the committee added it "as-is", and took as the desired behavior the specifics of the first implementation, the brain-dead one from AT&T.

[0] https://www.samba.org/samba/news/articles/low_point/tale_two...
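
One concrete example of the "brain-dead" semantics the linked article describes: POSIX fcntl() record locks are owned by the process, not the descriptor, so closing any fd that refers to the file silently drops every lock the process holds on it. A minimal sketch of the trap (the function name is hypothetical, error handling abbreviated):

```c
#include <fcntl.h>
#include <unistd.h>

/* fcntl() locks are per-process: closing ANY fd for the file drops
   all of the process's locks on it, even locks taken via another fd. */
int lock_then_lose_it(const char *path)
{
    int fd = open(path, O_RDWR);
    struct flock fl = {
        .l_type = F_WRLCK, .l_whence = SEEK_SET,
        .l_start = 0, .l_len = 100,   /* lock the first 100 bytes */
    };
    if (fcntl(fd, F_SETLKW, &fl) != 0) { close(fd); return -1; }

    /* Some library routine opens and closes the same file... */
    int fd2 = open(path, O_RDONLY);
    close(fd2);   /* ...and the lock taken on fd is now gone. */

    return fd;    /* caller still believes the range is locked */
}
```

This is exactly the hazard for libraries: any innocent open/close of the file, anywhere in the process, invalidates the locks.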

trinix912

> Sounds like Worse Is Better™: operating systems that tried to present safer abstractions were at a disadvantage compared to operating systems that shipped whatever was easiest to implement.

What about the Windows API? Windows is a pretty successful OS with a less leaky FS abstraction. I know it's a totally different deal than POSIX (files can't be devices, etc.) and the FS function calls require a seemingly absurd number of arguments, but it does seem safer and clearer about what's going to happen.

thfuran

Why does that seem more likely than the file system API simply not having been a major factor in the success or failure of OSes?

kccqzy

By the way, LMDB's main developer Howard Chu responded to the paper. He said,

> They report on a single "vulnerability" in LMDB, in which LMDB depends on the atomicity of a single sector 106-byte write for its transaction commit semantics. Their claim is that not all storage devices may guarantee the atomicity of such a write. While I myself filed an ITS on this very topic a year ago, http://www.openldap.org/its/index.cgi/Incoming?id=7668 the reality is that all storage devices made in the past 20+ years actually do guarantee atomicity of single-sector writes. You would have to rewind back to 30 years at least, to find a HDD where this is not true.

So this is a case where the programmers of LMDB thought about the "incorrect" use and decided that it was a calculated risk to take because the incorrectness does not manifest on any recent hardware.

This is analogous to the case where someone complains some C code has undefined behavior, and the developer responds by saying they have manually checked the generated assembler to make sure the assembler is correct at the ISA level even though the C code is wrong at the abstract C machine level, and they commit to checking this in the future.

Furthermore both the LMDB issue and the Postgres issue are noted in the paper to be previously known. The paper author states that Postgres documents this issue. The paper mentions pg_control so I'm guessing it's referring to this known issue here: https://wiki.postgresql.org/wiki/Full_page_writes

> We rely on 512 byte blocks (historical sector size of spinning disks) to be power-loss atomic, when we overwrite the "control file" at checkpoints.
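
For illustration, here's a minimal sketch of the pattern both LMDB and Postgres are described as relying on: a control record small enough to fit in one 512-byte sector is overwritten in place, on the assumption that single-sector writes are power-loss atomic. The record layout and names are hypothetical:

```c
#include <string.h>
#include <unistd.h>

/* Hypothetical control record; the whole thing fits in one sector. */
struct control_record {
    unsigned long txn_id;     /* last committed transaction */
    unsigned long root_page;  /* current root of the tree */
};

int commit_control(int fd, const struct control_record *rec)
{
    char sector[512];         /* aligned at offset 0; never straddles
                                 a sector boundary */
    memset(sector, 0, sizeof sector);
    memcpy(sector, rec, sizeof *rec);

    if (pwrite(fd, sector, sizeof sector, 0) != (ssize_t)sizeof sector)
        return -1;
    return fdatasync(fd);     /* flush through the page cache */
}
```

After a crash, the record is either entirely old or entirely new; that is the whole bet, and it is exactly the assumption the paper flags.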

yuboyt

This assumption was wrong for Intel Optane memory. Power loss could cut the data stream anywhere in the middle. (Note: the DIMM nonvolatile memory version)

nyrikki

Consumer Optane was not "power-loss protected"; that is very different from not honoring a requested synchronous write.

The crash-consistency problem is very different from the durability problem of real synchronous writes. Some storage devices will lie about sync writes, sometimes hoping that a backup battery will allow them to complete those writes.

System crashes are inevitable, so use things like write-ahead logs depending on need. No storage API will get rid of all system crashes, and yes, even Apple games the system by disabling real sync writes, so that will always be a battle.

lmm

Really? A 512-byte sector could get partially written? Did anyone actually observe this, or was it just a case of Intel CYA saying they didn't guarantee anything?

senderista

This is called "Atomic Write Unit Power Fail" (AWUPF) in the NVMe spec.

Joker_vD

> the developer responds by saying they have manually checked the generated assembler to make sure the assembler is correct at the ISA level even though the C code is wrong at the abstract C machine level, and they commit to checking this in the future.

Yeah, sounds about right for quite a lot of C programmers, except for the "they commit to checking this in the future" part. I've seen responses like "well, don't upgrade your compiler; I'm gonna put 'Clang >= 9.0 is unsupported' in the README as a fix".

eviks

> why is the file API so hard to use that even experts make mistakes?

Because it was poorly designed, and there is high resistance to change, so those design mistakes from decades ago continue to bite.

liontwist

Something this misses is that all programs make assumptions, for example: “my process is the only one writing this file because it created it.”

Evaluating correctness without that consideration is too high of a bar.

The bar for safety and correctness cannot be “impossible to misuse”.

nickelpro

And yet all of these systems basically work for day-to-day operations, and fail only under obscure error conditions.

It is totally acceptable for applications to say "I do not support X conditions". Swap out the file halfway through a read? Sorry, don't support that. Remove power to the storage device in the middle of a sync operation? Sorry, don't support that.

For vital applications, for example databases, this is a known problem and the risks of the API are accounted for. Other applications don't have nearly that level of risk associated with them. My music tagging app doesn't need to be resistant to the SSD being struck by lightning.

It is perfectly acceptable to design APIs for 95% of use cases and leave extremely difficult leaks to be solved by the small number of practitioners that really need to solve those leaks.

belter

"PostgreSQL vs. fsync - How is it possible that PostgreSQL used fsync incorrectly for 20 years" - https://youtu.be/1VWIGBQLtxo

praptak

Ext4 actually special-handles the rename trick so that it works even if it should not:

"If auto_da_alloc is enabled, ext4 will detect the replace-via-rename and replace-via-truncate patterns and [basically save your ass]"[0]

[0] https://docs.kernel.org/admin-guide/ext4.html
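
For contrast with the safe sequence sketched earlier in the thread, this is roughly the fsync-less pattern that auto_da_alloc detects and rescues (file names are hypothetical, return values deliberately unchecked since this is the anti-pattern):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Replace-via-rename with the fsync omitted. Without the auto_da_alloc
   heuristic, a crash after the rename but before writeback can leave
   a zero-length file at `path`. */
void replace_without_fsync(const char *path, const char *data, size_t len)
{
    int fd = open("config.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    (void)write(fd, data, len);
    close(fd);                   /* note: no fsync(fd) here */
    rename("config.tmp", path);  /* ext4 flushes config.tmp first if
                                    auto_da_alloc is enabled */
}
```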

Retr0id

> they found that every single piece of software they tested except for SQLite in one particular mode had at least one bug.

This is why whenever I need to persist any kind of state to disk, SQLite is the first tool I reach for. Filesystem APIs are scary, but SQLite is well-behaved.

Of course, it doesn't always make sense to do that, like the Dropbox use case.

nodamage

Before becoming too overconfident in SQLite, note that Rebello et al. (https://ramalagappan.github.io/pdfs/papers/cuttlefs.pdf) tested SQLite (along with Redis, LMDB, LevelDB, and PostgreSQL) using a proxy file system to simulate fsync errors, and found that none of them handled all failure conditions safely.

In practice I believe I've seen SQLite databases corrupted due to what I suspect are two main causes:

1. The device powering off during the middle of a write, and

2. The device running out of space during the middle of a write.

justin66

I remembered Howard Chu commenting on that paper...

https://lists.openldap.org/hyperkitty/list/openldap-devel@op...

I'm pretty sure that's not where I originally saw his comments. I remember his criticisms being a little more pointed. Although I guess "This is a bunch of academic speculation, with a total absence of real world modeling to validate the failure scenarios they presented" is pretty pointed.

ablob

I believe it is impossible to prevent data loss if the device powers off during a write. The point about corruption still stands, and it appears to be handled correctly from what I skimmed of the paper. Nice reference.

lmm

> I believe it is impossible to prevent data loss if the device powers off during a write.

Most devices write sectors atomically, and so you can build a system on top of that that does not lose committed data. (Of course if the device powers off during a write then you can lose the uncommitted data you were trying to write, but the point is you don't ever have corruption, you get either the data that was there before the write attempt or the data that is there after).

SoftTalker

Only way I know of is if you have e.g. a RAID controller with a battery-backed write cache. Even that may not be 100% reliable but it's the closest I know of. Of course that's not a software solution at all.

wmf

If the file system uses strict COW it should survive that situation.

ziddoap

>SQLite is the first tool I reach for.

Hopefully in whichever particular mode is referenced!

Retr0id

WAL mode, yes!
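
For reference, a minimal sketch of what "reaching for SQLite in WAL mode" can look like through the public C API; the database filename and helper name are placeholders:

```c
#include <sqlite3.h>

/* Persist state through SQLite in WAL mode rather than hand-rolled
   file writes. "state.db" is a placeholder name. */
int open_state_db(sqlite3 **db)
{
    if (sqlite3_open("state.db", db) != SQLITE_OK) return -1;

    /* WAL journaling + NORMAL sync is the common durable-enough default */
    if (sqlite3_exec(*db, "PRAGMA journal_mode=WAL;"
                          "PRAGMA synchronous=NORMAL;",
                     NULL, NULL, NULL) != SQLITE_OK) {
        sqlite3_close(*db);
        return -1;
    }
    return 0;
}
```

All the rename/fsync ordering subtleties discussed upthread then become SQLite's problem instead of yours.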

eatonphil

Do you turn on SQLite checksumming, or how do you feel comfortable that data on disk keeps its integrity?

edgarvaldes

As per HN headlines, files are hard, git is hard, regex is hard, time zones are hard, money as a data type is hard, hiring is hard, people are hard.

I wonder what is easy.

paulddraper

Complaining :)

D-Coder

Selection bias. The stuff that always works doesn't get posted here.

ssivark

To reuse another HN headline, all this is probably because no one really cares X-)

gavinhoward

I wonder if, in the Pillai paper, they tested the SQLite rollback option with the default synchronous setting [1] (`NORMAL`, I believe) or with `EXTRA`. I'm thinking that it was probably the default.

I kinda think, and I could be wrong, that SQLite rollback would not have any vulnerabilities with `synchronous=EXTRA` (and `fullfsync=F_FULLFSYNC` on macOS [2]).

[1]: https://www.sqlite.org/pragma.html#pragma_synchronous

[2]: https://www.sqlite.org/pragma.html#pragma_fullfsync
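
A sketch of the settings being speculated about, issued through the same C API (assumes an already-open `sqlite3 *db` handle; whether this closes every gap is exactly the open question):

```c
#include <sqlite3.h>

/* Maximally paranoid rollback-journal settings per the pragmas above. */
int harden(sqlite3 *db)
{
    return sqlite3_exec(db,
        "PRAGMA journal_mode=DELETE;"  /* classic rollback journal */
        "PRAGMA synchronous=EXTRA;"    /* extra syncs of dir + journal */
        "PRAGMA fullfsync=ON;",        /* F_FULLFSYNC, macOS only */
        NULL, NULL, NULL) == SQLITE_OK ? 0 : -1;
}
```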

wruza

No mention of NTFS or Windows in the article, for those interested.

pjdesno

Although the conference this was presented at is platform-agnostic, the author is an expert on Linux, and the motivation for the talk is Linux-specific (Dropbox dropping support for non-ext4 file systems).

The post supports its points with extensive references to prior research - research which hasn't been done in the Microsoft environment. For various reasons (NDAs, etc.) it's likely that no such research will ever be published, either. Basically it's impossible to write a post this detailed about safety issues in Microsoft file systems unless you work there. If you did, it would still take you a year or two of full-time work to do the background stuff, and when you finished, marketing and/or legal wouldn't let you actually tell anyone about it.

wmf

Universities can get Windows source code under NDA and do research on it but nobody really cares about such work.

pjdesno

"Getting windows source code under NDA" doesn't necessarily mean "can do research on it".

If you can't publish it, it's not research. If the source code is under NDA, then Microsoft gets the final say about whether you can publish or not, and if the result is embarrassing to Microsoft, I'm guessing it's "or not".

yahayahya

Is that because the Windows APIs are better? Or because businesses build their embedded systems/servers with Windows?

p_ing

Certainly depends on which APIs you ultimately use as a developer, right? If it is .NET, they're super simple, and you can get IOCP for "free" and non-blocking async I/O is quite easy to implement.

I can't say the Win32 File API is "pretty", but it's also an abstraction, like the .NET File Class is. And if you touch the NT API, you're naughty.

On Linux and macOS you use the same API, just the backends are different if you want async (epoll [blocking async] on Linux, kqueue on macOS).

pjc50

The Windows APIs are certainly slower. Apart from IOCP I don't think they're that much different? Oh, and mandatory locking on executable images which are loaded, which has... advantages and disadvantages (it's why Windows keeps demanding restarts).

wruza

I doubt that, was just curious how it might compare in the article.

ryao

> On Linux ZFS, it appears that there's a code path designed to do the right thing, but CPU usage spikes and the system may hang or become unusable.

ZFS fsync will not fail, although it could end up waiting forever when a pool faults due to hardware failures:

https://papers.freebsd.org/2024/asiabsdcon/norris_openzfs-fs...

ein0p

ZFS on Linux unfortunately has a long standing bug which makes it unusable under load: https://github.com/openzfs/zfs/issues/9130. 5.5 years old, nobody knows the root cause. Symptoms: under load (such as what one or two large concurrent rsyncs may generate over a fast network - that's how I encountered it) the pool begins to crap out and shows integrity errors and in some cases loses data (for some users - it never lost data for me). So if you do any high rate copies you _must_ hash-compare source and destination. This needs to be done after all the writes are completed to the zpool, because concurrent high rate reads seem to exacerbate the issue. Once the data is at rest, things seem to be fine. Low levels of load are also fine.


ryao

There are actually several distinct issues being reported there. I replied to everyone who posted backtraces and a few who did not:

https://github.com/openzfs/zfs/issues/9130#issuecomment-2614...

That said, there are many others who stress ZFS on a regular basis and ZFS handles the stress fine. I do not doubt that there are bugs in the code, but I feel like there are other things at play in that report. Messages saying that the txg_sync thread has hung for 120 seconds typically indicate that disk IO is running slowly due to reasons external to ZFS (and sometimes, reasons internal to ZFS, such as data deduplication).

I will try to help everyone in that issue. Thanks for bringing that to my attention. I have been less active over the past few years, so I was not aware of that mega issue.

ein0p

Regarding your comment - seems unlikely that it "affects Ubuntu less". I don't see why that would be the case - it's not like Ubuntu runs a heavily customized kernel or anything. And thanks for taking a look - ZFS is just the way things should be in filesystems and logical volume management, I do wish I could stop doing hash compares after large, high throughput copies and just trust it to do what it was designed to do.

einpoklum

The article wraps up with this salient point:

> In conclusion, computers don't work (but I guess you already know this...

paulddraper

They work.

Just not all the time.

AutistiCoder

it's a good thing I'm a Web developer.

The closest I come to working with files is localStorage, but that's thread-safe.

jheriko

this whole thing is a story about using outdated stuff in a shitty ecosystem.

it's not a real problem for most modern developers.

pwrite? wtf?

not one mention of fopen.

granted, some of the fine-detail discussion is interesting, but it hasn't made practical sense since about 1990.

rep_lodsb

The article is about the hardware and kernel level APIs used for interacting with storage. Everything else is by necessity built on top of that interface.

"fopen"? That is outdated stuff from a shitty ecosystem, and how do you think it's implemented?
