
PSA: SQLite WAL checksums fail silently and may lose data

teraflop

> The checksums in WAL are likely not meant to check for random page corruption in the middle; maybe they’re just to check if the last write of a frame was fsynced properly or not?

This is the correct explanation. The purpose is to detect partial writes, not to detect arbitrary data corruption. If detecting corruption was the goal, then checksumming the WAL without also checksumming the database itself would be fairly pointless.

In fact, it's not accurate to say "SQLite does not do checksums by default, but it has checksums in WAL mode." SQLite always uses checksums for its journal, regardless of whether that's a rollback journal or a write-ahead log. [1]

For the purpose of tolerating and recovering from crashes/power failures, writes to the database file itself are effectively idempotent. It doesn't matter if only a subset of the DB writes are persisted before a crash, and you don't need to know which ones succeeded, because you can just roll all of them forward or backward (depending on the mode). But for the journal itself, distinguishing partial journal entries from complete ones matters.

No matter what order the disk physically writes out pages, the instant when the checksum matches the data is the instant at which the transaction can be unambiguously said to commit.

[1]: https://www.sqlite.org/fileformat.html

kentonv

Exactly. To put it another way:

Imagine the power goes out while sqlite is in the middle of writing a transaction to the WAL (before the write has been confirmed to the application). What do you want to happen when power comes back, and you reload the database?

If the transaction was fully written, then you'd probably like to keep it. But if it was not complete, you want to roll it back.

How does sqlite know if the transaction was complete? It needs to see two things:

1. The transaction ends with a commit frame, indicating the application did in fact perform a `COMMIT TRANSACTION`.

2. All the checksums are correct, indicating the data was fully synced to disk when it was committed.

If the checksums are wrong, the assumption is that the transaction wasn't fully written out. Therefore, it should be rolled back. That's exactly what sqlite does.

This is not "data loss", because the transaction was not ever fully committed. The power failure happened before the commit was confirmed to the application, so there's no way anyone should have expected that the transaction is durable.

The checksum is NOT intended to detect when the data was corrupted by some other means, like damage to the disk or a buggy app overwriting bytes. Myriad other mechanisms should be protecting against those already, and sqlite is assuming those other mechanisms are working, because if not, there's very little sqlite can do about it.
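kentonv's two conditions can be sketched as a toy recovery loop. This is a model of the idea only, not SQLite's actual WAL format: the frame layout, the CRC choice, and all names here are invented for illustration.

```python
# Toy model of WAL recovery: each frame carries a running checksum
# chained from the previous frame, and commit frames mark transaction
# boundaries. On reopen, we replay frames until the chain breaks,
# keeping only data up to the last valid commit frame.
import zlib

def make_frame(prev_cksum: int, payload: bytes, is_commit: bool) -> dict:
    cksum = zlib.crc32(payload, prev_cksum)  # chain from previous frame
    return {"payload": payload, "commit": is_commit, "cksum": cksum}

def recover(frames: list[dict], seed: int = 0) -> list[bytes]:
    """Return payloads of fully committed transactions only."""
    running, valid, pending = seed, [], []
    for f in frames:
        running = zlib.crc32(f["payload"], running)
        if running != f["cksum"]:
            break            # torn/partial write: drop this and all later frames
        pending.append(f["payload"])
        if f["commit"]:      # commit frame seen AND checksums held: keep it
            valid.extend(pending)
            pending = []
    return valid

# Build two committed transactions, then corrupt a frame in the second one.
frames = []
for payload, commit in [(b"t1-a", False), (b"t1-commit", True),
                        (b"t2-a", False), (b"t2-commit", True)]:
    frames.append(make_frame(frames[-1]["cksum"] if frames else 0,
                             payload, commit))
frames[2]["payload"] = b"t2-X"   # simulate a torn write inside transaction 2

assert recover(frames) == [b"t1-a", b"t1-commit"]  # t2 rolled back
```

Note that a transaction missing its commit frame is also rolled back (its frames stay in `pending`), which is condition 1 above.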

malone

Why is the commit frame not sufficient to determine whether the transaction was fully written or not? Is there a scenario where the commit frame is fsynced to disk but the preceding data isn't?

adambb

The disk controller may decide to write out blocks in a different order than the logical layout in the log file itself, and be interrupted before completing this work.

hinkley

For instance, running on ZFS or one of its peers.

zaarn

ZFS isn’t viable for SQLite unless you turn off fsyncs in ZFS, because otherwise you will have the same experience I had for years: SQLite may randomly hang for up to a few minutes with no visible cause if there aren’t enough write txgs filling up in the background. If your app depends on SQLite, it’ll randomly die.

Btrfs is a better choice for sqlite, haven’t seen that issue there.

jandrewrogers

Apropos this use case, ZFS is usually not recommended for databases. Competent database storage engines have their own strong corruption detection mechanisms regardless. What filesystems in the wild typically provide for this is weaker than what is advisable for a database, so databases should bring their own implementation.

lxgr

I believe it's also because of this (from https://www.sqlite.org/wal.html):

> [...] The checkpoint does not normally truncate the WAL file (unless the journal_size_limit pragma is set). Instead, it merely causes SQLite to start overwriting the WAL file from the beginning. This is done because it is normally faster to overwrite an existing file than to append.

Without the checksum, a new WAL entry might cleanly overwrite an existing longer one in a way that still looks valid (e.g. "A|B" -> "C|B" instead of "AB" -> "C|data corruption"), at least without doing an (expensive) scheme of overwriting B with invalid data, fsyncing, and then overwriting A with C and fsyncing again.

In other words, the checksum allows an optimized write path with fewer expensive fsync/truncate operations; it's not a sudden expression of mistrust of lower layers that doesn't exist in the non-WAL path.
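lxgr's overwrite scenario can be made concrete with the same kind of toy chained checksum (again an invented format, not SQLite's; SQLite actually uses salt values in the WAL header for this, which change when the WAL is restarted):

```python
# Because each frame's checksum chains off the previous frame (and a
# per-WAL seed), overwriting the WAL from the beginning automatically
# invalidates any stale frames left over past the new write.
import zlib

def frame(prev: int, payload: bytes) -> dict:
    return {"payload": payload, "cksum": zlib.crc32(payload, prev)}

def replay(wal: list[dict], seed: int = 0) -> list[bytes]:
    out, running = [], seed
    for f in wal:
        running = zlib.crc32(f["payload"], running)
        if running != f["cksum"]:
            break  # stale or torn frame: stop here
        out.append(f["payload"])
    return out

a = frame(0, b"A")
b = frame(a["cksum"], b"B")
wal = [a, b]                       # old WAL: "A|B"

# After a checkpoint, SQLite starts overwriting from the beginning,
# chaining from a fresh seed (modeling the new salt).
c = frame(0xDEADBEEF, b"C")
wal[0] = c                         # overwrite A with C; stale B remains on disk

assert replay(wal, seed=0xDEADBEEF) == [b"C"]  # stale B fails the chain
```

Without the chaining, the replay of `"C|B"` would happily pick up the stale `B`, which is exactly the hazard the fsync-then-truncate scheme would otherwise be needed to avoid.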

lxgr

> This is a follow-up post to my PSA: SQLite does not do checksums and PSA: Most databases do not do checksums by default.

That's really all there is to it.

SQLite has very deliberate and well-documented assumptions (see for example [1], [2]) about the lower layers it supports. One of them is that data corruption is handled by these lower layers, except if stated otherwise.

Not relying on this assumption would require introducing checksums (or redundancy/using an ECC, really) on both the WAL/rollback journal and on the main database file. This would make SQLite significantly more complex.

I believe TFA is mistaken about how SQLite uses checksums. They primarily serve as a way to avoid some extra write barriers/fsync operations, and maybe to catch incomplete out-of-order writes, but never to detect actual data corruption: https://news.ycombinator.com/item?id=44671373

[1] https://www.sqlite.org/psow.html

[2] https://www.sqlite.org/howtocorrupt.html

nemothekid

I might be missing something (We use sqlite for our embedded stores) - but I feel like "failing silently" is alarmist here.

1. If the WAL is incomplete, then "failing" silently is the correct thing to do here, and is the natural function of the WAL. The WAL had an incomplete write; nothing should have been communicated back to the application, and the application should assume the write never completed.

2. If the WAL is corrupt (due to the reasons he mentioned), then sqlite says that's your problem, not sqlite's. I think this is the default behavior for other databases as well. If a bit flips on disk, it's not guaranteed the database will catch it.

This article is framed almost like a CVE, but to me this is kind of like saying "PSA: If your hard drive dies you may lose data". If you care about data integrity (because your friend is sending you sqlite files) you should be handling that.

supriyo-biswas

Also, partially applying a WAL has obvious issues even though the author of this post would somehow prefer that. If we update 3 rows in a database and the WAL entry for one of the rows is corrupted, do they expect to ignore the corrupted entry and apply the rest? What happens to data consistency in this particular case?

lxgr

Even worse: SQLite, by default, does not immediately truncate WAL files, but rather overwrites the existing WAL from the beginning after a successful checkpoint.

Doing what the author suggests would actually introduce data corruption errors when "restoring a WAL with a broken checksum".

avinassh

However, SQLite, by default, deletes the WAL file when the last connection to the database closes:

> When the last connection to a database closes, that connection does one last checkpoint and then deletes the WAL and its associated shared-memory file, to clean up the disk.

https://www.sqlite.org/wal.html

kburman

An employee of Turso, a commercial fork of SQLite, is presenting a standard, safety-first feature of SQLite's WAL as a dangerous flaw. As many have noted, this behavior prevents database corruption, it doesn't cause it.

chambers

Yeah, this tracks.

If the OP consulted with Turso on this blogpost, then Turso probably believes the reported behavior is indeed a failure or a flaw, which they think a local db should be responsible for.

The confusion is that Limbo, their solution to this presumed problem, is not mentioned in the article, which means that everyone has to figure out where this post is coming from.

supriyo-biswas

I wouldn't have jumped to a conspiracy angle immediately, but there are some signs which are difficult to overlook:

- Said person was apparently employed due to his good understanding of databases and distributed-systems concepts (there's a HN thread about how he found an issue in the paper describing an algorithm), yet makes fundamental mistakes in understanding what the WAL does and why it's not possible to "partly" apply a WAL.

- Said person expects a SQL database to expose WAL level errors to the user breaking transactional semantics (if you want that level of control, consider simpler file-based key-value stores that expose such semantics?)

- Said person maligns SQLite as being impossible to contribute to; whereas the actual project only mentions that they may rewrite the proposed patch to avoid copyright implications.

- Said person again maligns SQLite as "limping along" in the face of disk errors (while making the opposite claim a few paragraphs ago); while ignoring that the checksum VFS exists when on-disk data corruption is a concern.

jrockway

I think it's kind of possible to partially apply the WAL manually. Imagine your frames are:

1) Insert new subscription for "foobar @ 123 Fake St."

2) Insert new subscription for "�#�#�xD�{.��t��3Axu:!"

3) Insert new subscription for "barbaz @ 742 Evergreen Terrace"

A human could probably grab two subscriptions out of that data loss incident. I think that's what they're saying. If you're very lucky and want to do a lot of manual work, you could maybe restore some of the data. Obviously both of the "obviously correct" records could just be random bitflips that happen to look right to humans. There's no way of knowing.

cwillu

And if the “obviously correct” last entry is actually an old entry that just hadn't been overwritten yet? Or if it was only permitted because of something in the corrupted section?

The database should absolutely not be performing guesswork about the meaning of its contents during recovery. If you want mongodb, go use mongodb.

ulrikrasmussen

Yes, in this particular example you could. But in general the database cannot make assumptions that changes are independent of each other.

I think SQLite assumes that a failing checksum occurs due to a crash during a write which never finished. A corrupt WAL frame before a valid frame can only occur if the underlying storage is corrupt, but it makes no sense for SQLite to start handling that during replay, as it has no way to recover. You could maybe argue that it should emit a warning.

supriyo-biswas

This could work for a simple key-value store; but SQLite also does referential integrity which means we might just end up with extra entries with no entities on the other side of the table. IMO, best avoided in a transactional database.

avinassh

of all places, I did not expect to get personal attacks on HN :)

> yet makes fundamental mistakes in understanding what the WAL does and how it's possible not to "partly" apply a WAL.

Please provide citation on where I said that. You can't always partly apply a WAL, but there are very valid cases where you can do that to recover. Recovery doesn't have to be automatic. It can be done by SQLite, by some recovery tool, or with manual intervention.

> - Said person maligns SQLite as being impossible to contribute; whereas the actual project only mentions that they may rewrite the proposed patch to avoid copyright implications.

Please provide citation on where I said that. Someone asked me to send a patch to SQLite, I linked them to the SQLite's page.

supriyo-biswas

> You can't partly apply WAL always, but there are very valid cases where you can do that to recover.

Without mentioning the exact set of cases where recovery is possible and where it isn't, going "PSA: SQLite is unreliable!!1one" is highly irresponsible. I think there's quite a bit of criticism going around though; you could add it to your blog article :)

Please also consider the fact that SQLite being a transactional database, it is usually not possible to expose a WAL level error to the user. The correct way to address it is to probably come up with a list of cases where it is possible, and then send in a patch, or at least a proposal, of how to address it.

> Please provide citation on where I said that [SQLite is impossible to contribute to].

https://news.ycombinator.com/item?id=44672563

tucnak

> of all places, I did not expect to get personal attacks on HN

You must be new to the site.

cwillu

The benefit is that you're left with a database state that actually existed; there's no guarantee from the database's perspective that dropping some committed transactions and not others that came after will result in a valid state.

HelloNurse

This is the main point that the OP misses: even if the newer portion of the WAL file isn't corrupted, its content cannot be used in any way because doing so would require the lost transactions from the corrupted block. The chained checksums are a feature, not gratuitous fragility.

AlotOfReading

Sqlite could attempt to recover from the detected errors, though, and not lose the transactions.

daneel_w

The WAL was corrupted, the actual data is lost. There's no parity. You're suggesting that sqlite should somehow recreate the data from nothing.

hobs

And not get in an infinite loop, and not harm the startup time of the process inordinately, and...

This is just basically how a WAL works, if you have an inconsistent state the transaction is rolled back - at that point you need to redo your work.

teraflop

Yes, and it's not just about application-level integrity. The WAL operates at a page level, so dropping one WAL entry and then applying later ones would be likely to cause corruption at the B-tree level.

For instance, say you have a node A which has a child B:

* Transaction 1 wants to add a value to B, but it's already full, so B is split into new nodes C and D. Correspondingly, the pointer in A that points to B is removed, and replaced with pointers to C and D.

* Transaction 2 makes an unrelated change to A.

If you skip the updates from transaction 1, and apply the updates from transaction 2, then suddenly A's data is overwritten with a new version that points to nodes C and D, but those nodes haven't been written. The pointers just point to uninitialized garbage.
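teraflop's page-level scenario can be played out with a toy page store (the page contents and structure here are invented for illustration; real SQLite pages are raw B-tree bytes):

```python
# If recovery skipped transaction 1 (the split that created pages C and D)
# but applied transaction 2's rewrite of page A, A would point at pages
# that were never written.
pages = {"A": {"children": ["B"]}, "B": {"keys": [1, 2, 3]}}

# WAL frames as (page, new_content), grouped by transaction.
txn1 = [("B", None),                               # B split away
        ("C", {"keys": [1]}),
        ("D", {"keys": [2, 3]}),
        ("A", {"children": ["C", "D"]})]
txn2 = [("A", {"children": ["C", "D"], "extra": "unrelated change"})]

def apply(frames):
    for page, content in frames:
        if content is None:
            pages.pop(page, None)   # page freed by this transaction
        else:
            pages[page] = content   # page image overwritten

apply(txn2)  # pretend txn1's frames failed their checksums and were skipped

dangling = [ch for ch in pages["A"]["children"] if ch not in pages]
assert dangling == ["C", "D"]  # A now points at uninitialized garbage
```

Applying both transactions in order leaves no dangling pointers, which is why replay must be all-or-nothing up to a consistent point.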

slashdev

How would this work differently? As soon as you encounter a checksum failure, you can't trust anything from that point on. Even if the checksum were just per-page and didn't build on the previous page's checksum, you couldn't just apply the valid pages from the WAL and skip the invalid ones. The database at the end of that process would be corrupt.

If you stop at the first failure, the database is restored to the last good state. That's the best outcome that can be achieved under the circumstances. Some data could be lost, but there wasn't anything sensible you could do with it anyway.

avinassh

> How would this work differently?

I would like it to raise an error and then provide an option to continue or stop. Since continuing is the default, we need a way to opt in to stopping on checksum failure.

Not all checksum errors are impossible to recover from. Also, as the post mentions, it could be that only some unimportant pages are corrupt.

My main complaint is that it doesn't give developers an option.

thadt

Aight, I'll bite: continue or stop... and do what? As others have pointed out, the only safe option to get back to a consistent state is to roll back to a safe point.

If what we're really interested in is the log part of a write ahead log - where we could safely recover data after a corruption, then a better tool might be just a log file, instead of SQLite.

avinassh

> Aight, I'll bite: continue or stop... and do what? As others have pointed out, the only safe option to get back to a consistent state is to roll back to a safe point.

Attempt to recover! Again, not all checksum errors are impossible to recover from. I hold the view that even if there is a 1% chance of recovery, we should attempt it. This may be done by SQLite, an external tool, or even manually. Since WAL corruption is silent, we cannot do that now.

There is a smoll demo in the post. In it, I corrupt an old frame that is not needed by the database at all. Now, one approach would be to continue the recovery and then present both states: one where the WAL is dropped, and another showing whatever we have recovered. If I had such an option, I would almost always pick the latter.

lxgr

Giving developers that option would require SQLite to change the way it writes WALs, which would increase overhead. Checksum corruptions can happen without any lower-level errors; this is a performance optimization by SQLite.

I've written more about this here: https://news.ycombinator.com/item?id=44673991

avinassh

> Giving developers that option would require SQLite to change the way it writes WALs, which would increase overhead.

Yes! But I am happy to accept that overhead with the corruption detection.

slashdev

It is good that it doesn't give you an option. I don't want some app on my phone telling me its database is corrupt, I want it to always load back to the last good state and I'll handle any missing data myself.

The checksums are not going to fail unless there was disk corruption or a partial write.

In the former, thank your lucky stars it was in the WAL file and you just lose some data but have a functioning database still.

In the latter, you didn't fsync, so it couldn't have been that important. If you care about not losing data, you need to fsync on every transaction commit. If you don't care enough to do that, why do you care about checksums, it's missing the point.

ryanjshaw

> What I want: throw an error when corruption is detected and let the code handle it.

I wonder what that code would look like. My sense is that it’ll look exactly like the code that would run as if the transactions never occurred to begin with, which is why the SQLite design makes sense.

For example, I have a database of todos that sync locally from the cloud. The WAL gets corrupted. The WAL gets truncated the next time the DB is opened. The app logic then checks the last update timestamp in the DB and syncs with the cloud.

I don’t see what the app would do differently if it were notified about the WAL corruption.

fer

Exactly. I'd read it as

> I want to correct errors that the DB wizard who implemented SQLite chose not to

When there's a design decision in such a high profile project that you disagree with, it's either

1. You don't understand why it was done like this.

2. You can (and probably will) submit a change that would solve it.

If you find yourself in the situation of understanding, yet not doing anything about it, you're the Schrodinger's developer: you're right and wrong until you collapse the mouth function by putting money on it.

It's very rarely an easy to fix mistake.

avinassh

> 2. You can (and probably will) submit a change that would solve it.

SQLite is not open to contribution - https://www.sqlite.org/copyright.html

> 1. You don't understand why it was done like this.

sure, I would like to understand it. That's why the post!

rovr138

> Contributed Code

> In order to keep SQLite completely free and unencumbered by copyright, the project does not accept patches. If you would like to suggest a change and you include a patch as a proof-of-concept, that would be great. However, please do not be offended if we rewrite your patch from scratch.

Propose it.

jmull

> What’s interesting is that when a frame is found to have a missing or invalid checksum, SQLite drops that frame and all the subsequent frames.

Skipping a frames but processing later ones would corrupt the database.

> SQLite doesn’t throw any error on detection of corruption

I don’t think it’s actually a corruption detection feature though. I think it’s to prevent a physical failure while writing (like power loss) from corrupting the database. A corruption detection feature would work differently. E.g., it would cover the whole database, not just the WAL. Throwing an error here doesn’t make sense.

asveikau

> You have SQLite .db and .db-wal files, but no accompanying .db-shm file. Maybe your friend shared it with you, or you downloaded some data off the internet.

Honestly this sounds out of scope for normal usage of sqlite and not realistic. I had a hard time reading past this. If I read that correctly, they're saying sqlite doesn't work if one of the database files disappears from under it.

I guess if you had filesystem corruption it's possible that .db-shm disappears without notice and that's a problem. But that isn't sqlite's fault.

CGamesPlay

This, exactly. Especially since these files are basically the "this database was not cleanly closed" markers for SQLite. From SQLite's docs:

> If the last client using the database shuts down cleanly by calling sqlite3_close(), then a checkpoint is run automatically in order to transfer all information from the wal file over into the main database, and both the shm file and the wal file are unlinked.

dev_l1x_be

I was wondering about this subject for some time, but the only real solution as I see it would be a transactional filesystem (re-designing how filesystems work).

Related: https://news.ycombinator.com/item?id=32200007

dathinab

Some things:

- there is an official checksum VFS shim, but I never used it and don't know how good it is. The difference between it and the WAL checksum is that it works on a per-page level, and you seem to need to manually run the checksum checks and then decide yourself what to do

- checksums (as used by the SQLite WAL) aren't meant for backup, redundancy, or data recovery (there are error-correcting codes designed to recover a limited number of flipped bits, but they have far more overhead than the kind of checksum used here)

- I also believe SQLite should indicate such checksum errors (e.g. so that you might engage out-of-band data recovery, i.e. fetch a backup from somewhere), but I'm not fully sure how you would integrate it in a backward-compatible way. Like return it as an error which otherwise acts like a SQLITE_BUSY??

ncruces

The checksum VFS explicitly disables its checksums during checkpointing (search for inCkpt): https://sqlite.org/src/doc/tip/ext/misc/cksumvfs.c

Data in the WAL should be considered to be of "reduced durability".

adzm

sqlite has several callbacks / hooks / handlers that can be set. I think it is reasonable to expect there to be a way for this situation to be communicated to the application.