In short: a single btrfs metadata block, the chunk root, was missing.
This page documents the recent btrfs restore failure on the Arch Linux workstation, why the restored filesystem would not mount, and how the recovery worked.
The system had been restored from a Clonezilla image onto a new SSD. The restore initially failed because critical btrfs metadata was missing from the image even though most file data still existed.
| Question | Answer |
|---|---|
| What broke? | btrfs chunk-root metadata |
| Was file data gone? | No, mostly still there |
| Why no mount? | btrfs could not navigate its internal mapping layer |
| Why did recovery work? | Enough older and newer metadata survived to rebuild the missing node |
| Main lesson | partclone.btrfs should not be the only disaster backup |
The restore did not lose the whole filesystem.
What was missing was one tiny but critical piece of btrfs metadata: the chunk root block.
That block acts like the entry point into the filesystem's internal space map. Without it, btrfs cannot understand where the rest of the metadata and file data live, so it refuses to mount.
Recovery worked because enough older and newer metadata survived to rebuild the missing node.
In this incident, the chunk root was like a key page of a book's table of contents: the book still existed, but the entry point into the index was gone.
A practical simplified model of btrfs is:
The superblock is the starting note.
It contains essential information such as the filesystem identity and the on-disk location of the chunk root.
The chunk tree is the space map.
It tells btrfs how logical filesystem addresses map to physical disk locations.
This matters because btrfs does not think in a simple "file to sector" model. It has to translate logical addresses into actual positions on disk.
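The translation the chunk tree performs can be sketched in a toy model. All addresses, the `Chunk` record, and the `logical_to_physical` helper below are invented for illustration; real chunk items also carry stripes, RAID profiles, and more:

```python
# Toy model of btrfs logical-to-physical address translation (illustrative only).
from dataclasses import dataclass

@dataclass
class Chunk:
    logical_start: int   # start of the logical address range
    length: int          # size of the range in bytes
    physical_start: int  # where that range lives on disk

# A miniature "chunk tree": three mapped regions (hypothetical numbers).
CHUNK_MAP = [
    Chunk(logical_start=0x100000, length=0x400000, physical_start=0x500000),
    Chunk(logical_start=0x500000, length=0x400000, physical_start=0x1200000),
    Chunk(logical_start=0x900000, length=0x400000, physical_start=0x2000000),
]

def logical_to_physical(logical: int) -> int:
    """Translate a logical filesystem address to a physical disk offset."""
    for chunk in CHUNK_MAP:
        if chunk.logical_start <= logical < chunk.logical_start + chunk.length:
            return chunk.physical_start + (logical - chunk.logical_start)
    raise LookupError(f"no chunk maps logical address {logical:#x}")

print(hex(logical_to_physical(0x520000)))  # 0x1220000: offset 0x20000 into chunk 2
```

Every other tree in the filesystem speaks in logical addresses, so every read funnels through a lookup like this one.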
The filesystem trees are the file index.
They describe which files and directories exist and which extents hold each file's data.
The data extents are the actual content of files.
If the chunk root is missing, btrfs cannot find the rest of the map.
The chunk root is the root node of the chunk tree.
Conceptually it is the entry point into the part of btrfs that answers this question:
Where on the physical disk does this logical region live?
If that root node is unreadable, btrfs loses the ability to navigate the mapping layer. That often makes the whole filesystem unmountable even when most data blocks are still intact.
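A minimal sketch of that mount-time dependency, assuming a toy model where disk blocks are dictionary entries (the `mount` function and block names are hypothetical, not the real btrfs code path):

```python
# Sketch of the bootstrap chain: superblock -> chunk root -> everything else.
def mount(disk: dict) -> dict:
    superblock = disk.get("superblock")
    if superblock is None:
        raise IOError("no superblock: nothing to start from")
    chunk_root = disk.get(superblock["chunk_root_location"])
    if chunk_root is None:
        # The failure in this incident: the data blocks may all be present,
        # but without the chunk root no logical address in any other tree
        # can be translated, so the mount is refused.
        raise IOError("cannot read chunk root: unable to map logical addresses")
    return {"chunk_map": chunk_root, "mounted": True}

healthy = {"superblock": {"chunk_root_location": "block_42"},
           "block_42": ["chunk items..."]}
damaged = {"superblock": {"chunk_root_location": "block_42"}}  # block_42 missing

print(mount(healthy)["mounted"])  # True
try:
    mount(damaged)
except IOError as e:
    print(e)  # cannot read chunk root: unable to map logical addresses
```

Note that the damaged disk still contains a valid superblock; one missing node below it is enough to block the whole mount.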
Specifically:
- The chunk root's physical blocks were still important to the filesystem.
- btrfs free-space accounting incorrectly marked them as free.
- partclone.btrfs, which trusts the filesystem's free/used metadata, therefore skipped them during imaging.

The result was an image missing the chunk root.
It looked like complete filesystem corruption because btrfs could not mount.
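The trust relationship behind the bad image can be sketched as follows. The block numbers and the `image_used_blocks` helper are invented; this is not partclone's actual code, only the shape of its used-block strategy:

```python
# Why a used-block imager silently drops a block that free-space accounting
# wrongly calls free.
DISK = {10: "superblock", 42: "chunk root", 100: "file data", 101: "file data"}

# The filesystem's own free/used accounting, with block 42 wrongly marked free:
REPORTED_USED = {10, 100, 101}   # 42 should be here but is not

def image_used_blocks(disk: dict, used: set) -> dict:
    """A partclone-style imager copies only blocks the filesystem calls used."""
    return {block: data for block, data in disk.items() if block in used}

image = image_used_blocks(DISK, REPORTED_USED)
print(42 in image)   # False: the chunk root never made it into the image
print(100 in image)  # True: ordinary file data was preserved
```

The imager did its job correctly given the metadata it was handed; the bad input made a correct tool produce a broken image.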
But this was really a metadata navigation failure.
The important distinction is that the data was not destroyed; the map needed to reach the data was.
The recovery was not a full filesystem rebuild.
It was a targeted repair of one missing metadata node.
The process was to scan the disk for surviving metadata, work out where the chunk root should be and what it should contain, and reconstruct that single node.
Enough of the surrounding metadata had survived.
In particular, both older and newer generations of the chunk metadata were still present on disk.
That meant the missing node could be reconstructed from evidence already on disk.
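The "newest generation wins" idea behind that reconstruction can be sketched as below. This resembles what a device scan such as `btrfs rescue chunk-recover` does conceptually, but the record format and the `rebuild_chunk_map` helper here are invented:

```python
# Sketch: rebuild a lost chunk map from surviving on-disk evidence, where each
# surviving record is (logical_start, physical_start, generation).
def rebuild_chunk_map(surviving_records):
    best = {}
    for logical, physical, generation in surviving_records:
        # Keep the newest generation seen for each logical range.
        if logical not in best or generation > best[logical][1]:
            best[logical] = (physical, generation)
    return {logical: physical for logical, (physical, _) in best.items()}

# Older and newer copies of the same mapping, found scattered on disk:
records = [
    (0x100000, 0x500000, 7),   # old generation
    (0x100000, 0x500000, 9),   # newer copy of the same mapping
    (0x500000, 0x1200000, 9),
]
rebuilt = rebuild_chunk_map(records)
print(rebuilt == {0x100000: 0x500000, 0x500000: 0x1200000})  # True
```

Because btrfs is copy-on-write and (by default) keeps duplicate metadata, this kind of redundant evidence is often still on disk even after a node is lost.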
Snapshots in btrfs are not full copies.
A snapshot is more like a frozen root pointing to the same underlying extents until changes occur.
That means snapshots mainly depend on metadata and shared extents being reachable.
Since the main failure was the loss of one metadata entry point rather than broad data destruction, the snapshots reappeared once the filesystem could navigate its metadata trees again.
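The "frozen root sharing extents" idea can be sketched with a toy copy-on-write tree (the `Tree` class is illustrative, not btrfs's actual structures):

```python
# Toy model of a copy-on-write snapshot: a snapshot is a frozen root that
# shares extents with the live tree until something changes.
class Tree:
    def __init__(self, extents: dict):
        self.extents = extents           # name -> extent contents

    def snapshot(self) -> "Tree":
        # Cheap: copy only the top-level mapping, not the data itself.
        return Tree(dict(self.extents))

    def write(self, name: str, data: str) -> None:
        # Copy-on-write: the live tree points at a new extent;
        # snapshots keep pointing at the old one.
        self.extents[name] = data

live = Tree({"report.txt": "draft v1"})
snap = live.snapshot()

live.write("report.txt", "draft v2")
print(live.extents["report.txt"])  # draft v2
print(snap.extents["report.txt"])  # draft v1: the snapshot still sees the old extent
```

Since a snapshot is essentially just a root pointer plus shared extents, restoring the metadata entry point was enough to make every snapshot reachable again.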
The lesson is not that btrfs is inherently bad. The real lesson is that critical metadata can fail in ways that make a mostly intact filesystem look dead.
- btrfs metadata damage can make a filesystem look totally dead even when the data still exists.
- partclone.btrfs should not be trusted as the only disaster backup for btrfs.