How do I report bugs and issues?
Please report bugs on Bugzilla at kernel.org (requires registration), setting the component to Btrfs, and report bugs and issues to the mailing list (linux-btrfs@vger.kernel.org; you are not required to subscribe). For quick questions you may want to join the IRC channel #btrfs on libera.chat (and stay around for some time in case you do not get the answer right away).
Please use btrfs-progs somewhere in the bug subject if you're reporting a bug for the userspace tools.
If you include kernel log backtraces in bug reports sent to the mailing list, please disable word wrapping in your mail user agent to keep the kernel log in a readable format.
Please attach files (like logs or dumps) directly to the bug and don't use pastebin-like services.
I can't mount my filesystem, and I get a kernel oops!
First, update your kernel to the latest one available and try mounting again. If you have your kernel on a btrfs filesystem, then you will probably have to find a recovery disk with a recent kernel on it.
Second, try mounting (using the new kernel) with the options -o recovery, -o ro, or -o recovery,ro. One of these may succeed.
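For example (a sketch only; /dev/sdb1 stands in for your device, and on kernels 4.6 and newer the recovery mount option has been renamed usebackuproot):
# mount -o ro /dev/sdb1 /mnt            # read-only mount, the safest first attempt
# mount -o recovery,ro /dev/sdb1 /mnt   # use "usebackuproot,ro" instead on kernels >= 4.6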
Finally, if and only if the kernel oops in your logs has something like this in the middle of it,
? replay_one_dir_item+0xb5/0xb5 [btrfs]
? walk_log_tree+0x9c/0x19d [btrfs]
? btrfs_read_fs_root_no_radix+0x169/0x1a1 [btrfs]
? btrfs_recover_log_trees+0x195/0x29c [btrfs]
? replay_one_dir_item+0xb5/0xb5 [btrfs]
? btree_read_extent_buffer_pages+0x76/0xbc [btrfs]
? open_ctree+0xff6/0x132c [btrfs]
then you should try using btrfs rescue zero-log (see Manpage/btrfs-rescue).
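A minimal example (assuming the filesystem is on /dev/sdb1; the filesystem must be unmounted):
# btrfs rescue zero-log /dev/sdb1   # clears the log tree so the filesystem can be mounted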
Filesystem can't be mounted by label
See the next section.
Only one disk of a multi-volume filesystem will mount
If you have labelled your filesystem and put it in /etc/fstab, but you get:
# mount LABEL=foo
mount: wrong fs type, bad option, bad superblock on /dev/sdd2,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
or if one volume of a multi-volume filesystem fails when mounting, but the other succeeds:
# mount /dev/sda1 /mnt/fs
mount: wrong fs type, bad option, bad superblock on /dev/sdd2,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
# mount /dev/sdb1 /mnt/fs
#
Then you need to ensure that you run a btrfs device scan first:
# btrfs device scan
This should be in many distributions' startup scripts (and initrd images, if your root filesystem is btrfs), but you may have to add it yourself.
My filesystem won't mount and none of the above helped. Is there any hope for my data?
Maybe. Any number of things might be wrong. The restore tool is a non-destructive way to dump data to a backup drive and may be able to recover some or all of your data, even if we can't save the existing filesystem.
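A sketch of a typical invocation (both paths are placeholders; the destination must be on a separate, healthy filesystem):
# btrfs restore /dev/sdb1 /mnt/backup   # copies readable files from the broken filesystem to /mnt/backup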
Defragmenting a directory doesn't work
# btrfs filesystem defragment ~/stuff
does not defragment the contents of the directory.
This is by design.
btrfs fi defrag operates on the single filesystem object passed to it, e.g. a (regular) file. When run on a directory, it defragments the metadata held by the subvolume containing the directory, not the contents of the directory. If you want to defragment the contents of a directory, you have to use the recursive mode with the -r flag (see recursive defragmentation).
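For example, to defragment the files under ~/stuff rather than just the directory metadata:
# btrfs filesystem defragment -r ~/stuff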
Compression doesn't work / poor compression ratios
First of all, make sure you have passed the "compress" mount option in fstab or on the mount command line. If you have, and the ratios are still unsatisfactory, you can try the "compress-force" option, which forces btrfs to compress everything. The reason "compress" ratios are often low is that btrfs backs out of the decision to compress very easily, probably to avoid wasting CPU time on poorly compressible data.
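A sketch of both variants (/dev/sdb1 and the zstd algorithm are examples; zstd requires kernel 4.14 or newer, and plain compress/compress-force default to zlib):
# mount -o compress=zstd /dev/sdb1 /mnt         # compress, but give up easily on incompressible data
# mount -o compress-force=zstd /dev/sdb1 /mnt   # always attempt compression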
Copy-on-write doesn't work
You've just copied a large file, but it still consumed free space. To make a copy-on-write (reflink) copy instead, try:
# cp --reflink=always file1 file2
I get the message "failed to open /dev/btrfs-control skipping device registration" from "btrfs dev scan"
You are missing the /dev/btrfs-control device node. This is usually set up by udev. However, if you are not using udev, you will need to create it yourself:
# mknod /dev/btrfs-control c 10 234
You might also want to report to your distribution that their configuration without udev is missing this device.
How do I clean up an old superblock?
The preferred way is to use the wipefs utility that is part of the util-linux package. Running the command with just the device name will not destroy any data; it only lists the detected filesystems:
# wipefs /dev/sda
offset               type
----------------------------------------------------------------
0x10040              btrfs   [filesystem]
                     UUID:  7760469b-1704-487e-9b96-7d7a57d218a5
To actually remove the filesystem use:
# wipefs -o 0x10040 /dev/sda
8 bytes [5f 42 48 52 66 53 5f 4d] erased at offset 0x10040 (btrfs)
i.e. copy the offset number into the command-line parameter.
A long time ago I created btrfs on /dev/sda. After some changes, btrfs moved to /dev/sda1.
Use wipefs here as well; it deletes only a small portion of /dev/sda that will not interfere with the data of the following partition.
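For instance (assuming wipefs reports the stale signature at the usual 0x10040 offset):
# wipefs /dev/sda              # list signatures left on the whole disk
# wipefs -o 0x10040 /dev/sda   # erase only the stale btrfs magic string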
What if I don't have wipefs at hand?
There are three superblocks: the first one is located at 64KiB, the second one at 64MiB, the third one at 256GiB. The following commands reset the magic string on all three superblocks:
# dd if=/dev/zero bs=1 count=8 of=/dev/sda seek=$((64*1024+64))
# dd if=/dev/zero bs=1 count=8 of=/dev/sda seek=$((64*1024*1024+64))
# dd if=/dev/zero bs=1 count=8 of=/dev/sda seek=$((256*1024*1024*1024+64))
If you want to restore the superblocks' magic string:
# echo "_BHRfS_M" | dd bs=1 count=8 of=/dev/sda seek=$((64*1024+64)) # echo "_BHRfS_M" | dd bs=1 count=8 of=/dev/sda seek=$((64*1024*1024+64)) # echo "_BHRfS_M" | dd bs=1 count=8 of=/dev/sda seek=$((256*1024*1024*1024+64))
I get "No space left on device" errors, but df says I've got lots of space
First, check how much space has been allocated on your filesystem:
$ sudo btrfs fi show
Label: 'media'  uuid: 3993e50e-a926-48a4-867f-36b53d924c35
        Total devices 1 FS bytes used 61.61GB
        devid    1 size 133.04GB used 133.04GB path /dev/sdf
Note that in this case, all of the devices (the only device) in the filesystem are fully utilised. This is your first clue.
Next, check how much of your metadata allocation has been used up:
$ sudo btrfs fi df /mount/point
Data: total=127.01GB, used=56.97GB
System, DUP: total=8.00MB, used=20.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=3.00GB, used=2.32GB
Metadata: total=8.00MB, used=0.00
Note that the Metadata used value is fairly close (75% or more) to the Metadata total value, but there's lots of Data space left. What has happened is that the filesystem has allocated all of the available space to either data or metadata, and then one of those has filled up (usually, it's the metadata space that does this). For now, a workaround is to run a partial balance:
$ sudo btrfs fi balance start -dusage=5 /mount/point
Note that there should be no space between the -d and the usage. This command will attempt to relocate data in empty or near-empty data chunks (at most 5% used, in this example), allowing the space to be reclaimed and reassigned to metadata.
If the balance command ends with "Done, had to relocate 0 out of XX chunks", then you need to increase the "dusage" percentage parameter until at least one chunk is relocated. More information is available elsewhere in this wiki, if you want to know what a balance does, or what options are available for the balance command.
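For example, if -dusage=5 relocated nothing, retry with progressively larger values until at least one chunk moves (the steps here are arbitrary choices):
$ sudo btrfs fi balance start -dusage=10 /mount/point
$ sudo btrfs fi balance start -dusage=25 /mount/point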
I cannot delete an empty directory
First case, if you get:
rmdir: failed to remove ‘emptydir’: Operation not permitted
then this is probably because "emptydir" is actually a subvolume.
You can check whether this is the case with:
# btrfs subvolume list -a /mountpoint
To delete the subvolume you'll have to run:
# btrfs subvolume delete emptydir
Second case, if you get:
rmdir: failed to remove ‘emptydir’: Directory not empty
then you may have an empty directory with a non-zero i_size.
You can check whether this is the case with:
# stat -c %s emptydir
3196    <-- unexpected non-zero size
Running "btrfs check" on that (unmounted) filesystem will confirm the issue and list other problematic directories (if any).
You will get a similar output (excerpt):
checking fs roots
root 5 inode 557772 errors 200, dir isize wrong
root 266 inode 24021 errors 200, dir isize wrong
...
Such errors should be fixable with "btrfs check --repair" provided you run a recent enough version of btrfs-progs.
Note that "btrfs check --repair" should not be used lightly as in some cases it can make a problem worse instead of fixing anything.