SysadminGuide

This page is intended to give a slightly deeper insight into what the various btrfs features are doing behind the scenes.

Btrfs introduction in a talk

If you'd like an overview with pointers to the more useful features and cookbooks, you can try Marc MERLIN's Btrfs talk at Linuxcon JP 2014.

Data usage and allocation

Btrfs, at its lowest level, manages a pool of raw storage, from which it allocates space for different internal purposes. This pool is made up of all of the block devices that the filesystem lives on, and the size of the pool is the "total" value reported by the ordinary df command. As the filesystem needs storage to hold file data, or filesystem metadata, it allocates chunks of this raw storage, typically in 1GiB lumps, for use by the higher levels of the filesystem. The allocation of these chunks is what you see as the output from btrfs filesystem show. Many files may be placed within a chunk, and files may span across more than one chunk. A chunk is simply a piece of storage that btrfs can use to put data on.
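
For example, the pool and the chunk allocation can be inspected like this (the output is illustrative, for a hypothetical two-device RAID-1 filesystem; "total" is the space allocated in chunks, "used" the part of those chunks actually holding data):

# btrfs filesystem show /mnt
Label: none  uuid: <uuid>
        Total devices 2 FS bytes used 11.20GiB
        devid    1 size 100.00GiB used 12.03GiB path /dev/sda
        devid    2 size 100.00GiB used 12.03GiB path /dev/sdb

# btrfs filesystem df /mnt
Data, RAID1: total=10.00GiB, used=9.80GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=2.00GiB, used=1.40GiB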

RAID and data replication

Btrfs's "RAID" implementation bears only passing resemblance to traditional RAID implementations. Instead, btrfs replicates data on a per-chunk basis. If the filesystem is configured to use "RAID-1", for example, chunks are allocated in pairs, with each chunk of the pair being taken from a different block device. Data written to such a chunk pair will be duplicated across both chunks.

Stripe-based "RAID" levels (RAID-0, RAID-10) work in a similar way, allocating as many chunks as will fit across the drives with free space, and then striping data at a level smaller than a chunk. So, for a RAID-10 filesystem on 4 disks, data may be stored like this:

Storing file: 01234567...89

Block devices:  /dev/sda   /dev/sdb   /dev/sdc   /dev/sdd

Chunks:          +-A1-+  +-A2-+  +-A3-+  +-A4-+
                 | 0  |  | 1  |  | 0  |  | 1  | }
                 | 2  |  | 3  |  | 2  |  | 3  | } stripes within a chunk
                 | 4  |  | 5  |  | 4  |  | 5  | }
                 | 6  |  | 7  |  | 6  |  | 7  | }
                 |... |  |... |  |... |  |... | }
                 +----+  +----+  +----+  +----+

                 +-B3-+  +-B2-+  +-B4-+  +-B1-+   a second set of chunks may be needed for large files.
                 | 8  |  | 9  |  | 9  |  | 8  |
                 +----+  +----+  +----+  +----+

Note that chunks within a RAID grouping are not necessarily always allocated to the same devices (B1-B4 are reordered in the example above). This allows btrfs to do data duplication on block devices with varying sizes, and still use as much of the raw space as possible. With RAID-1 and RAID-10, only two copies of each byte of data are written, regardless of how many block devices are actually in use on the filesystem.

Balancing

A btrfs balance operation rewrites things at the level of chunks. It runs through all chunks on the filesystem, and writes them out again, discarding and freeing up the original chunks when the new copies have been written. This has the effect that if data is missing a replication copy (e.g. from a missing/dead disk), new replicas are created. The balance operation also has the effect of re-running the allocation process for all of the data on the filesystem, thus more effectively "balancing" the data allocation over the underlying disk storage.
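
For example (a sketch; the usage filter value is arbitrary): a full balance, a filtered balance that only rewrites data chunks which are less than 50% full, and a status query:

# btrfs balance start /mnt
# btrfs balance start -dusage=50 /mnt
# btrfs balance status /mnt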

Copy on Write (CoW)

  • The CoW operation is used on all writes to the filesystem (unless turned off, see below).
  • This makes it much easier to implement lazy copies, where the copy is initially just a reference to the original, but as the copy (or the original) is changed, the two versions diverge from each other in the expected way.
  • If you just write a file that didn’t exist before, then the data is written to empty space, and some of the metadata blocks that make up the filesystem are CoWed. In a "normal" filesystem, if you then go back and overwrite a piece of that file, then the piece you’re writing is put directly over the data it is replacing. In a CoW filesystem, the new data is written to a piece of free space on the disk, and only then is the file’s metadata changed to refer to the new data. At that point, the old data that was replaced can be freed up because nothing points to it any more.
  • If you make a snapshot (or a cp --reflink=always) of a piece of data, you end up with two files that both reference the same data. If you modify one of those files, the CoW operation described above still happens: the new data is written elsewhere, and the file's metadata is updated to point at it, but the original data is kept, because it's still referenced by the other file.
  • This leads to fragmentation in heavily written-to files like VM images and database stores, even if there is no second pointer to the file.
  • If you mount the filesystem with nodatacow, or use chattr +C on the file, then CoW is only performed on the data if there's more than one copy referenced (see the example after this list).
  • Some people insist that btrfs does "Redirect-on-write" rather than "Copy-on-write" because btrfs is based on a scheme for redirect-based updates of B-trees by Ohad Rodeh, and because understanding the code is easier with that mindset.
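
As a sketch of the reflink and nodatacow points above (the file and directory names are made up): creating a lightweight copy via reflink, and disabling CoW for files that are internally rewritten at random. Note that chattr +C only takes reliable effect on empty files, which is why it is typically set on a directory, so that files created inside it inherit the attribute:

# cp --reflink=always vm.img vm-clone.img    # both files initially share the same extents
# mkdir /var/lib/images
# chattr +C /var/lib/images                  # files created below now inherit nodatacow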

Subvolumes

A subvolume in btrfs is not the same as an LVM logical volume or a ZFS subvolume. With LVM, a logical volume is a block device in its own right (which could for example contain any other filesystem or container like dm-crypt, MD RAID, etc.) - this is not the case with btrfs.

A btrfs subvolume is not a block device (and cannot be treated as one); instead, a btrfs subvolume can be thought of as a POSIX file namespace. This namespace can be accessed via the top-level subvolume of the filesystem, or it can be mounted in its own right.

So, given a filesystem structure like this:

toplevel
+-- dir_1           (normal directory)
|   +-- file_2      (normal file)
|   \-- file_3      (normal file)
\-- subvol_a        (subvolume)
    +-- subvol_b    (subvolume, nested below subvol_a)
    |   \-- file_4  (normal file)
    \-- file_5      (normal file)

The top-level subvolume (ID 5), which one can think of as the root of the filesystem, can be mounted, and the full filesystem structure will be seen at the mount point; alternatively, any other subvolume can be mounted (with the mount options subvol or subvolid, for example subvol=subvol_a), and only everything below that subvolume (in the above example: the subvolume subvol_b, its content file_4, and the file file_5) will be visible at the mount point.
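
Assuming the filesystem lives on /dev/sda (an assumption for illustration), the two ways of mounting would look like this:

# mount -o subvolid=5 /dev/sda /mnt          # top-level subvolume: the full structure is visible
# mount -o subvol=subvol_a /dev/sda /mnt     # only subvol_b, file_4 and file_5 are visible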

Subvolumes can be nested, and each subvolume (except the top-level subvolume) has a parent subvolume. Mounting a subvolume also makes any of its nested child subvolumes available at their respective locations relative to the mountpoint.

A btrfs filesystem has a default subvolume, which is initially set to be the top-level subvolume and which is mounted if no subvol or subvolid option is specified.

Changing the default subvolume with btrfs subvolume set-default will make the top level of the filesystem inaccessible, except by use of the subvol=/ or subvolid=5 mount options.
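
For example (the subvolume ID 257 is made up; real IDs can be looked up with btrfs subvolume list):

# btrfs subvolume list /mnt
ID 257 gen 9 top level 5 path subvol_a
# btrfs subvolume set-default 257 /mnt
# btrfs subvolume get-default /mnt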

Subvolumes can be moved around in the filesystem.

Layout

There are several basic schemas for laying out subvolumes (including snapshots), as well as mixtures thereof.

Flat

Subvolumes are children of the top-level subvolume (ID 5), either directly below it in the hierarchy or below some directories belonging to the top-level subvolume, but in particular not nested below other subvolumes, for example:

toplevel
  +-- root       (subvolume, to be mounted at /)
  +-- home       (subvolume, to be mounted at /home)
  +-- var        (directory)
  |   \-- www    (subvolume, to be mounted at /var/www)
  \-- postgres   (subvolume, to be mounted at /var/lib/postgresql)

Here, the toplevel is normally not mounted and thus not visible to the running system. Of course, the "www" subvolume could also have been placed directly below the top-level subvolume (without the intermediate directory), and the "postgres" subvolume could have been placed at var/lib/postgresql; these are just examples, and the actual layout depends largely on personal taste. All subvolumes are, however, direct children of the top-level subvolume.

This has several implications (advantages and disadvantages):

  • Management of snapshots (especially rolling them) may be considered easier as the effective layout is more directly visible.
  • All subvolumes need to be mounted manually (e.g. via fstab) to their desired locations, e.g. in the above example this would look like:
LABEL=the-btrfs-fs-device   /                    subvol=/root,defaults,noatime  0  0
LABEL=the-btrfs-fs-device   /home                subvol=/home,defaults,noatime  0  0
LABEL=the-btrfs-fs-device   /var/www             subvol=/var/www,noatime        0  0
LABEL=the-btrfs-fs-device   /var/lib/postgresql  subvol=/postgres,noatime       0  0
This means, however, that any generally useful mount options (for example "noatime") need to be specified again for each mountpoint.
  • Everything in the filesystem that is not beneath a mounted subvolume is neither accessible nor visible. This may be beneficial for security, especially when used with snapshots, see below.

Nested

Subvolumes are located anywhere in the file hierarchy, typically at their desired locations (that is, where one would manually mount them in the flat schema), and in particular below other subvolumes that are not the top-level subvolume, for example:

toplevel                  (subvolume, to be mounted at /)
+-- home                  (subvolume)
\-- var                   (directory)
    +-- www               (subvolume)
    \-- lib               (directory)
         \-- postgresql   (subvolume)

This has several implications (advantages and disadvantages):

  • Management of snapshots (especially rolling them) may be considered more difficult as the effective layout isn't directly visible.
  • Subvolumes don't need to be mounted manually (or via fstab) to their desired locations, they "appear automatically" at their respective locations.
  • For each of these subvolumes, the mount options of the mountpoint it appears under apply.
  • Everything is visible. Of course one could mount only a single subvolume, but then important parts of the filesystem (in this example much of the system's "main" filesystem) would be missing as well. This may have disadvantages for security, especially when used with snapshots, see below.

The above layout, which obviously serves as the system's "main" filesystem, places data directly within the top-level subvolume (namely everything, for example /usr, that is not in a child subvolume). This makes changing the structure (for example to something more flat) more difficult, which is why it's generally suggested to place the actual data in a subvolume that is not the top-level subvolume. In the above example, a better layout would be the following:

toplevel
  \-- root                      (subvolume, to be mounted at /)
      +-- home                  (subvolume)
      \-- var                   (directory)
          +-- www               (subvolume)
          \-- lib               (directory)
               \-- postgresql   (subvolume)

Mixed

Of course the above two basic schemas can be mixed at personal convenience, e.g. the base structure may follow a flat layout, with certain parts of the filesystem placed in nested subvolumes. Care must however be taken when snapshots are made (nested subvolumes are not part of a snapshot, and as of now there are no recursive snapshots), as well as when those are rotated: the nested subvolumes would then need to be moved manually (either the "current" versions or a snapshot thereof).

When To Make Subvolumes

This is a non-exhaustive list of typical guidelines on where/when to make subvolumes, rather than keeping things in the same subvolume.

  • Nested subvolumes are not going to be part of snapshots created from their parent subvolume. So one typical reason is to exclude certain parts of the filesystem from being snapshotted.
A typical example could be a directory which contains just builds of software (in other words, nothing precious) that use up much space. Having those be part of the snapshot may be useless, as they can easily be regenerated from source.
  • "Split" of areas which are "complete" and/or "consistent" in themselves.
Examples would be "/home", "/var/www", or "/var/lib/postgresql/", which are usually more or less "independent" of the other parts of the system, at least with respect to a certain state.
In contrast, splitting parts of the system which "belong together" or depend on a certain state of each other into different subvolumes may be a bad idea, at least when snapshots are being made.
For example, "/etc", many parts of "/var" (especially "/var/lib"), "/usr", "/lib", "/bin", etc. are typically closely related, at least by being managed by the package management system, which assumes them all to be in the same state. If they were not, because they were snapshotted at different times, the package manager might assume packages to be installed at a certain version, which would apply for e.g. "/usr" but not for "/bin" or "/etc". It's obvious that this means tickling the dragon.
One reason for "splitting" areas into subvolumes may be the use of btrfs' send/receive feature (for example for snapshots and backups, or simply for moving/copying data); see the sketch after this list.
When making a snapshot on the local system, having more in the subvolume than just the actually desired data (e.g. the whole system's "main" filesystem instead of just "/var/lib/postgresql") doesn't matter too much. This is at least more or less the case when CoW is used, since then all data is just ref-linked, which means it's fast and doesn't cost much more space (there may however be other implications, for example with respect to fragmentation).
But when that data is to be sent/received to other btrfs filesystems, the "undesired" data makes a big difference: since btrfs' send/receive works on whole subvolumes, it needs to be transferred there at least once, which may already be bad for security and privacy reasons.
If send/receive of the desired data is to happen more often, typically for backups, then even when btrfs-send's -p or -c options are used for incremental transfers, the undesired data would need to be transferred at least once (assuming it never changes again on the source system, which is unlikely).
  • Split of areas which need special properties.
As discussed elsewhere, certain types of data (databases, VM images and similar typically big files that are randomly written internally) may require CoW to be disabled for them.
So for example such areas could be placed in a subvolume that is always mounted with the option "nodatacow".
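
As a sketch of the send/receive case mentioned above (paths and dates are made up; /mnt/backup is assumed to be a mounted second btrfs filesystem, and /snapshots a directory on the same filesystem as /var/lib/postgresql): first a read-only snapshot is taken, then a full transfer is done once, and later transfers are incremental relative to an older snapshot:

# btrfs subvolume snapshot -r /var/lib/postgresql /snapshots/postgres-2015-12-01
# btrfs send /snapshots/postgres-2015-12-01 | btrfs receive /mnt/backup
# btrfs send -p /snapshots/postgres-2015-11-01 /snapshots/postgres-2015-12-01 | btrfs receive /mnt/backup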

Snapshots

A snapshot is simply a subvolume that shares its data (and metadata) with some other subvolume, using btrfs's CoW capabilities.

Once a [writable] snapshot is made, there is no difference in status between the original subvolume and the new snapshot subvolume. To roll back to a snapshot, unmount the modified original subvolume, use mv to rename the old subvolume to a temporary location, and then use mv again to rename the snapshot to the original name. You can then remount the subvolume.

At this point, the original subvolume may be deleted if wished. Since a snapshot is a subvolume, snapshots of snapshots are also possible.

Beware: Care must be taken when snapshots are created that are then visible to any user (e.g. when they're created in a nested layout), as this may have security implications. The snapshot will of course have the same permissions as the subvolume from which it was created at the time it was taken, but those permissions may be tightened later on, while the permissions of the snapshot wouldn't change, possibly allowing access to files that shouldn't be accessible anymore. Similarly, especially on the system's "main" filesystem, the snapshot contains the files (for example setuid programs) in the state at which it was created. Security updates may be rolled out on the original subvolume in the meantime, but as long as the snapshot is accessible (and, for example, a vulnerable setuid program was accessible before), a user could still invoke the old version.

Managing Snapshots

One typical structure for managing snapshots (particularly on a system's "main" filesystem) is based on the flat schema from above. The top-level subvolume is mostly left empty except for subvolumes. It can be mounted temporarily while subvolume operations are being done and unmounted again afterwards. Alternatively an empty "snapshots" subvolume can be created (which in turn contains the actual snapshots) and permanently mounted at a convenient location.

The actual structure could look like this:

 toplevel
   +-- root                 (subvolume, to be mounted at /)
   +-- home                 (subvolume, to be mounted at /home)
   \-- snapshots            (directory)
       +-- root             (directory)
       |   +-- 2015-01-01   (subvolume, ro-snapshot of subvolume "root")
       |   \-- 2015-06-01   (subvolume, ro-snapshot of subvolume "root")
       \-- home             (directory)
           +-- 2015-01-01   (subvolume, ro-snapshot of subvolume "home")
           \-- 2015-12-01   (subvolume, rw-snapshot of subvolume "home")

or even flatter, like this:

  toplevel
   +-- root                       (subvolume, to be mounted at /)
   |   +-- bin                    (directory)
   |   \-- usr                    (directory)
   +-- root_snapshot_2015-01-01   (subvolume, ro-snapshot of subvolume "root")
   |   +-- bin                    (directory)
   |   \-- usr                    (directory)
   +-- root_snapshot_2015-06-01   (subvolume, ro-snapshot of subvolume "root")
   |   +-- bin                    (directory)
   |   \-- usr                    (directory)
   +-- home                       (subvolume, to be mounted at /home)
   +-- home_snapshot_2015-01-01   (subvolume, ro-snapshot of subvolume "home")
   \-- home_snapshot_2015-12-01   (subvolume, rw-snapshot of subvolume "home")


A corresponding fstab could look like this:

 LABEL=the-btrfs-fs-device   /                    subvol=/root,defaults,noatime      0  0
 LABEL=the-btrfs-fs-device   /home                subvol=/home,defaults,noatime      0  0
 LABEL=the-btrfs-fs-device   /root/btrfs-top-lvl  subvol=/,defaults,noauto,noatime   0  0


Creating a snapshot (a writable one in this example; add -r for a read-only snapshot) would work for example like this:

# mount /root/btrfs-top-lvl
# btrfs subvolume snapshot /root/btrfs-top-lvl/home /root/btrfs-top-lvl/snapshots/home/2015-12-01
# umount /root/btrfs-top-lvl

Rolling back a snapshot would work for example like this:

# mount /root/btrfs-top-lvl
# umount /home

now either the snapshot could be directly mounted at /home (which would be temporary, unless fstab is adapted as well):

# mount -o subvol=/snapshots/home/2015-12-01,defaults,noatime LABEL=the-btrfs-fs-device /home

or the subvolume itself could be moved and then mounted again:

# mv /root/btrfs-top-lvl/home /root/btrfs-top-lvl/home.tmp     # or it could have been deleted, see below
# mv /root/btrfs-top-lvl/snapshots/home/2015-12-01 /root/btrfs-top-lvl/home
# mount /home

finally, the top-level subvolume could be unmounted:

# umount /root/btrfs-top-lvl

The careful reader may have noticed that the rollback used a rw-snapshot, which is of course necessary in the above example, as a read-only /home wouldn't be desired. If a ro-snapshot had been created (with -r), one would simply make another snapshot of that, this time rw, for example:

…
# btrfs subvolume delete /root/btrfs-top-lvl/home
# btrfs subvolume snapshot /root/btrfs-top-lvl/snapshots/home/2015-01-01 /root/btrfs-top-lvl/home
…

Special Cases

  • Read-only subvolumes cannot be moved.
As mentioned above, subvolumes can be moved, though there are exceptions: read-only subvolumes cannot be moved out of their parent directory (which may itself be a subvolume), because moving them would require updating the snapshot's ".." entry, which is impossible on a read-only subvolume.
Consider the following example:
 …
 +-- foo               (directory, optionally a subvolume)
 +-- bar               (directory, optionally a subvolume)
 |   \-- ro-snapshot   (ro-snapshot)
 …
It wouldn't be possible to move ro-snapshot to foo, as this would require modifying ro-snapshot's ".." entry.
Moving bar (including anything below it) to foo works, however. Alternatively, the read-only flag can be cleared temporarily; see the sketch below.
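
A sketch of that workaround, using btrfs property to temporarily clear the read-only flag (whether this is acceptable depends on how strongly the snapshot's read-only guarantee is relied upon):

# btrfs property get bar/ro-snapshot ro
ro=true
# btrfs property set bar/ro-snapshot ro false
# mv bar/ro-snapshot foo/
# btrfs property set foo/ro-snapshot ro true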

btrfs on top of dmcrypt

As of Linux kernel 3.2, it is considered safe to have btrfs on top of dm-crypt (before that, there were risks of corruption during unclean shutdowns).

With multi-device setups, decrypt_keyctl may be used to unlock all disks at once. You can also look at the start-btrfs-dmcrypt script from Marc MERLIN, which shows how to manually bring up a dm-crypted btrfs array.

The following page shows benchmarks of btrfs vs ext4 on top of dmcrypt or with ecryptfs: http://www.mayrhofer.eu.org/ssd-linux-benchmark . The summary is that btrfs with lzo compression enabled on top of dmcrypt is either slightly faster or slightly slower than ext4, and encryption only creates a slowdown in the 5% range on an SSD (likely even less on a spinning drive).
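
A minimal setup could then look like the following (note: on current systems, aes-xts-plain64 with a 512-bit key is the usual recommendation; the older aes-xts-plain IV scheme should not be used on devices larger than 2TiB):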

# cryptsetup -c aes-xts-plain64 -s 512 luksFormat /dev/sda3
# cryptsetup luksOpen /dev/sda3 luksroot
# mkfs.btrfs /dev/mapper/luksroot
# mount -o noatime,ssd,compress=lzo /dev/mapper/luksroot /mnt