From btrfs Wiki
Revision as of 15:39, 25 May 2022 by Kdave (Talk | contribs)

Note: this page is protected and cannot be edited by all users; edits must be approved, because this page reflects the status of the whole project.



For a list of features by their introduction, please see the table Changelog#By_feature.

The table below aims to serve as an overview of the stability status of the features BTRFS supports. While a feature may be functionally safe and reliable, that does not necessarily mean it is useful, for example in meeting your performance expectations for your specific workload. Combinations of features can vary in performance; the table does not cover all possibilities.

The table is based on the latest released Linux kernel: 5.18.

The columns for each feature reflect the status of the implementation in the following ways:

  • Stability - completeness of the implementation, use case coverage
  • Status since - kernel version when the status was last changed
  • Performance - how much it could be improved until the inherent limits are hit
  • Notes - short description of known issues, or other information related to the status


  • OK: should be safe to use, no known major deficiencies
  • mostly OK: safe for general use, there are some known problems that do not affect the majority of users
  • Unstable: do not use for anything other than testing purposes; known severe problems, missing implementation of some core parts

Feature | Stability | Status since | Performance | Notes
discard (synchronous) | OK | | OK | mounted with -o discard (has performance implications), also see fstrim
discard (asynchronous) | OK | | OK | mounted with -o discard=async (improved performance)
Autodefrag | OK | | OK |
Defrag | mostly OK | | OK | extents get unshared (see below)

Compression, deduplication, checksumming
Compression | OK | 4.14 | OK |
Out-of-band dedupe | OK | | mostly OK | (reflink), heavily referenced extents have a noticeable performance hit (see below)
File range cloning | OK | | mostly OK | (reflink), heavily referenced extents have a noticeable performance hit (see below)
More checksum algorithms | OK | | OK | see manual page
Auto-repair | OK | | OK | automatically repairs from a correct spare copy if possible (dup, raid1, raid10)
Scrub | OK | | OK |
Scrub + RAID56 | mostly OK | | mostly OK |
nodatacow | OK | | OK | also see Manpage/btrfs(5)
Device replace | mostly OK | | mostly OK | see below
Degraded mount | OK | 4.14 | n/a |

Block group profile
Single (block group profile) | OK | | OK |
DUP (block group profile) | OK | | OK |
RAID1 | OK | | mostly OK | reading from mirrors in parallel can be optimized further (see below)
RAID1C3 | OK | | mostly OK | reading from mirrors in parallel can be optimized further (see below)
RAID1C4 | OK | | mostly OK | reading from mirrors in parallel can be optimized further (see below)
RAID10 | OK | | mostly OK | reading from mirrors in parallel can be optimized further (see below)
RAID56 | Unstable | | n/a | write hole still exists (see below)
Mixed block groups | OK | | OK | see documentation
Filesystem resize | OK | | OK | shrink, grow
Balance | OK | | OK | balance + qgroups can be slow when there are many snapshots
Offline UUID change | OK | | OK |
Metadata UUID change | OK | | OK |
Subvolumes, snapshots | OK | | OK |
Send | OK | | OK |
Receive | OK | | OK |
Seeding | OK | | OK |
Quotas, qgroups | mostly OK | | mostly OK | qgroups with many snapshots slow down balance
Swapfile | OK | | n/a | check the limitations
cgroups | OK | | OK | IO controller
Samba | OK | | OK | compression, server-side copies, snapshots
io_uring | OK | | OK |
fsverity | OK | | OK |
idmapped mount | OK | | OK |
Free space tree | OK | 4.9 | OK |
no-holes | OK | | OK | see documentation for compatibility
skinny-metadata | OK | | OK | see documentation for compatibility
extended-refs | OK | | OK | see documentation for compatibility
zoned mode | mostly OK | 5.18 | mostly OK | there are known bugs, use only for testing
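Several table entries correspond directly to administrative commands. The following sketch illustrates a few of them; the device name /dev/sdb and mount point /mnt are placeholders, not recommendations, and the commands require root on a real btrfs filesystem.

```shell
# Placeholder device /dev/sdb and mount point /mnt -- adjust for your system.

# Asynchronous discard (see the discard rows above):
mount -o discard=async /dev/sdb /mnt
# Alternative to the discard mount options: periodic trim
fstrim -v /mnt

# Scrub: verify checksums, auto-repair from a good copy where possible
btrfs scrub start /mnt
btrfs scrub status /mnt

# Online resize: shrink by 10 GiB, then grow back to the maximum
btrfs filesystem resize -10G /mnt
btrfs filesystem resize max /mnt

# Balance (can be slow with qgroups and many snapshots, see the table)
btrfs balance start /mnt

# Offline UUID change and metadata UUID change (device must be unmounted)
btrfstune -u /dev/sdb
btrfstune -m /dev/sdb
```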

Note to editors:

This page reflects the status of the whole project and edits need to be approved by one of the maintainers (kdave). Suggest edits if:

  • there's a known missing entry
  • a particular feature combination has a different status and is worth mentioning separately
  • you know of a bug that lowers the feature status
  • a reference could be enhanced by an actual link to documentation (wiki, manual pages)

Details that do not fit the table


Defrag

The data affected by the defragmentation process will be newly written and will consume new space; the links to the original extents will not be kept. See also Manpage/btrfs-filesystem. Though autodefrag affects newly written data, it can read a few adjacent blocks (up to 64k) and write the contiguous extent to a new location. The adjacent blocks will be unshared. This happens on a smaller scale than on-demand defrag and does not have the same impact.
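A minimal sketch of the two defragmentation modes described above; the path, device name, and target extent size are examples only, and on-demand defrag will unshare any snapshotted or reflinked extents it touches.

```shell
# On-demand defrag: rewrites extents, unsharing them from snapshots/reflinks.
# -r recurses, -v is verbose, -t sets an example target extent size.
btrfs filesystem defragment -r -v -t 32M /mnt/data

# Autodefrag is a mount option instead; it acts on newly written data only.
mount -o autodefrag /dev/sdb /mnt
```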


Reading from mirrors

The simple redundancy RAID levels utilize different mirrors in a way that does not achieve the maximum performance. The logic can be improved so that reads are spread over the mirrors evenly, or based on device congestion.



Device replace

Device replace and device delete insist on being able to read or reconstruct all data. If any read fails due to an IO error, the delete/replace operation is aborted and the administrator must remove or replace the damaged data before trying again.
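The operation above is driven by the btrfs replace subcommand; a sketch, with placeholder device names and an example devid:

```shell
# Replace /dev/sdb (old, failing) with /dev/sdc (new); names are examples.
# -r avoids reading from the old device where another mirror is available.
btrfs replace start -r /dev/sdb /dev/sdc /mnt
btrfs replace status /mnt

# If the old device is missing entirely, refer to it by its devid:
btrfs filesystem show /mnt           # look up the devid of the missing device
btrfs replace start 2 /dev/sdc /mnt  # devid 2 is an example
```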

On-disk format

The filesystem disk format is stable. This means it is not expected to change unless there are very strong reasons to do so. If there is a format change, filesystems which implement the previous disk format will continue to be mountable and usable by newer kernels.

The core of the on-disk format comprises the building blocks of the filesystem:

  • layout of the main data structures, e.g. superblock, b-tree nodes, b-tree keys, block headers
  • the COW mechanism, based on the original design of Ohad Rodeh's paper "Shadowing and clones"

Newly introduced features build on top of the above and may add specific structures. If backward compatibility cannot be maintained, a bit in the filesystem superblock denotes that, together with the level of incompatibility (full, or read-only mount possible).
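The superblock bits mentioned above can be inspected offline; a sketch, with a placeholder device name:

```shell
# Dump the superblock of a btrfs device (unmounted or mounted read-only is
# safest); the flag fields list which optional format features are enabled,
# e.g. no-holes, skinny-metadata, extended-refs from the table above.
btrfs inspect-internal dump-super /dev/sdb | grep -i flags
```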
