Deduplication

From btrfs Wiki

OBSOLETE CONTENT

This wiki has been archived and the content is no longer updated.

Due to its copy-on-write nature, BTRFS is able to copy files (e.g. with cp --reflink) or subvolumes (with btrfs subvolume snapshot) without actually copying the data. A new copy of the data is only created when one of the files or subvolumes is subsequently modified.
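For example, both kinds of copy can be made from the command line (the file and path names here are just placeholders):

$ cp --reflink=always bigfile bigfile.copy
$ btrfs subvolume snapshot /mnt/data /mnt/data-snap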

Deduplication takes this a step further, by actively identifying when the same data has been written twice, and retrospectively combining them into an extent with the same copy-on-write semantics.


Out of band / batch deduplication

Out of band / batch deduplication is deduplication done outside of the write path. We've sometimes called it offline deduplication, but that can confuse people: btrfs dedup involves the kernel and always happens on mounted filesystems. To use out-of-band deduplication, you run a tool which searches your filesystem for identical blocks, and then deduplicates them.

Deduplication in BTRFS is mainly supported by ioctl_fideduperange(2), a compare-and-share operation, although some other tools may use the clone-oriented APIs instead.
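For illustration, both kinds of operation can be driven from the command line with xfs_io(8) (described in more detail under "Other tools" below); the file names, offsets and length here are placeholders:

# dedupe (FIDEDUPERANGE): the kernel verifies that the two ranges are identical
# before making them share extents, so file contents never change.
$ xfs_io -c "dedupe src 0 0 65536" dst

# reflink (FICLONERANGE): the destination range is unconditionally replaced with
# a reference to the source range, whether or not the contents matched before.
$ xfs_io -c "reflink src 0 0 65536" dst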

There are multiple tools that take different approaches to deduplication, offer additional features or make different trade-offs. The following table lists tools that are known to be up to date, maintained and widely used; other tools exist but do not meet these criteria and have been removed from the list. The projects are third-party, so please check their status before deciding to use one.

Batch deduplicators for BTRFS
  Name       | File-based | Block-based | Works on other filesystems | Incremental | Notes
  duperemove | Yes        | No          | Yes                        | Yes         | Keeps checksums in an SQLite database. Works on extent boundaries by default, but has an option to compare data more thoroughly. A usage sketch follows the legend below.
  bees       | No         | Yes         | No                         | Yes         | Runs as a daemon. Very light database, useful for large, colder storage such as backup servers. Uses SEARCH_V2 and LOGICAL_INO. Has workarounds for kernel bugs.
  dduper     | Yes        | Yes         | No                         | Yes         | Uses the built-in BTRFS csum tree, so it is extremely fast and lightweight (13.8 seconds for identical 10GB files). Requires a btrfs-progs patch for csum access.

Legend:

  • File-based: the tool takes a list of files and deduplicates blocks only from those files
  • Block-based: the tool enumerates blocks and looks for duplicates
  • Works on other filesystems: some other filesystems (XFS, OCFS2) support the deduplication ioctl; the tool can make use of it, though btrfs-specific features may not be available there
  • Incremental: the tool keeps state between runs, so subsequent runs only have to examine new or changed data
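As an illustration of the typical out-of-band workflow, a duperemove run over a directory tree might look like this (the directory and hashfile paths below are placeholders; see duperemove(8) for the full set of options):

# -r recurses into subdirectories, -d actually submits the duplicates it finds
# to the kernel deduplication ioctl, and --hashfile stores the checksums on disk
# so that later runs only need to re-scan new or changed files.
$ duperemove -dr --hashfile=/var/tmp/dedupe.hash /mnt/data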

Duplicate file finders with btrfs support

While any duplicate file finder utility (e.g. fdupes, fslint) can find files whose data can then be deduplicated by another tool (e.g. duperemove), the following duplicate file finders have built-in btrfs deduplication capabilities:

  • rmlint is a duplicate file finder with btrfs support. To find and reflink duplicate files:
$ rmlint --types="duplicates" --config=sh:handler=clone [paths...]

This command finds duplicate files under the given paths and creates a batch file, rmlint.sh, for post-processing. handler=clone uses FIDEDUPERANGE, which preserves the metadata of each file (instead of deleting one copy and recreating it as a reflink).

After reviewing the contents of rmlint.sh, run it to clone/reflink the duplicates (if possible):

$ ./rmlint.sh

Note that when reflinking read-only snapshots, rmlint.sh must be run with the -r option and with root privileges, e.g.:

$ sudo ./rmlint.sh -r
  • jdupes is a fork of fdupes which includes support for BTRFS deduplication when it identifies duplicate files.
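On builds with dedupe support compiled in, jdupes can submit the duplicates it finds directly to the kernel (the path below is a placeholder):

$ jdupes -r -B /path/to/tree

Here -r recurses into subdirectories and -B (--dedupe) sends each set of matches to the deduplication ioctl instead of only reporting them.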

Other tools

Now that the ioctl has been lifted to the VFS layer, rather than being a BTRFS-specific function, deduplication functionality can be implemented in a filesystem-independent way.

As a result, xfs_io(8) is able to perform deduplication on a BTRFS filesystem, and provides a simple way to invoke the deduplication ioctl from the command line on any filesystem which supports it.

Example for deduplicating two identical files:

# NOTE: xfs_io commands strictly use a single space for tokenization. No quoting
# is allowed, so this does not work for file names containing spaces.
# cmp -s quietly checks that the two files really have identical contents.
if cmp -s file1 file2; then
  size=$(stat --format="%s" -- file1)   # length of the range to deduplicate, in bytes
  # dedupe arguments: src_file src_offset dst_offset length -- share the first
  # $size bytes of file1 (the file opened by xfs_io) with the identical bytes in file2.
  xfs_io -c "dedupe -C file2 0 0 $size" file1
fi
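Afterwards, one way to check that the files now share extents (assuming a reasonably recent btrfs-progs) is btrfs filesystem du, which reports shared versus exclusive data per file:

$ sudo btrfs filesystem du file1 file2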

In-band deduplication

Inband / synchronous / inline deduplication is deduplication done in the write path, so it happens as data is written to the filesystem. This typically requires large amounts of RAM to store the lookup table of known block hashes and adds I/O overhead to store the hashes. The feature is not actively developed; some patches have been posted. See the User notes on dedupe page for more details.
