Deduplication

From btrfs Wiki
Revision as of 07:21, 18 June 2020

Due to its copy-on-write nature, BTRFS is able to copy files (e.g. with cp --reflink) or subvolumes (with btrfs subvolume snapshot) without actually copying the data. A new copy of the data is created if one of the files or subvolumes is updated.
Deduplication takes this a step further, by actively identifying when the same data has been written twice, and retrospectively combining them into an extent with the same copy-on-write semantics.
 
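These copy-on-write copies can be made explicitly; a short illustration (the file and subvolume paths are examples):

```shell
# Reflink copy: completes instantly and shares data extents with the
# source; blocks are only duplicated when one copy is later modified.
cp --reflink=always disk.img disk-copy.img

# Snapshot: the same CoW sharing for an entire subvolume.
btrfs subvolume snapshot /mnt/vol /mnt/vol-snap
```

On a filesystem without reflink support, cp --reflink=always fails rather than silently copying; --reflink=auto falls back to a plain copy.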
  
== Batch ==
  
 
Out of band / batch deduplication is deduplication done outside of the write path.  We've sometimes called it [http://www.ssrc.ucsc.edu/pub/jones-ssrctr-11-03.html offline] deduplication, but that can confuse people: btrfs dedup involves the kernel and always happens on ''mounted'' filesystems. To use out-of-band deduplication, you run a tool which searches your filesystem for identical blocks, and then deduplicates them.
 
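One way to see what a batch run achieved is to compare shared-extent usage before and after; the mount point below is an example:

```shell
# Summarize total, exclusive and shared bytes under a path
# (run before and after a dedupe pass to see the difference).
btrfs filesystem du -s /mnt/data
```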
  
Deduplication in BTRFS is mainly supported by ioctl_fideduperange(2), a compare-and-share operation, although some other tools may use the clone-oriented APIs instead.

{| class=wikitable
|+Batch deduplicators for BTRFS
! Name !! Block-based !! Works on other FS !! Incremental !! Notes
|-
|[https://github.com/markfasheh/duperemove duperemove] || {{Yes}} || {{Yes}} || {{Yes}} ||
|-
|[https://github.com/g2p/bedup bedup] || {{No}} || {{No}} || {{Yes}} || Uses the clone ioctl rather than the extent-same ioctl, due to concerns about kernel crashes with the latter as of kernel 4.2. Appears to be unmaintained and is [https://github.com/g2p/bedup/issues/101 broken on 5.x kernels].
|-
|[https://github.com/Zygo/bees bees] || {{Yes}} || {{No}} || {{Yes}} || Runs as a daemon. Very light database, useful for large, colder storage such as backup servers. Uses SEARCH_V2 and LOGICAL_INO.
|-
|[https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg79853.html dduper] || {{Yes}} || {{No}} || {{Yes}} || Uses the built-in BTRFS csum-tree, so it is extremely fast and lightweight (13.8 seconds for identical 10GB files). Requires a btrfs-progs patch for csum access.
|}

=== Dedicated btrfs deduplicators ===

'''[https://github.com/markfasheh/duperemove duperemove]''' finds and lists duplicate extents, and optionally will submit the duplicates to the kernel for deduplication. From the [https://github.com/markfasheh/duperemove/blob/master/README.md README]:

<blockquote>
Duperemove is a simple tool for finding duplicated extents and submitting them for deduplication. When given a list of files it will hash their contents on a block by block basis and compare those hashes to each other, finding and categorizing blocks that match each other. When given the -d option, duperemove will submit those extents for deduplication using the Linux kernel extent-same ioctl.
</blockquote>
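A typical batch run over a directory tree might look like this (the path and hashfile location are examples; the flags are from the duperemove manual):

```shell
# -r: recurse into subdirectories
# -d: actually submit matching extents to the kernel (without it,
#     duperemove only reports what it would deduplicate)
# --hashfile: persist block hashes on disk so later runs are incremental
duperemove -dr --hashfile=/var/tmp/duperemove.hash /mnt/data
```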
 
  
'''[https://github.com/g2p/bedup bedup]''' implements incremental whole-file batch deduplication for Btrfs. It integrates deeply with btrfs so that scans are incremental and low-impact. It uses the btrfs clone ioctl rather than the extent-same ioctl, due to concerns about kernel crashes with the latter as of kernel 4.2. It appears to be unmaintained and is [https://github.com/g2p/bedup/issues/101 broken on 5.x kernels].
 
'''[https://github.com/wellbehavedsoftware/btrfs-dedupe btrfs-dedupe]''' is a Rust library which implements incremental whole-file batch deduplication for Btrfs, using the kernel ioctls to offload the actual deduplication to the kernel for safety. It maintains state for efficient regular operation: it scans file metadata on every run, hashes contents for files with new metadata, hashes file extent maps for files with new contents, and defragments and deduplicates files with matching content but non-matching extent maps.
 
'''[https://github.com/Zygo/bees bees]''' is a block-oriented userspace dedupe agent, designed to avoid scalability problems on large filesystems. It runs as a daemon and stores no information about filesystem structure, only some extent information and hashes in a simple mmap'ed database; everything else is retrieved on demand through the btrfs SEARCH_V2 and LOGICAL_INO ioctls. It is very useful on large storage that is not write-heavy, such as backup servers. See the bees GitHub page for more information.
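bees is pointed at a filesystem rather than at individual files. With the beesd wrapper shipped in the bees repository it is started per filesystem UUID (the device name below is an example; beesd reads per-filesystem configuration from /etc/bees/):

```shell
# Look up the filesystem UUID, then start the dedupe daemon for it.
UUID=$(sudo blkid -s UUID -o value /dev/sdb1)
sudo beesd "$UUID"
```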
 
'''[https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg79853.html dduper]''' is a block-level out-of-band deduplication tool. It relies on the built-in BTRFS csum-tree: instead of the CPU-intensive work of fetching each file's data blocks and computing their checksums, it reuses the checksums btrfs already maintains. It is quite fast; for example, dduper took 13.8 seconds to dedupe two 10GB files with the same data. It currently uses the ioctl_ficlonerange call for the deduplication step, with support for ioctl_fideduperange planned.
 
=== Duplicate file finders with btrfs support ===
  
 
While any duplicate file finder utility (e.g. [https://github.com/adrianlopezroche/fdupes fdupes], [http://www.pixelbeat.org/fslint/ fslint], etc.) can find files for deduplication by another tool (e.g. duperemove), the following duplicate file finders have built-in btrfs deduplication capabilities:
  
* '''[https://rmlint.readthedocs.io/en/latest/ rmlint]''' is a duplicate file finder with btrfs support. To find and reflink duplicate files:
  
 
  $ rmlint -T df --config=sh:handler=clone [paths...]  # finds duplicates under paths and creates a batch file 'rmlint.sh' for post-processing
                                                       # ...review contents of rmlint.sh, then:
  $ ./rmlint.sh                                        # clones/reflinks duplicates (if possible)

Note: if reflinking read-only snapshots, rmlint.sh must be run with the -r option and with root privileges, e.g.:

  $ sudo ./rmlint.sh -r
  
* '''[https://github.com/jbruchon/jdupes jdupes]''' is a fork of '''fdupes''' which includes support for BTRFS deduplication when it identifies duplicate files.
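For example, to scan a tree and hand the matches to the kernel for block-level dedupe (the path is an example; -B requires a jdupes build with dedupe support):

```shell
# -r: recurse into subdirectories
# -B: deduplicate matched files via the kernel ioctl instead of just
#     listing them
jdupes -r -B /mnt/data
```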
  
=== Other tools ===
  
 
Now that the ioctl has been lifted to the VFS layer, rather than being a BTRFS-specific function, deduplication functionality can be implemented in a filesystem-independent way.
 
 
As such, '''[http://man7.org/linux/man-pages/man8/xfs_io.8.html xfs_io]''' is able to perform deduplication on a BTRFS file system, and provides a simple way to invoke the deduplication function from the command line on any filesystem which supports the ioctl.
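For example, the following shares one identical range between two files (paths are examples; the dedupe command only succeeds if the ranges match and the filesystem supports the ioctl):

```shell
# Create two files with identical content...
dd if=/dev/urandom of=a.bin bs=1M count=1
cp a.bin b.bin
# ...then ask the kernel to compare the ranges and share the extents.
# Arguments: source file, source offset, destination offset, length;
# the command operates on the destination file given last.
xfs_io -c "dedupe a.bin 0 0 1m" b.bin
```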
  
== Inband ==
  
 
Inband / synchronous / inline deduplication is deduplication done in the write path, so it happens as data is written to the filesystem. This typically requires large amounts of RAM to store the lookup table of known block hashes. [https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg82003.html Patches] are currently being worked on and have been in development since at least 2014. See the [[User notes on dedupe]] page for more details.
 
  
 
[[Category: Features]]
 
