"if the first portion of data being compressed is not smaller than the original, the compression of the file is disabled -- unless the filesystem is mounted with -o compress-force. In that case compression will always be attempted on the file only to be later discarded."
"The utility chattr supports setting file attribute c that marks the inode to compress newly written data. Setting the compression property on a file using btrfs property set <file> compression <zlib|lzo|zstd> will force compression to be used on that file using the specified algorithm."
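The two per-file mechanisms the quoted documentation describes can be sketched as shell commands. A minimal sketch, assuming a btrfs filesystem mounted at /mnt/btrfs (a hypothetical path) with btrfs-progs and e2fsprogs installed; the guard skips cleanly when those are absent, and the choice of zstd is just an example:

```shell
# Sketch of the two per-file compression controls on btrfs.
# /mnt/btrfs is an assumed mount point, not from the original discussion.
f=/mnt/btrfs/example.dat
if command -v btrfs >/dev/null && command -v chattr >/dev/null \
   && touch "$f" 2>/dev/null; then
  chattr +c "$f"                            # set the 'c' attribute: compress newly written data
  lsattr "$f"                               # the 'c' flag appears in the attribute column
  btrfs property set "$f" compression zstd  # force zstd on this file
  btrfs property get "$f" compression       # reports the algorithm now set on the file
fi
status=ok
```

Note that chattr +c only marks the inode for compression with the mount's algorithm, while btrfs property set names an explicit algorithm for that file.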
Is chattr +c the per-file equivalent of mounting with -o compress, and is btrfs property set <file> compression the per-file equivalent of -o compress-force? Do chattr +c and btrfs property set affect each other? See also https://www.reddit.com/r/btrfs/comments/fhqz55/confused_by_compression_and_forced_compression_in/?ref=share&ref_source=link Jonathan (talk) 04:11, 15 January 2021 (UTC)
I realize there has been work as far back as 2012, and a FAQ entry in 2014, regarding lz4 and the commercial decompression-specific products. These may have had limited visibility until recent years, but I have also encountered machine-learning and blockchain use cases that could benefit from both lzma and lz4 support on a btrfs partition, and that currently incur user-space technical debt as a workaround. In my AWS volume testing, zstd:1 proved to be a fatal bottleneck, which implies that the raw SSD I/O on AWS is faster than zstd's real-time expectations.
A simple use case is Vulkan texture backing stores; several other small- and large-block performance evaluations, such as highly compressible graph data with lzma and metadata-heavy inodes with lz4, could address performance envelopes not reached by the current range of compressor options. Both lzma and lz4 are already block-device options in the vanilla kernel.