Talk:Project ideas

- What I'm missing is any kind of encryption support.
  It would be nice to include encryption support in btrfs (while the on-disk format isn't finalized yet),
  so one can easily handle (upcoming) multiple-device (raidB/B2) targets with one key.
  For now I would have to set up LUKS below btrfs, which is very suboptimal for multiple-device (raid) configurations.

° raidB/B2 is a synonym for btrfs's enhanced raid5/6 ;)

André 07:55, 18 July 2008 (UTC)


It would be nice to see somewhat intelligent or flexible handling of devices of different sizes with RAID 5/6.

For example, if I had two 250GB drives and two 500GB drives and made a RAID5 of them, the two 250GB devices could first be "combined" into one 500GB volume, and the RAID5 could then be built on top of the three 500GB volumes. The resulting RAID5 volume would have ~1000GB of usable space (with distributed parity).

In theory, I could build a similar setup by combining the two 250GB drives with md RAID0 first, but this seems suboptimal.
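
A minimal sketch of the capacity arithmetic in the example above (plain Python, not btrfs code; the grouping of the smaller drives is assumed to be chosen by hand):

 # Smaller drives are concatenated into "virtual" members of roughly equal
 # size, then RAID5 is built across the virtual members.
 def usable_raid5_capacity(drive_sizes_gb, groups):
     """groups: lists of indices into drive_sizes_gb; each group is
     concatenated into one virtual RAID5 member."""
     members = [sum(drive_sizes_gb[i] for i in g) for g in groups]
     # RAID5 usable space = (number of members - 1) * smallest member
     return (len(members) - 1) * min(members)
 
 # Two 250 GB drives combined, two 500 GB drives used as-is:
 print(usable_raid5_capacity([250, 250, 500, 500], [[0, 1], [2], [3]]))  # 1000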

--Cg 13:40, 1 February 2009 (UTC)

If we are going to have a filesystem for massive allocation, administration and maintenance will be an issue. I would like to see an active manager system that can be clustered across PCs, ideally splitting disk activity into two dynamically changing and balanced workloads: guest time and host time.

I envision host time being used for automated filesystem maintenance tasks that may use up to 90% of disk activity but leave a 10% window for incoming guest work. Guest time would be for live file reads/writes from the user/guest system(s). As the guest workload increases, the host workload would progressively wind down, dynamically dropping towards 0% and allowing guest time to rise to 90% or more on the fly. As the guest load drops, the host load can rise back to full.

Defragmentation and filesystem integrity checking should be left to this manager, relieving the admin of this burden while still allowing the drives to be utilised at full speed when required.

Perhaps the clustered manager could expose an aggression config option, allowing host disk access to be moderated from a lazy 10% to a rampant 90%, with aggression states or levels in between, if that were deemed more prudent for drive life expectancy. The admin could then alter this value during the day or night (should a change be required for some reason) and worry about little else.
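
A minimal sketch of the balancing rule described above (plain Python, not an existing btrfs interface; the aggression cap and the exact shape of the curve are assumptions):

 # Maintenance ("host") I/O budget shrinks as live ("guest") I/O rises,
 # capped by the admin-set aggression level.
 def host_budget(guest_load, aggression=0.9):
     """Arguments and result are fractions of total disk bandwidth.
     Maintenance work stops entirely once guest demand reaches the cap."""
     if guest_load >= aggression:
         return 0.0
     return min(aggression, 1.0 - guest_load)
 
 print(host_budget(0.05))  # idle system: maintenance may use up to 0.9
 print(host_budget(0.60))  # busy system: maintenance throttled to 0.4
 print(host_budget(0.95))  # saturated: maintenance backs off to 0.0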

Anyway, such a manager may also allow improvements in drive speed by separating out essential disk writes from less essential ones.

--Relic 17:37, 10 March 2009 (UTC)


RAID from Drives of Differing Capacities

It would be awesome to see intelligent handling of storage devices with various capacities, akin to Drobo's Beyondraid. The ability to expand a RAID with newer, larger and cheaper drives as needed would be very useful for power users, small-to-medium business users and (with a friendly enough GUI) average users.

The Beyondraid system uses the total redundant space as a pool in which it stores data from multiple virtual volumes. These volumes have a predetermined size, usually 16TB, well beyond the physically installed capacity; users expand the physical capacity as needed for their data. Redundancy can be dynamically switched between 1 and 2 drives. The system is plug-and-play, self-healing, data-aware, fully automated and near-infinitely expandable.

Drobo's Beyondraid, from [http://en.wikipedia.org/wiki/Non-standard_RAID_levels Wikipedia: non-standard RAID levels]:

           Drives
 | 100 GB | 200 GB | 400 GB | 500 GB |

                            ----------
                            |   x    | unusable space (100 GB)
                            ----------
                   -------------------
                   |   A1   |   A1   | RAID 1 set (2× 100 GB)
                   -------------------
                   -------------------
                   |   B1   |   B1   | RAID 1 set (2× 100 GB)
                   -------------------
          ----------------------------
          |   C1   |   C2   |   Cp   | RAID 5 array (3× 100 GB)
          ----------------------------
 -------------------------------------
 |   D1   |   D2   |   D3   |   Dp   | RAID 5 array (4× 100 GB)
 -------------------------------------

1200 GB total drive capacity
Beyondraid: 700 GB usable
RAID 5: 300 GB usable
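
A rough sketch of the capacity calculation behind the diagram above (plain Python, and a simplified model rather than Drobo's actual algorithm):

 # Repeatedly take the drives that still have free space, treat the next
 # horizontal stripe as one mirror/parity set, reserve one slice's worth of
 # redundancy across it, and count the rest as usable.
 def usable_capacity(drives_gb):
     usable, drives = 0, sorted(drives_gb)
     while True:
         live = [d for d in drives if d > 0]
         if len(live) < 2:
             break                              # leftover single-drive space is unusable
         slice_gb = min(live)                   # height of the next stripe
         usable += slice_gb * (len(live) - 1)   # one slice's worth holds redundancy
         drives = [max(0, d - slice_gb) for d in drives]
     return usable
 
 print(usable_capacity([100, 200, 400, 500]))   # 700, as in the diagram
 print(usable_capacity([100] * 4))              # plain 4-drive RAID5: 300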

Differentiators for btrfs:

Adding drives to a "dynamic raid (pool)" would be an mkfs option and could be integrated into a GUI application.

The "dynamic raid" system would be able to work with any block device, and would intelligently deal with partitions from the same drive by treating them as one large non-contiguous drive, so that redundancy information is not kept on the same drive. This would allow users to use any capacity they have, no matter where it is located.

What Beyondraid doesn't allow is for users to use the entire first drive if it is not paired with a second drive. Allowing this would offer an easy path into redundancy and expansion: new installs create a dynamic raid on one drive (unless more are available); fill up the first drive; add a second drive to get redundancy; add a third or more to get extra space. Perhaps the first drive of a "dynamic raid" could be created from an existing btrfs partition.
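
For comparison, a hedged sketch of the manual grow-into-redundancy path that already works with a reasonably recent kernel and btrfs-progs (device names and mount point are placeholders; this is the hand-driven equivalent, not the automated "dynamic raid" proposed above):

 import subprocess
 
 def run(*cmd):
     print("+", " ".join(cmd))
     subprocess.run(cmd, check=True)
 
 run("mkfs.btrfs", "/dev/sdb")                     # start with a single drive
 run("mount", "/dev/sdb", "/mnt/pool")
 # ... later, when a second drive is added:
 run("btrfs", "device", "add", "/dev/sdc", "/mnt/pool")
 run("btrfs", "balance", "start",                  # rewrite existing data
     "-dconvert=raid1", "-mconvert=raid1", "/mnt/pool")   # as mirrored raid1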

--Ddfitzy 11:33, 1 August 2010 (UTC)

Sharing

NFS sharing is OK, but there's no auto-discovery, unlike AoE, where any shared devices show up in Nautilus's Devices area.

Something that would be nice is if btrfs allowed read-only sharing/mounting of the device over AoE (ATA over Ethernet), or even sharing of subvolumes/snapshots.

The main reason I'm looking for this is to share local repo mirrors read-only with anyone on my local network. Also, since AoE is not TCP/IP, it tends to get through VPN restrictions, so I can install software while connected to a VPN that blocks local access.

--MikeyCarter 17:43, 8 August 2011 (UTC)

Advanced Replication

Right now I have the documents/code I work on copied to my home computer, laptop and server. The reason is that when my laptop is offline I still want to be able to work on my files. The other case is that if for some reason a computer has to be turned off, I can still access my files. The problem now is keeping the three locations in sync.

It would be nice if btrfs had a way to link two or more copies of a subvolume in remote locations. This could also serve as a backup if checksums are kept... It could also use them to repair problems.
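
For the one-way part of this, a rough sketch of what btrfs send/receive (available on recent kernels) can already do; true two-way linking as wished for here does not exist, and the host, paths and snapshot names below are placeholders:

 import subprocess
 
 SRC = "/home/mikey/docs"                     # source subvolume
 SNAP_OLD = "/home/mikey/.snap/docs-prev"     # snapshot already on both sides
 SNAP_NEW = "/home/mikey/.snap/docs-now"
 REMOTE = "server"                            # ssh host holding the replica
 
 # Take a read-only snapshot of the current state.
 subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SRC, SNAP_NEW], check=True)
 
 # Send only the difference against the previous snapshot and apply it remotely.
 send = subprocess.Popen(["btrfs", "send", "-p", SNAP_OLD, SNAP_NEW],
                         stdout=subprocess.PIPE)
 subprocess.run(["ssh", REMOTE, "btrfs", "receive", "/backup/docs"],
                stdin=send.stdout, check=True)
 send.stdout.close()
 send.wait()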

Which brings me to my second idea: instead of mirroring the whole filesystem (RAID-1), what about being able to mirror just a subvolume? That way, if a problem is detected, the mirror can be used. I have a lot of cases where I only want certain subvolumes mirrored and not the entire FS (e.g. if I make a backup copy of a DVD I don't need RAID-1 on it, since I have the original; the same goes for things downloaded off the web).


--MikeyCarter 15:17, 15 August 2011 (UTC)

Conversion from other Filesystems

Complex conversion from md. Right now many people have an ext4-on-mdraid1 disk setup. Using btrfs-convert to get to btrfs-on-mdraid1 is "okay" but not ideal. The ideal conversion would replace the md layer with btrfs's built-in raid. Other raid types could also benefit from the same treatment. Brendan M. Hide 18:14, 5 November 2012 (UTC)
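
A hedged sketch of the manual migration that such a conversion tool would automate: break the md mirror, build btrfs on the freed disk, copy the data, then fold the second disk in as native raid1. Device names, the md array name and mount points are placeholders, and redundancy is lost while this runs:

 import subprocess
 
 def run(*cmd):
     subprocess.run(cmd, check=True)
 
 run("mdadm", "/dev/md0", "--fail", "/dev/sdb1")       # drop one mirror member
 run("mdadm", "/dev/md0", "--remove", "/dev/sdb1")
 run("mkfs.btrfs", "/dev/sdb1")                        # fresh btrfs on the freed disk
 run("mount", "/dev/sdb1", "/mnt/new")
 run("rsync", "-aHAX", "/mnt/old/", "/mnt/new/")       # copy data off the ext4/md side
 run("umount", "/mnt/old")
 run("mdadm", "--stop", "/dev/md0")                    # retire the md array
 run("btrfs", "device", "add", "/dev/sda1", "/mnt/new")
 run("btrfs", "balance", "start",
     "-dconvert=raid1", "-mconvert=raid1", "/mnt/new") # native btrfs raid1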

Lots of I/O errors (mark drive as unreliable feature)

I think a large number of I/O errors should be explained, because there are cases that produce a lot of I/O errors without a large part of the drive being corrupted:

- intensive operations on the same damaged part of the disk


- long-term use of flash memory:

   When a file is written, the sectors allocated to the file as well as some sectors in the FS metadata are written.
   This means that for each random write access the metadata sectors are written again.
   Flash disks have a limited number of write operations. This leads to some metadata sectors becoming corrupted while most of the drive is OK.
   I often deal with this case on USB drives.


In the second case I think the drive should not be taken offline, because that would probably lead users to try to unmount it and then remount it.

But it might never be remounted: the FS type could no longer be recognized, or even worse (in the case where it is the beginning of the partition, in the boot sector, that is affected) the whole partition table could no longer be recognized.

It is important in this case to warn the user and let them copy their unsaved data.
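
A rough userspace sketch of the "mark drive as unreliable" idea, built on the per-device error counters that recent kernels already expose via btrfs device stats (the threshold is arbitrary and the output format is assumed, not guaranteed):

 import subprocess
 from collections import defaultdict
 
 THRESHOLD = 50   # arbitrary: total errors per device before we warn
 
 def unreliable_devices(mountpoint):
     out = subprocess.run(["btrfs", "device", "stats", mountpoint],
                          capture_output=True, text=True, check=True).stdout
     errors = defaultdict(int)
     for line in out.splitlines():
         # lines look like: "[/dev/sdb].read_io_errs   12"
         name, count = line.rsplit(None, 1)
         device = name.split("].")[0].lstrip("[")
         errors[device] += int(count)
     return [dev for dev, total in errors.items() if total >= THRESHOLD]
 
 # Devices that should trigger a warning rather than being dropped outright.
 print(unreliable_devices("/mnt"))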

I/O error recovery support

Some filesystems provide the possibility of trying to extract the data when the filesystem checker finds an allocated but damaged sector, in addition to marking the sector as damaged.

I would like to see this feature in btrfs.

I know it would probably mean changing certain things in Linux drive management:

 Example: testdisk has the possibility of making a dump of a damaged disk.
 When I use it on Linux I get the log filled with "I/O error on device", and it skips damaged sectors that could eventually be rescued.
 When I use testdisk on Windows it still skips some sectors, but some others get rescued.

Once those issues are fixed, btrfs could go further:

When a damaged sector is found while the volume is mounted, the kernel btrfs driver (while still online) could try to move the data present in that sector (if it is allocated) to a valid place on the disk, and mark the sector as bad.
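
A minimal sketch of that relocation flow; none of these helper functions exist in btrfs, they are hypothetical names used only to illustrate the intended behaviour on a read error:

 def handle_read_error(fs, sector):
     extent = fs.extent_owning(sector)        # hypothetical: reverse lookup
     if extent is None:
         fs.mark_sector_bad(sector)           # unallocated: just blacklist it
         return
     data = fs.try_rescue(extent, sector)     # mirror copy, parity, or retries
     if data is not None:
         new_sector = fs.allocate_away_from(sector)
         fs.write_and_remap(extent, new_sector, data)   # COW the extent elsewhere
     else:
         fs.flag_extent_damaged(extent)       # surface the error to userspace
     fs.mark_sector_bad(sector)               # never allocate this sector again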
