================
RAID 4/5/6 cache
================

A RAID 4/5/6 array can include an extra disk for data caching besides the
normal RAID disks. The role of the RAID disks isn't changed by the cache
disk; the cache disk caches data for the RAID disks. The cache can be in
write-through mode (supported since 4.4) or write-back mode (supported
since 4.10). mdadm (since 3.4) has an option '--write-journal' to create
an array with a cache; please refer to the mdadm manual for details, and
see the example after the mode-switching commands below. By default, when
the RAID array starts, the cache is in write-through mode. A user can
switch it to write-back mode by::

        echo "write-back" > /sys/block/md0/md/journal_mode

And switch it back to write-through mode by::

        echo "write-through" > /sys/block/md0/md/journal_mode

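As an illustration, the array itself can be created with a cache using the
'--write-journal' option mentioned above. This is a hedged sketch: the
device names are examples only, and the mdadm manual is authoritative::

        # Create a 3-disk RAID5 array whose writes are journaled to a
        # fast cache (journal) device. All device names are hypothetical.
        mdadm --create /dev/md0 --level=5 --raid-devices=3 \
                --write-journal /dev/nvme0n1 \
                /dev/sdb /dev/sdc /dev/sdd

The current cache mode can be read back from the same sysfs file used for
switching modes::

        cat /sys/block/md0/md/journal_mode
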
In both modes, all writes to the array hit the cache disk first. This means
the cache disk must be fast and able to sustain the array's entire write
load.

write-through mode
==================

This mode mainly fixes the 'write hole' issue. For a RAID 4/5/6 array, an
unclean shutdown can leave the data in some stripes in an inconsistent
state, e.g., data and parity don't match. The reason is that a stripe
write involves several RAID disks, and the writes might have hit only some
of those disks when the unclean shutdown occurred. We call an array
degraded if it has inconsistent data. MD tries to resync the array to
bring it back to the normal state, but until the resync completes, any
system crash can cause real data corruption in the RAID array. This
problem is called the 'write hole'.

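Whether MD is still resyncing after an unclean shutdown can be checked
from userspace; a minimal sketch, assuming the array is md0::

        # /proc/mdstat shows per-array state, including resync progress.
        cat /proc/mdstat
        # sync_action reports 'resync', 'recover', 'idle', etc.
        cat /sys/block/md0/md/sync_action
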
The write-through cache stores all data on the cache disk first. After the
data is safe on the cache disk, it is flushed onto the RAID disks. This
two-step write guarantees that MD can recover correct data after an
unclean shutdown even if the array is degraded. Thus the cache can close
the 'write hole'.

In write-through mode, MD reports IO completion to the upper layer
(usually a filesystem) only after the data is safe on the RAID disks, so a
cache disk failure doesn't cause data loss. Of course, a cache disk
failure means the array is exposed to the 'write hole' again.

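If the cache disk does fail, newer versions of mdadm can attach a
replacement journal device to the array. A hedged sketch; the device name
is an example, and the mdadm manual is authoritative::

        # Re-add a journal device after the original cache disk failed.
        mdadm --manage /dev/md0 --add-journal /dev/nvme1n1
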
In write-through mode, the cache disk isn't required to be big; a few
hundred megabytes are enough.

write-back mode
===============

write-back mode fixes the 'write hole' issue too, since all write data is
cached on the cache disk. But the main goal of the write-back cache is to
speed up writes. If a write covers all RAID disks of a stripe, we call it
a full-stripe write. For non-full-stripe writes, MD must read old data
before the new parity can be calculated; these synchronous reads hurt
write throughput. Writes which are sequential but not dispatched at the
same time suffer from this overhead too. The write-back cache aggregates
such data and flushes it to the RAID disks only once it has become a
full-stripe write. This completely avoids the overhead, so it's very
helpful for some workloads; a typical example is a workload that does
sequential writes followed by fsync.

In write-back mode, MD reports IO completion to the upper layer (usually a
filesystem) right after the data hits the cache disk. The data is flushed
to the RAID disks later, after specific conditions are met, so a cache
disk failure will cause data loss.

In write-back mode, MD also caches data in memory. The memory cache holds
the same data stored on the cache disk, so a power loss doesn't cause data
loss. The memory cache size has a performance impact for the array; a big
size is recommended. A user can configure the size by::

        echo "2048" > /sys/block/md0/md/stripe_cache_size

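The current value can be read back from the same file. As a rough sizing
guide (an approximation, not an exact accounting), each cache entry holds
one page per RAID disk, so the memory used is about stripe_cache_size *
PAGE_SIZE * number-of-disks::

        # Read the current stripe cache size (number of entries).
        cat /sys/block/md0/md/stripe_cache_size
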
A cache disk that is too small makes the write aggregation less efficient
in this mode, depending on the workload. A cache disk of at least several
gigabytes is recommended in write-back mode.

The implementation
==================

The write-through and write-back caches use the same disk format. The
cache disk is organized as a simple write log consisting of 'meta data'
and 'data' pairs. The meta data describes the data and also includes a
checksum and a sequence ID for recovery identification. Data can be IO
data or parity data, and is checksummed too; the data checksum is stored
in the meta data ahead of the data. The checksum is an optimization,
because it lets MD write meta data and data freely without worrying about
their order. The MD superblock has a field pointing to the valid meta
data at the log head.

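Schematically, the log described above looks like this (a simplified
illustration of the concepts, not the exact on-disk layout)::

        superblock --> valid meta data at log head

        +--------+------+------+--------+------+------+----
        | meta 0 | data | data | meta 1 | data | data | ...
        +--------+------+------+--------+------+------+----

Each meta block carries a sequence ID plus the checksums of the data (IO
data or parity) that follows it.
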
The log implementation is pretty straightforward. The difficult part is
the order in which MD writes data to the cache disk and the RAID disks.
Specifically, in write-through mode, MD calculates parity for the IO data,
writes both IO data and parity to the log, writes the data and parity to
the RAID disks after they have settled down in the log, and finally
completes the IO. Reads just go to the RAID disks as usual.

In write-back mode, MD writes IO data to the log and reports IO
completion. The data is also fully cached in memory at that time, which
means reads must query the memory cache. When certain conditions are met,
MD flushes the data to the RAID disks: it calculates parity for the data
and writes the parity into the log; after that is finished, it writes both
data and parity to the RAID disks and can then release the memory cache.
The flush conditions are: the stripe becomes a full-stripe write, free
cache disk space is low, or free in-kernel memory cache space is low.

After an unclean shutdown, MD does recovery. It reads all meta data and
data from the log; the sequence IDs and checksums help it detect corrupted
meta data and data. If MD finds a stripe with data and valid parities (1
parity for raid4/5 and 2 for raid6), it writes the data and parities to
the RAID disks. Incomplete parities are discarded, as is any partially
corrupted data; MD then loads the remaining valid data and writes it to
the RAID disks in the normal way.
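
This recovery runs when the array is assembled again; no manual log replay
is needed. A hedged sketch, with example device names::

        # Assemble the array including the cache (journal) device; MD
        # replays the log automatically as part of starting the array.
        mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/nvme0n1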