
TOMOYO Linux Cross Reference
Linux/Documentation/driver-api/md/md-cluster.rst

Differences between /Documentation/driver-api/md/md-cluster.rst (Version linux-6.12-rc7) and /Documentation/driver-api/md/md-cluster.rst (Version linux-5.8.18)


==========
MD Cluster
==========

The cluster MD is a shared-device RAID for a cluster; it supports
two levels: raid1 and raid10 (limited support).


1. On-disk format
=================

Separate write-intent-bitmaps are used for each cluster node.
The bitmaps record all writes that may have been started on that node,
and may not yet have finished. The on-disk layout is::

  0                    4k                     8k                    12k
  -------------------------------------------------------------------
  | idle                | md super            | bm super [0] + bits |
  | bm bits[0, contd]   | bm super[1] + bits  | bm bits[1, contd]   |
  | bm super[2] + bits  | bm bits [2, contd]  | bm super[3] + bits  |
  | bm bits [3, contd]  |                     |                     |

During "normal" functioning we assume the filesystem ensures that only
one node writes to any given block at a time, so a write request will:

 - set the appropriate bit (if not already set)
 - commit the write to all mirrors
 - schedule the bit to be cleared after a timeout.

Reads are just handled normally. It is up to the filesystem to ensure
one node doesn't read from a location where another node (or the same
node) is writing.


2. DLM Locks for management
===========================

There are three groups of locks for managing the device:

2.1 Bitmap lock resource (bm_lockres)
-------------------------------------

 The bm_lockres protects individual node bitmaps. They are named in
 the form bitmap000 for node 1, bitmap001 for node 2 and so on. When a
 node joins the cluster, it acquires the lock in PW mode and holds it
 for as long as the node is part of the cluster. The lock
 resource number is based on the slot number returned by the DLM
 subsystem. Since DLM starts node count from one and bitmap slots
 start from zero, one is subtracted from the DLM slot number to arrive
 at the bitmap slot number.

 The LVB of the bitmap lock for a particular node records the range
 of sectors that are being re-synced by that node.  No other
 node may write to those sectors.  This is used when a new node
 joins the cluster.

2.2 Message passing locks
-------------------------

 Each node has to communicate with other nodes when starting or ending
 resync, and for metadata superblock updates.  This communication is
 managed through three locks: "token", "message", and "ack", together
 with the Lock Value Block (LVB) of the "message" lock.

2.3 new-device management
-------------------------

 A single lock: "no-new-dev" is used to coordinate the addition of
 new devices - this must be synchronized across the array.
 Normally all nodes hold a concurrent-read lock on this resource.

3. Communication
================

 Messages can be broadcast to all nodes, and the sender waits for all
 other nodes to acknowledge the message before proceeding.  Only one
 message can be processed at a time.

3.1 Message Types
-----------------

 There are six types of messages which are passed:

3.1.1 METADATA_UPDATED
^^^^^^^^^^^^^^^^^^^^^^

   informs other nodes that the metadata has
   been updated, and the node must re-read the md superblock. This is
   performed synchronously. It is primarily used to signal device
   failure.

3.1.2 RESYNCING
^^^^^^^^^^^^^^^

   informs other nodes that a resync is initiated or
   ended so that each node may suspend or resume the region.  Each
   RESYNCING message identifies a range of the devices that the
   sending node is about to resync. This overrides any previous
   notification from that node: only one range can be resynced at a
   time per node.

3.1.3 NEWDISK
^^^^^^^^^^^^^

   informs other nodes that a device is being added to
   the array. The message contains an identifier for that device.  See
   below for further details.

3.1.4 REMOVE
^^^^^^^^^^^^

   A failed or spare device is being removed from the
   array. The slot number of the device is included in the message.

3.1.5 RE_ADD
^^^^^^^^^^^^

   A failed device is being re-activated - the assumption
   is that it has been determined to be working again.

3.1.6 BITMAP_NEEDS_SYNC
^^^^^^^^^^^^^^^^^^^^^^^

   If a node is stopped locally but the bitmap
   isn't clean, then another node is informed to take the ownership of
   resync.

3.2 Communication mechanism
---------------------------

 The DLM LVB is used to communicate between the nodes of the cluster. There
 are three lock resources used for the purpose:

3.2.1 token
^^^^^^^^^^^

   The resource which protects the entire communication
   system. The node holding the token resource is allowed to
   communicate.

3.2.2 message
^^^^^^^^^^^^^

   The lock resource which carries the data to communicate.

3.2.3 ack
^^^^^^^^^

   The resource whose acquisition indicates that the message has been
   acknowledged by all nodes in the cluster. The BAST of the resource
   is used to inform the receiving node that a node wants to
   communicate.

The algorithm is:

 1. receive status - all nodes have a concurrent-reader lock on "ack"::

        sender                         receiver                 receiver
        "ack":CR                       "ack":CR                 "ack":CR

 2. sender gets EX on "token",
    sender gets EX on "message"::

        sender                        receiver                 receiver
        "token":EX                    "ack":CR                 "ack":CR
        "message":EX
        "ack":CR

    The sender checks that it still needs to send a message. Messages
    received or other events that happened while waiting for the
    "token" may have made this message inappropriate or redundant.

 3. sender writes LVB

    sender down-converts "message" from EX to CW

    sender tries to get EX on "ack"

    ::

      [ wait until all receivers have *processed* the "message" ]

                                       [ triggered by bast of "ack" ]
                                       receiver gets CR on "message"
                                       receiver reads LVB
                                       receiver processes the message
                                       [ wait finish ]
                                       receiver releases "ack"
                                       receiver tries to get PR on "message"

     sender                         receiver                  receiver
     "token":EX                     "message":CR              "message":CR
     "message":CW
     "ack":EX

 4. triggered by grant of EX on "ack" (indicating all receivers
    have processed the message)

    sender down-converts "ack" from EX to CR

    sender releases "message"

    sender releases "token"

    ::

                                 receiver up-converts to PR on "message"
                                 receiver gets CR on "ack"
                                 receiver releases "message"

     sender                      receiver                   receiver
     "ack":CR                    "ack":CR                   "ack":CR


4. Handling Failures
====================

4.1 Node Failure
----------------

 When a node fails, the DLM informs the cluster with the slot
 number. The node starts a cluster recovery thread. The cluster
 recovery thread:

        - acquires the bitmap<number> lock of the failed node
        - opens the bitmap
        - reads the bitmap of the failed node
        - copies the set bits to the local node
        - cleans the bitmap of the failed node
        - releases the bitmap<number> lock of the failed node
        - initiates resync of the bitmap on the current node;
          md_check_recovery is invoked within recover_bitmaps, then
          md_check_recovery -> metadata_update_start/finish locks the
          communication via lock_comm, which means that while one node
          is resyncing it blocks all other nodes from writing anywhere
          on the array.

 The resync process is the regular md resync. However, in a clustered
 environment when a resync is performed, it needs to tell other nodes
 of the areas which are suspended. Before a resync starts, the node
 sends out RESYNCING with the (lo,hi) range of the area which needs to
 be suspended. Each node maintains a suspend_list, which contains the
 list of ranges which are currently suspended. On receiving RESYNCING,
 the node adds the range to the suspend_list. Similarly, when the node
 performing resync finishes, it sends RESYNCING with an empty range to
 other nodes and other nodes remove the corresponding entry from the
 suspend_list.

 A helper function, ->area_resyncing(), can be used to check whether a
 particular I/O range should be suspended or not.

4.2 Device Failure
------------------

 Device failures are handled and communicated with the metadata update
 routine.  When a node detects a device failure it does not allow
 any further writes to that device until the failure has been
 acknowledged by all other nodes.

5. Adding a new Device
======================

 For adding a new device, it is necessary that all nodes "see" the new
 device to be added. For this, the following algorithm is used:

   1.  Node 1 issues mdadm --manage /dev/mdX --add /dev/sdYY which issues
       ioctl(ADD_NEW_DISK with disc.state set to MD_DISK_CLUSTER_ADD)
   2.  Node 1 sends a NEWDISK message with uuid and slot number
   3.  Other nodes issue kobject_uevent_env with uuid and slot number
       (Steps 4,5 could be a udev rule)
   4.  In userspace, the node searches for the disk, perhaps
       using blkid -t SUB_UUID=""
   5.  Other nodes issue either of the following depending on whether
       the disk was found:
       ioctl(ADD_NEW_DISK with disc.state set to MD_DISK_CANDIDATE and
       disc.number set to slot number)
       ioctl(CLUSTERED_DISK_NACK)
   6.  Other nodes drop their lock on "no-new-dev" (CR) if the device is found
   7.  Node 1 attempts an EX lock on "no-new-dev"
   8.  If node 1 gets the lock, it sends METADATA_UPDATED after
       unmarking the disk as SpareLocal
   9.  If node 1 fails to get the "no-new-dev" lock, it fails the
       operation and sends METADATA_UPDATED.
   10. Other nodes learn whether the disk was added or not
       from the following METADATA_UPDATED.

6. Module interface
===================

 There are 17 call-backs which the md core can make to the cluster
 module.  Understanding these can give a good overview of the whole
 process.

6.1 join(nodes) and leave()
---------------------------

 These are called when an array is started with a clustered bitmap,
 and when the array is stopped.  join() ensures the cluster is
 available and initializes the various resources.
 Only the first 'nodes' nodes in the cluster can use the array.

6.2 slot_number()
-----------------

 Reports the slot number advised by the cluster infrastructure.
 Range is from 0 to nodes-1.

6.3 resync_info_update()
------------------------

 This updates the resync range that is stored in the bitmap lock.
 The starting point is updated as the resync progresses.  The
 end point is always the end of the array.
 It does *not* send a RESYNCING message.

6.4 resync_start(), resync_finish()
-----------------------------------

 These are called when resync/recovery/reshape starts or stops.
 They update the resyncing range in the bitmap lock and also
 send a RESYNCING message.  resync_start reports the whole
 array as resyncing, resync_finish reports none of it.

 resync_finish() also sends a BITMAP_NEEDS_SYNC message which
 allows some other node to take over.

6.5 metadata_update_start(), metadata_update_finish(), metadata_update_cancel()
-------------------------------------------------------------------------------

 metadata_update_start is used to get exclusive access to
 the metadata.  If a change is still needed once that access is
 gained, metadata_update_finish() will send a METADATA_UPDATE
 message to all other nodes, otherwise metadata_update_cancel()
 can be used to release the lock.

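 The lock-then-broadcast pattern can be modelled in user space (all
 names here are invented for illustration; the real implementation
 takes a DLM lock rather than a flag):

```c
#include <stdbool.h>

/* Illustrative model of the metadata-update protocol. */
enum md_msg { MSG_NONE, MSG_METADATA_UPDATE };

struct md_lock {
        bool held;              /* do we hold exclusive access?    */
        enum md_msg last_sent;  /* last message broadcast to peers */
};

/* Take exclusive access to the metadata. */
static int metadata_update_start(struct md_lock *l)
{
        if (l->held)
                return -1;      /* someone else holds it */
        l->held = true;
        return 0;
}

/* A change was made: tell the other nodes, then drop the lock. */
static void metadata_update_finish(struct md_lock *l)
{
        l->last_sent = MSG_METADATA_UPDATE;
        l->held = false;
}

/* No change was needed after all: drop the lock silently. */
static void metadata_update_cancel(struct md_lock *l)
{
        l->held = false;
}
```
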
6.6 area_resyncing()
--------------------

 This combines two elements of functionality.

 Firstly, it will check if any node is currently resyncing
 anything in a given range of sectors.  If any resync is found,
 then the caller will avoid writing or read-balancing in that
 range.

 Secondly, while node recovery is happening it reports that
 all areas are resyncing for READ requests.  This avoids races
 between the cluster-filesystem and the cluster-RAID handling
 a node failure.

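 The two checks combine into a simple predicate.  A user-space
 sketch (the function name and parameter list are illustrative, not
 the kernel's signature):

```c
#include <stdbool.h>
#include <stdint.h>

#define OP_READ  0
#define OP_WRITE 1

/* Illustrative: return true if I/O to [lo, hi) should be held off.
 * - during node recovery, every READ is treated as resyncing;
 * - otherwise, check for overlap with the advertised resync range. */
static bool area_resyncing_sketch(uint64_t resync_lo, uint64_t resync_hi,
                                  bool node_recovering, int op,
                                  uint64_t lo, uint64_t hi)
{
        if (node_recovering && op == OP_READ)
                return true;            /* all areas resync for READs */
        return lo < resync_hi && hi > resync_lo;  /* range overlap */
}
```
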
6.7 add_new_disk_start(), add_new_disk_finish(), new_disk_ack()
---------------------------------------------------------------

 These are used to manage the new-disk protocol described above.
 When a new device is added, add_new_disk_start() is called before
 it is bound to the array and, if that succeeds, add_new_disk_finish()
 is called once the device is fully added.

 When a device is added in acknowledgement of a previous
 request, or when the device is declared "unavailable",
 new_disk_ack() is called.

6.8 remove_disk()
-----------------

 This is called when a spare or failed device is removed from
 the array.  It causes a REMOVE message to be sent to the other
 nodes.

6.9 gather_bitmaps()
--------------------

 This sends a RE_ADD message to all other nodes and then
 gathers bitmap information from all bitmaps.  This combined
 bitmap is then used to recover the re-added device.

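 Combining the per-node bitmaps amounts to a bitwise OR: a region
 needs recovery if *any* node marked it dirty.  A sketch (the helper
 name and word-array representation are invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative: OR all per-node bitmaps into one combined bitmap
 * that drives recovery of the re-added device. */
static void combine_bitmaps(uint64_t *dst, const uint64_t *const *src,
                            size_t nnodes, size_t nwords)
{
        for (size_t w = 0; w < nwords; w++) {
                dst[w] = 0;
                for (size_t n = 0; n < nnodes; n++)
                        dst[w] |= src[n][w];  /* dirty anywhere => dirty */
        }
}
```
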
6.10 lock_all_bitmaps() and unlock_all_bitmaps()
------------------------------------------------

 These are called when the bitmap is changed to none.  If a node
 plans to clear the bitmap of a clustered raid array, it needs to
 make sure no other node is using the array; this is achieved by
 taking all of the bitmap locks within the cluster.
 unlock_all_bitmaps() releases those locks again afterwards.

7. Unsupported features
=======================

There are some things which are not supported by cluster MD yet.

- change array_sectors.
                                                      