.. SPDX-License-Identifier: GPL-2.0
.. _xfs_online_fsck_design:

..
        Mapping of heading styles within this document:
        Heading 1 uses "====" above and below
        Heading 2 uses "===="
        Heading 3 uses "----"
        Heading 4 uses "````"
        Heading 5 uses "^^^^"
        Heading 6 uses "~~~~"
        Heading 7 uses "...."

        Sections are manually numbered because apparently that's what everyone
        does in the kernel.

======================
XFS Online Fsck Design
======================

This document captures the design of the online filesystem check feature for
XFS.
The purpose of this document is threefold:

- To help kernel distributors understand exactly what the XFS online fsck
  feature is, and issues about which they should be aware.

- To help people reading the code to familiarize themselves with the relevant
  concepts and design points before they start digging into the code.

- To help developers maintaining the system by capturing the reasons
  supporting higher level decision making.

As the online fsck code is merged, the links in this document to topic branches
will be replaced with links to code.

This document is licensed under the terms of the GNU Public License, v2.
The primary author is Darrick J. Wong.

This design document is split into seven parts.
Part 1 defines what fsck tools are and the motivations for writing a new one.
Parts 2 and 3 present a high level overview of how the online fsck process
works and how it is tested to ensure correct functionality.
Part 4 discusses the user interface and the intended usage modes of the new
program.
Parts 5 and 6 show off the high level components and how they fit together, and
then present case studies of how each repair function actually works.
Part 7 sums up what has been discussed so far and speculates about what else
might be built atop online fsck.

.. contents:: Table of Contents
   :local:

1. What is a Filesystem Check?
==============================

A Unix filesystem has four main responsibilities:

- Provide a hierarchy of names through which application programs can associate
  arbitrary blobs of data for any length of time,

- Virtualize physical storage media across those names, and

- Retrieve the named data blobs at any time.

- Examine resource usage.

Metadata directly supporting these functions (e.g. files, directories, space
mappings) are sometimes called primary metadata.
Secondary metadata (e.g. reverse mapping and directory parent pointers) support
operations internal to the filesystem, such as internal consistency checking
and reorganization.
Summary metadata, as the name implies, condense information contained in
primary metadata for performance reasons.

The filesystem check (fsck) tool examines all the metadata in a filesystem
to look for errors.
In addition to looking for obvious metadata corruptions, fsck also
cross-references different types of metadata records with each other to look
for inconsistencies.
People do not like losing data, so most fsck tools also have some ability
to correct any problems found.
As a word of caution -- the primary goal of most Linux fsck tools is to restore
the filesystem metadata to a consistent state, not to maximize the data
recovered.
That precedent will not be challenged here.

Filesystems of the 20th century generally lacked any redundancy in the ondisk
format, which means that fsck can only respond to errors by erasing files until
errors are no longer detected.
More recent filesystem designs contain enough redundancy in their metadata that
it is now possible to regenerate data structures when non-catastrophic errors
occur; this capability aids both strategies.

+--------------------------------------------------------------------------+
| **Note**:                                                                |
+--------------------------------------------------------------------------+
| System administrators avoid data loss by increasing the number of       |
| separate storage systems through the creation of backups; and they avoid|
| downtime by increasing the redundancy of each storage system through the|
| creation of RAID arrays.                                                 |
| fsck tools address only the first problem.                               |
+--------------------------------------------------------------------------+

TLDR; Show Me the Code!
-----------------------

Code is posted to the kernel.org git trees as follows:
`kernel changes <https://git.kernel.org/pub/sc>`_,
`userspace changes <https://git.kernel.org/pub>`_, and
`QA test changes <https://git.kernel.org/pub/s>`_.
Each kernel patchset adding an online repair feature will use the same branch
name across the kernel, xfsprogs, and fstests git repos.

Existing Tools
--------------

The online fsck tool described here will be the third tool in the history of
XFS (on Linux) to check and repair filesystems.
Two programs precede it:

The first program, ``xfs_check``, was created as part of the XFS debugger
(``xfs_db``) and can only be used with unmounted filesystems.
It walks all metadata in the filesystem looking for inconsistencies in the
metadata, though it lacks any ability to repair what it finds.
Due to its high memory requirements and inability to repair things, this
program is now deprecated and will not be discussed further.

The second program, ``xfs_repair``, was created to be faster and more robust
than the first program.
Like its predecessor, it can only be used with unmounted filesystems.
It uses extent-based in-memory data structures to reduce memory consumption,
and tries to schedule readahead IO appropriately to reduce I/O waiting time
while it scans the metadata of the entire filesystem.
The most important feature of this tool is its ability to respond to
inconsistencies in file metadata and directory tree by erasing things as needed
to eliminate problems.
Space usage metadata are rebuilt from the observed file metadata.

Problem Statement
-----------------

The current XFS tools leave several problems unsolved:

1. **User programs** suddenly **lose access** to the filesystem when unexpected
   shutdowns occur as a result of silent corruptions in the metadata.
   These occur **unpredictably** and often without warning.

2. **Users** experience a **total loss of service** during the recovery period
   after an **unexpected shutdown** occurs.

3. **Users** experience a **total loss of service** if the filesystem is taken
   offline to **look for problems** proactively.

4. **Data owners** cannot **check the integrity** of their stored data without
   reading all of it.
   This may expose them to substantial billing costs when a linear media scan
   performed by the storage system administrator might suffice.

5. **System administrators** cannot **schedule** a maintenance window to deal
   with corruptions if they **lack the means** to assess filesystem health
   while the filesystem is online.

6. **Fleet monitoring tools** cannot **automate periodic checks** of filesystem
   health when doing so requires **manual intervention** and downtime.

7. **Users** can be tricked into **doing things they do not desire** when
   malicious actors **exploit quirks of Unicode** to place misleading names
   in directories.

Given this definition of the problems to be solved and the actors who would
benefit, the proposed solution is a third fsck tool that acts on a running
filesystem.

This new third program has three components: an in-kernel facility to check
metadata, an in-kernel facility to repair metadata, and a userspace driver
program to drive fsck activity on a live filesystem.
``xfs_scrub`` is the name of the driver program.
The rest of this document presents the goals and use cases of the new fsck
tool, describes its major design points in connection to those goals, and
discusses the similarities and differences with existing tools.

+--------------------------------------------------------------------------+
| **Note**:                                                                |
+--------------------------------------------------------------------------+
| Throughout this document, the existing offline fsck tool can also be    |
| referred to by its current name "``xfs_repair``".                        |
| The userspace driver program for the new online fsck tool can be        |
| referred to as "``xfs_scrub``".                                          |
| The kernel portion of online fsck that validates metadata is called     |
| "online scrub", and portion of the kernel that fixes metadata is called |
| "online repair".                                                         |
+--------------------------------------------------------------------------+

The naming hierarchy is broken up into objects known as directories and files
and the physical space is split into pieces known as allocation groups.
Sharding enables better performance on highly parallel systems and helps to
contain the damage when corruptions occur.
The division of the filesystem into principal objects (allocation groups and
inodes) means that there are ample opportunities to perform targeted checks and
repairs on a subset of the filesystem.

While this is going on, other parts continue processing IO requests.
Even if a piece of filesystem metadata can only be regenerated by scanning the
entire system, the scan can still be done in the background while other file
operations continue.

In summary, online fsck takes advantage of resource sharding and redundant
metadata to enable targeted checking and repair operations while the system
is running.
This capability will be coupled to automatic system management so that
autonomous self-healing of XFS maximizes service availability.

2. Theory of Operation
======================

Because it is necessary for online fsck to lock and scan live metadata objects,
online fsck consists of three separate code components.
The first is the userspace driver program ``xfs_scrub``, which is responsible
for identifying individual metadata items, scheduling work items for them,
reacting to the outcomes appropriately, and reporting results to the system
administrator.
The second and third are in the kernel, which implements functions to check
and repair each type of online fsck work item.

+--------------------------------------------------------------------------+
| **Note**:                                                                |
+--------------------------------------------------------------------------+
| For brevity, this document shortens the phrase "online fsck work        |
| item" to "scrub item".                                                   |
+--------------------------------------------------------------------------+

Scrub item types are delineated in a manner consistent with the Unix design
philosophy, which is to say that each item should handle one aspect of a
metadata structure, and handle it well.

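Userspace communicates each scrub item to the kernel through an ioctl.
The snippet below is a rough sketch of how a driver program might submit one
scrub item and then ask for a repair; the structure layout is abridged from
the kernel UAPI headers, and error handling is elided.

.. code-block:: c

        #include <string.h>
        #include <sys/ioctl.h>
        #include <linux/types.h>
        #include <xfs/xfs.h>    /* XFS_IOC_SCRUB_METADATA and friends */

        /* Check one scrub item; ask for a repair if corruption is found. */
        static int scrub_one(int fd, __u32 type, __u32 agno)
        {
                struct xfs_scrub_metadata sm;

                memset(&sm, 0, sizeof(sm));
                sm.sm_type = type;      /* e.g. XFS_SCRUB_TYPE_BNOBT */
                sm.sm_agno = agno;

                if (ioctl(fd, XFS_IOC_SCRUB_METADATA, &sm))
                        return -1;

                if (sm.sm_flags & XFS_SCRUB_OFLAG_CORRUPT) {
                        /* Resubmit the same item with the repair flag set. */
                        sm.sm_flags = XFS_SCRUB_IFLAG_REPAIR;
                        return ioctl(fd, XFS_IOC_SCRUB_METADATA, &sm);
                }
                return 0;
        }
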
Scope
-----

In principle, online fsck should be able to check and to repair everything that
the offline fsck program can handle.
However, online fsck cannot be running 100% of the time, which means that
latent errors may creep in after a scrub completes.
If these errors cause the next mount to fail, offline fsck is the only
solution.
This limitation means that maintenance of the offline fsck tool will continue.
A second limitation of online fsck is that it must follow the same resource
sharing and lock acquisition rules as the regular filesystem.
This means that scrub cannot take *any* shortcuts to save time, because doing
so could lead to concurrency problems.
In other words, online fsck is not a complete replacement for offline fsck, and
a complete run of online fsck may take longer than an offline fsck run.
However, both of these limitations are acceptable tradeoffs to satisfy the
different motivations of online fsck, which are to **minimize system downtime**
and to **increase predictability of operation**.

.. _scrubphases:

Phases of Work
--------------

The userspace driver program ``xfs_scrub`` splits the work of checking and
repairing an entire filesystem into seven phases.
Each phase concentrates on checking specific types of scrub items and depends
on the success of all previous phases.
The seven phases are as follows:

1. Collect geometry information about the mounted filesystem and computer,
   discover the online fsck capabilities of the kernel, and open the
   underlying storage devices.

2. Check allocation group metadata, all realtime volume metadata, and all quota
   files.
   Each metadata structure is scheduled as a separate scrub item.
   If corruption is found in the inode header or inode btree and ``xfs_scrub``
   is permitted to perform repairs, then those scrub items are repaired to
   prepare for phase 3.
   Repairs are implemented by using the information in the scrub item to
   resubmit the kernel scrub call with the repair flag enabled; this is
   discussed in the next section.
   Optimizations and all other repairs are deferred to phase 4.

3. Check all metadata of every file in the filesystem.
   Each metadata structure is also scheduled as a separate scrub item.
   If repairs are needed and ``xfs_scrub`` is permitted to perform repairs,
   and there were no problems detected during phase 2, then those scrub items
   are repaired immediately.
   Optimizations, deferred repairs, and unsuccessful repairs are deferred to
   phase 4.

4. All remaining repairs and scheduled optimizations are performed during this
   phase, if the caller permits them.
   Before starting repairs, the summary counters are checked and any necessary
   repairs are performed so that subsequent repairs will not fail the resource
   reservation step due to wildly incorrect summary counters.
   Unsuccessful repairs are requeued as long as forward progress on repairs is
   made somewhere in the filesystem.
   Free space in the filesystem is trimmed at the end of phase 4 if the
   filesystem is clean.

5. By the start of this phase, all primary and secondary filesystem metadata
   must be correct.
   Summary counters such as the free space counts and quota resource counts
   are checked and corrected.
   Directory entry names and extended attribute names are checked for
   suspicious entries such as control characters or confusing Unicode
   sequences appearing in names.

6. If the caller asks for a media scan, read all allocated and written data
   file extents in the filesystem.
   The ability to use hardware-assisted data file integrity checking is new
   to online fsck; neither of the previous tools have this capability.
   If media errors occur, they will be mapped to the owning files and reported.

7. Re-check the summary counters and present the caller with a summary of
   space usage and file counts.

This allocation of responsibilities will be revisited later in this document.

Steps for Each Scrub Item
-------------------------

The kernel scrub code uses a three-step strategy for checking and repairing
the one aspect of a metadata object represented by a scrub item (a sketch of
the flow in code follows the list):

1. The scrub item of interest is checked for corruptions; opportunities for
   optimization; and for values that are directly controlled by the system
   administrator but look suspicious.
   If the item is not corrupt or does not need optimization, resources are
   released and the positive scan results are returned to userspace.
   If the item is corrupt or could be optimized but the caller does not permit
   this, resources are released and the negative scan results are returned to
   userspace.
   Otherwise, the kernel moves on to the second step.

2. The repair function is called to rebuild the data structure.
   Repair functions generally choose to rebuild a structure from other metadata
   rather than try to salvage the existing structure.
   If the repair fails, the scan results from the first step are returned to
   userspace.
   Otherwise, the kernel moves on to the third step.

3. In the third step, the kernel runs the same checks over the new metadata
   item to assess the efficacy of the repairs.
   The results of the reassessment are returned to userspace.

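The control flow for a single scrub item might be sketched as follows.
All of the names here are hypothetical stand-ins for the per-type check and
repair functions; the kernel's real dispatch code lives in ``fs/xfs/scrub/``.

.. code-block:: c

        #include <stdbool.h>

        struct scrub_ctx {
                bool corrupt;   /* item failed the check */
                bool preen;     /* item could be optimized */
        };

        /* Hypothetical stand-ins for the per-type scrub functions. */
        static int item_check(struct scrub_ctx *sc) { return 0; }
        static int item_repair(struct scrub_ctx *sc) { return 0; }

        static int scrub_one_item(struct scrub_ctx *sc, bool repair_allowed)
        {
                int error;

                /* Step 1: look for corruption or optimization chances. */
                error = item_check(sc);
                if (error)
                        return error;
                if (!sc->corrupt && !sc->preen)
                        return 0;       /* clean; positive scan results */
                if (!repair_allowed)
                        return 0;       /* negative scan results */

                /* Step 2: rebuild the structure from other metadata. */
                error = item_repair(sc);
                if (error)
                        return error;   /* report the step 1 results */

                /* Step 3: re-run the check to assess the repair. */
                return item_check(sc);
        }
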
Classification of Metadata
--------------------------

Each type of metadata object (and therefore each type of scrub item) is
classified as follows:

Primary Metadata
````````````````

Metadata structures in this category should be most familiar to filesystem
users either because they are directly created by the user or they index
objects created by the user.
Most filesystem objects fall into this class:

- Free space and reference count information

- Inode records and indexes

- Storage mapping information for file data

- Directories

- Extended attributes

- Symbolic links

- Quota limits

Scrub obeys the same rules as regular filesystem accesses for resource and lock
acquisition.

Primary metadata objects are the simplest for scrub to process.
The principal filesystem object (either an allocation group or an inode) that
owns the item being scrubbed is locked to guard against concurrent updates.
The check function examines every record associated with the type for obvious
errors and cross-references healthy records against other metadata to look for
inconsistencies.
Repairs for this class of scrub item are simple, since the repair function
starts by holding all the resources acquired in the previous step.
The repair function scans available metadata as needed to record all the
observations needed to complete the structure.
Next, it stages the observations in a new ondisk structure and commits it
atomically to complete the repair.
Finally, the storage from the old data structure is carefully reaped.

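In outline, a primary metadata repair function follows the shape below.
Every identifier here is illustrative rather than a real kernel API, and the
XFS transaction machinery is elided.

.. code-block:: c

        /* Skeleton of a primary-metadata repair, per the steps above. */
        struct repair_ctx;      /* locks and staging area; details elided */

        /* Hypothetical helpers naming the phases of the repair. */
        static int gather_observations(struct repair_ctx *rc) { return 0; }
        static int build_new_structure(struct repair_ctx *rc) { return 0; }
        static int commit_new_structure(struct repair_ctx *rc) { return 0; }
        static int reap_old_blocks(struct repair_ctx *rc) { return 0; }

        static int repair_primary_object(struct repair_ctx *rc)
        {
                int error;

                /* Resources remain locked from the check step. */

                /* Scan other metadata to collect the records we need. */
                error = gather_observations(rc);
                if (error)
                        return error;

                /* Write the observations into a new ondisk structure. */
                error = build_new_structure(rc);
                if (error)
                        return error;

                /* Switch over to the new structure atomically. */
                error = commit_new_structure(rc);
                if (error)
                        return error;

                /* Free the blocks of the old structure carefully. */
                return reap_old_blocks(rc);
        }
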
Because ``xfs_scrub`` locks a primary object for the duration of the repair,
this is effectively an offline repair operation performed on a subset of the
filesystem.
This minimizes the complexity of the repair code because it is not necessary to
handle concurrent updates from other threads, nor is it necessary to access
any other part of the filesystem.
As a result, indexed structures can be rebuilt very quickly, and programs
trying to access the damaged structure will be blocked until repairs complete.
The only infrastructure needed by the repair code are the staging area for
observations and a means to write new structures to disk.
Despite these limitations, the advantage that online repair holds is clear:
targeted work on individual shards of the filesystem avoids total loss of
service.

This mechanism is described in section 2.1 ("Off-Line Algorithm") of
V. Srinivasan and M. J. Carey, `"Performance of On-Line Index Construction
Algorithms" <https://minds.wisconsin.edu/bitst>`_, *Extending Database
Technology*, pp. 293-309, 1992.

Most primary metadata repair functions stage their intermediate results in an
in-memory array prior to formatting the new ondisk structure, which is very
similar to the list-based algorithm discussed in section 2.3 ("List-Based
Algorithms") of Srinivasan.
However, any data structure builder that maintains a resource lock for the
duration of the repair is *always* an offline algorithm.

.. _secondary_metadata:

Secondary Metadata
``````````````````

Metadata structures in this category reflect records found in primary metadata,
but are only needed for online fsck or for reorganization of the filesystem.

Secondary metadata include:

- Reverse mapping information

- Directory parent pointers

This class of metadata is difficult for scrub to process because scrub attaches
to the secondary object but needs to check primary metadata, which runs counter
to the usual order of resource acquisition.
Frequently, this means that full filesystem scans are necessary to rebuild the
metadata.
Check functions can be limited in scope to reduce runtime.
Repairs, however, require a full scan of primary metadata, which can take a
long time to complete.
Under these conditions, ``xfs_scrub`` cannot lock resources for the entire
duration of the repair.

Instead, repair functions set up an in-memory staging structure to store
observations.
Depending on the requirements of the specific repair function, the staging
index will either have the same format as the ondisk structure or a design
specific to that repair function.
The next step is to release all locks and start the filesystem scan.
When the repair scanner needs to record an observation, the staging data are
locked long enough to apply the update.
While the filesystem scan is in progress, the repair function hooks the
filesystem so that it can apply pending filesystem updates to the staging
information.
Once the scan is done, the owning object is re-locked, the live data is used to
write a new ondisk structure, and the repairs are committed atomically.
The hooks are disabled and the staging area is freed.
Finally, the storage from the old data structure is carefully reaped.

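Put together, a secondary metadata repair might be structured like the sketch
below.
Every helper named here is hypothetical pseudo-C for the sequence just
described, not a real kernel interface.

.. code-block:: c

        /* Sketch of a secondary-metadata repair using a live scan. */
        struct repair_ctx;      /* staging index and hooks; details elided */

        /* Hypothetical helpers; trivial stubs stand in for real work. */
        static void setup_staging_index(struct repair_ctx *rc) { }
        static void register_fs_hooks(struct repair_ctx *rc) { }
        static void unregister_fs_hooks(struct repair_ctx *rc) { }
        static void unlock_all_resources(struct repair_ctx *rc) { }
        static void relock_owning_object(struct repair_ctx *rc) { }
        static void free_staging_index(struct repair_ctx *rc) { }
        static int scan_primary_metadata(struct repair_ctx *rc) { return 0; }
        static int write_new_ondisk_structure(struct repair_ctx *rc) { return 0; }
        static int reap_old_blocks(struct repair_ctx *rc) { return 0; }

        static int repair_secondary_index(struct repair_ctx *rc)
        {
                int error;

                setup_staging_index(rc);        /* in-memory shadow */
                register_fs_hooks(rc);          /* capture live updates */
                unlock_all_resources(rc);

                /*
                 * Walk the primary metadata.  Each observation locks the
                 * staging data only long enough to apply one update; hook
                 * events race with the scan and are merged into staging.
                 */
                error = scan_primary_metadata(rc);
                if (error)
                        goto out;

                relock_owning_object(rc);
                /* Commit the staged records to a new ondisk structure. */
                error = write_new_ondisk_structure(rc);
        out:
                unregister_fs_hooks(rc);
                free_staging_index(rc);
                if (!error)
                        error = reap_old_blocks(rc);
                return error;
        }
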
Introducing concurrency helps online repair avoid various locking problems, but
comes at a high cost to code complexity.
Live filesystem code has to be hooked so that the repair function can observe
updates in progress.
The staging area has to become a fully functional parallel structure so that
updates can be merged from the hooks.
Finally, the hook, the filesystem scan, and the update code must be
sufficiently well integrated that a hook event can decide if a given update
should be applied to the staging structure.

In theory, the scrub implementation could apply these same techniques for
primary metadata, but doing so would make it massively more complex and less
performant.
Programs attempting to access the damaged structures are not blocked from
operation, which may cause application failure or an unplanned filesystem
shutdown.

Inspiration for the secondary metadata repair strategy was drawn from section
2.4 of Srinivasan above, and sections 2 ("NSF: Index Build Without Side-File")
and 3.1.1 ("Duplicate Key Insert Problem") in C. Mohan, `"Algorithms for
Creating Indexes for Very Large Tables Without Quiescing Updates"
<https://dl.acm.org/doi/10.1145/130283.130337>`_, 1992.

The sidecar index mentioned above bears some resemblance to the side file
method mentioned in Srinivasan and Mohan.
Their method consists of an index builder that extracts relevant record data to
build the new structure as quickly as possible; and an auxiliary structure that
captures all updates that would be committed to the index by other threads were
the new index already online.
After the index building scan finishes, the updates recorded in the side file
are applied to the new index.
To avoid conflicts between the index builder and other writer threads, the
builder maintains a publicly visible cursor that tracks the progress of the
scan through the record space.
To avoid duplication of work between the side file and the index builder, side
file updates are elided when the record ID for the update is greater than the
cursor position within the record ID space.

To minimize changes to the rest of the codebase, XFS online repair keeps the
replacement index hidden until it's completely ready to go.
In other words, there is no attempt to expose the keyspace of the new index
while repair is running.
The complexity of such an approach would be very high and perhaps more
appropriate to building *new* indices.

**Future Work Question**: Can the full scan and live update code used to
facilitate a repair also be used to implement a comprehensive check?

*Answer*: In theory, yes.  Check would be much stronger if each scrub function
employed these live scans to build a shadow copy of the metadata and then
compared the shadow records to the ondisk records.
However, doing that is a fair amount more work than what the checking functions
do now.
The live scans and hooks were developed much later.
That in turn increases the runtime of those scrub functions.

Summary Information
```````````````````

Metadata structures in this last category summarize the contents of primary
metadata records.
These are often used to speed up resource usage queries, and are many times
smaller than the primary metadata which they represent.

Examples of summary information include:

- Summary counts of free space and inodes

- File link counts from directories

- Quota resource usage counts

Check and repair require full filesystem scans, but resource and lock
acquisition follow the same paths as regular filesystem accesses.

The superblock summary counters have special requirements due to the underlying
implementation of the incore counters, and will be treated separately.
Check and repair of the other types of summary counters (quota resource counts
and file link counts) employ the same filesystem scanning and hooking
techniques as outlined above, but because the underlying data are sets of
integer counters, the staging data need not be a fully functional mirror of the
ondisk structure.

Inspiration for quota and file link count repair strategies were drawn from
sections 2.12 ("Online Index Operations") through 2.14 ("Incremental View
Maintenance") of G. Graefe, `"Concurrent Queries and Updates in Summary Views
and Their Indexes"
<http://www.odbms.org/wp-content/uploads/2014/>`_.

Since quotas are non-negative integer counts of resource usage, online
quotacheck can use the incremental view deltas described in section 2.14 to
track pending changes to the block and inode usage counts in each transaction,
and commit those changes to a dquot side file when the transaction commits.
Delta tracking is necessary for dquots because the index builder scans inodes,
whereas the data structure being rebuilt is an index of dquots.
Link count checking combines the view deltas and commit step into one because
it sets attributes of the objects being scanned instead of writing them to a
separate data structure.
Each online fsck function will be discussed as case studies later in this
document.

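As an illustration of the incremental view delta idea, a live quotacheck might
stage its observations in a shadow record like the one below.
This structure is purely illustrative; it names the concept, not the kernel's
actual shadow dquot format.

.. code-block:: c

        #include <stdint.h>

        /*
         * Shadow usage counters for one dquot.  The inode scanner adds
         * each file's usage to the absolute counts; transaction hooks
         * record signed deltas so that updates made by other threads
         * during the scan are folded in when those transactions commit.
         * At the end, (count + delta) is compared against the ondisk
         * dquot and written back if they differ.
         */
        struct shadow_dquot {
                uint32_t        id;             /* user/group/project ID */
                int64_t         bcount;         /* blocks seen by the scan */
                int64_t         icount;         /* inodes seen by the scan */
                int64_t         delta_bcount;   /* pending hook deltas */
                int64_t         delta_icount;   /* pending hook deltas */
        };
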
Risk Management
---------------

During the development of online fsck, several risks were identified that
may make the feature unsuitable for certain distributors and users.
Steps can be taken to mitigate or eliminate those risks, though at a cost to
functionality.

- **Decreased performance**: Adding metadata indices to the filesystem
  increases the time cost of persisting changes to disk, and the reverse space
  mapping and directory parent pointers are no exception.
  System administrators who require the maximum performance can disable the
  reverse mapping features at format time, though this choice dramatically
  reduces the ability of online fsck to find inconsistencies and repair them.

- **Incorrect repairs**: As with all software, there might be defects in the
  software that result in incorrect repairs being written to the filesystem.
  Systematic fuzz testing (detailed in the next section) is employed by the
  authors to find bugs early, but it might not catch everything.
  The kernel build system provides Kconfig options (``CONFIG_XFS_ONLINE_SCRUB``
  and ``CONFIG_XFS_ONLINE_REPAIR``) to enable distributors to choose not to
  accept this risk.
  The xfsprogs build system has a configure option (``--enable-scrub=no``) that
  disables building of the ``xfs_scrub`` binary, though this is not a risk
  mitigation if the kernel functionality remains enabled.

- **Inability to repair**: Sometimes, a filesystem is too badly damaged to be
  repairable.
  If the keyspaces of several metadata indices overlap in some manner but a
  coherent narrative cannot be formed from records collected, then the repair
  fails.
  To reduce the chance that a repair will fail with a dirty transaction and
  render the filesystem unusable, the online repair functions have been
  designed to stage and validate all new records before committing the new
  structure.

- **Misbehavior**: Online fsck requires many privileges -- raw IO to block
  devices, opening files by handle, ignoring Unix discretionary access control,
  and the ability to perform administrative changes.
  Running this automatically in the background scares people, so the systemd
  background service is configured to run with only the privileges required.
  Obviously, this cannot address certain problems like the kernel crashing or
  deadlocking, but it should be sufficient to prevent the scrub process from
  escaping and reconfiguring the system.
  The cron job does not have this protection.

- **Fuzz Kiddiez**: There are many people now who seem to think that running
  automated fuzz testing of ondisk artifacts to find mischievous behavior and
  spraying exploit code onto the public mailing list for instant zero-day
  disclosure is somehow of some social benefit.
  In the view of this author, the benefit is realized only when the fuzz
  operators help to **fix** the flaws, but this opinion apparently is not
  widely shared among security "researchers".
  The XFS maintainers' continuing ability to manage these events presents an
  ongoing risk to the stability of the development process.
  Automated testing should front-load some of the risk while the feature is
  considered EXPERIMENTAL.

Many of these risks are inherent to software programming.
Despite this, it is hoped that this new functionality will prove useful in
reducing unexpected downtime.

3. Testing Plan
===============

As stated before, fsck tools have three main goals:

1. Detect inconsistencies in the metadata;

2. Eliminate those inconsistencies; and

3. Minimize further loss of data.

Demonstrations of correct operation are necessary to build users' confidence
that the software behaves within expectations.
Unfortunately, it was not really feasible to perform regular exhaustive testing
of every aspect of a fsck tool until the introduction of low-cost virtual
machines with high-IOPS storage.
With ample hardware availability in mind, the testing strategy for the online
fsck project involves differential analysis against the existing fsck tools and
systematic testing of every attribute of every type of metadata object.
Testing can be split into four major categories, as discussed below.

Integrated Testing with fstests
-------------------------------

The primary goal of any free software QA effort is to make testing as
inexpensive and widespread as possible to maximize the scaling advantages of
community.
In other words, testing should maximize the breadth of filesystem configuration
scenarios and hardware setups.
This improves code quality by enabling the authors of online fsck to find and
fix bugs early, and helps developers of new features to find integration
issues earlier in their development effort.

The Linux filesystem community shares a common QA testing suite,
`fstests <https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git/>`_, for
functional and regression testing.
Even before development work began on online fsck, fstests (when configured to
run on XFS) would run both the ``xfs_check`` and ``xfs_repair -n`` commands on
the test and scratch filesystems between each test.
This provides a level of assurance that the kernel and the fsck tools stay in
alignment about what constitutes consistent metadata.
During development of the online checking code, fstests was modified to run
``xfs_scrub -n`` between each test to ensure that the new checking code
produces the same results as the two existing fsck tools.

To start development of online repair, fstests was modified to run
``xfs_repair`` to rebuild the filesystem's metadata indices between tests.
This ensures that offline repair does not crash, leave a corrupt filesystem
after it exits, or trigger complaints from the online check.
This also established a baseline for what can and cannot be repaired offline.
To complete the first phase of development of online repair, fstests was
modified to be able to run ``xfs_scrub`` in a "force rebuild" mode.
This enables a comparison of the effectiveness of online repair as compared to
the existing offline repair tools.

General Fuzz Testing of Metadata Blocks
---------------------------------------

XFS benefits greatly from having a very robust debugging tool, ``xfs_db``.

Before development of online fsck even began, a set of fstests were created to
test the rather common fault that entire metadata blocks get corrupted.
This required the creation of fstests library code that can create a filesystem
containing every possible type of metadata object.
Next, individual test cases were created to create a test filesystem, identify
a single block of a specific type of metadata object, trash it with the
existing ``blocktrash`` command in ``xfs_db``, and test the reaction of a
particular metadata validation strategy.

This earlier test suite enabled XFS developers to test the ability of the
in-kernel validation functions and the ability of the offline fsck tool to
detect and eliminate the inconsistent metadata.
This part of the test suite was extended to cover online fsck in exactly the
same manner.

In other words, for a given fstests filesystem configuration:

* For each metadata object existing on the filesystem:

  * Write garbage to it

  * Test the reactions of:

    1. The kernel verifiers to stop obviously bad metadata
    2. Offline repair (``xfs_repair``) to detect and fix
    3. Online repair (``xfs_scrub``) to detect and fix

Targeted Fuzz Testing of Metadata Records
-----------------------------------------

The testing plan for online fsck includes extending the existing fs testing
infrastructure to provide a much more powerful facility: targeted fuzz testing
of every metadata field of every metadata object in the filesystem.
``xfs_db`` can modify every field of every metadata structure in every
block in the filesystem to simulate the effects of memory corruption and
software bugs.
Given that fstests already contains the ability to create a filesystem
containing every metadata format known to the filesystem, ``xfs_db`` can be
used to perform exhaustive fuzz testing!

For a given fstests filesystem configuration:

* For each metadata object existing on the filesystem...

  * For each record inside that metadata object...

    * For each field inside that record...

      * For each conceivable type of transformation that can be applied to a
        bit field...

        1. Clear all bits
        2. Set all bits
        3. Toggle the most significant bit
        4. Toggle the middle bit
        5. Toggle the least significant bit
        6. Add a small quantity
        7. Subtract a small quantity
        8. Randomize the contents

        * ...test the reactions of:

          1. The kernel verifiers to stop obviously bad metadata
          2. Offline checking (``xfs_repair -n``)
          3. Offline repair (``xfs_repair``)
          4. Online checking (``xfs_scrub -n``)
          5. Online repair (``xfs_scrub``)
          6. Both repair tools (``xfs_scrub`` and then ``xfs_repair`` if
             online repair doesn't succeed)

This is quite the combinatoric explosion!

Fortunately, having this much test coverage makes it easy for XFS developers to
check the responses of XFS' fsck tools.
Since the introduction of the fuzz testing framework, these tests have been
used to discover incorrect repair code and missing functionality for entire
classes of metadata objects in ``xfs_repair``.
The enhanced testing was used to finalize the deprecation of ``xfs_check`` by
confirming that ``xfs_repair`` could detect at least as many corruptions as
the older tool.

These tests have been very valuable for ``xfs_scrub`` in the same ways -- they
allow the online fsck developers to compare online fsck against offline fsck,
and they enable XFS developers to find deficiencies in the code base.

Proposed patchsets include
`general fuzzer improvements
<https://git.kernel.org/pub/scm/linux/kernel/g>`_,
`fuzzing baselines
<https://git.kernel.org/pub/scm/linux/kernel/g>`_,
and `improvements in fuzz testing comprehensiveness
<https://git.kernel.org/pub/scm/linux/kernel/g>`_.

Stress Testing
--------------

A unique requirement to online fsck is the ability to operate on a filesystem
concurrently with regular workloads.
Although it is of course impossible to run ``xfs_scrub`` with *zero* observable
impact on the running system, the online repair code should never introduce
inconsistencies into the filesystem metadata, and regular workloads should
never notice resource starvation.
To verify that these conditions are being met, fstests has been enhanced in
the following ways:

* For each scrub item type, create a test to exercise checking that item type
  while running ``fsstress``.
* For each scrub item type, create a test to exercise repairing that item type
  while running ``fsstress``.
* Race ``fsstress`` and ``xfs_scrub -n`` to ensure that checking the whole
  filesystem doesn't cause problems.
* Race ``fsstress`` and ``xfs_scrub`` in force-rebuild mode to ensure that
  force-repairing the whole filesystem doesn't cause problems.
* Race ``xfs_scrub`` in check and force-repair mode against ``fsstress`` while
  freezing and thawing the filesystem.
* Race ``xfs_scrub`` in check and force-repair mode against ``fsstress`` while
  remounting the filesystem read-only and read-write.
* The same, but running ``fsx`` instead of ``fsstress``.

Success is defined by the ability to run all of these tests without observing
any unexpected filesystem shutdowns due to corrupted metadata, kernel hang
check warnings, or any other sort of mischief.

Proposed patchsets include `general stress testing
<https://git.kernel.org/pub/scm/linux/kernel/g>`_
and the `evolution of existing per-function stress testing
<https://git.kernel.org/pub/scm/linux/kernel/g>`_.

4. User Interface
=================

The primary user of online fsck is the system administrator, just like offline
repair.
Online fsck presents two modes of operation to administrators:
A foreground CLI process for online fsck on demand, and a background service
that performs autonomous checking and repair.

Checking on Demand
------------------

For administrators who want the absolute freshest information about the
metadata in a filesystem, ``xfs_scrub`` can be run as a foreground process on
a command line.
The program checks every piece of metadata in the filesystem while the
administrator waits for the results to be reported, just like the existing
``xfs_repair`` tool.
Both tools share a ``-n`` option to perform a read-only scan, and a ``-v``
option to increase the verbosity of the information reported.

A new feature of ``xfs_scrub`` is the ``-x`` option, which employs the error
correction capabilities of the hardware to check data file contents.
The media scan is not enabled by default because it may dramatically increase
program runtime and consume a lot of bandwidth on older storage hardware.

The output of a foreground invocation is captured in the system log.

The ``xfs_scrub_all`` program walks the list of mounted filesystems and
initiates ``xfs_scrub`` for each of them in parallel.
It serializes scans for any filesystems that resolve to the same top level
kernel block device to prevent resource overconsumption.

Background Service
------------------

To reduce the workload of system administrators, the ``xfs_scrub`` package
provides a suite of `systemd <https://systemd.io/>`_ timers and services that
run online fsck automatically on weekends by default.
The background service configures scrub to run with as little privilege as
possible, the lowest CPU and IO priority, and in a CPU-constrained single
threaded mode.
This can be tuned by the systemd administrator at any time to suit the latency
and throughput requirements of customer workloads.

The output of the background service is also captured in the system log.
If desired, reports of failures (either due to inconsistencies or mere runtime
errors) can be emailed automatically by setting the ``EMAIL_ADDR`` environment
variable in the following service files:

* ``xfs_scrub_fail@.service``
* ``xfs_scrub_media_fail@.service``
* ``xfs_scrub_all_fail.service``

The decision to enable the background scan is left to the system administrator.
This can be done by enabling either of the following services:

* ``xfs_scrub_all.timer`` on systemd systems
* ``xfs_scrub_all.cron`` on non-systemd systems

This automatic weekly scan is configured out of the box to perform an
additional media scan of all file data once per month.
This is less foolproof than, say, storing file data block checksums, but much
more performant if application software provides its own integrity checking,
redundancy can be provided elsewhere above the filesystem, or the storage
device's integrity guarantees are deemed sufficient.

The systemd unit file definitions have been subjected to a security audit
(as of systemd 249) to ensure that the xfs_scrub processes have as little
access to the rest of the system as possible.
This was performed via ``systemd-analyze security``, after which privileges
were restricted to the minimum required, sandboxing and system call filtering
were set up to the maximal extent possible, and access to the filesystem tree
was restricted to the minimum needed to start the program and access the
filesystem being scanned.
The service definition files restrict CPU usage to 80% of one CPU core, and
apply as nice of a priority to IO and CPU scheduling as possible.
This measure was taken to minimize delays in the rest of the filesystem.
No such hardening has been performed for the cron job.

Proposed patchset:
`Enabling the xfs_scrub background service
<https://git.kernel.org/pub/scm/linux/kernel/g>`_.

Health Reporting
----------------

XFS caches a summary of each filesystem's health status in memory.
The information is updated whenever ``xfs_scrub`` is run, or whenever
inconsistencies are detected in the filesystem metadata during regular
operations.
System administrators should use the ``health`` command of ``xfs_spaceman`` to
download this information into a human-readable format.
If problems have been observed, the administrator can schedule a reduced
service window to run the online repair tool to correct the problem.
Failing that, the administrator can decide to schedule a maintenance window and
run the traditional offline repair tool to correct the problem.

**Future Work Question**: Should the health reporting integrate with the new
inotify fs error notification system?
Would it be helpful for sysadmins to have a daemon to listen for corruption
notifications and initiate a repair?

*Answer*: These questions remain unanswered, but should be a part of the
conversation with early adopters and potential downstream users of XFS.

Proposed patchsets include
`wiring up health reports to correction returns
<https://git.kernel.org/pub/scm/linux/kernel/g>`_
and
`preservation of sickness info during memory reclaim
<https://git.kernel.org/pub/scm/linux/kernel/g>`_.

5. Kernel Algorithms and Data Structures
========================================

This section discusses the key algorithms and data structures of the kernel
code that provide the ability to check and repair metadata while the system
is running.
The first chapters in this section reveal the pieces that provide the
foundation for checking metadata.
The remainder of this section presents the mechanisms through which XFS
regenerates itself.

Self Describing Metadata
------------------------

Starting with XFS version 5 in 2012, XFS updated the format of nearly every
ondisk block header to record a magic number, a checksum, a universally
"unique" identifier (UUID), an owner code, the ondisk address of the block,
and a log sequence number.
When loading a block buffer from disk, the magic number, UUID, owner, and
ondisk address confirm that the retrieved block matches the specific owner of
the current filesystem, and that the information contained in the block is
supposed to be found at the ondisk address.
The first three components enable checking tools to reject obviously bad
metadata that doesn't belong to the filesystem, and the latter two components
enable the filesystem to detect lost writes.

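The common fields look roughly like the composite below.
This is a simplified illustration (each structure type has its own header
variant in the real ondisk format), not an exact layout.

.. code-block:: c

        #include <linux/types.h>
        #include <linux/uuid.h>

        /* Simplified composite of an XFS v5 self-describing block header. */
        struct xfs_v5_header_sketch {
                __be32  magic;  /* structure type: what should this be? */
                __be32  crc;    /* checksum: detect torn or garbled writes */
                uuid_t  uuid;   /* filesystem UUID: whose block is this? */
                __be64  owner;  /* AG or inode that owns this block */
                __be64  blkno;  /* ondisk address: is the block misplaced? */
                __be64  lsn;    /* LSN of the last write: detect lost writes */
        };
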
Whenever a file system operation modifies a block, the change is submitted
to the log as part of a transaction.
The log then processes these transactions marking them done once they are
safely persisted to storage.
The logging code maintains the checksum and the log sequence number of the last
transactional update.
Checksums are useful for detecting torn writes and other discrepancies that can
be introduced between the computer and its storage devices.
Sequence number tracking enables log recovery to avoid applying out of date
log updates to the filesystem.

These two features improve overall runtime resiliency by providing a means for
the filesystem to detect obvious corruption when reading metadata blocks from
disk, but these buffer verifiers cannot provide any consistency checking
between metadata structures.

For more information, please see the documentation for
Documentation/filesystems/xfs/xfs-self-describing-metadata.rst

Reverse Mapping
---------------

The original design of XFS (circa 1993) is an improvement upon 1980s Unix
filesystem design.
In those days, storage density was expensive, CPU time was scarce, and
excessive seek time could kill performance.
For performance reasons, filesystem authors were reluctant to add redundancy to
the filesystem, even at the cost of data integrity.
Filesystem designers in the early 21st century choose different strategies to
increase internal redundancy -- either storing nearly identical copies of
metadata, or more space-efficient encoding techniques.

For XFS, a different redundancy strategy was chosen to modernize the design:
a secondary space usage index that maps allocated disk extents back to their
owners.
By adding a new index, the filesystem retains most of its ability to scale
well to heavily threaded workloads involving large datasets, since the primary
file metadata (the directory tree, the file block map, and the allocation
groups) remain unchanged.
Like any system that improves redundancy, the reverse-mapping feature increases
overhead costs for space mapping activities.
However, it has two critical advantages: first, the reverse index is key to
enabling online fsck and other requested functionality such as free space
defragmentation, better media failure reporting, and filesystem shrinking.
Second, the different ondisk storage format of the reverse mapping btree
defeats device-level deduplication because the filesystem requires real
redundancy.

+--------------------------------------------------------------------------+
| **Sidebar**:                                                             |
+--------------------------------------------------------------------------+
| A criticism of adding the secondary index is that it does nothing to    |
| improve the robustness of user data storage itself.                      |
| This is a valid point, but adding a new index for file data block       |
| checksums increases write amplification by turning data overwrites into |
| copy-writes, which age the filesystem prematurely.                       |
| In keeping with thirty years of precedent, users who want file data     |
| integrity can supply as powerful a solution as they require.             |
| As for metadata, the complexity of adding a new secondary index of space|
| usage is much less than adding volume management and storage device     |
| mirroring to XFS itself.                                                 |
| Perfection of RAID and volume management are best left to existing      |
| layers in the kernel.                                                    |
+--------------------------------------------------------------------------+

The information captured in a reverse space mapping record is as follows:

.. code-block:: c

        struct xfs_rmap_irec {
            xfs_agblock_t    rm_startblock;   /* extent start block */
            xfs_extlen_t     rm_blockcount;   /* extent length */
            uint64_t         rm_owner;        /* extent owner */
            uint64_t         rm_offset;       /* offset within the owner */
            unsigned int     rm_flags;        /* state flags */
        };

The first two fields capture the location and size of the physical extent,
in units of filesystem blocks.
The owner field tells scrub which metadata structure or file inode has been
assigned this space.
For space allocated to files, the offset field tells scrub where the space was
mapped within the file fork.
Finally, the flags field provides extra information about the space usage --
is this an attribute fork extent?  A file mapping btree extent?  Or an
unwritten data extent?

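For example, a written 16-block data fork extent starting at block 400 of an
AG and mapped at offset 0 of a regular file might be expressed as the
following record.
The numbers here are made up purely for illustration:

.. code-block:: c

        struct xfs_rmap_irec example_rmap = {
            .rm_startblock = 400,    /* AG block of the extent start */
            .rm_blockcount = 16,     /* extent length in fs blocks */
            .rm_owner      = 131,    /* inumber of the owning file */
            .rm_offset     = 0,      /* offset within the data fork */
            .rm_flags      = 0,      /* ordinary written data extent */
        };
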
Online filesystem checking judges the consistency of each primary metadata
record by comparing its information against all other available sources of
information.
The reverse mapping index plays a key role in the consistency checking process
because it contains a centralized alternate copy of all space allocation
information.
Program runtime and ease of resource acquisition are the only real limits to
what online checking can consult.
For example, a file data extent mapping can be checked against:

* The absence of an entry in the free space information.
* The absence of an entry in the inode index.
* The absence of an entry in the reference count data if the file is not
  marked as having shared extents.
* The correspondence of an entry in the reverse mapping information.

There are several observations to make about reverse mapping indices:

1. Reverse mappings can provide a positive affirmation of correctness if any of
   the above primary metadata are in doubt.
   The checking code for most primary metadata follows a path similar to the
   one outlined above.

2. Proving the consistency of secondary metadata with the primary metadata is
   difficult because that requires a full scan of all primary space metadata,
   which is very time intensive.
   For example, checking a reverse mapping record for a file extent mapping
   btree block requires locking the file and searching the entire block map
   structure to confirm the block.
   Instead, scrub relies on rigorous cross-referencing during the primary space
   mapping structure checks.

3. Consistency scans must use non-blocking lock acquisition primitives if the
   required locking order is not the same order used by regular filesystem
   operations.
   For example, if the filesystem normally takes a file ILOCK before taking
   the AGF buffer lock but scrub wants to take a file ILOCK while holding
   an AGF buffer lock, scrub cannot block on that lock acquisition.
   This means that forward progress during this part of a scan of the reverse
   mapping data cannot be guaranteed if system load is heavy.

In summary, reverse mappings play a key role in reconstruction of primary
metadata.
The details of how these records are staged, written to disk, and committed
into the filesystem are covered in subsequent sections.

Checking and Cross-Referencing
------------------------------

The first step of checking a metadata structure is to examine every record
contained within the structure and its relationship with the rest of the
system.
XFS contains multiple layers of checking to try to prevent inconsistent
metadata from wreaking havoc on the system.
Each of these layers contributes information that helps the kernel to make
five decisions about the health of a metadata structure:

- Is a part of this structure obviously corrupt (``XFS_SCRUB_OFLAG_CORRUPT``) ?
- Is this structure inconsistent with the rest of the system
  (``XFS_SCRUB_OFLAG_XCORRUPT``) ?
- Is there so much damage around the filesystem that cross-referencing is not
  possible (``XFS_SCRUB_OFLAG_XFAIL``) ?
- Can the structure be optimized to improve performance or reduce the size of
  metadata (``XFS_SCRUB_OFLAG_PREEN``) ?
- Does the structure contain data that is not inconsistent but deserves review
  by the system administrator (``XFS_SCRUB_OFLAG_WARNING``) ?

The following sections describe how the metadata scrubbing process works.

Metadata Buffer Verification
````````````````````````````

The lowest layer of metadata protection in XFS consists of the metadata
verifiers built into the buffer cache.
These functions perform inexpensive internal consistency checking of the block
itself, and answer these questions:

- Does the block belong to this filesystem?

- Does the block belong to the structure that asked for the read?
  This assumes that metadata blocks only have one owner, which is always true
  in XFS.

- Is the type of data stored in the block within a reasonable range of what
  scrub is expecting?

- Does the physical location of the block match the location it was read from?

- Does the block checksum match the data?

The scope of the protections here is very limited -- verifiers can only
establish that the filesystem code is reasonably free of gross corruption bugs
and that the storage system is reasonably competent at retrieval.
Corruption problems observed at runtime cause the generation of health reports,
failed system calls, and in the extreme case, filesystem shutdowns if the
corrupt metadata force the cancellation of a dirty transaction.

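The general shape of a verifier is sketched below.
The ``xfs_foo_*`` names are invented for illustration, but ``struct
xfs_buf_ops`` with its ``->verify_read``/``->verify_write`` hooks,
``xfs_verifier_error``, and the ``-EFSBADCRC``/``-EFSCORRUPTED`` codes are the
real interfaces that buffer verifiers use:

.. code-block:: c

        /* Hypothetical read verifier for a "foo" metadata block. */
        static void
        xfs_foo_read_verify(
            struct xfs_buf    *bp)
        {
            /* A bad checksum implies a torn or otherwise corrupt write. */
            if (!xfs_foo_verify_cksum(bp))
                xfs_verifier_error(bp, -EFSBADCRC, __this_address);
            /* Magic, UUID, and ondisk address tie the block to its owner. */
            else if (!xfs_foo_verify_header(bp))
                xfs_verifier_error(bp, -EFSCORRUPTED, __this_address);
        }

        const struct xfs_buf_ops xfs_foo_buf_ops = {
            .name         = "xfs_foo",
            .verify_read  = xfs_foo_read_verify,
            .verify_write = xfs_foo_write_verify, /* analogous, not shown */
        };
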
Every online fsck scrubbing function is expected to read every ondisk metadata
block of a structure in the course of checking the structure.
Corruption problems observed during a check are immediately reported to
userspace as corruption; during a cross-reference, they are reported as a
failure to cross-reference once the full examination is complete.
Reads satisfied by a buffer already in cache (and hence already verified)
bypass these checks.

Internal Consistency Checks
```````````````````````````

After the buffer cache, the next level of metadata protection is the internal
record verification code built into the filesystem.
These checks are split between the buffer verifiers, the in-filesystem users of
the buffer cache, and the scrub code itself, depending on the amount of higher
level context required.
The scope of checking is still internal to the block.
These higher level checking functions answer these questions:

- Does the type of data stored in the block match what scrub is expecting?

- Does the block belong to the owning structure that asked for the read?

- If the block contains records, do the records fit within the block?

- If the block tracks internal free space information, is it consistent with
  the record areas?

- Are the records contained inside the block free of obvious corruptions?

Record checks in this category are more rigorous and more time-intensive.
For example, block pointers and inumbers are checked to ensure that they point
within the dynamically allocated parts of an allocation group and within
the filesystem.
Names are checked for invalid characters, and flags are checked for invalid
combinations.
Other record attributes are checked for sensibleness.
Btree records spanning an interval of the btree keyspace are checked for
correct order and lack of mergeability (except for file fork mappings).
For performance reasons, regular code may skip some of these checks unless
debugging is enabled or a write is about to occur.
Scrub functions, of course, must check all possible problems.

Validation of Userspace-Controlled Record Attributes
````````````````````````````````````````````````````

Various pieces of filesystem metadata are directly controlled by userspace.
Because of this nature, validation work cannot be more precise than checking
that a value is within the possible range.
These fields include:

- Superblock fields controlled by mount options
- Filesystem labels
- File timestamps
- File permissions
- File size
- File flags
- Names present in directory entries, extended attribute keys, and filesystem
  labels
- Extended attribute key namespaces
- Extended attribute values
- File data block contents
- Quota limits
- Quota timer expiration (if resource usage exceeds the soft limit)

Cross-Referencing Space Metadata
````````````````````````````````

After internal block checks, the next higher level of checking is
cross-referencing records between metadata structures.
For regular runtime code, the cost of these checks is considered prohibitively
expensive, but as scrub is dedicated to rooting out inconsistencies, it must
pursue all avenues of inquiry.
The exact set of cross-referencing is highly dependent on the context of the
data structure being checked.

The XFS btree code has keyspace scanning functions that online fsck uses to
cross reference one structure with another.
Specifically, scrub can scan the key space of an index to determine if that
keyspace is fully, sparsely, or not at all mapped to records.
For the reverse mapping btree, it is possible to mask parts of the key for the
purposes of performing a keyspace scan so that scrub can decide if the rmap
btree contains records mapping a certain extent of physical space without the
sparseness of the rest of the rmap keyspace getting in the way.

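In recent kernels this query is exposed by the btree code as a scan over a
(possibly masked) key range; the sketch below paraphrases the
``xfs_btree_has_records`` helper from memory, so treat the exact signature and
the irec field names as assumptions rather than quotations:

.. code-block:: c

        /* Is [agbno, agbno + len) fully, sparsely, or not at all mapped? */
        enum xbtree_recpacking    outcome;
        union xfs_btree_irec      low, high;
        union xfs_btree_key       mask;    /* initialization elided; set up
                                              to match only the startblock
                                              so that owners and offsets do
                                              not influence the answer */
        int                       error;

        low.r.rm_startblock = agbno;
        high.r.rm_startblock = agbno + len - 1;
        error = xfs_btree_has_records(cur, &low, &high, &mask, &outcome);
        if (!error && outcome == XBTREE_RECPACKING_FULL) {
            /* every block in the extent has at least one owner */
        }
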
Btree blocks undergo the following checks before cross-referencing:

- Does the type of data stored in the block match what scrub is expecting?

- Does the block belong to the owning structure that asked for the read?

- Do the records fit within the block?

- Are the records contained inside the block free of obvious corruptions?

- Are the name hashes in the correct order?

- Do node pointers within the btree point to valid block addresses for the type
  of btree?

- Do child pointers point towards the leaves?

- Do sibling pointers point across the same level?

- For each node block record, does the record key accurately reflect the
  contents of the child block?

Space allocation records are cross-referenced as follows:

1. Any space mentioned by any metadata structure is cross-referenced as
   follows:

   - Does the reverse mapping index list only the appropriate owner as the
     owner of each block?

   - Are none of the blocks claimed as free space?

   - If these aren't file data blocks, are none of the blocks claimed as space
     shared by different owners?

2. Btree blocks are cross-referenced as follows:

   - Everything in class 1 above.

   - If there's a parent node block, do the keys listed for this block match
     the keyspace of this block?

   - Do the sibling pointers point to valid blocks?  Of the same level?

   - Do the child pointers point to valid blocks?  Of the next level down?

3. Free space btree records are cross-referenced as follows:

   - Everything in class 1 and 2 above.

   - Does the reverse mapping index list no owners of this space?

   - Is this space not claimed by the inode index for inodes?

   - Is it not mentioned by the reference count index?

   - Is there a matching record in the other free space btree?

4. Inode btree records are cross-referenced as follows:

   - Everything in class 1 and 2 above.

   - Is there a matching record in the free inode btree?

   - Do cleared bits in the holemask correspond with inode clusters?

   - Do set bits in the freemask correspond with inode records with zero link
     count?

5. Inode records are cross-referenced as follows:

   - Everything in class 1.

   - Do all the fields that summarize information about the file forks actually
     match those forks?

   - Does each inode with zero link count correspond to a record in the free
     inode btree?

6. File fork space mapping records are cross-referenced as follows:

   - Everything in class 1 and 2 above.

   - Is this space not mentioned by the inode btrees?

   - If this is a CoW fork mapping, does it correspond to a CoW entry in the
     reference count btree?

7. Reference count records are cross-referenced as follows:

   - Everything in class 1 and 2 above.

   - Within the space subkeyspace of the rmap btree (that is to say, all
     records mapped to a particular space extent and ignoring the owner info),
     are there the same number of reverse mapping records for each block as the
     reference count record claims?

Proposed patchsets are the series to find gaps in the
`refcount btree
<https://git.kernel.org/pub/scm/linux/kernel/
`inode btree
<https://git.kernel.org/pub/scm/linux/kernel/
`rmap btree
<https://git.kernel.org/pub/scm/linux/kernel/
to find
`mergeable records
<https://git.kernel.org/pub/scm/linux/kernel/
and to
`improve cross referencing with rmap
<https://git.kernel.org/pub/scm/linux/kernel/
before starting a repair.

Checking Extended Attributes
````````````````````````````

Extended attributes implement a key-value store that enables fragments of data
to be attached to any file.
Both the kernel and userspace can access the keys and values, subject to
namespace and privilege restrictions.
Most typically these fragments are metadata about the file -- origins, security
contexts, user-supplied labels, indexing information, etc.

Names can be as long as 255 bytes and can exist in several different
namespaces.
Values can be as large as 64KB.
A file's extended attributes are stored in blocks mapped by the attr fork.
The mappings point to leaf blocks, remote value blocks, or dabtree blocks.
Block 0 in the attribute fork is always the top of the structure, but otherwise
each of the three types of blocks can be found at any offset in the attr fork.
Leaf blocks contain attribute key records that point to the name and the value.
Names are always stored elsewhere in the same leaf block.
Values that are less than 3/4 the size of a filesystem block are also stored
elsewhere in the same leaf block.
Remote value blocks contain values that are too large to fit inside a leaf.
If the leaf information exceeds a single filesystem block, a dabtree (also
rooted at block 0) is created to map hashes of the attribute names to leaf
blocks in the attr fork.

Checking an extended attribute structure is not so straightforward due to the
lack of separation between attr blocks and index blocks.
Scrub must read each block mapped by the attr fork and ignore the non-leaf
blocks:

1. Walk the dabtree in the attr fork (if present) to ensure that there are no
   irregularities in the blocks or dabtree mappings that do not point to
   attr leaf blocks.

2. Walk the blocks of the attr fork looking for leaf blocks.
   For each entry inside a leaf:

   a. Validate that the name does not contain invalid characters.

   b. Read the attr value.
      This performs a named lookup of the attr name to ensure the correctness
      of the dabtree.
      If the value is stored in a remote block, this also validates the
      integrity of the remote value block.

Checking and Cross-Referencing Directories
``````````````````````````````````````````

The filesystem directory tree is a directed acyclic graph structure, with files
constituting the nodes, and directory entries (dirents) constituting the edges.
Directories are a special type of file containing a set of mappings from a
255-byte sequence (name) to an inumber.
These are called directory entries, or dirents for short.
Each directory file must have exactly one directory pointing to the file.
A root directory points to itself.
Directory entries point to files of any type.
Each non-directory file may have multiple directories pointing to it.

In XFS, directories are implemented as a file containing up to three 32GB
partitions.
The first partition contains directory entry data blocks.
Each data block contains variable-sized records associating a user-provided
name with an inumber and, optionally, a file type.
If the directory entry data grows beyond one block, the second partition (which
exists as post-EOF extents) is populated with a block containing free space
information and an index that maps hashes of the dirent names to directory data
blocks in the first partition.
This makes directory name lookups very fast.
If this second partition grows beyond one block, the third partition is
populated with a linear array of free space information for faster
expansions.
If the free space has been separated and the second partition grows again
beyond one block, then a dabtree is used to map hashes of dirent names to
directory data blocks.

Checking a directory is pretty straightforward:

1. Walk the dabtree in the second partition (if present) to ensure that there
   are no irregularities in the blocks or dabtree mappings that do not point to
   dirent blocks.

2. Walk the blocks of the first partition looking for directory entries.
   Each dirent is checked as follows (see the sketch after this list):

   a. Does the name contain no invalid characters?

   b. Does the inumber correspond to an actual, allocated inode?

   c. Does the child inode have a nonzero link count?

   d. If a file type is included in the dirent, does it match the type of the
      inode?

   e. If the child is a subdirectory, does the child's dotdot pointer point
      back to the parent?

   f. If the directory has a second partition, perform a named lookup of the
      dirent name to ensure the correctness of the dabtree.

3. Walk the free space list in the third partition (if present) to ensure that
   the free spaces it describes are really unused.

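A condensed, pseudo-C sketch of the per-dirent checks in step 2 might look
like the following; every helper named here is a hypothetical stand-in for the
corresponding scrub function, not the actual implementation:

.. code-block:: c

        /* Hypothetical outline of the step 2 dirent checks. */
        for_each_dirent(dp, dirent) {
            if (name_contains_invalid_chars(dirent))
                return mark_corrupt(sc);            /* check 2a */
            ip = lookup_inode(sc->mp, dirent->inumber);
            if (!ip)
                return mark_corrupt(sc);            /* check 2b */
            if (VFS_I(ip)->i_nlink == 0)
                return mark_corrupt(sc);            /* check 2c */
            if (dirent->ftype != inode_ftype(ip))
                return mark_corrupt(sc);            /* check 2d */
            /* checks 2e/2f: dotdot and hash index lookups, not shown */
        }
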
Checking operations involving :ref:`parents <dirparent>` and
:ref:`file link counts <nlinks>` are discussed in more detail in later
sections.

Checking Directory/Attribute Btrees
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

As stated in previous sections, the directory/attribute btree (dabtree) index
maps user-provided names to improve lookup times by avoiding linear scans.
Internally, it maps a 32-bit hash of the name to a block offset within the
appropriate file fork.

The internal structure of a dabtree closely resembles a btree, with the same
fixed-size metadata records -- each dabtree block contains a magic number, a
checksum, sibling pointers, a UUID, a tree level, and a log sequence number.
The format of leaf and node records is the same -- each entry points to the
next level down in the hierarchy, with dabtree node records pointing to dabtree
leaf blocks, and dabtree leaf records pointing to non-dabtree blocks elsewhere
in the fork.

Checking and cross-referencing the dabtree is very similar to what is done for
space btrees:

- Does the type of data stored in the block match what scrub is expecting?

- Does the block belong to the owning structure that asked for the read?

- Do the records fit within the block?

- Are the records contained inside the block free of obvious corruptions?

- Are the name hashes in the correct order?

- Do node pointers within the dabtree point to valid fork offsets for dabtree
  blocks?

- Do leaf pointers within the dabtree point to valid fork offsets for directory
  or attr leaf blocks?

- Do child pointers point towards the leaves?

- Do sibling pointers point across the same level?

- For each dabtree node record, does the record key accurately reflect the
  contents of the child dabtree block?

- For each dabtree leaf record, does the record key accurately reflect the
  contents of the directory or attr block?

Cross-Referencing Summary Counters
``````````````````````````````````

XFS maintains three classes of summary counters: available resources, quota
resource usage, and file link counts.

In theory, the amount of available resources (data blocks, inodes, realtime
extents) can be found by walking the entire filesystem.
This would make for very slow reporting, so a transactional filesystem can
maintain summaries of this information in the superblock.
Cross-referencing these values against the filesystem metadata should be a
simple matter of walking the free space and inode indexes and the realtime
bitmap, but there are complications that will be discussed in
:ref:`more detail <fscounters>` later.

:ref:`Quota usage <quotacheck>` and :ref:`file link count <nlinks>`
checking are sufficiently complicated to warrant separate sections.

Post-Repair Reverification
``````````````````````````

After performing a repair, the checking code is run a second time to validate
the new structure, and the results of the health assessment are recorded
internally and returned to the calling process.
This step is critical for enabling system administrators to monitor the status
of the filesystem and the progress of any repairs.
For developers, it is a useful means to judge the efficacy of error detection
and correction in the online and offline checking tools.

Eventual Consistency vs. Online Fsck
------------------------------------

Complex operations can make modifications to multiple per-AG data structures
with a chain of transactions.
These chains, once committed to the log, are restarted during log recovery if
the system crashes while processing the chain.
Because the AG header buffers are unlocked between transactions within a chain,
online checking must coordinate with chained operations that are in progress to
avoid incorrectly detecting inconsistencies due to pending chains.
Furthermore, online repair must not run when operations are pending because
the metadata are temporarily inconsistent with each other, and rebuilding is
not possible.

Only online fsck has this requirement of total consistency of AG metadata, and
such checks should be relatively rare compared to filesystem change operations.
Online fsck coordinates with transaction chains as follows:

* For each AG, maintain a count of intent items targeting that AG.
  The count should be bumped whenever a new item is added to the chain.
  The count should be dropped when the filesystem has locked the AG header
  buffers and finished the work.

* When online fsck wants to examine an AG, it should lock the AG header
  buffers to quiesce all transaction chains that want to modify that AG.
  If the count is zero, proceed with the checking operation.
  If it is nonzero, cycle the buffer locks to allow the chain to make forward
  progress.

This may lead to online fsck taking a long time to complete, but regular
filesystem updates take precedence over background checking activity.
Details about the discovery of this situation are presented in the
:ref:`next section <chain_coordination>`, and details about the solution
are presented :ref:`after that<intent_drains>`.

.. _chain_coordination:

Discovery of the Problem
````````````````````````

Midway through the development of online scrubbing, the fsstress tests
uncovered a misinteraction between online fsck and compound transaction chains
created by other writer threads that resulted in false reports of metadata
inconsistency.
The root cause of these reports is the eventual consistency model introduced by
the expansion of deferred work items and compound transaction chains when
reverse mapping and reflink were introduced.

Originally, transaction chains were added to XFS to avoid deadlocks when
unmapping space from files.
Deadlock avoidance rules require that AGs only be locked in increasing order,
which makes it impossible (say) to use a single transaction to free a space
extent in AG 7 and then try to free a now superfluous block mapping btree block
in AG 3.
To avoid these kinds of deadlocks, XFS creates Extent Freeing Intent (EFI) log
items to commit to freeing some space in one transaction while deferring the
actual metadata updates to a fresh transaction.
The transaction sequence looks like this:

1. The first transaction contains a physical update to the file's block mapping
   structures to remove the mapping from the btree blocks.
   It then attaches to the in-memory transaction an action item to schedule
   deferred freeing of space.
   Concretely, each transaction maintains a list of ``struct
   xfs_defer_pending`` objects, each of which maintains a list of ``struct
   xfs_extent_free_item`` objects.
   Returning to the example above, the action item tracks the freeing of both
   the unmapped space from AG 7 and the block mapping btree (BMBT) block from
   AG 3.
   Deferred frees recorded in this manner are committed in the log by creating
   an EFI log item from the ``struct xfs_extent_free_item`` object and
   attaching the log item to the transaction.
   When the log is persisted to disk, the EFI item is written into the ondisk
   transaction record.
   EFIs can list up to 16 extents to free, all sorted in AG order.

2. The second transaction contains a physical update to the free space btrees
   of AG 3 to release the former BMBT block and a second physical update to the
   free space btrees of AG 7 to release the unmapped file space.
   Observe that the physical updates are resequenced in the correct order
   when possible.
   Attached to the transaction is an extent free done (EFD) log item.
   The EFD contains a pointer to the EFI logged in transaction #1 so that log
   recovery can tell if the EFI needs to be replayed.

If the system goes down after transaction #1 is written back to the filesystem
but before #2 is committed, a scan of the filesystem metadata would show
inconsistent filesystem metadata because there would not appear to be any owner
of the unmapped space.
Happily, log recovery corrects this inconsistency for us -- when recovery finds
an intent log item but does not find a corresponding intent done item, it will
reconstruct the incore state of the intent item and finish it.
In the example above, the log must replay both frees described in the recovered
EFI to complete the recovery phase.

There are subtleties to XFS' transaction chaining strategy to consider:

* Log items must be added to a transaction in the correct order to prevent
  conflicts with principal objects that are not held by the transaction.
  In other words, all per-AG metadata updates for an unmapped block must be
  completed before the last update to free the extent, and extents should not
  be reallocated until that last update commits to the log.

* AG header buffers are released between each transaction in a chain.
  This means that other threads can observe an AG in an intermediate state,
  but as long as the first subtlety is handled, this should not affect the
  correctness of filesystem operations.

* Unmounting the filesystem flushes all pending work to disk, which means that
  offline fsck never sees the temporary inconsistencies caused by deferred
  work item processing.

In this manner, XFS employs a form of eventual consistency to avoid deadlocks
and increase parallelism.

During the design phase of the reverse mapping and reflink features, it was
decided that it was impractical to cram all the reverse mapping updates for a
single filesystem change into a single transaction because a single file
mapping operation can explode into many small updates:

* The block mapping update itself
* A reverse mapping update for the block mapping update
* Fixing the freelist
* A reverse mapping update for the freelist fix

* A shape change to the block mapping btree
* A reverse mapping update for the btree update
* Fixing the freelist (again)
* A reverse mapping update for the freelist fix

* An update to the reference counting information
* A reverse mapping update for the refcount update
* Fixing the freelist (a third time)
* A reverse mapping update for the freelist fix

* Freeing any space that was unmapped and not owned by any other file
* Fixing the freelist (a fourth time)
* A reverse mapping update for the freelist fix

* Freeing the space used by the block mapping btree
* Fixing the freelist (a fifth time)
* A reverse mapping update for the freelist fix

Free list fixups are not usually needed more than once per AG per transaction
chain, but it is theoretically possible if space is very tight.
For copy-on-write updates this is even worse, because this must be done once to
remove the space from a staging area and again to map it into the file!

To deal with this explosion in a calm manner, XFS expands its use of deferred
work items to cover most reverse mapping updates and all refcount updates.
This reduces the worst case size of transaction reservations by breaking the
work into a long chain of small updates, which increases the degree of eventual
consistency in the system.
Again, this generally isn't a problem because XFS orders its deferred work
items carefully to avoid resource reuse conflicts between unsuspecting threads.

However, online fsck changes the rules -- remember that although physical
updates to per-AG structures are coordinated by locking the buffers for AG
headers, buffer locks are dropped between transactions.
Once scrub acquires resources and takes locks for a data structure, it must do
all the validation work without releasing the lock.
If the main lock for a space btree is an AG header buffer lock, scrub may have
interrupted another thread that is midway through finishing a chain.
For example, if a thread performing a copy-on-write has completed a reverse
mapping update but not the corresponding refcount update, the two AG btrees
will appear inconsistent to scrub and an observation of corruption will be
recorded.  This observation will not be correct.
If a repair is attempted in this state, the results will be catastrophic!

Several other solutions to this problem were evaluated upon discovery of this
flaw and rejected:

1. Add a higher level lock to allocation groups and require writer threads to
   acquire the higher level lock in AG order before making any changes.
   This would be very difficult to implement in practice because it is
   difficult to determine which locks need to be obtained, and in what order,
   without simulating the entire operation.
   Performing a dry run of a file operation to discover necessary locks would
   make the filesystem very slow.

2. Make the deferred work coordinator code aware of consecutive intent items
   targeting the same AG and have it hold the AG header buffers locked across
   the transaction roll between updates.
   This would introduce a lot of complexity into the coordinator since it is
   only loosely coupled with the actual deferred work items.
   It would also fail to solve the problem because deferred work items can
   generate new deferred subtasks, but all subtasks must be complete before
   work can start on a new sibling task.

3. Teach online fsck to walk all transactions waiting for whichever lock(s)
   protect the data structure being scrubbed to look for pending operations.
   The checking and repair operations must factor these pending operations into
   the evaluations being performed.
   This solution is a nonstarter because it is *extremely* invasive to the main
   filesystem.

.. _intent_drains:

Intent Drains
`````````````

Online fsck uses an atomic intent item counter and lock cycling to coordinate
with transaction chains.
There are two key properties to the drain mechanism.
First, the counter is incremented when a deferred work item is *queued* to a
transaction, and it is decremented after the associated intent done log item is
*committed* to another transaction.
The second property is that deferred work can be added to a transaction without
holding an AG header lock, but per-AG work items cannot be marked done without
locking that AG header buffer to log the physical updates and the intent done
log item.
The first property enables scrub to yield to running transaction chains, which
is an explicit deprioritization of online fsck to benefit file operations.
The second property of the drain is key to the correct coordination of scrub,
since scrub will always be able to decide if a conflict is possible.

For regular filesystem code, the drain works as follows:

1. Call the appropriate subsystem function to add a deferred work item to a
   transaction.

2. The function calls ``xfs_defer_drain_bump`` to increase the counter.

3. When the deferred item manager wants to finish the deferred work item, it
   calls ``->finish_item`` to complete it.

4. The ``->finish_item`` implementation logs some changes and calls
   ``xfs_defer_drain_drop`` to decrease the sloppy counter and wake up any
   threads waiting on the drain.

5. The subtransaction commits, which unlocks the resource associated with the
   intent item.

For scrub, the drain works as follows:

1. Lock the resource(s) associated with the metadata being scrubbed.
   For example, a scan of the refcount btree would lock the AGI and AGF header
   buffers.

2. If the counter is zero (``xfs_defer_drain_busy`` returns false), there are
   no chains in progress and the operation may proceed.

3. Otherwise, release the resources grabbed in step 1.

4. Wait for the intent counter to reach zero (``xfs_defer_drain_intents``),
   then go back to step 1 unless a signal has been caught.

To avoid polling in step 4, the drain provides a waitqueue so that scrub can
be woken up whenever the intent count drops to zero.

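A minimal sketch of the mechanism, using generic kernel primitives instead of
the real ``struct xfs_defer_drain``, might look like this:

.. code-block:: c

        struct intent_drain {
            atomic_t                dr_count;   /* pending intent items */
            struct wait_queue_head  dr_waiters; /* scrub threads park here */
        };

        /* Filesystem side: a deferred work item was queued. */
        static void drain_bump(struct intent_drain *dr)
        {
            atomic_inc(&dr->dr_count);
        }

        /* Filesystem side: the intent done item was committed. */
        static void drain_drop(struct intent_drain *dr)
        {
            if (atomic_dec_and_test(&dr->dr_count))
                wake_up(&dr->dr_waiters);
        }

        /* Scrub side: sleep until no chains target this AG. */
        static int drain_wait(struct intent_drain *dr)
        {
            return wait_event_killable(dr->dr_waiters,
                            atomic_read(&dr->dr_count) == 0);
        }
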
The proposed patchset is the
`scrub intent drain series
<https://git.kernel.org/pub/scm/linux/kernel/

.. _jump_labels:

Static Keys (aka Jump Label Patching)
`````````````````````````````````````

Online fsck for XFS separates the regular filesystem from the checking and
repair code as much as possible.
However, there are a few parts of online fsck (such as the intent drains, and
later, live update hooks) where it is useful for the online fsck code to know
what's going on in the rest of the filesystem.
Since it is not expected that online fsck will be constantly running in the
background, it is very important to minimize the runtime overhead imposed by
these hooks when online fsck is compiled into the kernel but not actively
running on behalf of userspace.
Taking locks in the hot path of a writer thread to access a data structure only
to find that no further action is necessary is expensive -- on the author's
computer, this has an overhead of 40-50ns per access.
Fortunately, the kernel supports dynamic code patching, which enables XFS to
replace a static branch to hook code with ``nop`` sleds when online fsck isn't
running.
This sled has an overhead of however long it takes the instruction decoder to
skip past the sled, which seems to be on the order of less than 1ns and
does not access memory outside of instruction fetching.

When online fsck enables the static key, the sled is replaced with an
unconditional branch to call the hook code.
The switchover is quite expensive (~22000ns) but is paid entirely by the
program that invoked online fsck, and can be amortized if multiple threads
enter online fsck at the same time, or if multiple filesystems are being
checked at the same time.
Changing the branch direction requires taking the CPU hotplug lock, and since
CPU initialization requires memory allocation, online fsck must be careful not
to change a static key while holding any locks or resources that could be
accessed in the memory reclaim paths.
To minimize contention on the CPU hotplug lock, care should be taken not to
enable or disable static keys unnecessarily.

Because static keys are intended to minimize hook overhead for regular
filesystem operations when xfs_scrub is not running, the intended usage
patterns are as follows (a sketch appears after this list):

- The hooked part of XFS should declare a static-scoped static key that
  defaults to false.
  The ``DEFINE_STATIC_KEY_FALSE`` macro takes care of this.
  The static key itself should be declared as a ``static`` variable.

- When deciding to invoke code that's only used by scrub, the regular
  filesystem should call the ``static_branch_unlikely`` predicate to avoid the
  scrub-only hook code if the static key is not enabled.

- The regular filesystem should export helper functions that call
  ``static_branch_inc`` to enable and ``static_branch_dec`` to disable the
  static key.
  Wrapper functions make it easy to compile out the relevant code if the kernel
  distributor turns off online fsck at build time.

- Scrub functions wanting to turn on scrub-only XFS functionality should call
  the ``xchk_fsgates_enable`` function from the setup function to enable a
  specific hook.
  This must be done before obtaining any resources that are used by memory
  reclaim.
  Callers had better be sure they really need the functionality gated by the
  static key; the ``TRY_HARDER`` flag is useful here.

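The sketch below strings the pattern together.
The ``xfs_foo`` hook names are invented for illustration, but
``DEFINE_STATIC_KEY_FALSE``, ``static_branch_unlikely``,
``static_branch_inc``, and ``static_branch_dec`` are the kernel's actual jump
label API:

.. code-block:: c

        DEFINE_STATIC_KEY_FALSE(xfs_foo_hook_gate);

        /* Helpers exported to scrub; easy to compile out entirely. */
        void xfs_foo_hook_enable(void)
        {
            static_branch_inc(&xfs_foo_hook_gate);
        }

        void xfs_foo_hook_disable(void)
        {
            static_branch_dec(&xfs_foo_hook_gate);
        }

        /* Hot path: a nop sled unless a scrubber enabled the gate. */
        static inline void xfs_foo_hook(struct xfs_foo_info *info)
        {
            if (static_branch_unlikely(&xfs_foo_hook_gate))
                xfs_foo_notify_scrub(info);
        }
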
Online scrub has resource acquisition helpers (e.g. ``xchk_perag_lock``) to
handle locking AGI and AGF buffers for all scrubber functions.
If it detects a conflict between scrub and the running transactions, it will
try to wait for intents to complete.
If the caller of the helper has not enabled the static key, the helper will
return -EDEADLOCK, which should result in the scrub being restarted with the
``TRY_HARDER`` flag set.
The scrub setup function should detect that flag, enable the static key, and
try the scrub again.
Scrub teardown disables all static keys obtained by ``xchk_fsgates_enable``.

For more information, please see the kernel documentation of
Documentation/staging/static-keys.rst.

.. _xfile:

Pageable Kernel Memory
----------------------

Some online checking functions work by scanning the filesystem to build a
shadow copy of an ondisk metadata structure in memory and comparing the two
copies.
For online repair to rebuild a metadata structure, it must compute the record
set that will be stored in the new structure before it can persist that new
structure to disk.
Ideally, repairs complete with a single atomic commit that introduces
a new data structure.
To meet these goals, the kernel needs to collect a large amount of information
in a place that doesn't require the correct operation of the filesystem.

Kernel memory isn't suitable because:

* Allocating a contiguous region of memory to create a C array is very
  difficult, especially on 32-bit systems.

* Linked lists of records introduce double pointer overhead per record
  and eliminate the possibility of indexed lookups.

* Kernel memory is pinned, which can drive the system into OOM conditions.

* The system might not have sufficient memory to stage all the information.

At any given time, online fsck does not need to keep the entire record set in
memory, which means that individual records can be paged out to disk.
Continued development of online fsck demonstrated that the ability to perform
indexed data storage would also be very useful.
Fortunately, the Linux kernel already has a facility for byte-addressable and
pageable storage: tmpfs.
In-kernel graphics drivers (most notably i915) use tmpfs files to store
intermediate data that doesn't need to be in memory at all times, so that
usage precedent is already established.
Hence, the ``xfile`` was born!

+-----------------------------------------------------------------------------+
| **Historical Sidebar**:                                                     |
+-----------------------------------------------------------------------------+
| The first edition of online repair inserted records into a new btree as    |
| it found them, which failed because filesystem could shut down with a half- |
| built data structure, which would be live after recovery finished.          |
|                                                                             |
| The second edition solved the half-rebuilt structure problem by storing     |
| everything in memory, but frequently ran the system out of memory.          |
|                                                                             |
| The third edition solved the OOM problem by using linked lists, but the     |
| memory overhead of the list pointers was extreme.                           |
+-----------------------------------------------------------------------------+

xfile Access Models
```````````````````

A survey of the intended uses of xfiles suggested these use cases:

1. Arrays of fixed-sized records (space management btrees, directory and
   extended attribute entries)

2. Sparse arrays of fixed-sized records (quotas and link counts)

3. Large binary objects (BLOBs) of variable sizes (directory and extended
   attribute names and values)

4. Staging btrees in memory (reverse mapping btrees)

5. Arbitrary contents (realtime space management)

To support the first four use cases, high level data structures wrap the xfile
to share functionality between online fsck functions.
The rest of this section discusses the interfaces that the xfile presents to
four of those five higher level data structures.
The fifth use case is discussed in the :ref:`realtime summary <rtsummary>` case
study.

XFS is very record-based, which suggests that the ability to load and store
complete records is important.
To support these cases, a pair of ``xfile_load`` and ``xfile_store``
functions are provided to read and persist objects into an xfile that treat any
error as an out of memory error.  For online repair, squashing error conditions
in this manner is an acceptable behavior because the only reaction is to abort
the operation back to userspace.

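Usage looks roughly like the following sketch.
The signatures here are a paraphrase of the xfile interface rather than a
verbatim copy, and ``struct shadow_rec`` stands in for whatever record type a
scrubber happens to be staging:

.. code-block:: c

        struct shadow_rec    rec;
        loff_t               pos = idx * sizeof(rec);
        int                  error;

        /* Persist one staged record at a fixed offset in the xfile. */
        error = xfile_store(xf, &rec, sizeof(rec), pos);
        if (error)
            return error;    /* any failure is treated like ENOMEM */

        /* Read it back later, e.g. during the btree building step. */
        error = xfile_load(xf, &rec, sizeof(rec), pos);
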
However, no discussion of file access idioms is complete without answering the
question, "But what about mmap?"
It is convenient to access storage directly with pointers, just like userspace
code does with regular memory.
Online fsck must not drive the system into OOM conditions, which means that
xfiles must be responsive to memory reclamation.
tmpfs can only push a pagecache folio to the swap cache if the folio is neither
pinned nor locked, which means the xfile must not pin too many folios.

Short term direct access to xfile contents is done by locking the pagecache
folio and mapping it into kernel address space.  Object load and store uses this
mechanism.  Folio locks are not supposed to be held for long periods of time, so
long term direct access to xfile contents is done by bumping the folio refcount,
mapping it into kernel address space, and dropping the folio lock.
These long term users *must* be responsive to memory reclaim by hooking into
the shrinker infrastructure to know when to release folios.

The ``xfile_get_folio`` and ``xfile_put_folio`` functions are provided to
retrieve the (locked) folio that backs part of an xfile and to release it.
The only code to use these folio lease functions are the xfarray
:ref:`sorting<xfarray_sort>` algorithms and the :ref:`in-memory
btrees<xfbtree>`.

xfile Access Coordination
`````````````````````````

For security reasons, xfiles must be owned privately by the kernel.
They are marked ``S_PRIVATE`` to prevent interference from the security system,
must never be mapped into process file descriptor tables, and their pages must
never be mapped into userspace processes.

To avoid locking recursion issues with the VFS, all accesses to the shmfs file
are performed by manipulating the page cache directly.
xfile writers call the ``->write_begin`` and ``->write_end`` functions of the
xfile's address space to grab writable pages, copy the caller's buffer into the
page, and release the pages.
xfile readers call ``shmem_read_mapping_page_gfp`` to grab pages directly
before copying the contents into the caller's buffer.
In other words, xfiles ignore the VFS read and write code paths to avoid
having to create a dummy ``struct kiocb`` and to avoid taking inode and
freeze locks.
tmpfs cannot be frozen, and xfiles must not be exposed to userspace.

If an xfile is shared between threads to stage repairs, the caller must provide
its own locks to coordinate access.
For example, if a scrub function stores scan results in an xfile and needs
other threads to provide updates to the scanned data, the scrub function must
provide a lock for all threads to share.

.. _xfarray:

Arrays of Fixed-Sized Records
`````````````````````````````

In XFS, each type of indexed space metadata (free space, inodes, reference
counts, file fork space, and reverse mappings) consists of a set of fixed-size
records indexed with a classic B+ tree.
Directories have a set of fixed-size dirent records that point to the names,
and extended attributes have a set of fixed-size attribute keys that point to
names and values.
Quota counters and file link counters index records with numbers.
During a repair, scrub needs to stage new records during the gathering step and
retrieve them during the btree building step.

Although this requirement can be satisfied by calling the read and write
methods of the xfile directly, it is simpler for callers for there to be a
higher level abstraction to take care of computing array offsets, to provide
iterator functions, and to deal with sparse records and sorting.
The ``xfarray`` abstraction presents a linear array for fixed-size records atop
the byte-accessible xfile.

.. _xfarray_access_patterns:

Array Access Patterns
^^^^^^^^^^^^^^^^^^^^^

Array access patterns in online fsck tend to fall into three categories.
Iteration of records is assumed to be necessary for all cases and will be
covered in the next section.

The first type of caller handles records that are indexed by position.
Gaps may exist between records, and a record may be updated multiple times
during the collection step.
In other words, these callers want a sparse linearly addressed table file.
The typical use cases are quota records or file link count records.
Access to array elements is performed programmatically via ``xfarray_load`` and
``xfarray_store`` functions, which wrap the similarly-named xfile functions to
provide loading and storing of array elements at arbitrary array indices.
Gaps are defined to be null records, and null records are defined to be a
sequence of all zero bytes.
Null records are detected by calling ``xfarray_element_is_null``.
They are created either by calling ``xfarray_unset`` to null out an existing
record or by never storing anything to an array index.

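A sketch of this first idiom, incrementing a link count record keyed by
inumber, might look like the following; ``struct nlink_rec`` and its
``nr_parents`` field are stand-ins for the real scrub record type:

.. code-block:: c

        struct nlink_rec    rec;
        int                 error;

        /* A gap reads back as a null (all-zeroes) record. */
        error = xfarray_load(array, inumber, &rec);
        if (error)
            return error;
        rec.nr_parents++;
        return xfarray_store(array, inumber, &rec);
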
The second type of caller handles records that are not indexed by position
and do not require multiple updates to a record.
The typical use case here is rebuilding space btrees and key/value btrees.
These callers can add records to the array without caring about array indices
via the ``xfarray_append`` function, which stores a record at the end of the
array.
For callers that require records to be presentable in a specific order (e.g.
rebuilding btree data), the ``xfarray_sort`` function can arrange the sorted
records; this function will be covered later.

The third type of caller is a bag, which is useful for counting records.
The typical use case here is constructing space extent reference counts from
reverse mapping information.
Records can be put in the bag in any order, they may be removed from the bag
at any time, and uniqueness of records is left to callers.
The ``xfarray_store_anywhere`` function is used to insert a record in any
null record slot in the bag; and the ``xfarray_unset`` function removes a
record from the bag.

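A short sketch of the bag idiom, with error handling and slot tracking
elided:

.. code-block:: c

        /* Stash a reverse mapping record in any convenient null slot. */
        error = xfarray_store_anywhere(bag, &rmap_rec);

        /* Later, drop the record occupying slot i back out of the bag. */
        error = xfarray_unset(bag, i);
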
The proposed patchset is the
`big in-memory array
<https://git.kernel.org/pub/scm/linux/kernel/

2043 Iterating Array Elements                         
2044 ^^^^^^^^^^^^^^^^^^^^^^^^                         
2045                                                  
2046 Most users of the xfarray require the ability    
2047 the array.                                       
2048 Callers can probe every possible array index     
2049                                                  
2050 .. code-block:: c                                
2051                                                  
2052         xfarray_idx_t i;                         
2053         foreach_xfarray_idx(array, i) {          
2054             xfarray_load(array, i, &rec);        
2055                                                  
2056             /* do something with rec */          
2057         }                                        
2058                                                  
2059 All users of this idiom must be prepared to h    
2060 know that there aren't any.                      
2061                                                  
For xfarray users that want to iterate a sparse array, the ``xfarray_iter``
function ignores indices in the xfarray that have never been written to by
calling ``xfile_seek_data`` (which internally uses ``SEEK_DATA``) to skip
areas of the array that are not populated with memory pages.
Once it finds a page, it will skip the zeroed areas of the page.

.. code-block:: c

        xfarray_idx_t i = XFARRAY_CURSOR_INIT;
        while ((ret = xfarray_iter(array, &i, &rec)) == 1) {
            /* do something with rec */
        }

.. _xfarray_sort:

Sorting Array Elements
^^^^^^^^^^^^^^^^^^^^^^

During the fourth demonstration of online repair, a community reviewer
remarked that for performance reasons, online repair ought to load batches of
records into btree record blocks instead of inserting records into a new
btree one at a time.
The btree insertion code in XFS is responsible for maintaining correct
ordering of the records, so naturally the xfarray must also support sorting
the record set prior to bulk loading.

Case Study: Sorting xfarrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The sorting algorithm used in the xfarray is actually a combination of
adaptive quicksort and a heapsort subalgorithm in the spirit of
`Sedgewick <https://algs4.cs.princeton.edu/23quicksort/>`_ and
`pdqsort <https://github.com/orlp/pdqsort>`_, with customizations for the
Linux kernel.
To sort records in a reasonably short amount of time, ``xfarray`` takes
advantage of the binary subpartitioning offered by quicksort, but it also
uses heapsort to hedge against performance collapse if the chosen quicksort
pivots are poor.
Both algorithms are (in general) O(n * lg(n)), but there is a wide
performance gulf between the two implementations.

The Linux kernel already contains a reasonably fast implementation of
heapsort.
It only operates on regular C arrays, which limits the scope of its
usefulness.
There are two key places where the xfarray uses it:

* Sorting any record subset backed by a single xfile page.

* Loading a small number of xfarray records from potentially disparate parts
  of the xfarray into a memory buffer, and sorting the buffer.

In other words, ``xfarray`` uses heapsort to constrain the nested recursion
of quicksort, thereby mitigating quicksort's worst runtime behavior.

Choosing a quicksort pivot is a tricky business.
A good pivot splits the set to sort in half, leading to the divide and
conquer behavior that is crucial to O(n * lg(n)) performance.
A poor pivot barely splits the subset at all, leading to O(n\ :sup:`2`)
runtime.
The xfarray sort routine tries to avoid picking a bad pivot by sampling nine
records into a memory buffer and using the kernel heapsort to identify the
median of the nine.

Most modern quicksort implementations employ Tukey's "ninther" to select a
pivot from a classic C array.
Typical ninther implementations pick three unique triads of records, sort
each of the triads, and then sort the middle value of each triad to determine
the ninther value.
As stated previously, however, xfile accesses are not entirely cheap.
It turned out to be much more performant to read the nine elements into a
memory buffer, run the kernel's in-memory heapsort on the buffer, and choose
the 4th element of that buffer as the pivot.
Tukey's ninthers are described in J. W. Tukey, `The ninther, a technique for
low-effort robust (resistant) location in large samples`, in *Contributions
to Survey Sampling and Applied Statistics*, edited by H. A. David, (Academic
Press, 1978), pp. 251–257.
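
A sketch of that pivot selection follows, under the assumption that the nine
sample records have already been copied from evenly spaced xfarray indices
into the ``samples`` buffer and that ``xfarray_sort_cmp`` is a stand-in name
for the caller's comparison function; the kernel ``sort`` function is the
in-memory heapsort mentioned above:

.. code-block:: c

        char    *samples;       /* buffer holding the nine sampled records */
        size_t  recsize;        /* size of one record */

        /* Sort the nine samples with the kernel's in-memory heapsort. */
        sort(samples, 9, recsize, xfarray_sort_cmp, NULL);

        /* The median of nine sorted samples is the middle (4th) element. */
        memcpy(pivot, samples + (4 * recsize), recsize);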

The partitioning of quicksort is fairly textbook -- rearrange the record
subset around the pivot, then set up the current and next stack frames to
sort with the larger and the smaller halves of the pivot, respectively.
This keeps the stack space requirements to log2(record count).

As a final performance optimization, the hi and lo scanning phase of
quicksort keeps examined xfile pages mapped in the kernel for as long as
possible to reduce map/unmap cycles.
Surprisingly, this reduces overall sort runtime by nearly half again after
accounting for the application of heapsort directly onto xfile pages.

.. _xfblob:

Blob Storage
````````````

Extended attributes and directories add an additional requirement for staging
records: arbitrary byte sequences of finite length.
Each directory entry record needs to store the entry name,
and each extended attribute needs to store both the attribute name and value.
The names, keys, and values can consume a large amount of memory, so the
``xfblob`` abstraction was created to simplify management of these blobs
atop an xfile.

Blob arrays provide ``xfblob_load`` and ``xfblob_store`` functions to retrieve
and persist objects.
The store function returns a magic cookie for every object that it persists.
Later, callers provide this cookie to the ``xfblob_load`` function to recall
the object.
The ``xfblob_free`` function frees a specific blob, and the ``xfblob_truncate``
function frees them all because compaction is not needed.
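
A sketch of the cookie round trip, using the function names above; the exact
signatures and the ``xfblob_cookie`` type are illustrative assumptions:

.. code-block:: c

        xfblob_cookie   cookie;
        int             error;

        /* Persist the dirent name and remember where it went. */
        error = xfblob_store(blobs, name, namelen, &cookie);
        if (error)
                return error;

        /* Much later, use the cookie to recall the name. */
        error = xfblob_load(blobs, cookie, name, namelen);
        if (error)
                return error;

        /* Release the blob once the entry has been committed. */
        error = xfblob_free(blobs, cookie);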

The details of repairing directories and extended attributes will be
discussed in a subsequent section about atomic file content exchanges.
However, it should be noted that these repair functions only use blob storage
to cache a small number of entries before adding them to a temporary ondisk
file, which is why compaction is not required.

The proposed patchset is at the start of the
`extended attribute repair
<https://git.kernel.org/pub/scm/linux/kernel/

.. _xfbtree:

In-Memory B+Trees
`````````````````

The chapter about :ref:`secondary metadata<secondary_metadata>` mentioned that
checking and repairing of secondary metadata commonly requires coordination
between a live metadata scan of the filesystem and writer threads that are
updating that metadata.
Keeping the scan data up to date requires requeuing the live metadata
updates from the filesystem into the scan data.
This *can* be done by appending concurrent updates into a separate log file
and applying them before writing the new metadata to disk, but this leads to
unbounded memory consumption if the rest of the system is very busy.
Another option is to skip the side-log and commit live updates from the
filesystem directly into the scan data, which trades more overhead for a
lower maximum memory requirement.
In both cases, the data structure holding the scan results must support
indexed access to perform well.

Given that indexed lookups of scan data are required for both strategies,
online fsck employs the second strategy of committing live updates directly
into scan data.
Because xfarrays are not indexed and do not enforce record ordering, they
are not suitable for this task.
Conveniently, however, XFS has a library to create and maintain ordered
reverse mapping records: the existing rmap btree code!
If only there was a means to create one in memory.

Recall that the :ref:`xfile <xfile>` abstraction represents memory pages as a
regular file, which means that the kernel can create byte or block addressable
virtual address spaces at will.
The XFS buffer cache specializes in abstracting IO to block-oriented address
spaces, which means that adaptation of the buffer cache to interface with
xfiles enables reuse of the entire btree library.
Btrees built atop an xfile are collectively known as ``xfbtrees``.
The next few sections describe how they actually work.

The proposed patchset is the
`in-memory btree
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Using xfiles as a Buffer Cache Target
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Two modifications are necessary to support xfiles as a buffer cache target.
The first is to make it possible for the ``struct xfs_buftarg`` structure to
host the ``struct xfs_buf`` rhashtable, because normally it is held by a
per-AG structure.
The second change is to modify the buffer ``ioapply`` function to "read"
cached pages from the xfile and "write" cached pages back to the xfile.
Multiple access to individual buffers is controlled by the ``xfs_buf`` lock,
since the xfile does not provide any locking on its own.
With this adaptation in place, users of the xfile-backed buffer cache use
exactly the same APIs as users of the disk-backed buffer cache.
The separation between xfile and buffer cache implies higher memory usage
since they do not share pages, but this property could some day enable
transactional updates to an in-memory btree.
Today, however, it simply eliminates the need for new code.

Space Management with an xfbtree
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Space management for an xfile is very simple -- each btree block is one
memory page in size.
These blocks use the same header format as an ondisk btree, but the in-memory
block verifiers ignore the checksums, assuming that xfile memory is no more
corruption-prone than regular DRAM.
Reusing existing code here is more important than absolute memory efficiency.

The very first block of an xfile backing an xfbtree contains a header block.
The header describes the owner, height, and the block number of the root
xfbtree block.

To allocate a btree block, use ``xfile_seek_data`` to find a gap in the file.
If there are no gaps, create one by extending the length of the xfile.
Preallocate space for the block with ``xfile_prealloc``, and hand back the
location.
To free an xfbtree block, use ``xfile_discard`` (which internally uses
``FALLOC_FL_PUNCH_HOLE``) to remove the memory page from the xfile.
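
The allocation path might look like the following sketch, assuming that
``pos`` has already been found by probing for a gap (or by extending the
xfile) and that each xfbtree block is one memory page, per the description
above; the xfile function signatures are assumptions:

.. code-block:: c

        /* Make sure a backing page exists for the new btree block. */
        error = xfile_prealloc(xfile, pos, PAGE_SIZE);
        if (error)
                return error;

        /* ...the block at pos is now usable by the xfbtree... */

        /* Freeing the block punches out the backing memory page. */
        xfile_discard(xfile, pos, PAGE_SIZE);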

Populating an xfbtree
^^^^^^^^^^^^^^^^^^^^^

An online fsck function that wants to create an xfbtree should proceed as
follows; a sketch of the full sequence appears after the list:

1. Call ``xfile_create`` to create an xfile.

2. Call ``xfs_alloc_memory_buftarg`` to create a buffer cache target structure
   pointing to the xfile.

3. Pass the buffer cache target, buffer ops, and other information to
   ``xfbtree_init`` to initialize the passed in ``struct xfbtree`` and write
   an initial root block to the xfile.
   Each btree type should define a wrapper that passes necessary arguments to
   the creation function.
   For example, rmap btrees define ``xfs_rmapbt_mem_init`` to take care of
   all the necessary details for callers.

4. Pass the xfbtree object to the btree cursor creation function for the
   btree type.
   Following the example above, ``xfs_rmapbt_mem_cursor`` takes care of this
   for callers.

5. Pass the btree cursor to the regular btree functions to make queries
   against and to update the in-memory btree.
   For example, a btree cursor for an rmap xfbtree can be passed to the
   ``xfs_rmap_*`` functions just like any other btree cursor.
   See the :ref:`next section<xfbtree_commit>` for information on dealing with
   xfbtree updates that are logged to a transaction.

6. When finished, delete the btree cursor, destroy the xfbtree object, free
   the buffer target, and destroy the xfile to release all resources.
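
Here is a sketch of that sequence for an rmap xfbtree.
Error handling is elided and every signature here is an assumption; only the
function names are taken from the steps above:

.. code-block:: c

        struct xfile            *xfile;
        struct xfs_buftarg      *btp;
        struct xfbtree          xfbt;
        struct xfs_btree_cur    *cur;

        error = xfile_create("rmap records", 0, &xfile);        /* step 1 */
        btp = xfs_alloc_memory_buftarg(mp, xfile);               /* step 2 */
        error = xfs_rmapbt_mem_init(mp, &xfbt, btp, agno);       /* step 3 */
        cur = xfs_rmapbt_mem_cursor(pag, tp, &xfbt);             /* step 4 */

        /* step 5: use the regular rmap functions on the cursor */
        error = xfs_rmap_map_raw(cur, &rmap_rec);

        /* step 6: tear everything down */
        xfs_btree_del_cursor(cur, error);
        xfbtree_destroy(&xfbt);
        xfs_free_buftarg(btp);
        xfile_destroy(xfile);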

.. _xfbtree_commit:

Committing Logged xfbtree Buffers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Although it is a clever hack to reuse the rmap btree code to handle the
staging structure, the ephemeral nature of the in-memory btree block storage
presents some challenges of its own.
The XFS transaction manager must not commit buffer log items for buffers
backed by an xfile because the log format does not understand updates for
devices other than the data device.
An ephemeral xfbtree probably will not exist by the time the AIL checkpoints
log transactions back into the filesystem, and certainly won't exist during
log recovery.
For these reasons, any code updating an xfbtree in transaction context must
remove the buffer log items from the transaction and write the updates into
the backing xfile before committing or cancelling the transaction.

The ``xfbtree_trans_commit`` and ``xfbtree_trans_cancel`` functions implement
this functionality as follows:

1. Find each buffer log item whose buffer targets the xfile.

2. Record the dirty/ordered status of the log item.

3. Detach the log item from the buffer.

4. Queue the buffer to a special delwri list.

5. Clear the transaction dirty flag if the only dirty log items were the ones
   that were detached in step 3.

6. Submit the delwri list to commit the changes to the xfile, if the updates
   are being committed.

After removing xfile logged buffers from the transaction in this manner, the
transaction can be committed or cancelled.
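
A condensed sketch of steps 1 through 6 follows.
Every helper marked "hypothetical" is a stand-in for the real log item
machinery rather than an actual kernel interface, and the xfbtree field name
is likewise assumed:

.. code-block:: c

        struct xfs_log_item     *lip;
        LIST_HEAD(buffer_list);
        bool                    dirty = false;

        for_each_trans_log_item(tp, lip) {                  /* hypothetical */
                struct xfs_buf  *bp = buf_log_item_buffer(lip); /* hypothetical */

                if (!bp || bp->b_target != xfbt->target) {  /* assumed field */
                        /* Other dirty items keep the transaction dirty. */
                        if (test_bit(XFS_LI_DIRTY, &lip->li_flags))
                                dirty = true;
                        continue;
                }

                /* Steps 2 and 3: note the item state, then detach it. */
                detach_buf_log_item(tp, bp, lip);           /* hypothetical */

                /* Step 4: stage the buffer for writeback to the xfile. */
                xfs_buf_delwri_queue(bp, &buffer_list);
        }

        /* Step 5: clean transaction if only xfile buffers were dirty. */
        if (!dirty)
                tp->t_flags &= ~XFS_TRANS_DIRTY;

        /* Step 6: when committing, write the queued changes to the xfile. */
        error = xfs_buf_delwri_submit(&buffer_list);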

Bulk Loading of Ondisk B+Trees
------------------------------

As mentioned previously, early iterations of online repair built new btree
structures by creating a new btree and adding observations individually.
Loading a btree one record at a time had a slight advantage of not requiring
the incore records to be sorted prior to commit, but was very slow and leaked
blocks if the system went down during a repair.
Loading records one at a time also meant that repair could not control the
loading factor of the blocks in the new btree.

Fortunately, the venerable ``xfs_repair`` tool had a more efficient means for
rebuilding a btree index from a collection of records -- bulk btree loading.
This was implemented rather inefficiently code-wise, since ``xfs_repair``
had separate copy-pasted implementations for each btree type.

To prepare for online fsck, each of the four bulk loaders were studied, notes
were taken, and the four were refactored into a single generic btree bulk
loading mechanism.
Those notes in turn have been refreshed and are presented below.

Geometry Computation
````````````````````

The zeroth step of bulk loading is to assemble the entire record set that
will be stored in the new btree, and sort the records in btree key order.
Next, call ``xfs_btree_bload_compute_geometry`` to compute the shape of the
btree from the record set, the type of btree, and any load factor preferences.
This information is required for resource reservation.

First, the geometry computation computes the minimum and maximum records that
will fit in a leaf block from the size of a btree block and the size of the
block header.
Roughly speaking, the maximum number of records is::

        maxrecs = (block_size - header_size) / record_size

The XFS design specifies that btree blocks should be merged when possible,
which means the minimum number of records is half of maxrecs::

        minrecs = maxrecs / 2

The next variable to determine is the desired loading factor.
This must be at least minrecs and no more than maxrecs.
Choosing minrecs is undesirable because it wastes half the block.
Choosing maxrecs is also undesirable because adding a single record to each
newly rebuilt leaf block will cause a tree split, which causes a noticeable
drop in performance immediately afterwards.
The default loading factor was chosen to be 75% of maxrecs, which provides a
reasonably compact structure without any immediate split penalties::

        default_load_factor = (maxrecs + minrecs) / 2

If space is tight, the loading factor will be set to maxrecs to try to avoid
running out of space::

        leaf_load_factor = enough space ? default_load_factor : maxrecs

Load factor is computed for btree node blocks using the combined size of the
btree key and pointer as the record size::

        maxrecs = (block_size - header_size) / (key_size + ptr_size)
        minrecs = maxrecs / 2
        node_load_factor = enough space ? default_load_factor : maxrecs

Once that's done, the number of leaf blocks required to store the record set
can be computed as::

        leaf_blocks = ceil(record_count / leaf_load_factor)

The number of node blocks needed to point to the next level down in the tree
is computed as::

        n_blocks = (n == 0 ? leaf_blocks : node_blocks[n])
        node_blocks[n + 1] = ceil(n_blocks / node_load_factor)

The entire computation is performed recursively until the current level only
needs one block; a worked example appears at the end of this subsection.
The resulting geometry is as follows:

- For AG-rooted btrees, this level is the root level, so the height of the
  new tree is ``level + 1`` and the space needed is the summation of the
  number of blocks on each level.

- For inode-rooted btrees where the records in the top level do not fit in
  the inode fork area, the height is ``level + 2``, the space needed is the
  summation of the number of blocks on each level, and the inode fork points
  to the root block.

- For inode-rooted btrees where the records in the top level can be stored in
  the inode fork area, then the root block can be stored in the inode, the
  height is ``level + 1``, and the space needed is one less than the
  summation of the number of blocks on each level.
  This only becomes relevant when non-bmap btrees acquire the ability to root
  in an inode, which is a future patchset and only included here for
  completeness.
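
To make the arithmetic concrete, here is a worked example in C using made-up
sizes: 4096-byte blocks, a 56-byte block header, 24-byte records, and one
million records to load.
Only the formulas come from the text above; every number is illustrative:

.. code-block:: c

        unsigned int    maxrecs = (4096 - 56) / 24;        /* 168 */
        unsigned int    minrecs = maxrecs / 2;             /*  84 */
        unsigned int    load = (maxrecs + minrecs) / 2;    /* 126 */
        uint64_t        nr_records = 1000000;

        /* 1000000 / 126 rounded up = 7937 leaf blocks... */
        uint64_t        leaf_blocks = DIV_ROUND_UP(nr_records, load);

        /*
         * ...which in turn need ceil(7937 / node_load_factor) node blocks,
         * and so on up the levels until a single block suffices.
         */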

.. _newbt:

Reserving New B+Tree Blocks
```````````````````````````

Once repair knows the number of blocks needed for the new btree, it allocates
those blocks using the free space information.
Each reserved extent is tracked separately by the btree builder state data.
To improve crash resilience, the reservation code also logs an Extent Freeing
Intent (EFI) item in the same transaction as each space allocation and
attaches its in-memory ``struct xfs_extent_free_item`` object to the space
reservation.
If the system goes down, log recovery will use the unfinished EFIs to free
the unused space, leaving the filesystem unchanged.

Each time the btree builder claims a block for the btree from a reserved
extent, it updates the in-memory reservation to reflect the claimed space.
Block reservation tries to allocate as much contiguous space as possible to
reduce the number of EFIs in play.

While repair is writing these new btree blocks, the EFIs created for the
space reservations pin the tail of the ondisk log.
It's possible that other parts of the system will remain busy and push the
head of the log towards the pinned tail.
To avoid livelocking the filesystem, the EFIs must not pin the tail of the
log for too long.
To alleviate this problem, the dynamic relogging capability of the deferred
ops mechanism is reused here to commit a transaction containing an EFD for
the old EFI and a new EFI at the head.
This enables the log to release the old EFI to keep the log moving forwards.

EFIs have a role to play during the commit and reaping phases; please see the
next section and the section about :ref:`reaping<reaping>` for more details.

Proposed patchsets are the
`bitmap rework
<https://git.kernel.org/pub/scm/linux/kernel/
and the
`preparation for bulk loading btrees
<https://git.kernel.org/pub/scm/linux/kernel/


Writing the New Tree
````````````````````

This part is pretty simple -- the btree builder claims a block from the
reserved list, writes the new btree block header, fills the rest of the block
with records, and adds the new leaf block to a list of written blocks::

  ┌────┐
  │leaf│
  │RRR │
  └────┘

Sibling pointers are set every time a new block is added to the level::

  ┌────┐ ┌────┐ ┌────┐ ┌────┐
  │leaf│→│leaf│→│leaf│→│leaf│
  │RRR │←│RRR │←│RRR │←│RRR │
  └────┘ └────┘ └────┘ └────┘

When it finishes writing the record leaf blocks, it moves on to the node
blocks.
To fill a node block, it walks each block in the next level down in the tree
to compute the relevant keys and write them into the parent node::

      ┌────┐       ┌────┐
      │node│──────→│node│
      │PP  │←──────│PP  │
      └────┘       └────┘
      ↙   ↘         ↙   ↘
  ┌────┐ ┌────┐ ┌────┐ ┌────┐
  │leaf│→│leaf│→│leaf│→│leaf│
  │RRR │←│RRR │←│RRR │←│RRR │
  └────┘ └────┘ └────┘ └────┘

When it reaches the root level, it is ready to commit the new btree!::

          ┌─────────┐
          │  root   │
          │   PP    │
          └─────────┘
          ↙         ↘
      ┌────┐       ┌────┐
      │node│──────→│node│
      │PP  │←──────│PP  │
      └────┘       └────┘
      ↙   ↘         ↙   ↘
  ┌────┐ ┌────┐ ┌────┐ ┌────┐
  │leaf│→│leaf│→│leaf│→│leaf│
  │RRR │←│RRR │←│RRR │←│RRR │
  └────┘ └────┘ └────┘ └────┘

The first step to commit the new btree is to persist the btree blocks to disk
synchronously.
This is a little complicated because a new btree block could have been freed
in the recent past, so the builder must use ``xfs_buf_delwri_queue_here`` to
remove the (stale) buffer from the AIL list before it can write the new
blocks to disk.
Blocks are queued for IO using a delwri list and written in one large batch
with ``xfs_buf_delwri_submit``.

Once the new blocks have been persisted to disk, control returns to the
individual repair function that called the bulk loader.
The repair function must log the location of the new btree in a transaction,
clean up the space reservations that were made for the new btree, and reap
the old metadata blocks:

1. Commit the location of the new btree root.

2. For each incore reservation:

   a. Log Extent Freeing Done (EFD) items for all the space that was consumed
      by the btree builder.  The new EFDs must point to the EFIs attached to
      the reservation to prevent log recovery from freeing the new blocks.

   b. For unclaimed portions of incore reservations, create a regular
      deferred extent free work item to free the unused space later in the
      transaction chain.

   c. The EFDs and EFIs logged in steps 2a and 2b must not overrun the
      reservation of the committing transaction.
      If the btree loading code suspects this might be about to happen, it
      must call ``xrep_defer_finish`` to clear out the deferred work and
      obtain a fresh transaction.

3. Clear out the deferred work a second time to finish the commit and clean
   the repair transaction.

The transaction rolling in steps 2c and 3 represents a weakness in the repair
algorithm, because a log flush and a crash before the end of the reap step
can result in space leaking.
Online repair functions minimize the chances of this occurring by using very
large transactions, which each can accommodate many thousands of block
freeing instructions.
Repair moves on to reaping the old blocks, which will be presented in a
subsequent :ref:`section<reaping>` after a few case studies of bulk loading.

Case Study: Rebuilding the Inode Index
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The high level process to rebuild the inode index btree is:

1. Walk the reverse mapping records to generate ``struct xfs_inobt_rec``
   records from the inode chunk information and a bitmap of the old inode
   btree blocks.

2. Append the records to an xfarray in inode order.

3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the
   number of blocks needed for the inode btree.
   If the free space inode btree is enabled, call it again to estimate the
   geometry of the finobt.

4. Allocate the number of blocks computed in the previous step.

5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
   generate the internal node blocks.
   If the free space inode btree is enabled, call it again to load the
   finobt.

6. Commit the location of the new btree root block(s) to the AGI.

7. Reap the old btree blocks using the bitmap created in step 1.

Details are as follows.

The inode btree maps inumbers to the ondisk location of the associated
inode records, which means that the inode btrees can be rebuilt from the
reverse mapping information.
Reverse mapping records with an owner of ``XFS_RMAP_OWN_INOBT`` mark the
location of the old inode btree blocks.
Each reverse mapping record with an owner of ``XFS_RMAP_OWN_INODES`` marks
the location of at least one inode cluster buffer.
A cluster is the smallest number of ondisk inodes that can be allocated or
freed in a single transaction; it is never smaller than 1 fsblock or 4
inodes.

For the space represented by each inode cluster, ensure that there are no
records in the free space btrees nor any records in the reference count
btree.
If there are, the space metadata inconsistencies are reason enough to abort
the operation.
Otherwise, read each cluster buffer to check that its contents appear to be
ondisk inodes and to decide if the file is allocated
(``xfs_dinode.i_mode != 0``) or free (``xfs_dinode.i_mode == 0``).
Accumulate the results of successive inode cluster buffer reads until there
is enough information to fill a single inode chunk record, which covers 64
consecutive numbers in the inumber keyspace.
If the chunk is sparse, the chunk record may include holes.

Once the repair function accumulates one chunk's worth of data, it calls
``xfarray_append`` to add the inode btree record to the xfarray.
This xfarray is walked twice during the btree creation step -- once to
populate the inode btree with all inode chunk records, and a second time to
populate the free inode btree with records for chunks that have free
non-sparse inodes.
The number of records for the inode btree is the number of xfarray records,
but the record count for the free inode btree has to be computed as inode
chunk records are stored in the xfarray.
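
A sketch of staging one accumulated chunk record follows; the incore record
type and the field values shown here are illustrative, while the
``xfarray_append`` call matches the description above:

.. code-block:: c

        struct xfs_inobt_rec_incore     irec = {
                .ir_startino    = chunk_ino,
                .ir_count       = XFS_INODES_PER_CHUNK,
                .ir_freecount   = chunk_freecount,
                .ir_free        = chunk_freemask,
        };

        /* Stage the finished chunk record at the end of the array. */
        error = xfarray_append(array, &irec);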

The proposed patchset is the
`AG btree repair
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Case Study: Rebuilding the Space Reference Counts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Reverse mapping records are used to rebuild the reference count information.
Reference counts are required for correct operation of copy on write for
shared file data.
Imagine the reverse mapping entries as rectangles representing extents of
physical blocks, and that the rectangles can be laid down to allow them to
overlap each other.
From the diagram below, it is apparent that a reference count record must
start or end wherever the height of the stack changes.
In other words, the record emission stimulus is level-triggered::

                        █    ███
              ██      █████ ████   ███
        ██   ████     ███████████ ████     ██
        ████████████████████████████████ ████
        ^ ^  ^^ ^^    ^ ^^ ^^^  ^^^^  ^ ^^ ^
        2 1  23 21    3 43 234  2123  1 01 2

The ondisk reference count btree does not store the refcount == 0 cases
because the free space btree already records which blocks are free.
Extents being used to stage copy-on-write operations should be the only
records with refcount == 1.
Single-owner file blocks aren't recorded in either the free space or the
reference count btrees.

The high level process to rebuild the reference count btree is:

1. Walk the reverse mapping records to generate ``struct xfs_refcount_irec``
   records for any space having more than one reverse mapping and add them to
   the xfarray.
   Any records owned by ``XFS_RMAP_OWN_COW`` are also added to the xfarray
   because these are extents allocated to stage a copy on write operation and
   are tracked in the refcount btree.

   Use any records owned by ``XFS_RMAP_OWN_REFC`` to create a bitmap of old
   refcount btree blocks.

2. Sort the records in physical extent order, putting the CoW staging extents
   at the end of the xfarray.
   This matches the sorting order of records in the refcount btree.

3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the
   number of blocks needed for the new tree.

4. Allocate the number of blocks computed in the previous step.

5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
   generate the internal node blocks.

6. Commit the location of the new btree root block to the AGF.

7. Reap the old btree blocks using the bitmap created in step 1.

Details are as follows; the same algorithm is used by ``xfs_repair`` to
generate refcount information from reverse mapping records.

- Until the reverse mapping btree runs out of records:

  - Retrieve the next record from the btree and put it in a bag.

  - Collect all records with the same starting block from the btree and put
    them in the bag.

  - While the bag isn't empty:

    - Among the mappings in the bag, compute the lowest block number where
      the reference count changes.
      This position will be either the starting block number of the next
      unprocessed reverse mapping or the next block after the shortest
      mapping in the bag.

    - Remove all mappings from the bag that end at this position.

    - Collect all reverse mappings that start at this position from the btree
      and put them in the bag.

    - If the size of the bag changed and is greater than one, create a new
      refcount record associating the block number range that we just walked
      to the size of the bag.

The bag-like structure in this case is the third type of xfarray as discussed
in the :ref:`xfarray access patterns<xfarray_access_patterns>` section.
Reverse mappings are added to the bag using ``xfarray_store_anywhere`` and
removed via ``xfarray_unset``.
Bag members are examined through ``xfarray_iter`` loops, as sketched below.
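
For example, the step that removes mappings ending at the next reference
count change might look like this sketch, where ``next_bno`` is the position
computed in the first step of the inner loop; the xfarray calls match the
descriptions above, while the loop body is illustrative:

.. code-block:: c

        struct xfs_rmap_irec    rec;
        xfarray_idx_t           idx = XFARRAY_CURSOR_INIT;

        while ((ret = xfarray_iter(bag, &idx, &rec)) == 1) {
                /* Mappings extending past next_bno stay in the bag. */
                if (rec.rm_startblock + rec.rm_blockcount > next_bno)
                        continue;

                /* This mapping ends here, so drop it from the bag. */
                ret = xfarray_unset(bag, idx);
                if (ret)
                        break;
        }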

The proposed patchset is the
`AG btree repair
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Case Study: Rebuilding File Fork Mapping Indices
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The high level process to rebuild a data/attr fork mapping btree is:

1. Walk the reverse mapping records to generate ``struct xfs_bmbt_rec``
   records from the reverse mapping records for that inode and fork.
   Append these records to an xfarray.
   Compute the bitmap of the old bmap btree blocks from the ``BMBT_BLOCK``
   records.

2. Use the ``xfs_btree_bload_compute_geometry`` function to compute the
   number of blocks needed for the new tree.

3. Sort the records in file offset order.

4. If the extent records would fit in the inode fork immediate area, commit
   the records to that immediate area and skip to step 8.

5. Allocate the number of blocks computed in step 2.

6. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
   generate the internal node blocks.

7. Commit the new btree root block to the inode fork immediate area.

8. Reap the old btree blocks using the bitmap created in step 1.

There are some complications here:
First, it's possible to move the fork offset to adjust the sizes of the
immediate areas if the data and attr forks are not both in BMBT format.
Second, if there are sufficiently few fork mappings, it may be possible to
use EXTENTS format instead of BMBT, which may require a conversion.
Third, the incore extent map must be reloaded carefully to avoid disturbing
any delayed allocation extents.

The proposed patchset is the
`file mapping repair
<https://git.kernel.org/pub/scm/linux/kernel/
series.

.. _reaping:

Reaping Old Metadata Blocks
---------------------------

Whenever online fsck builds a new data structure to replace one that is
suspect, there is a question of how to find and dispose of the blocks that
belonged to the old structure.
The laziest method of course is not to deal with them at all, but this leads
to service degradations as space leaks out of the filesystem.
Hopefully, someone will schedule a rebuild of the free space information to
plug all those leaks.
Offline repair rebuilds all space metadata after discovering all the space in
use by the files and directories that it decides not to clear, hence it can
build new structures in the discovered free space and avoid the question of
reaping.

As part of a repair, online fsck relies heavily on the reverse mapping
records to find space that is owned by the corresponding rmap owner yet truly
free.
Cross referencing rmap records with other rmap records is necessary because
there may be other data structures that also think they own some of those
blocks (e.g. crosslinked trees).
Permitting the block allocator to hand them out again will not push the
system towards consistency.

For space metadata, the process of finding extents to dispose of generally
follows this format:

1. Create a bitmap of space used by data structures that must be preserved.
   The space reservations used to create the new metadata can be used here
   if the same rmap owner code is used to denote all of the objects being
   rebuilt.

2. Survey the reverse mapping data to create a bitmap of space owned by the
   same ``XFS_RMAP_OWN_*`` number for the metadata that is being preserved.

3. Use the bitmap disunion operator to subtract (1) from (2).
   The remaining set bits represent candidate extents that could be freed.
   The process moves on to step 4 below.

Repairs for file-based metadata such as extended attributes, directories,
symbolic links, quota files and realtime bitmaps are performed by building a
new structure attached to a temporary file and exchanging all mappings in the
file forks.
Afterward, the mappings in the old file fork are the candidate blocks for
disposal.

The process for disposing of old extents is as follows:

4. For each candidate extent, count the number of reverse mapping records for
   the first block in that extent that do not have the same rmap owner for
   the data structure being repaired.

   - If zero, the block has a single owner and can be freed.

   - If not, the block is part of a crosslinked structure and must not be
     freed.

5. Starting with the next block in the extent, figure out how many more
   blocks have the same zero/nonzero other owner status as the first block.

6. If the region is crosslinked, delete the reverse mapping entry for the
   structure being repaired and move on to the next region.

7. If the region is to be freed, mark any corresponding buffers in the buffer
   cache as stale to prevent log writeback.

8. Free the region and move on.

However, there is one complication to this procedure.
Transactions are of finite size, so the reaping process must be careful to
roll the transactions to avoid overruns.
Overruns come from two sources:

a. EFIs logged on behalf of space that is no longer occupied

b. Log items for buffer invalidations

This is also a window in which a crash during the reaping process can leak
blocks.
As stated earlier, online repair functions use very large transactions to
minimize the chances of this occurring.

The proposed patchset is the
`preparation for bulk loading btrees
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Case Study: Reaping After a Regular Btree Repair
````````````````````````````````````````````````

Old reference count and inode btrees are the easiest to reap because they
have rmap records with special owner codes: ``XFS_RMAP_OWN_REFC`` for the
refcount btree, and ``XFS_RMAP_OWN_INOBT`` for the inode and free inode
btrees.
Creating a list of extents to reap the old btree blocks is quite simple,
conceptually:

1. Lock the relevant AGI/AGF header buffers to prevent allocation and frees.

2. For each reverse mapping record with an rmap owner corresponding to the
   metadata structure being rebuilt, set the corresponding range in a bitmap.

3. Walk the current data structures that have the same rmap owner.
   For each block visited, clear that range in the above bitmap.

4. Each set bit in the bitmap represents a block that could be a block from
   the old data structures and hence is a candidate for reaping.
   In other words, ``(rmap_records_owned_by & ~blocks_owned_by_structure)``
   are the blocks that might be freeable.

If it is possible to maintain the AGF lock throughout the repair (which is
the common case), then step 2 can be performed at the same time as the
reverse mapping record walk that creates the records for the new btree.

Case Study: Rebuilding the Free Space Indices
`````````````````````````````````````````````

The high level process to rebuild the free space indices is:

1. Walk the reverse mapping records to generate
   ``struct xfs_alloc_rec_incore`` records from the gaps in the reverse
   mapping btree.

2. Append the records to an xfarray.

3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the
   number of blocks needed for each new tree.

4. Allocate the number of blocks computed in the previous step from the free
   space information collected.

5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
   generate the internal node blocks for the free space by length index.
   Call it again for the free space by block number index.

6. Commit the locations of the new btree root blocks to the AGF.

7. Reap the old btree blocks by looking for space that is not recorded by
   the reverse mapping btree, the new free space btrees, or the AGFL.

Repairing the free space btrees has three key complications over a regular
btree repair:

First, free space is not explicitly tracked in the reverse mapping records.
Hence, the new free space records must be inferred from gaps in the physical
space component of the keyspace of the reverse mapping btree.

Second, free space repairs cannot use the common btree reservation code
because new blocks are reserved out of the free space btrees.
This is impossible when repairing the free space btrees themselves.
However, repair holds the AGF buffer lock for the duration of the free space
index reconstruction, so it can use the collected free space information to
supply the blocks for the new free space btrees.
It is not necessary to back each reserved extent with an EFI because the new
free space btrees are constructed in what the ondisk filesystem thinks is
unowned space.
However, if reserving blocks for the new btrees from the collected free space
information changes the number of free space records, repair must re-estimate
the new free space btree geometry with the new record count until the
reservation is sufficient.
As part of committing the new btrees, repair must ensure that reverse
mappings are created for the reserved blocks and that unused reserved blocks
are inserted into the free space btrees.
Deferred rmap and freeing operations are used to ensure that this transition
is atomic, similar to the other btree repair functions.

Third, finding the blocks to reap after the repair is not overly
straightforward.
Blocks for the free space btrees and the reverse mapping btrees are supplied
by the AGFL.
Blocks put onto the AGFL have reverse mapping records with the owner
``XFS_RMAP_OWN_AG``.
This ownership is retained when blocks move from the AGFL into the free space
btrees or the reverse mapping btrees.
When repair walks reverse mapping records to synthesize free space records,
it creates a bitmap (``ag_owner_bitmap``) of all the space claimed by
``XFS_RMAP_OWN_AG`` records.
The repair context maintains a second bitmap corresponding to the rmap btree
blocks and the AGFL blocks (``rmap_agfl_bitmap``).
When the walk is complete, the bitmap disunion operation ``(ag_owner_bitmap &
~rmap_agfl_bitmap)`` computes the extents that are used by the old free space
btrees.
These blocks can then be reaped using the methods outlined above.
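
Expressed as a sketch with hypothetical bitmap helpers (the real scrub bitmap
API differs in detail), the computation looks like:

.. code-block:: c

        /*
         * All space claimed by XFS_RMAP_OWN_AG records, minus the rmap
         * btree and AGFL blocks, leaves the old free space btree blocks.
         */
        error = bitmap_disunion(ag_owner_bitmap, rmap_agfl_bitmap); /* hypothetical */
        if (error)
                return error;

        /* Whatever remains set in the bitmap is a candidate for reaping. */
        error = reap_ag_extents(sc, ag_owner_bitmap);   /* hypothetical */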

The proposed patchset is the
`AG btree repair
<https://git.kernel.org/pub/scm/linux/kernel/
series.

.. _rmap_reap:

Case Study: Reaping After Repairing Reverse Mapping Btrees
``````````````````````````````````````````````````````````

Old reverse mapping btrees are less difficult to reap after a repair.
As mentioned in the previous section, blocks on the AGFL, the two free space
btree blocks, and the reverse mapping btree blocks all have reverse mapping
records with ``XFS_RMAP_OWN_AG`` as the owner.
The full process of gathering reverse mapping records and building a new
btree is described in the case study of
:ref:`live rebuilds of rmap data <rmap_repair>`, but a crucial point from
that discussion is that the new rmap btree will not contain any records for
the old rmap btree, nor will the old btree blocks be tracked in the free
space btrees.
The list of candidate reaping blocks is computed by setting the bits
corresponding to the gaps in the new rmap btree records, and then clearing
the bits corresponding to extents in the free space btrees and the current
AGFL blocks.
The result ``(new_rmapbt_gaps & ~(agfl | bnobt_records))`` are reaped using
the methods outlined above.

The rest of the process of rebuilding the reverse mapping btree is discussed
in a separate :ref:`case study<rmap_repair>`.

The proposed patchset is the
`AG btree repair
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Case Study: Rebuilding the AGFL
```````````````````````````````

The allocation group free block list (AGFL) is repaired as follows:

1. Create a bitmap for all the space that the reverse mapping data claims is
   owned by ``XFS_RMAP_OWN_AG``.

2. Subtract the space used by the two free space btrees and the rmap btree.

3. Subtract any space that the reverse mapping data claims is owned by any
   other owner, to avoid re-adding crosslinked blocks to the AGFL.

4. Once the AGFL is full, reap any blocks leftover.

5. The next operation to fix the freelist will right-size the list.

See `fs/xfs/scrub/agheader_repair.c <https://

Inode Record Repairs
--------------------

Inode records must be handled carefully, because they have both ondisk
records ("dinodes") and an in-memory ("cached") representation.
There is a very high potential for cache coherency issues if online fsck is
not careful to access the ondisk metadata *only* when the ondisk metadata is
so badly damaged that the filesystem cannot load the in-memory
representation.
When online fsck wants to open a damaged file for scrubbing, it must use
specialized resource acquisition functions that return either the in-memory
representation *or* a lock on whichever object is necessary to prevent any
update to the ondisk location.

The only repairs that should be made to the ondisk inode buffers are whatever
is necessary to get the in-core structure loaded.
This means fixing whatever is caught by the inode cluster buffer and inode
fork verifiers, and retrying the ``iget`` operation.
If the second ``iget`` fails, the repair has failed.

Once the in-memory representation is loaded, repair can lock the inode and
can subject it to comprehensive checks, repairs, and optimizations.
Most inode attributes are easy to check and constrain, or are user-controlled
arbitrary bit patterns; these are both easy to fix.
Dealing with the data and attr fork extent counts and the file block counts
is more complicated, because computing the correct value requires traversing
the forks, or if that fails, leaving the fields invalid and waiting for the
fork fsck functions to run.

The proposed patchset is the
`inode
<https://git.kernel.org/pub/scm/linux/kernel/
repair series.

Quota Record Repairs
--------------------

Similar to inodes, quota records ("dquots") also have both ondisk records and
an in-memory representation, and hence are subject to the same cache
coherency issues.
Somewhat confusingly, both are known as dquots in the XFS codebase.

The only repairs that should be made to the ondisk quota record buffers are
whatever is necessary to get the in-core structure loaded.
Once the in-memory representation is loaded, the only attributes needing
checking are obviously bad limits and timer values.

Quota usage counters are checked, repaired, and discussed separately in the
section about :ref:`live quotacheck <quotacheck>`.

The proposed patchset is the
`quota
<https://git.kernel.org/pub/scm/linux/kernel/
repair series.

.. _fscounters:

Freezing to Fix Summary Counters
--------------------------------

Filesystem summary counters track availability of filesystem resources such
as free blocks, free inodes, and allocated inodes.
This information could be compiled by walking the free space and inode
indexes, but this is a slow process, so XFS maintains a copy in the ondisk
superblock that should reflect the ondisk metadata, at least when the
filesystem has been unmounted cleanly.
For performance reasons, XFS also maintains incore copies of those counters,
which are key to enabling resource reservations for active transactions.
Writer threads reserve the worst-case quantities of resources from the
incore counter and give back whatever they don't use at commit time.
It is therefore only necessary to serialize on the superblock when the
superblock is being committed to disk.

The lazy superblock counter feature introduced in XFS v5 took this even
further by training log recovery to recompute the summary counters from the
AG headers, which eliminated the need for most transactions even to touch the
superblock.
The only time XFS commits the summary counters is at filesystem unmount.
To reduce contention even further, the incore counter is implemented as a
percpu counter, which means that each CPU is allocated a batch of blocks from
a global incore counter and can satisfy small allocations from the local
batch.
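
As a brief illustration of the tradeoff, consider the two ways to read a
kernel percpu counter such as the free block counter (these functions come
from ``<linux/percpu_counter.h>``; the ``m_fdblocks`` field is the XFS mount
structure's free block counter):

.. code-block:: c

        /* Cheap read: may be stale by up to the batch size on every CPU. */
        s64     approx = percpu_counter_read_positive(&mp->m_fdblocks);

        /*
         * Precise read: sums every CPU's batch, but the result can be
         * outdated the moment the function returns.
         */
        s64     exact = percpu_counter_sum(&mp->m_fdblocks);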

The high-performance nature of the summary counters makes it difficult for
online fsck to check them, since there is no way to quiesce a percpu counter
while the system is running.
Although online fsck can read the filesystem metadata to compute the correct
values of the summary counters, there's no way to hold the value of a percpu
counter stable, so it's quite possible that the counter will be out of date
by the time the walk is complete.
Earlier versions of online scrub would return to userspace with an incomplete
scan flag, but this is not a satisfying outcome for a system administrator.
For repairs, the in-memory counters must be stabilized while walking the
filesystem metadata to get an accurate reading and install it in the percpu
counter.

To satisfy this requirement, online fsck must prevent other programs in the
system from initiating new writes to the filesystem, it must disable
background garbage collection threads, and it must wait for existing writer
programs to exit the kernel.
Once that has been established, scrub can walk the AG free space indexes, the
inode btrees, and the realtime bitmap to compute the correct value of all
four summary counters.
This is very similar to a filesystem freeze, though not all of the pieces are
necessary:

- The final freeze state is set one higher than ``SB_FREEZE_COMPLETE`` to
  prevent other threads from thawing the filesystem, or other scrub threads
  from initiating another fscounters freeze.

- It does not quiesce the log.

With this code in place, it is now possible to pause the filesystem for just
long enough to check and correct the summary counters.

3111 +--------------------------------------------    
3112 | **Historical Sidebar**:                        
3113 +--------------------------------------------    
3114 | The initial implementation used the actual     
3115 | mechanism to quiesce filesystem activity.      
3116 | With the filesystem frozen, it is possible     
3117 | with exact precision, but there are many pr    
3118 | methods directly:                              
3119 |                                                
3120 | - Other programs can unfreeze the filesyste    
3121 |   This leads to incorrect scan results and     
3122 |                                                
3123 | - Adding an extra lock to prevent others fr    
3124 |   required the addition of a ``->freeze_sup    
3125 |   ``freeze_fs()``.                             
3126 |   This in turn caused other subtle problems    
3127 |   the VFS ``freeze_super`` and ``thaw_super    
3128 |   last reference to the VFS superblock, and    
3129 |   becomes a UAF bug!                           
3130 |   This can happen if the filesystem is unmo    
3131 |   block device has frozen the filesystem.      
3132 |   This problem could be solved by grabbing     
3133 |   superblock, but it felt suboptimal given     
3134 |   this approach.                               
3135 |                                                
3136 | - The log need not be quiesced to check the    
3137 |   freeze initiates one anyway.                 
3138 |   This adds unnecessary runtime to live fsc    
3139 |                                                
3140 | - Quiescing the log means that XFS flushes     
3141 |   counters to disk as part of cleaning the     
3142 |                                                
3143 | - A bug in the VFS meant that freeze could     
3144 |   sync_filesystem fails to flush the filesy    
3145 |   This bug was fixed in Linux 5.17.            
3146 +--------------------------------------------    
3147                                                  
3148 The proposed patchset is the                     
3149 `summary counter cleanup                         
3150 <https://git.kernel.org/pub/scm/linux/kernel/    
3151 series.                                          
3152                                                  
3153 Full Filesystem Scans                            
3154 ---------------------                            
3155                                                  
3156 Certain types of metadata can only be checked    
3157 entire filesystem to record observations and     
3158 what's recorded on disk.                         
3159 Like every other type of online repair, repai    
3160 observations to disk in a replacement structu    
3161 However, it is not practical to shut down the    
3162 hundreds of billions of files because the dow    
3163 Therefore, online fsck must build the infrast    
3164 all the files in the filesystem.                 
3165 There are two questions that need to be solve    
3166                                                  
3167 - How does scrub manage the scan while it is     
3168                                                  
3169 - How does the scan keep abreast of changes b    
3170   threads?                                       
3171                                                  
3172 .. _iscan:                                       
3173                                                  
3174 Coordinated Inode Scans                          
3175 ```````````````````````                          
3176                                                  
3177 In the original Unix filesystems of the 1970s    
3178 an index number (*inumber*) which was used as    
3179 (*itable*) of fixed-size records (*inodes*) d    
3180 its data block mapping.                          
3181 This system is described by J. Lions, `"inode    
3182 <http://www.lemis.com/grog/Documentation/Lion    
3183 UNIX, 6th Edition*, (Dept. of Computer Scienc    
3184 Wales, November 1977), pp. 18-2; and later by    
3185 `"Implementation of the File System"             
3186 <https://archive.org/details/bstj57-6-1905/pa    
3187 Time-Sharing System*, (The Bell System Techni    
3188 1913-4.                                          
3189                                                  
3190 XFS retains most of this design, except now i    
3191 the space in the data section filesystem.        
3192 They form a continuous keyspace that can be e    
3193 though the inodes themselves are sparsely dis    
3194 Scans proceed in a linear fashion across the     
3195 ``0x0`` and ending at ``0xFFFFFFFFFFFFFFFF``.    
3196 Naturally, a scan through a keyspace requires    
3197 scan progress.                                   
3198 Because this keyspace is sparse, this cursor     
3199 The first part of this scan cursor object tra    
3200 examined next; call this the examination curs    
3201 Somewhat less obviously, the scan cursor obje    
3202 the keyspace have already been visited, which    
3203 concurrent filesystem update needs to be inco    
3204 Call this the visited inode cursor.              
3205                                                  
3206 Advancing the scan cursor is a multi-step pro    
3207 ``xchk_iscan_iter``:                             
3208                                                  
3209 1. Lock the AGI buffer of the AG containing t    
3210    inode cursor.                                 
3211    This guarantee that inodes in this AG cann    
3212    advancing the cursor.                         
3213                                                  
3214 2. Use the per-AG inode btree to look up the     
3215    was just visited, since it may not be keys    
3216                                                  
3217 3. If there are no more inodes left in this A    
3218                                                  
3219    a. Move the examination cursor to the poin    
3220       corresponds to the start of the next AG    
3221                                                  
3222    b. Adjust the visited inode cursor to indi    
3223       last possible inode in the current AG's    
3224       XFS inumbers are segmented, so the curs    
3225       visited the entire keyspace up to just     
3226       inode keyspace.                            
3227                                                  
3228    c. Unlock the AGI and return to step 1 if     
3229       filesystem.                                
3230                                                  
3231    d. If there are no more AGs to examine, se    
3232       inumber keyspace.                          
3233       The scan is now complete.                  
3234                                                  
3235 4. Otherwise, there is at least one more inod    
3236                                                  
3237    a. Move the examination cursor ahead to th    
3238       by the inode btree.                        
3239                                                  
3240    b. Adjust the visited inode cursor to poin    
3241       the examination cursor is now.             
3242       Because the scanner holds the AGI buffe    
3243       created in the part of the inode keyspa    
3244       just advanced.                             
3245                                                  
3246 5. Get the incore inode for the inumber of th    
3247    By maintaining the AGI buffer lock until t    
3248    it was safe to advance the examination cur    
3249    and that it has stabilized this next inode    
3250    the filesystem until the scan releases the    
3251                                                  
3252 6. Drop the AGI lock and return the incore in    
3253                                                  
3254 Online fsck functions scan all files in the f    
3255                                                  
3256 1. Start a scan by calling ``xchk_iscan_start    
3257                                                  
3258 2. Advance the scan cursor (``xchk_iscan_iter    
3259    If one is provided:                           
3260                                                  
3261    a. Lock the inode to prevent updates durin    
3262                                                  
3263    b. Scan the inode.                            
3264                                                  
3265    c. While still holding the inode lock, adj    
3266       (``xchk_iscan_mark_visited``) to point     
3267                                                  
3268    d. Unlock and release the inode.              
3269                                                  
3270 8. Call ``xchk_iscan_teardown`` to complete t    
3271                                                  
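Below is a minimal sketch of that loop.  The ``xchk_iscan_*`` and
``xchk_irele`` helpers are named in this document, but the exact signatures
shown here are assumptions, and error handling is abbreviated:

.. code-block:: c

        /* Walk every file in the filesystem with a coordinated scan. */
        STATIC int
        xchk_example_walk_all_inodes(
                struct xfs_scrub        *sc)
        {
                struct xchk_iscan       iscan;
                struct xfs_inode        *ip;
                int                     error;

                xchk_iscan_start(sc, 0, 0, &iscan);

                /* Assume a positive return means an inode was grabbed. */
                while ((error = xchk_iscan_iter(&iscan, &ip)) == 1) {
                        xfs_ilock(ip, XFS_ILOCK_EXCL);

                        /* ...record observations about this file... */

                        /* Mark it visited while still holding the lock. */
                        xchk_iscan_mark_visited(&iscan, ip);

                        xfs_iunlock(ip, XFS_ILOCK_EXCL);
                        xchk_irele(sc, ip);
                }

                xchk_iscan_teardown(&iscan);
                return error < 0 ? error : 0;
        }
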
There are subtleties with the inode cache that complicate grabbing the incore
inode for the caller.
Obviously, it is an absolute requirement that the inode metadata be consistent
enough to load it into the inode cache.
Second, if the incore inode is stuck in some intermediate state, the scan
coordinator must release the AGI and push the main filesystem to get the inode
back into a loadable state.

The proposed patches are the
`inode scanner
<https://git.kernel.org/pub/scm/linux/kernel/
series.
The first user of the new functionality is the
`online quotacheck
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Inode Management
````````````````

In regular filesystem code, references to allocated XFS incore inodes are
always obtained (``xfs_iget``) outside of transaction context because the
creation of the incore context for an existing file does not require metadata
updates.
However, it is important to note that references to incore inodes obtained as
part of file creation must be performed in transaction context because the
filesystem must ensure the atomicity of the ondisk inode btree index updates
and the initialization of the actual ondisk inode.

References to incore inodes are always released (``xfs_irele``) outside of
transaction context because there are a handful of activities that might
require ondisk updates:

- The VFS may decide to kick off writeback as part of a ``DONTCACHE`` inode
  release.

- Speculative preallocations need to be unreserved.

- An unlinked file may have lost its last reference, in which case the entire
  file must be inactivated, which involves releasing all of its resources in
  the ondisk metadata and freeing the inode.

These activities are collectively called inode inactivation.
Inactivation has two parts -- the VFS part, which initiates writeback on all
dirty file pages, and the XFS part, which cleans up XFS-specific information
and frees the inode if it was unlinked.
If the inode is unlinked (or unconnected after a file handle operation), the
kernel drops the inode into the inactivation machinery immediately.

During normal operation, resource acquisition for an update follows this
order to avoid deadlocks:

1. Inode reference (``iget``).

2. Filesystem freeze protection, if repairing (``mnt_want_write_file``).

3. Inode ``IOLOCK`` (VFS ``i_rwsem``) lock to control file IO.

4. Inode ``MMAPLOCK`` (page cache ``invalidate_lock``) lock for operations
   that can update page cache mappings.

5. Log feature enablement.

6. Transaction log space grant.

7. Space on the data and realtime devices for the transaction.

8. Incore dquot references, if a file is being repaired.
   Note that they are not locked, merely acquired.

9. Inode ``ILOCK`` for file metadata updates.

10. AG header buffer locks / Realtime metadata inode ILOCK.

11. Realtime metadata buffer locks, if applicable.

12. Extent mapping btree blocks, if applicable.

Resources are often released in the reverse order, though this is not
required.
However, online fsck differs from regular XFS operations because it may
examine an object that normally is acquired in a later stage of the locking
order, and then decide to cross-reference the object with an object that is
acquired earlier in the order.
The next few sections detail the specific ways in which online fsck takes
care to avoid deadlocks.

iget and irele During a Scrub
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An inode scan performed on behalf of a scrub operation runs in transaction
context, and possibly with resources already locked and bound to it.
This isn't much of a problem for ``iget`` since it can operate in the context
of an existing transaction, as long as all of the bound resources are
acquired before the inode reference in the regular filesystem.

When the VFS ``iput`` function is given a linked inode with no other
references, it normally puts the inode on an LRU list in the hope that it can
save time if another process re-opens the file before the system runs out
of memory and frees it.
Filesystem callers can short-circuit the LRU process by setting a
``DONTCACHE`` flag on the inode to cause the kernel to try to drop the inode
into the inactivation machinery immediately.

In the past, inactivation was always done from the process that dropped the
inode, which was a problem for scrub because scrub may already hold a
transaction, and XFS does not support nesting transactions.
On the other hand, if there is no scrub transaction, it is desirable to drop
otherwise unused inodes immediately to avoid polluting caches.
To capture these nuances, the online fsck code has a separate ``xchk_irele``
function to set or clear the ``DONTCACHE`` flag to get the required release
behavior.

Proposed patchsets include fixing
`scrub iget usage
<https://git.kernel.org/pub/scm/linux/kernel/
`dir iget usage
<https://git.kernel.org/pub/scm/linux/kernel/

.. _ilocking:

Locking Inodes
^^^^^^^^^^^^^^

In regular filesystem code, the VFS and XFS will acquire multiple IOLOCK
locks in a well-known order: parent → child when updating the directory
tree, and in numerical order of the addresses of their ``struct inode``
object otherwise.
For regular files, the MMAPLOCK can be acquired after the IOLOCK to stop page
faults.
If two MMAPLOCKs must be acquired, they are acquired in numerical order of
the addresses of their ``struct address_space`` objects.
Due to the structure of existing filesystem code, IOLOCKs and MMAPLOCKs must
be acquired before transactions are allocated.
If two ILOCKs must be acquired, they are acquired in inumber order.

Inode lock acquisition must be done carefully during a coordinated inode
scan.
Online fsck cannot abide these conventions, because for a directory tree
scanner, the scrub process holds the IOLOCK of the file being scanned and it
needs to take the IOLOCK of the file at the other end of the directory link.
If the directory tree is corrupt because it contains a cycle, ``xfs_scrub``
cannot use the regular inode locking functions and avoid becoming trapped in
an ABBA deadlock.

Solving both of these problems is straightforward -- any time online fsck
needs to take a second lock of the same class, it uses trylock to avoid an
ABBA deadlock.
If the trylock fails, scrub drops all inode locks and uses trylock loops to
(re)acquire all necessary resources.
Trylock loops enable scrub to check for pending fatal signals, which is how
scrub avoids deadlocking the filesystem or becoming an unresponsive process.
However, trylock loops mean that online fsck must be prepared to measure the
resource being scrubbed before and after the lock cycle to detect changes and
react accordingly.

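The pattern looks roughly like the sketch below; ``xfs_ilock_nowait`` and
``fatal_signal_pending`` are existing kernel interfaces, while the function
itself is hypothetical:

.. code-block:: c

        /* Take a second ILOCK of the same class without risking ABBA. */
        STATIC int
        xchk_example_lock_two_inodes(
                struct xfs_inode        *ip1,
                struct xfs_inode        *ip2)
        {
                while (!fatal_signal_pending(current)) {
                        xfs_ilock(ip1, XFS_ILOCK_EXCL);

                        /* The second lock of the class is only trylocked. */
                        if (xfs_ilock_nowait(ip2, XFS_ILOCK_EXCL))
                                return 0;

                        /* Drop everything and let others make progress. */
                        xfs_iunlock(ip1, XFS_ILOCK_EXCL);
                        msleep(1);
                }
                return -EINTR;
        }
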
.. _dirparent:

Case Study: Finding a Directory Parent
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Consider the directory parent pointer repair code as an example.
Online fsck must verify that the dotdot dirent of a directory points up to a
parent directory, and that the parent directory contains exactly one dirent
pointing down to the child directory.
Fully validating this relationship (and repairing it if possible) requires a
walk of every directory on the filesystem while holding the child locked, and
while updates to the directory tree are being made.
The coordinated inode scan provides a way to walk the filesystem without the
possibility of missing an inode.
The child directory is kept locked to prevent updates to the dotdot dirent,
but if the scanner fails to lock a parent, it can drop and relock both the
child and the prospective parent.
If the dotdot entry changes while the directory is unlocked, then a move or
rename operation must have changed the child's parentage, and the scan can
exit early.

The proposed patchset is the
`directory repair
<https://git.kernel.org/pub/scm/linux/kernel/
series.

.. _fshooks:

Filesystem Hooks
````````````````

The second piece of support that online fsck functions need during a full
filesystem scan is the ability to stay informed about updates being made by
other threads in the filesystem, since comparisons against the past are
useless in a dynamic environment.
Two pieces of Linux kernel infrastructure enable online fsck to monitor
regular filesystem operations: filesystem hooks and :ref:`static keys
<jump_labels>`.

Filesystem hooks convey information about an ongoing filesystem operation to
a downstream consumer.
In this case, the downstream consumer is always an online fsck function.
Because multiple fsck functions can run in parallel, online fsck uses the
Linux notifier call chain facility to dispatch updates to any number of
interested fsck processes.
Call chains are a dynamic list, which means they can be configured at
run time.
Because these hooks are private to the XFS module, the information passed
along contains exactly what the checking function needs to update its
observations.

The current implementation of XFS hooks uses SRCU notifier chains to reduce
the impact to highly threaded workloads.
Regular blocking notifier chains use a rwsem and seem to have a much lower
overhead for single-threaded applications.
However, it may turn out that the combination of blocking chains and static
keys is a more performant combination; more study is needed here.

The following pieces are necessary to hook a certain point in the filesystem
(a sketch of the registration lifecycle follows this list):

- A ``struct xfs_hooks`` object must be embedded in a convenient place such
  as a well-known incore filesystem object.

- Each hook must define an action code and a structure containing more
  context about the action.

- Hook providers should provide appropriate wrapper functions and structs
  around the ``xfs_hooks`` and ``xfs_hook`` objects to take advantage of type
  checking to ensure correct usage.

- A callsite in the regular filesystem code must be chosen to call
  ``xfs_hooks_call`` with the action code and data structure.
  This place should be adjacent to (and not earlier than) the place where
  the filesystem update is committed to the transaction.
  In general, when the filesystem calls a hook chain, it should be able to
  handle sleeping and should not be vulnerable to memory reclaim or locking
  recursion.
  However, the exact requirements are very dependent on the context of the
  hook caller and the callee.

- The online fsck function should define a structure to hold scan data, a
  lock to coordinate access to the scan data, and a ``struct xfs_hook``
  object.
  The scanner function and the regular filesystem code must acquire resources
  in the same order; see the next section for details.

- The online fsck code must contain a C function to catch the hook action
  code and data structure.
  If the object being updated has already been visited by the scan, then the
  hook information must be applied to the scan data.

- Prior to unlocking inodes to start the scan, online fsck must call
  ``xfs_hooks_setup`` to initialize the ``struct xfs_hooks``, and
  ``xfs_hooks_add`` to enable the hook.

- Online fsck must call ``xfs_hooks_del`` to disable the hook once the scan
  is complete.

The number of hooks should be kept to a minimum to reduce complexity.
Static keys are used to reduce the overhead of filesystem hooks to nearly
zero when online fsck is not running.

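A sketch of the registration lifecycle follows; the helpers are named above,
but the argument lists, the scan data structure, and the hook chain head
(``m_example_hooks``) are assumptions:

.. code-block:: c

        /* Hypothetical per-scan state, guarded by its own lock. */
        struct xchk_example_scan {
                struct mutex            lock;   /* protects observations */
                struct xchk_iscan       iscan;  /* coordinated inode scan */
                struct xfs_hook         hook;   /* live update notifier */
                /* ...observation data... */
        };

        /* Arm the hook before unlocking inodes to start the scan. */
        xfs_hooks_setup(&es->hook, xchk_example_hook_fn);
        xfs_hooks_add(&mp->m_example_hooks, &es->hook);

        /* ...run the coordinated inode scan and use the results... */

        /* Disarm the hook once the scan is complete. */
        xfs_hooks_del(&mp->m_example_hooks, &es->hook);
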
.. _liveupdate:

Live Updates During a Scan
``````````````````````````

The code paths of the online fsck scanning code and the :ref:`hooked
<fshooks>` filesystem code look like this::

            other program
                  ↓
            inode lock ←────────────────────┐
                  ↓                         │
            AG header lock                  │
                  ↓                         │
            filesystem function             │
                  ↓                         │
            notifier call chain             │    same
                  ↓                         ├─── inode
            scrub hook function             │    lock
                  ↓                         │
            scan data mutex ←──┐    same    │
                  ↓            ├─── scan    │
            update scan data   │    lock    │
                  ↑            │            │
            scan data mutex ←──┘            │
                  ↑                         │
            inode lock ←────────────────────┘
                  ↑
            scrub function
                  ↑
            inode scanner
                  ↑
            xfs_scrub

These rules must be followed to ensure correct interactions between the
checking code and the code making an update to the filesystem; a sketch of a
conforming hook function follows the list:

- Prior to invoking the notifier call chain, the filesystem function being
  hooked must acquire the same lock that the scrub scanning function acquires
  to scan the inode.

- The scanning function and the scrub hook function must coordinate access to
  the scan data by acquiring a lock on the scan data.

- Scrub hook functions must not add the live update information to the scan
  observations unless the inode being updated has already been scanned.
  The scan coordinator has a helper predicate
  (``xchk_iscan_want_live_update``) for this.

- Scrub hook functions must not change the caller's state, including the
  transaction that it is running.
  They must not acquire any resources that might conflict with the filesystem
  function being hooked.

- The hook function can abort the inode scan to avoid breaking the other
  rules.

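A sketch of a hook function that follows these rules is shown below; only
``xchk_iscan_want_live_update`` comes from this document, while the notifier
plumbing and structure layout are assumptions:

.. code-block:: c

        /* Apply a live update to the scan data, or ignore it. */
        STATIC int
        xchk_example_hook_fn(
                struct notifier_block   *nb,
                unsigned long           action,
                void                    *priv)
        {
                struct xchk_example_update      *p = priv;
                struct xchk_example_scan        *es;

                es = container_of(nb, struct xchk_example_scan, hook.nb);

                /* Skip files that the scan has not yet visited. */
                if (!xchk_iscan_want_live_update(&es->iscan, p->ino))
                        return NOTIFY_DONE;

                mutex_lock(&es->lock);
                /* ...fold the update into the scan observations... */
                mutex_unlock(&es->lock);
                return NOTIFY_DONE;
        }
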
3581                                                  
3582 - ``xchk_iscan_start`` starts a scan             
3583                                                  
3584 - ``xchk_iscan_iter`` grabs a reference to th    
3585   returns zero if there is nothing left to sc    
3586                                                  
3587 - ``xchk_iscan_want_live_update`` to decide i    
3588   visited in the scan.                           
3589   This is critical for hook functions to deci    
3590   in-memory scan information.                    
3591                                                  
3592 - ``xchk_iscan_mark_visited`` to mark an inod    
3593   scan                                           
3594                                                  
3595 - ``xchk_iscan_teardown`` to finish the scan     
3596                                                  
3597 This functionality is also a part of the         
3598 `inode scanner                                   
3599 <https://git.kernel.org/pub/scm/linux/kernel/    
3600 series.                                          
3601                                                  
3602 .. _quotacheck:                                  
3603                                                  
3604 Case Study: Quota Counter Checking               
3605 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^               
3606                                                  
3607 It is useful to compare the mount time quotac    
3608 quotacheck code.                                 
3609 Mount time quotacheck does not have to conten    
3610 it does the following:                           
3611                                                  
3612 1. Make sure the ondisk dquots are in good en    
3613    dquots will actually load, and zero the re    
3614    ondisk buffer.                                
3615                                                  
3616 2. Walk every inode in the filesystem.           
3617    Add each file's resource usage to the inco    
3618                                                  
3619 3. Walk each incore dquot.                       
3620    If the incore dquot is not being flushed,     
3621    incore dquot to a delayed write (delwri) l    
3622                                                  
3623 4. Write the buffer list to disk.                
3624                                                  
3625 Like most online fsck functions, online quota    
3626 filesystem objects until the newly collected     
3627 state.                                           
3628 Therefore, online quotacheck records file res    
3629 index implemented with a sparse ``xfarray``,     
3630 once the scan is complete.                       
3631 Handling transactional updates is tricky beca    
3632 are handled in phases to minimize contention     
3633                                                  
3634 1. The inodes involved are joined and locked     
3635                                                  
3636 2. For each dquot attached to the file:          
3637                                                  
3638    a. The dquot is locked.                       
3639                                                  
3640    b. A quota reservation is added to the dqu    
3641       The reservation is recorded in the tran    
3642                                                  
3643    c. The dquot is unlocked.                     
3644                                                  
3645 3. Changes in actual quota usage are tracked     
3646                                                  
3647 4. At transaction commit time, each dquot is     
3648                                                  
3649    a. The dquot is locked again.                 
3650                                                  
3651    b. Quota usage changes are logged and unus    
3652       the dquot.                                 
3653                                                  
3654    c. The dquot is unlocked.                     
3655                                                  
3656 For online quotacheck, hooks are placed in st    
3657 The step 2 hook creates a shadow version of t    
3658 (``dqtrx``) that operates in a similar manner    
3659 The step 4 hook commits the shadow ``dqtrx``     
3660 Notice that both hooks are called with the in    
3661 live update coordinates with the inode scanne    
3662                                                  
3663 The quotacheck scan looks like this:             
3664                                                  
3665 1. Set up a coordinated inode scan.              
3666                                                  
3667 2. For each inode returned by the inode scan     
3668                                                  
3669    a. Grab and lock the inode.                   
3670                                                  
3671    b. Determine that inode's resource usage (    
3672       realtime blocks) and add that to the sh    
3673       and project ids associated with the ino    
3674                                                  
3675    c. Unlock and release the inode.              
3676                                                  
3677 3. For each dquot in the system:                 
3678                                                  
3679    a. Grab and lock the dquot.                   
3680                                                  
3681    b. Check the dquot against the shadow dquo    
3682       by the live hooks.                         
3683                                                  
3684 Live updates are key to being able to walk ev    
3685 needing to hold any locks for a long duration    
3686 If repairs are desired, the real and shadow d    
3687 resource counts are set to the values in the     
3688                                                  
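As an illustration, the comparison in step 3 might look like the sketch
below; the shadow record structure and helper names are hypothetical, while
the ``q_blk``/``q_ino``/``q_rtb`` resource counts are real ``struct
xfs_dquot`` fields:

.. code-block:: c

        /* Compare one live dquot against the shadow built by the scan. */
        STATIC int
        xqcheck_example_compare(
                struct xqcheck_example  *xqc,
                struct xfs_dquot        *dqp)
        {
                struct xqcheck_example_dquot    shadow;
                int                             error;

                mutex_lock(&xqc->lock);
                error = xqcheck_example_get_shadow(xqc, dqp->q_id, &shadow);
                mutex_unlock(&xqc->lock);
                if (error)
                        return error;

                /* Any disagreement means the ondisk dquot is bad. */
                if (dqp->q_blk.count != shadow.bcount ||
                    dqp->q_ino.count != shadow.icount ||
                    dqp->q_rtb.count != shadow.rtbcount)
                        xqcheck_example_set_corrupt(xqc, dqp);
                return 0;
        }
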
The proposed patchset is the
`online quotacheck
<https://git.kernel.org/pub/scm/linux/kernel/
series.

.. _nlinks:

Case Study: File Link Count Checking
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File link count checking also uses live update hooks.
The coordinated inode scanner is used to visit all directories on the
filesystem, and per-file link count records are stored in a sparse
``xfarray`` indexed by inumber.
During the scanning phase, each entry in a directory generates observation
data as follows:

1. If the entry is a dotdot (``'..'``) entry of the root directory, the
   directory's parent link count is bumped because the root directory's
   dotdot entry is self referential.

2. If the entry is a dotdot entry of a subdirectory, the parent's backref
   count is bumped.

3. If the entry is neither a dot nor a dotdot entry, the target file's parent
   count is bumped.

4. If the target is a subdirectory, the parent's child link count is bumped.

A crucial point to understand about how the link count inode scanner
interacts with the live update hooks is that the scan cursor tracks which
*parent* directories have been scanned.
In other words, the live updates ignore any update about ``A → B`` when A
has not been scanned, even if B has been scanned.
Furthermore, a subdirectory A with a dotdot entry pointing back to B is
accounted as a backref counter in the shadow data for A, since child dotdot
entries affect the parent's link count.
Live update hooks are carefully placed in all parts of the filesystem that
create, change, or remove directory entries, because these operations involve
bumplink and droplink.

For any file, the correct link count is the number of parents plus the number
of child subdirectories; a small sketch of this computation follows below.
Non-directories never have children of any kind.
The backref information is used to detect inconsistencies in the number of
links pointing to child subdirectories and the number of dotdot entries
pointing back.

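A small sketch of that computation, with a hypothetical shadow record layout:

.. code-block:: c

        /* Hypothetical per-file shadow link count record. */
        struct xchk_example_nlink {
                xfs_nlink_t     parents;        /* dirents aimed at this file */
                xfs_nlink_t     backrefs;       /* dotdots in child subdirs */
                xfs_nlink_t     children;       /* subdirectory dirents here */
        };

        /*
         * Parents plus child subdirectories; non-directories never have
         * children, so their expected count is just the parent count.
         */
        static inline xfs_nlink_t
        xchk_example_expected_nlink(const struct xchk_example_nlink *live)
        {
                return live->parents + live->children;
        }

        /*
         * Child dotdot entries should agree with the subdirectory links
         * counted in the parent.
         */
        static inline bool
        xchk_example_backrefs_consistent(const struct xchk_example_nlink *live)
        {
                return live->backrefs == live->children;
        }
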
After the scan completes, the link count of each file can be checked by
locking both the inode and the shadow data, and comparing the link counts.
A second coordinated inode scan cursor is used for comparisons.
Live updates are key to being able to walk every inode without needing to
hold any locks between inodes.
If repairs are desired, the inode's link count is set to the value in the
shadow information.
If no parents are found, the file must be :ref:`reparented <orphanage>` to
the orphanage to prevent the file from being lost forever.

The proposed patchset is the
`file link count repair
<https://git.kernel.org/pub/scm/linux/kernel/
series.

.. _rmap_repair:

Case Study: Rebuilding Reverse Mapping Records
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Most repair functions follow the same pattern: lock filesystem resources,
walk the surviving ondisk metadata looking for replacement metadata records,
and use an :ref:`in-memory array <xfarray>` to store the gathered
observations.
The primary advantage of this approach is the simplicity and modularity of
the repair code -- code and data are entirely contained within the scrub
module, do not require hooks in the main filesystem, and are usually the most
efficient in memory use.
A secondary advantage of this repair approach is atomicity -- once the kernel
decides a structure is corrupt, no other threads can access the metadata
until the kernel finishes repairing and revalidating the metadata.

For repairs going on within a shard of the filesystem, these advantages
outweigh the delays inherent in locking the shard while repairing parts of
the shard.
Unfortunately, repairs to the reverse mapping btree cannot use the "standard"
btree repair strategy because it must scan every space mapping of every fork
of every file in the filesystem, and the filesystem cannot stop.
Therefore, rmap repair foregoes atomicity between scrub and repair.
It combines a :ref:`coordinated inode scanner <iscan>`, :ref:`live update
hooks <liveupdate>`, and an :ref:`in-memory rmap btree <xfbtree>` to complete
the scan for reverse mapping records.

1. Set up an xfbtree to stage rmap records.

2. While holding the locks on the AGI and AGF buffers acquired during the
   scrub, generate reverse mappings for all AG metadata: inodes, btrees, CoW
   staging extents, and the internal log.

3. Set up an inode scanner.

4. Hook into rmap updates for the AG being repaired so that the live scan
   data can receive updates to the rmap btree from the rest of the filesystem
   during the file scan.

5. For each space mapping found in either fork of each file scanned, decide
   if the mapping matches the AG of interest.
   If so:

   a. Create a btree cursor for the in-memory btree.

   b. Use the rmap code to add the record to the in-memory btree.

   c. Use the :ref:`special commit function <xfbtree_commit>` to write the
      xfbtree changes to the xfile.

6. For each live update received via the hook, decide if the owner has
   already been scanned (a sketch of such a hook function follows this list).
   If so, apply the live update into the scan data:

   a. Create a btree cursor for the in-memory btree.

   b. Replay the operation into the in-memory btree.

   c. Use the :ref:`special commit function <xfbtree_commit>` to write the
      xfbtree changes to the xfile.
      This is performed with an empty transaction to avoid changing the
      caller's state.

7. When the inode scan finishes, create a new scrub transaction and relock
   the two AG headers.

8. Compute the new btree geometry using the number of rmap records in the
   shadow btree, like all other btree rebuilding functions.

9. Allocate the number of blocks computed in the previous step.

10. Perform the usual btree bulk loading and commit to install the new rmap
    btree.

11. Reap the old rmap btree blocks as discussed in the case study about how
    to :ref:`reap after rmap btree repair <rmap_reap>`.

12. Free the xfbtree now that it is no longer needed.

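The hook half of step 6 might look like the sketch below; all names except
``xchk_iscan_want_live_update`` are hypothetical:

.. code-block:: c

        /* Replay a live rmap update into the in-memory btree. */
        STATIC int
        xrep_example_rmap_hook(
                struct notifier_block   *nb,
                unsigned long           action,
                void                    *priv)
        {
                struct xrep_example_rmap_update *p = priv;
                struct xrep_example_rmap        *rr;

                rr = container_of(nb, struct xrep_example_rmap, hook.nb);

                /* Only updates to the AG being repaired matter... */
                if (p->agno != rr->agno)
                        return NOTIFY_DONE;

                /* ...and only if the owner has already been scanned. */
                if (!xchk_iscan_want_live_update(&rr->iscan, p->owner))
                        return NOTIFY_DONE;

                mutex_lock(&rr->lock);
                /*
                 * Replay the map/unmap into the xfbtree, then commit the
                 * xfbtree changes to the xfile with an empty transaction.
                 */
                mutex_unlock(&rr->lock);
                return NOTIFY_DONE;
        }
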
The proposed patchset is the
`rmap repair
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Staging Repairs with Temporary Files on Disk
--------------------------------------------

XFS stores a substantial amount of metadata in file forks: directories,
extended attributes, symbolic link targets, free space bitmaps and summary
information for the realtime volume, and quota records.
File forks map 64-bit logical file fork space extents to physical storage
space extents, similar to how a memory management unit maps 64-bit virtual
addresses to physical memory addresses.
Therefore, file-based tree structures (such as directories and extended
attributes) use blocks mapped in the file fork offset address space that
point to other blocks mapped within that same address space, and file-based
linear structures (such as bitmaps and quota records) compute array element
offsets in the file fork offset address space.

Because file forks can consume as much space as the entire filesystem,
repairs cannot be staged in memory, even when a paging scheme is available.
Therefore, online repair of file-based metadata creates a temporary file in
the XFS filesystem, writes a new structure at the correct offsets into the
temporary file, and atomically exchanges all file fork mappings (and hence
the fork contents) to commit the repair.
Once the repair is complete, the old fork can be reaped as necessary; if the
system goes down during the reap, the iunlink code will delete the blocks
during log recovery.

**Note**: All space usage and inode indices in the filesystem *must* be
consistent to use a temporary file safely!
This dependency is the reason why online repair can only use pageable kernel
memory to stage ondisk space usage information.

Exchanging metadata file mappings with a temporary file requires the owner
field of the block headers to match the file being repaired and not the
temporary file.
The directory, extended attribute, and symbolic link functions were all
modified to allow callers to specify owner numbers explicitly.

There is a downside to the reaping process -- if the system crashes during
the reap phase and the fork extents are crosslinked, the iunlink processing
will fail because freeing space will find the extra reverse mappings and
abort.

Temporary files created for repair are similar to ``O_TMPFILE`` files created
by userspace.
They are not linked into a directory and the entire file will be reaped when
the last reference to the file is lost.
The key differences are that these files must have no access permission
outside the kernel at all, they must be specially marked to prevent them from
being opened by handle, and they must never be linked into the directory
tree.

+---------------------------------------------------------------------------+
| **Historical Sidebar**:                                                   |
+---------------------------------------------------------------------------+
| In the initial iteration of file metadata repair, the damaged            |
| metadata blocks would be scanned for salvageable data; the extent        |
| map in the file fork would be reaped; and then a new structure would     |
| be built in its place.                                                   |
| This strategy did not survive the introduction of the atomic repair      |
| requirement expressed earlier in this document.                          |
|                                                                           |
| The second iteration explored building a second structure at a high      |
| offset in the fork from the salvage data, reaping the old extents,       |
| and using a ``COLLAPSE_RANGE`` operation to slide the new extents        |
| into place.                                                              |
|                                                                           |
| This had many drawbacks:                                                 |
|                                                                           |
| - Array structures are linearly addressed, and the regular               |
|   filesystem codebase does not have the concept of a linear offset       |
|   that could be applied to the record offset computation to build an     |
|   alternate copy.                                                        |
|                                                                           |
| - Extended attributes are allowed to use the entire attr fork offset     |
|   address space.                                                         |
|                                                                           |
| - Even if repair could build an alternate copy of a data structure       |
|   in a different part of the fork address space, the atomic repair       |
|   commit requirement means that online repair would have to be able      |
|   to perform a log assisted ``COLLAPSE_RANGE`` operation to ensure       |
|   that the old structure was completely replaced.                        |
|                                                                           |
| - A crash after construction of the secondary tree but before the        |
|   range collapse would leave unreachable blocks in the file fork.        |
|   This would likely confuse things further.                              |
|                                                                           |
| - Reaping blocks after a repair is not a simple operation, and           |
|   initiating a reap operation from a restarted range collapse            |
|   operation during log recovery is daunting.                             |
|                                                                           |
| - Directory entry blocks and quota records record the file fork          |
|   offset in the header area of each block.                               |
|   An atomic range collapse operation would have to rewrite this part     |
|   of each block header.                                                  |
|   Rewriting a single field in block headers is not a huge problem,       |
|   but it's something to be aware of.                                     |
|                                                                           |
| - Each block in a directory or extended attributes btree index           |
|   contains sibling and child block pointers.                             |
|   Were the atomic commit to use a range collapse operation, each         |
|   block would have to be rewritten very carefully to preserve the        |
|   graph structure.                                                       |
|   Doing this as part of a range collapse means rewriting a large         |
|   number of blocks repeatedly, which is not conducive to quick           |
|   repairs.                                                               |
|                                                                           |
| This led to the introduction of temporary file staging.                  |
+---------------------------------------------------------------------------+

Using a Temporary File
``````````````````````

Online repair code should use the ``xrep_tempfile_create`` function to create
a temporary file inside the filesystem.
This allocates an inode, marks the in-core inode private, and attaches it to
the scrub context.
These files are hidden from userspace, may not be added to the directory
tree, and must be kept private.

Temporary files only use two inode locks: the IOLOCK and the ILOCK.
The MMAPLOCK is not needed here, because there must not be page faults from
userspace for data fork blocks.
The usage patterns of these two locks are the same as for any other XFS file
-- access to file data are controlled via the IOLOCK, and access to file
metadata are controlled via the ILOCK.
Locking helpers are provided so that the temporary file and its lock state
can be cleaned up by the scrub context.
To comply with the nested locking strategy laid out in the :ref:`inode
locking <ilocking>` section, it is recommended that scrub functions use the
``xrep_tempfile_ilock*_nowait`` lock helpers.

Data can be written to a temporary file by two means:

1. ``xrep_tempfile_copyin`` can be used to set the contents of a regular
   temporary file from an xfile.

2. The regular directory, symbolic link, and extended attribute functions can
   be used to write to the temporary file.

Once a good copy of a data file has been constructed in a temporary file, it
must be conveyed to the file being repaired, which is the topic of the next
section; a sketch of the staging sequence follows below.

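As an illustration, staging new contents might proceed as in the sketch
below; only the helper names appear in this document, so the argument lists
are assumptions:

.. code-block:: c

        /* Create the private temporary file and attach it to scrub. */
        error = xrep_tempfile_create(sc, S_IFREG);
        if (error)
                return error;

        /* Use the nowait ilock helper per the locking strategy above. */
        if (!xrep_tempfile_ilock_nowait(sc))
                return -EAGAIN;

        /*
         * Copy the staged structure from an xfile into the temporary
         * file's data fork; afterwards the fork mappings can be
         * exchanged with the file being repaired.
         */
        error = xrep_tempfile_copyin(sc, xfile, 0, xfile_bytes);
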
The proposed patches are in the
`repair temporary files
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Logged File Content Exchanges
-----------------------------

Once repair builds a temporary file with a new data structure written into
it, it must commit the new changes into the existing file.
It is not possible to swap the inumbers of two files, so instead the new
metadata must replace the old.
This suggests the need for the ability to swap extents, but the existing
extent swapping code used by the file defragmenting tool ``xfs_fsr`` is not
sufficient for online repair because:

a. When the reverse-mapping btree is enabled, the swap code must keep the
   reverse mapping information up to date with every exchange of mappings.
   Therefore, it can only exchange one mapping per transaction, and each
   transaction is independent.

b. Reverse-mapping is critical for the operation of online fsck, so the old
   defragmentation code (which swapped entire extent forks in a single
   operation) is not useful here.

c. Defragmentation is assumed to occur between two files with identical
   contents.
   For this use case, an incomplete exchange will not result in a
   user-visible change in file contents, even if the operation is
   interrupted.

d. Online repair needs to swap the contents of two files that are by
   definition *not* identical.
   For directory and xattr repairs, the user-visible contents might be the
   same, but the contents of individual blocks may be very different.

e. Old blocks in the file may be cross-linked with another structure and must
   not reappear if the system goes down mid-repair.

These problems are overcome by creating a new deferred operation and a new
type of log intent item to track the progress of an operation to exchange two
file ranges.
The new exchange operation type chains together the same transactions used by
the reverse-mapping extent swap code, but records intermediate progress in
the log so that operations can be restarted after a crash.
This new functionality is called the file contents exchange (xfs_exchrange)
code.
The underlying implementation exchanges file fork mappings (xfs_exchmaps).
The new log item records the progress of the exchange to ensure that once an
exchange begins, it will always run to completion, even if there are
interruptions.
The new ``XFS_SB_FEAT_INCOMPAT_EXCHRANGE`` incompatible feature flag
in the superblock protects these new log item records from being replayed on
old kernels.

The proposed patchset is the
`file contents exchange
<https://git.kernel.org/pub/scm/linux/kernel/
series.

+---------------------------------------------------------------------------+
| **Sidebar: Using Log-Incompatible Feature Flags**                         |
+---------------------------------------------------------------------------+
| Starting with XFS v5, the superblock contains a                           |
| ``sb_features_log_incompat`` field to indicate that the log contains      |
| records that might not be readable by all kernels that could mount       |
| this filesystem.                                                         |
| In short, log incompat features protect the log contents against         |
| kernels that will not understand the contents.                           |
| Unlike the other superblock feature bits, log incompat bits are          |
| ephemeral because an empty (clean) log does not need protection.         |
| The log cleans itself after its contents have been committed into        |
| the filesystem, either as part of an unmount or because the system       |
| is otherwise idle.                                                       |
| Because upper level code can be working on a transaction at the          |
| same time that the log cleans itself, it is necessary for upper          |
| level code to communicate to the log when it is going to use a log       |
| incompatible feature.                                                    |
|                                                                           |
| The log coordinates access to incompatible features through the use      |
| of one ``struct rw_semaphore`` for each feature.                         |
| The log cleaning code tries to take this rwsem in exclusive mode to      |
| clear the bit; if the lock attempt fails, the feature bit remains        |
| set.                                                                     |
| The code supporting a log incompat feature should create wrapper         |
| functions to obtain the log feature and call                             |
| ``xfs_add_incompat_log_feature`` to set the feature bits in the          |
| primary superblock.                                                      |
| The superblock update is performed transactionally, so the wrapper       |
| to obtain log assistance must be called just prior to the creation       |
| of the transaction that uses the functionality.                          |
| For a file operation, this step must happen after taking the IOLOCK      |
| and the MMAPLOCK, but before allocating the transaction.                 |
| When the transaction is complete, the ``xlog_drop_incompat_feat``        |
| function is called to release the feature.                               |
| The feature bit will not be cleared from the superblock until the        |
| log becomes clean.                                                       |
|                                                                           |
| Log-assisted extended attribute updates and file content exchanges       |
| both use log incompat features and provide convenience wrappers          |
| around the functionality.                                                |
+---------------------------------------------------------------------------+

Mechanics of a Logged File Content Exchange
```````````````````````````````````````````

Exchanging contents between file forks is a complex task.
The goal is to exchange all file fork mappings between two file fork offset
ranges.
There are likely to be many extent mappings in each fork, and the edges of
the mappings aren't necessarily aligned.
Furthermore, there may be other updates that need to happen after the
exchange, such as exchanging file sizes, inode flags, or conversion of fork
data to local format.
This is roughly the format of the new deferred exchange-mapping work item:

.. code-block:: c

        struct xfs_exchmaps_intent {
            /* Inodes participating in the operation. */
            struct xfs_inode    *xmi_ip1;
            struct xfs_inode    *xmi_ip2;

            /* File offset range information. */
            xfs_fileoff_t       xmi_startoff1;
            xfs_fileoff_t       xmi_startoff2;
            xfs_filblks_t       xmi_blockcount;

            /* Set these file sizes after the operation, unless negative. */
            xfs_fsize_t         xmi_isize1;
            xfs_fsize_t         xmi_isize2;

            /* XFS_EXCHMAPS_* log operation flags */
            uint64_t            xmi_flags;
        };

The new log intent item contains enough information to track two logical fork
offset ranges: ``(inode1, startoff1, blockcount)`` and ``(inode2, startoff2,
blockcount)``.
Each step of an exchange operation exchanges the largest file range mapping
possible from one file to the other.
After each step in the exchange operation, the two startoff fields are
incremented and the blockcount field is decremented to reflect the progress
made.
The flags field captures behavioral parameters such as exchanging attr fork
mappings instead of the data fork and other work to be done after the
exchange.
The two isize fields are used to exchange the file sizes at the end of the
operation if the file data fork is the target of the operation.

When the exchange is initiated, the sequence of operations is as follows:

1. Create a deferred work item for the file mapping exchange.
   At the start, it should contain the entirety of the file block ranges to
   be exchanged.

2. Call ``xfs_defer_finish`` to process the exchange.
   This is encapsulated in ``xrep_tempexch_contents`` for scrub operations.
   This will log an extent swap intent item to the transaction for the
   deferred mapping exchange work item.

3. Until ``xmi_blockcount`` of the deferred mapping exchange work item is
   zero,

   a. Read the block maps of both file ranges starting at ``xmi_startoff1``
      and ``xmi_startoff2``, respectively, and compute the longest extent
      that can be exchanged in a single step.
      This is the minimum of the two ``br_blockcount`` values in the
      mappings.
      Keep advancing through the file forks until at least one of the
      mappings contains written blocks.
      Mutual holes, unwritten extents, and extent mappings to the same
      physical space are not exchanged.

      For the next few steps, this document will refer to the mapping that
      came from file 1 as "map1", and the mapping that came from file 2 as
      "map2".

   b. Create a deferred block mapping update to unmap map1 from file 1.

   c. Create a deferred block mapping update to unmap map2 from file 2.

   d. Create a deferred block mapping update to map map1 into file 2.

   e. Create a deferred block mapping update to map map2 into file 1.

   f. Log the block, quota, and extent count updates for both files.

   g. Extend the ondisk size of either file if necessary.

   h. Log a mapping exchange done log item for the mapping exchange intent
      log item that was read at the start of step 3.

   i. Compute the amount of file range that has just been covered.
      This quantity is ``(map1.br_startoff + map1.br_blockcount -
      xmi_startoff1)``, because step 3a could have skipped holes.
      (A worked example follows this list.)

   j. Increase the starting offsets of ``xmi_startoff1`` and
      ``xmi_startoff2`` by the number of blocks computed in the previous
      step, and decrease ``xmi_blockcount`` by the same quantity.
      This advances the cursor.

   k. Log a new mapping exchange intent log item reflecting the advanced
      state of the work item.

   l. Return the proper error code (EAGAIN) to the deferred operation manager
      to inform it that there is more work to be done.
      The operation manager completes the deferred work in steps 3b-3e before
      moving back to the start of step 3.

4. Perform any post-processing.
   This will be discussed in more detail in subsequent sections.

If the filesystem goes down in the middle of an operation, log recovery
will find the most recent unfinished mapping exchange log intent item
and restart from there.
This is how atomic file mapping exchanges guarantee that an outside
observer will either see the old broken structure or the new one, and
never a mishmash of both.

Preparation for File Content Exchanges
``````````````````````````````````````

There are a few things that need to be taken care of before initiating
an atomic file mapping exchange operation.
First, regular files require the page cache to be flushed to disk
before the operation begins, and directio writes to be quiesced.
Like any filesystem operation, file mapping exchanges must determine
the maximum amount of disk space and quota that can be consumed on
behalf of both files in the operation, and reserve that quantity of
resources to avoid an unrecoverable out of space failure once it
starts dirtying metadata.
The preparation step scans the ranges of both files to estimate:

- Data device blocks needed to handle the repeated updates to the file
  mappings.
- Change in data and realtime block counts for both files.
- Increase in quota usage for both files, if the two files do not
  share the same set of quota ids.
- The number of extent mappings that will be added to each file.
- Whether or not there are partially written realtime extents.
  User programs must never be able to access a realtime file extent
  that maps to different extents on the realtime volume, which could
  happen if the operation fails to run to completion.

The need for precise estimation increases the run time of the exchange
operation, but it is very important to maintain correct accounting.
The filesystem must not run completely out of free space, nor can the
mapping exchange ever add more extent mappings to a fork than it can
support.
Regular users are required to abide by the quota limits, though
metadata repairs may exceed quota to resolve inconsistent metadata
elsewhere.
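
The estimate can be pictured as a single pass over both mapping ranges
that accumulates a handful of counters.
The sketch below is a hypothetical userspace model of that
accumulation; the structure, field names, and the per-step block
reservation constant are all assumptions chosen for illustration, not
the kernel's actual data structures:

.. code-block:: c

        /* Hypothetical model of the resource estimation pass. */
        #include <stdbool.h>
        #include <stdint.h>

        struct exch_mapping {
            uint64_t blockcount;  /* length of this mapping */
            bool     realtime;    /* mapping is on the realtime volume */
        };

        struct exch_estimate {
            uint64_t resblks;       /* data device blocks for btree updates */
            int64_t  ddelta;        /* net change in data block count */
            int64_t  rdelta;        /* net change in realtime block count */
            uint64_t nr_exchanges;  /* extent mappings added to each file */
            bool     partial_rtext; /* partially written rt extents seen */
        };

        /*
         * Account one pair of mappings that would be exchanged in a
         * single step.  A real implementation would also fold in quota
         * deltas when the two files have different quota ids.
         */
        void estimate_one_step(struct exch_estimate *est,
                               const struct exch_mapping *m1,
                               const struct exch_mapping *m2,
                               uint64_t rtextsize)
        {
            est->nr_exchanges++;

            /* Worst case constant for illustration: assume every unmap
             * and map operation splits a mapping btree block. */
            est->resblks += 4;

            if (m1->realtime) {
                est->rdelta += (int64_t)m2->blockcount -
                               (int64_t)m1->blockcount;
                /* Mappings not aligned to the rt extent size would leave
                 * an extent mapped to two rt ranges mid-operation. */
                if (m1->blockcount % rtextsize ||
                    m2->blockcount % rtextsize)
                    est->partial_rtext = true;
            } else {
                est->ddelta += (int64_t)m2->blockcount -
                               (int64_t)m1->blockcount;
            }
        }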

Special Features for Exchanging Metadata File Contents
``````````````````````````````````````````````````````

Extended attributes, symbolic links, and directories can set the fork
format to "local" and treat the fork as a literal area for data
storage.
Metadata repairs must take extra steps to support these cases:

- If both forks are in local format and the fork areas are large
  enough, the exchange is performed by copying the incore fork
  contents, logging both forks, and committing.
  The atomic file mapping exchange mechanism is not necessary, since
  this can be done with a single transaction.

- If both forks map blocks, then the regular atomic file mapping
  exchange is used.

- Otherwise, only one fork is in local format.
  The contents of the local format fork are converted to a block to
  perform the exchange.
  The conversion to block format must be done in the same transaction
  that logs the initial mapping exchange intent log item.
  The regular atomic mapping exchange is used to exchange the metadata
  file mappings.
  Special flags are set on the exchange operation so that the
  transaction can be rolled one more time to convert the second file's
  fork back to local format so that the second file will be ready to
  go as soon as the ILOCK is dropped.

Extended attributes and directories stamp the owning inode into every
block, but the buffer verifiers do not actually check the inode
number!
Although there is no verification, it is still important to maintain
referential integrity, so prior to performing the mapping exchange,
online repair builds every block in the new data structure with the
owner field of the file being repaired.

After a successful exchange operation, the repair operation must reap
the old fork blocks by processing each fork mapping through the
standard :ref:`file extent reaping <reaping>` mechanism that is done
post-repair.
If the filesystem should go down during the reap part of the repair,
the iunlink processing at the end of recovery will free both the
temporary file and whatever blocks were not reaped.
However, this iunlink processing omits the cross-link detection of
online repair, and is not completely foolproof.

Exchanging Temporary File Contents
``````````````````````````````````

To repair a metadata file, online repair proceeds as follows:

1. Create a temporary repair file.

2. Use the staging data to write out new contents into the temporary
   repair file.
   The same fork must be written to as is being repaired.

3. Commit the scrub transaction, since the exchange resource
   estimation step must be completed before transaction reservations
   are made.

4. Call ``xrep_tempexch_trans_alloc`` to allocate a new scrub
   transaction with the appropriate resource reservations and locks,
   and fill out a ``struct xfs_exchmaps_req`` with the details of the
   exchange operation.

5. Call ``xrep_tempexch_contents`` to exchange the contents.

6. Commit the transaction to complete the repair.

.. _rtsummary:

Case Study: Repairing the Realtime Summary File
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In the "realtime" section of an XFS filesystem, free space is tracked
via a bitmap, similar to Unix FFS.
Each bit in the bitmap represents one realtime extent, which is a
multiple of the filesystem block size between 4KiB and 1GiB in size.
The realtime summary file indexes the number of free extents of a
given size to the offset of the block within the realtime free space
bitmap where those free extents begin.
In other words, the summary file helps the allocator find free extents
by length, similar to what the free space by count (cntbt) btree does
for the data section.

The summary file itself is a flat file (with no block headers or
checksums!) partitioned into ``log2(total rt extents)`` sections
containing enough 32-bit counters to match the number of blocks in the
rt bitmap.
Each counter records the number of free extents that start in that
bitmap block and can satisfy a power-of-two allocation request.

To check the summary file against the bitmap:

1. Take the ILOCK of both the realtime bitmap and summary files.

2. For each free space extent recorded in the bitmap:

   a. Compute the position in the summary file of the counter that
      represents this free extent; a sketch of this computation
      follows below.

   b. Read the counter from the xfile.

   c. Increment it, and write it back to the xfile.

3. Compare the contents of the xfile against the ondisk file.

To repair the summary file, write the xfile contents into the
temporary file and use atomic mapping exchange to commit the new
contents.
The temporary file is then reaped.
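
To make the counter lookup in step 2a concrete, here is a userspace
sketch of the index computation, mirroring the semantics of the
kernel's ``XFS_SUMOFFS`` macro: counters are grouped first by the log2
of the free extent length, then by the bitmap block in which the
extent starts.
Treat it as an illustration of the layout rather than a drop-in
implementation:

.. code-block:: c

        /* Sketch of the rt summary counter index computation (cf. the
         * kernel's XFS_SUMOFFS macro); standalone model only. */
        #include <stdint.h>
        #include <stdio.h>

        /*
         * The summary file is an array of 32-bit counters indexed
         * first by log2(free extent length in rt extents) and then by
         * the rt bitmap block in which the free extent starts.
         */
        uint64_t sumoffs(uint64_t rbmblocks, unsigned int log2len,
                         uint64_t bbno)
        {
            return (uint64_t)log2len * rbmblocks + bbno;
        }

        int main(void)
        {
            uint64_t rbmblocks = 1000;  /* blocks in the rt bitmap file */

            /* Counter for 8-extent-long free extents starting in
             * bitmap block 42: section log2(8) = 3. */
            printf("index = %llu\n",
                   (unsigned long long)sumoffs(rbmblocks, 3, 42));
            return 0;
        }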

The proposed patchset is the
`realtime summary repair
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
series.

Case Study: Salvaging Extended Attributes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In XFS, extended attributes are implemented as a namespaced name-value
store.
Values are limited in size to 64KiB, but there is no limit in the
number of names.
The attribute fork is unpartitioned, which means that the root of the
attribute structure is always in logical block zero, but attribute
leaf blocks, dabtree index blocks, and remote value blocks are
intermixed.
Attribute leaf blocks contain variable-sized records that associate
user-provided names with the user-provided values.
Values larger than a block are allocated separate extents and written
there directly.
If the leaf information expands beyond a single block, a
directory/attribute btree (``dabtree``) is created to map hashes of
attribute names to entries for fast lookup.

Salvaging extended attributes is done as follows:

1. Walk the attr fork mappings of the file being repaired to find the
   attribute leaf blocks.
   When one is found,

   a. Walk the attr leaf block to find candidate keys.
      When one is found,

      1. Check the name for problems, and ignore the name if there are
         any.

      2. Retrieve the value.
         If that succeeds, add the name and value to the staging
         xfarray and xfblob; the name/cookie staging pattern is
         sketched after this list.

2. If the memory usage of the xfarray and xfblob exceeds a certain
   amount of memory or there are no more attr fork blocks to examine,
   unlock the file and add the staged extended attributes to the
   temporary file.

3. Use atomic file mapping exchange to exchange the new and old
   extended attribute structures.
   The old attribute blocks are now attached to the temporary file.

4. Reap the temporary file.
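
The staging pattern in step 1a2 stores the variable-length pieces
(names and values) in the blob store and keeps only fixed-size
records, containing cookies that point back into the blob, in the
array.
The following sketch models that pattern in userspace; the helper
names and record layout are illustrative stand-ins for the
xfile-backed xfblob and xfarray described earlier in this document:

.. code-block:: c

        /* Userspace model of staging attr records via a blob store
         * and a fixed-record array; illustrative names only. */
        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        typedef uint64_t blob_cookie;

        /* Append-only byte store standing in for the xfblob. */
        struct blobstore {
            uint8_t *data;
            size_t   used, cap;
        };

        int blob_store(struct blobstore *bs, const void *p, size_t len,
                       blob_cookie *cookie)
        {
            if (bs->used + len > bs->cap) {
                size_t ncap = (bs->cap ? bs->cap * 2 : 4096) + len;
                uint8_t *n = realloc(bs->data, ncap);

                if (!n)
                    return -1;
                bs->data = n;
                bs->cap = ncap;
            }
            *cookie = bs->used;    /* cookie = offset of the bytes */
            memcpy(bs->data + bs->used, p, len);
            bs->used += len;
            return 0;
        }

        /* Fixed-size record standing in for one xfarray element. */
        struct attr_stage {
            blob_cookie name_cookie;   /* where the name bytes live */
            blob_cookie value_cookie;  /* where the value bytes live */
            uint32_t    namelen;
            uint32_t    valuelen;
        };

        /* Stage one salvaged attribute: the blob holds the variable
         * parts, the array record holds cookies and lengths only. */
        int stage_attr(struct blobstore *bs, struct attr_stage *rec,
                       const char *name, size_t namelen,
                       const void *value, size_t valuelen)
        {
            if (blob_store(bs, name, namelen, &rec->name_cookie))
                return -1;
            if (blob_store(bs, value, valuelen, &rec->value_cookie))
                return -1;
            rec->namelen = namelen;
            rec->valuelen = valuelen;
            return 0;
        }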

The proposed patchset is the
`extended attribute repair
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
series.

Fixing Directories
------------------

Fixing directories is difficult with currently available filesystem
features, since directory entries are not redundant.
The offline repair tool scans all inodes to find files with nonzero
link count, and then it scans all directories to establish parentage
of those linked files.
Damaged files and directories are zapped, and files with no parent are
moved to the ``/lost+found`` directory.
It does not try to salvage anything.

The best that online repair can do at this time is to read directory
data blocks and salvage any dirents that look plausible, correct link
counts, and move orphans back into the directory tree.
The salvage process is discussed in the case study at the end of this
section.
The :ref:`file link count fsck <nlinks>` code takes care of fixing
link counts and moving orphans to the ``/lost+found`` directory.

Case Study: Salvaging Directories
`````````````````````````````````

Unlike extended attributes, directory blocks are all the same size, so
salvaging directories is straightforward:

1. Find the parent of the directory.
   If the dotdot entry is readable, try to confirm that the alleged
   parent has a child entry pointing back to the directory being
   repaired.
   Otherwise, walk the filesystem to find it.

2. Walk the first partition of the data fork of the directory to find
   the directory entry data blocks.
   When one is found,

   a. Walk the directory data block to find candidate entries.
      When an entry is found:

      i. Check the name for problems, and ignore the name if there are
         any; a sketch of such a check follows this list.

      ii. Retrieve the inumber and grab the inode.
          If that succeeds, add the name, inode number, and file type
          to the staging xfarray and xfblob.

3. If the memory usage of the xfarray and xfblob exceeds a certain
   amount of memory or there are no more directory data blocks to
   examine, unlock the directory and add the staged dirents into the
   temporary directory.
   Truncate the staging files.

4. Use atomic file mapping exchange to exchange the new and old
   directory structures.
   The old directory blocks are now attached to the temporary file.

5. Reap the temporary file.
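
The plausibility check in step 2a i can be modeled simply: a
salvageable name has a nonzero length that fits in a dirent, and
contains neither slashes nor null bytes (the constraints XFS imposes
on directory names, as discussed in the naming section near the end of
this document).
A minimal sketch:

.. code-block:: c

        /* Minimal model of a dirent name sanity check. */
        #include <stdbool.h>
        #include <stddef.h>

        /* The ondisk dirent namelen is an 8-bit field, so names are at
         * most 255 bytes long. */
        #define DIR_MAXNAMELEN 255

        bool dir_name_looks_ok(const unsigned char *name, size_t len)
        {
            size_t i;

            if (len == 0 || len > DIR_MAXNAMELEN)
                return false;
            for (i = 0; i < len; i++) {
                /* Directory entries may not contain '/' or NUL. */
                if (name[i] == '/' || name[i] == '\0')
                    return false;
            }
            return true;
        }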

**Future Work Question**: Should repair revalidate the dentry cache
when rebuilding a directory?

*Answer*: Yes, it should.

In theory it is necessary to scan all dentry cache entries for a
directory to ensure that one of the following apply:

1. The cached dentry reflects an ondisk dirent in the new directory.

2. The cached dentry no longer has a corresponding ondisk dirent in
   the new directory and the dentry can be purged from the cache.

3. The cached dentry no longer has an ondisk dirent but the dentry
   cannot be purged.
   This is the problem case.

Unfortunately, the current dentry cache design doesn't provide a means
to walk every child dentry of a specific directory, which makes this a
hard problem.
There is no known solution.

The proposed patchset is the
`directory repair
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
series.

Parent Pointers
```````````````

A parent pointer is a piece of file metadata that enables a user to
locate the file's parent directory without having to traverse the
directory tree from the root.
Without them, reconstruction of directory trees is hindered in much
the same way that the historic lack of reverse space mapping
information once hindered reconstruction of filesystem space metadata.
The parent pointer feature, however, makes total directory
reconstruction possible.

XFS parent pointers contain the information needed to identify the
corresponding directory entry in the parent directory.
In other words, child files use extended attributes to store pointers
to parents in the form ``(dirent_name) → (parent_inum, parent_gen)``.
The directory checking process can be strengthened to ensure that the
target of each dirent also contains a parent pointer pointing back to
the dirent.
Likewise, each parent pointer can be checked by ensuring that the
target of each parent pointer is a directory and that it contains a
dirent matching the parent pointer.
Both online and offline repair can use this strategy.
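
As a concrete illustration of the ``(dirent_name) → (parent_inum,
parent_gen)`` mapping, the ondisk parent pointer can be pictured as an
extended attribute whose *name* is the dirent name and whose *value*
is a small fixed-size record.
The sketch below mirrors the general shape of the merged format (cf.
the kernel's ``struct xfs_parent_rec``); the exact layout and
endianness handling are simplified for illustration:

.. code-block:: c

        /* Illustrative shape of a parent pointer xattr value; the attr
         * name carries the dirent name itself. */
        #include <stdint.h>

        struct parent_rec {
            uint64_t p_ino;   /* parent directory inode number */
            uint32_t p_gen;   /* parent directory generation number */
        };

Carrying the generation number lets checking code detect a parent
pointer that references an inode number which has since been freed and
reused for an unrelated directory.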

+----------------------------------------------------------------------+
| **Historical Sidebar**:                                               |
+----------------------------------------------------------------------+
| Directory parent pointers were first proposed as an XFS feature more  |
| than a decade ago by SGI.                                             |
| Each link from a parent directory to a child file is mirrored with    |
| an extended attribute in the child that could be used to identify     |
| the parent directory.                                                 |
| Unfortunately, this early implementation had major shortcomings and   |
| was never merged into Linux XFS:                                      |
|                                                                       |
| 1. The XFS codebase of the late 2000s did not have the                |
|    infrastructure to enforce strong referential integrity in the      |
|    directory tree.                                                    |
|    It did not guarantee that a change in a forward link would always  |
|    be followed up with the corresponding change to the reverse links. |
|                                                                       |
| 2. Referential integrity was not integrated into offline repair.      |
|    Checking and repairs were performed on mounted filesystems without |
|    taking any kernel or inode locks to coordinate access.             |
|    It is not clear how this actually worked properly.                 |
|                                                                       |
| 3. The extended attribute did not record the name of the directory    |
|    entry in the parent, so the SGI parent pointer implementation      |
|    cannot be used to reconnect the directory tree.                    |
|                                                                       |
| 4. Extended attribute forks only support 65,536 extents, which means  |
|    that parent pointer attribute creation is likely to fail at some   |
|    point before the maximum file link count is achieved.              |
|                                                                       |
| The original parent pointer design was too unstable for something     |
| like a file system repair to depend on.                               |
| Allison Henderson, Chandan Babu, and Catherine Hoang are working on a |
| second implementation that solves all shortcomings of the first.      |
| During 2022, Allison introduced log intent items to track physical    |
| manipulations of the extended attribute structures.                   |
| This solves the referential integrity problem by making it possible   |
| to commit a dirent update and a parent pointer update in the same     |
| transaction.                                                          |
| Chandan increased the maximum extent counts of both data and          |
| attribute forks, thereby ensuring that the extended attribute         |
| structure can grow to handle the maximum hardlink count of any file.  |
|                                                                       |
| For this second effort, the ondisk parent pointer format as           |
| originally proposed was ``(parent_inum, parent_gen, dirent_pos) →    |
| (dirent_name)``.                                                      |
| The format was changed during development to eliminate the            |
| requirement of repair tools needing to ensure that the                |
| ``dirent_pos`` field always matched when reconstructing a directory.  |
|                                                                       |
| There were a few other ways to have solved that problem:              |
|                                                                       |
| 1. The field could be designated advisory, since the other three      |
|    values are sufficient to find the entry in the parent.             |
|    However, this makes indexed key lookup impossible while repairs    |
|    are ongoing.                                                       |
|                                                                       |
| 2. We could allow creating directory entries at specified offsets,    |
|    which solves the referential integrity problem but runs the risk   |
|    that dirent creation will fail due to conflicts with the free      |
|    space in the directory.                                            |
|                                                                       |
|    These conflicts could be resolved by appending the directory entry |
|    and amending the xattr code to support updating an xattr key and   |
|    reindexing the dabtree, though this would have to be performed     |
|    with the parent directory still locked.                            |
|                                                                       |
| 3. Same as above, but remove the old parent pointer entry and add a   |
|    new one atomically.                                                |
|                                                                       |
| 4. Change the ondisk xattr format to                                  |
|    ``(parent_inum, name) → (parent_gen)``, which provides the attr   |
|    name uniqueness that we require, without forcing repair code to    |
|    update the dirent position.                                        |
|    Unfortunately, this requires changes to the xattr code to support  |
|    attr names as long as 263 bytes.                                   |
|                                                                       |
| 5. Change the ondisk xattr format to ``(parent_inum, hash(name)) →   |
|    (name, parent_gen)``.                                              |
|    If the hash is sufficiently resistant to collisions (e.g. sha256)  |
|    then this should provide the attr name uniqueness that we require. |
|    Names shorter than 247 bytes could be stored directly.             |
|                                                                       |
| 6. Change the ondisk xattr format to ``(dirent_name) →               |
|    (parent_ino, parent_gen)``.  This format doesn't require any of    |
|    the complicated nested name hashing of the previous suggestions.   |
|    However, it was discovered that multiple hardlinks to the same     |
|    inode with the same filename caused performance problems with      |
|    hashed xattr lookups, so the parent inumber is now xor'd into the  |
|    hash index.                                                        |
|                                                                       |
| In the end, it was decided that solution #6 was the simplest and the  |
| most performant.  A new hash function was designed for this purpose.  |
+----------------------------------------------------------------------+

Case Study: Repairing Directories with Parent Pointers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Directory rebuilding uses a :ref:`coordinated inode scan <iscan>` and
a :ref:`directory entry live update hook <liveupdate>` as follows:

1. Set up a temporary directory for generating the new directory
   structure, an xfblob for storing entry names, and an xfarray for
   stashing the fixed size fields involved in a directory update:
   ``(child inumber, add vs. remove, name cookie, ftype)``.
   A sketch of this stashed record follows this list.

2. Set up an inode scanner and hook into the directory entry code to
   receive updates on directory operations.

3. For each parent pointer found in each file scanned, decide if the
   parent pointer references the directory of interest.
   If so:

   a. Stash the parent pointer name and an addname entry for this
      dirent in the xfblob and xfarray, respectively.

   b. When finished scanning that file or the kernel memory
      consumption exceeds a threshold, flush the stashed updates to
      the temporary directory.

4. For each live directory update received via the hook, decide if the
   child has already been scanned.
   If so:

   a. Stash the parent pointer name and an addname or removename entry
      for this dirent update in the xfblob and xfarray for later.
      We cannot write directly to the temporary directory because hook
      functions are not allowed to modify filesystem metadata.
      Instead, we stash updates in the xfarray and rely on the scanner
      thread to apply the stashed updates to the temporary directory.

5. When the scan is complete, replay any stashed updates in the
   xfarray.

6. When the scan is complete, atomically exchange the contents of the
   temporary directory and the directory being repaired.
   The temporary directory now contains the damaged directory
   structure.

7. Reap the temporary directory.
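
The xfarray record from step 1 might look like the following sketch.
The structure name and layout are hypothetical; only the fields come
from the tuple named above:

.. code-block:: c

        /* Hypothetical layout of one stashed directory update record;
         * the real kernel structure may differ. */
        #include <stdbool.h>
        #include <stdint.h>

        typedef uint64_t blob_cookie;  /* name's offset in the xfblob */

        struct stashed_dirent_update {
            uint64_t    child_ino;    /* child inode number */
            blob_cookie name_cookie;  /* dirent name in the xfblob */
            uint8_t     ftype;        /* file type for the dirent */
            bool        add;          /* true to add, false to remove */
        };

Keeping only cookies in the fixed-size records is what lets one
xfarray index millions of updates while the variable-length names live
in the xfblob.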

The proposed patchset is the
`parent pointers directory repair
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
series.

Case Study: Repairing Parent Pointers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Online reconstruction of a file's parent pointer information works
similarly to directory reconstruction:

1. Set up a temporary file for generating a new extended attribute
   structure, an xfblob for storing parent pointer names, and an
   xfarray for stashing the fixed size fields involved in a parent
   pointer update: ``(parent inumber, parent generation, add vs.
   remove, name cookie)``.

2. Set up an inode scanner and hook into the directory entry code to
   receive updates on directory operations.

3. For each directory entry found in each directory scanned, decide if
   the dirent references the file of interest.
   If so:

   a. Stash the dirent name and an addpptr entry for this parent
      pointer in the xfblob and xfarray, respectively.

   b. When finished scanning the directory or the kernel memory
      consumption exceeds a threshold, flush the stashed updates to
      the temporary file.

4. For each live directory update received via the hook, decide if the
   parent has already been scanned.
   If so:

   a. Stash the dirent name and an addpptr or removepptr entry for
      this dirent update in the xfblob and xfarray for later.
      We cannot write parent pointers directly to the temporary file
      because hook functions are not allowed to modify filesystem
      metadata.
      Instead, we stash updates in the xfarray and rely on the scanner
      thread to apply the stashed parent pointer updates to the
      temporary file.

5. When the scan is complete, replay any stashed updates in the
   xfarray.

6. Copy all non-parent pointer extended attributes to the temporary
   file.

7. When the scan is complete, atomically exchange the attribute forks
   of the temporary file and the file being repaired.
   The temporary file now contains the damaged extended attribute
   structure.

8. Reap the temporary file.

The proposed patchset is the
`parent pointers repair
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
series.

Digression: Offline Checking of Parent Pointers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Examining parent pointers in offline repair works differently because
corrupt files are erased long before directory tree connectivity
checks are performed.
Parent pointer checks are therefore a second pass to be added to the
existing connectivity checks:

1. After the set of surviving files has been established (i.e. phase
   6), walk the surviving directories of each AG in the filesystem.
   This is already performed as part of the connectivity checks.

2. For each directory entry found,

   a. If the name has already been stored in the xfblob, then use that
      cookie and skip the next step.

   b. Otherwise, record the name in an xfblob, and remember the xfblob
      cookie.
      Unique mappings are critical for

      1. Deduplicating names to reduce memory usage, and

      2. Creating a stable sort key for the parent pointer indexes so
         that the parent pointer validation described below will work.

   c. Store ``(child_ag_inum, parent_inum, parent_gen, name_hash,
      name_cookie)`` tuples in a per-AG in-memory slab.
      The ``name_hash`` referenced in this section is the regular
      directory entry name hash, not the specialized one used for
      parent pointer xattrs.

3. For each AG in the filesystem,

   a. Sort the per-AG tuple set in order of ``child_ag_inum``,
      ``parent_inum``, ``name_hash``, and ``name_cookie``.
      Having a single ``name_cookie`` for each name is critical for
      handling the uncommon case of a directory containing multiple
      hardlinks to the same file where all the names hash to the same
      value.

   b. For each inode in the AG,

      1. Scan the inode for parent pointers.
         For each parent pointer found,

         a. Validate the ondisk parent pointer.
            If validation fails, move on to the next parent pointer in
            the file.

         b. If the name has already been stored in the xfblob, then
            use that cookie and skip the next step.

         c. Record the name in a per-file xfblob, and remember the
            xfblob cookie.

         d. Store ``(parent_inum, parent_gen, name_hash,
            name_cookie)`` tuples in a per-file slab.

      2. Sort the per-file tuples in order of ``parent_inum``,
         ``name_hash``, and ``name_cookie``.

      3. Position one slab cursor at the start of the inode's records
         in the per-AG tuple slab.
         This should be trivial since the per-AG tuples are in child
         inumber order.

      4. Position a second slab cursor at the start of the per-file
         tuple slab.

      5. Iterate the two cursors in lockstep, comparing the
         ``parent_ino``, ``name_hash``, and ``name_cookie`` fields of
         the records under each cursor; a sketch of this merge follows
         this list:

         a. If the per-AG cursor is at a lower point in the keyspace
            than the per-file cursor, then the per-AG cursor points to
            a missing parent pointer.
            Add the parent pointer to the inode and advance the per-AG
            cursor.

         b. If the per-file cursor is at a lower point in the keyspace
            than the per-AG cursor, then the per-file cursor points to
            a dangling parent pointer.
            Remove the parent pointer from the inode and advance the
            per-file cursor.

         c. Otherwise, both cursors point at the same parent pointer.
            Update the parent_gen component if necessary.
            Advance both cursors.

4. Move on to examining link counts, as we do today.
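
The lockstep iteration in step 3b5 is a classic sorted-merge
reconciliation.
The following userspace sketch models it over plain arrays; the tuple
layout and function names are illustrative stand-ins for the slab
cursors, not xfs_repair code:

.. code-block:: c

        /* Sorted-merge reconciliation of expected vs. observed parent
         * pointers; a standalone model of step 3b5. */
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        struct pptr_key {
            uint64_t parent_ino;
            uint32_t name_hash;
            uint64_t name_cookie;
        };

        static int keycmp(const struct pptr_key *a,
                          const struct pptr_key *b)
        {
            if (a->parent_ino != b->parent_ino)
                return a->parent_ino < b->parent_ino ? -1 : 1;
            if (a->name_hash != b->name_hash)
                return a->name_hash < b->name_hash ? -1 : 1;
            if (a->name_cookie != b->name_cookie)
                return a->name_cookie < b->name_cookie ? -1 : 1;
            return 0;
        }

        /* want[]: dirents found by the per-AG scan (what should exist).
         * have[]: parent pointers found in the file (what does exist).
         * Both arrays must be sorted by keycmp. */
        static void reconcile(const struct pptr_key *want, size_t nwant,
                              const struct pptr_key *have, size_t nhave)
        {
            size_t i = 0, j = 0;

            while (i < nwant || j < nhave) {
                int cmp;

                if (i >= nwant)
                    cmp = 1;        /* only 'have' records left */
                else if (j >= nhave)
                    cmp = -1;       /* only 'want' records left */
                else
                    cmp = keycmp(&want[i], &have[j]);

                if (cmp < 0)        /* missing parent pointer */
                    printf("add pptr for parent %llu\n",
                           (unsigned long long)want[i++].parent_ino);
                else if (cmp > 0)   /* dangling parent pointer */
                    printf("remove pptr for parent %llu\n",
                           (unsigned long long)have[j++].parent_ino);
                else {              /* match; freshen parent_gen */
                    i++;
                    j++;
                }
            }
        }

        int main(void)
        {
            const struct pptr_key want[] = { { 100, 0xabc, 1 },
                                             { 200, 0xdef, 2 } };
            const struct pptr_key have[] = { { 200, 0xdef, 2 },
                                             { 300, 0x123, 3 } };

            reconcile(want, 2, have, 2);  /* adds 100, removes 300 */
            return 0;
        }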

The proposed patchset is the
`offline parent pointers repair
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
series.

Rebuilding directories from parent pointers in offline repair would be
very challenging because xfs_repair currently uses two single-pass
scans of the filesystem during phases 3 and 4 to decide which files
are corrupt enough to be zapped.
This scan would have to be converted into a multi-pass scan:

1. The first pass of the scan zaps corrupt inodes, forks, and
   attributes much as it does now.
   Corrupt directories are noted but not zapped.

2. The next pass records parent pointers pointing to the directories
   noted as being corrupt in the first pass.
   This second pass may have to happen after the phase 4 scan for
   duplicate blocks, if phase 4 is also capable of zapping directories.

3. The third pass resets corrupt directories to an empty shortform
   directory.
   Free space metadata has not been ensured yet, so repair cannot yet
   use the directory building code in libxfs.

4. At the start of phase 6, space metadata have been rebuilt.
   Use the parent pointer information recorded during step 2 to
   reconstruct the dirents and add them to the now-empty directories.

This code has not yet been constructed.

.. _dirtree:

Case Study: Directory Tree Structure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

As mentioned earlier, the filesystem directory tree is supposed to be
a directed acyclic graph structure.
However, each node in this graph is a separate ``xfs_inode`` object
with its own locks, which makes validating the tree quite difficult.
Fortunately, non-directories are allowed to have multiple parents and
cannot have children, so only directories need to be scanned.
Directories typically constitute 5-10% of the files in a filesystem,
which reduces the amount of work dramatically.

If the directory tree could be frozen, it would be easy to discover
cycles and disconnected regions by running a depth (or breadth) first
search downwards from the root directory and marking a bitmap for each
directory found.
At any point in the walk, trying to set an already set bit means there
is a cycle.
After the scan completes, XORing the marked inode bitmap with the
inode allocation bitmap reveals disconnected inodes.
However, one of online repair's design goals is to avoid locking the
entire filesystem unless it's absolutely necessary.
Directory tree updates can move subtrees across the scanner wavefront
on a live filesystem, so the bitmap algorithm cannot be applied.

Directory parent pointers enable an incremental approach to validation
of the tree structure.
Instead of using one thread to scan the entire filesystem, multiple
threads can walk from individual subdirectories upwards towards the
root.
For this to work, all directory entries and parent pointers must be
internally consistent, each directory entry must have a parent
pointer, and the link counts of all directories must be correct.
Each scanner thread must be able to take the IOLOCK of an alleged
parent directory while holding the IOLOCK of the child directory to
prevent either directory from being moved within the tree.
This is not possible since the VFS does not take the IOLOCK of a child
subdirectory when moving that subdirectory, so instead the scanner
stabilizes the parent -> child relationship by taking the ILOCKs and
installing a dirent update hook to detect changes.

The scanning process uses a dirent hook to detect changes to the
directories mentioned in the scan data.
The scan works as follows:

1. For each subdirectory in the filesystem,

   a. For each parent pointer of that subdirectory,

      1. Create a path object for that parent pointer, and mark the
         subdirectory inode number in the path object.

      2. Record the parent pointer name and inode number in the path
         object.

      3. If the alleged parent is the subdirectory being scrubbed, the
         path is a cycle.
         Mark the path for deletion and repeat step 1a with the next
         subdirectory parent pointer.

      4. Try to mark the alleged parent inode number in a bitmap in
         the path object; a sketch of this cycle check follows this
         list.
         If the bit is already set, then there is a cycle in the
         directory tree.
         Mark the path as a cycle and repeat step 1a with the next
         subdirectory parent pointer.

      5. Load the alleged parent.
         If the alleged parent is not a linked directory, abort the
         scan because the parent pointer information is inconsistent.

      6. For each parent pointer of this alleged ancestor directory,

         a. Record the parent pointer name and inode number in the
            path object if no parent has been set for that level.

         b. If an ancestor has more than one parent, mark the path as
            corrupt.
            Repeat step 1a with the next subdirectory parent pointer.

         c. Repeat steps 1a3-1a6 for the ancestor identified in step
            1a6a.
            This repeats until the directory tree root is reached or
            no parents are found.

      7. If the walk terminates at the root directory, mark the path
         as ok.

      8. If the walk terminates without reaching the root, mark the
         path as disconnected.

2. If the directory entry update hook triggers, check all paths
   already found by the scan.
   If the entry matches part of a path, mark that path and the scan
   stale.
   When the scanner thread sees that the scan has been marked stale,
   it deletes all scan data and starts over.
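
The bitmap trick in step 1a4 is plain cycle detection on an upward
walk: if a walk ever revisits an inode number, the alleged ancestry
loops.
A minimal userspace model follows; the ``parent_of`` table is a
hypothetical stand-in for reading the next parent pointer, and the toy
inode space keeps the bitmap small:

.. code-block:: c

        /* Userspace model of the upward-walk cycle check in step 1a4. */
        #include <stdint.h>

        #define ROOT_INO 128
        #define MAX_INO  4096  /* toy inode space for the bitmap */

        /* Toy parent table: parent_of[ino] == 0 means "no parent". */
        static uint64_t parent_of[MAX_INO];

        enum path_result { PATH_OK, PATH_CYCLE, PATH_DISCONNECTED };

        static enum path_result walk_to_root(uint64_t ino)
        {
            unsigned char seen[MAX_INO / 8] = { 0 };

            while (ino != ROOT_INO) {
                /* Mark this inode; an already-set bit means a cycle. */
                if (seen[ino / 8] & (1 << (ino % 8)))
                    return PATH_CYCLE;
                seen[ino / 8] |= 1 << (ino % 8);

                ino = parent_of[ino];
                if (ino == 0)
                    return PATH_DISCONNECTED;
            }
            return PATH_OK;
        }

        int main(void)
        {
            parent_of[300] = 200;
            parent_of[200] = ROOT_INO;
            parent_of[400] = 500;
            parent_of[500] = 400;  /* a two-directory loop */

            return walk_to_root(300) == PATH_OK &&
                   walk_to_root(400) == PATH_CYCLE ? 0 : 1;
        }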

Repairing the directory tree works as follows:

1. Walk each path of the target subdirectory.

   a. Corrupt paths and cycle paths are counted as suspect.

   b. Paths already marked for deletion are counted as bad.

   c. Paths that reached the root are counted as good.

2. If the subdirectory is either the root directory or has zero link
   count, delete all incoming directory entries in the immediate
   parents.
   Repairs are complete.

3. If the subdirectory has exactly one path, set the dotdot entry to
   the parent and exit.

4. If the subdirectory has at least one good path, delete all the
   other incoming directory entries in the immediate parents.

5. If the subdirectory has no good paths and more than one suspect
   path, delete all the other incoming directory entries in the
   immediate parents.

6. If the subdirectory has zero paths, attach it to the lost and
   found.

The proposed patches are in the
`directory tree repair
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
series.

.. _orphanage:

The Orphanage
-------------

Filesystems present files as a directed, and hopefully acyclic, graph.
In other words, a tree.
The root of the filesystem is a directory, and each entry in a
directory points downwards either to more subdirectories or to
non-directory files.
Unfortunately, a disruption in the directory graph pointers results in
a disconnected graph, which makes files impossible to access via
regular path resolution.

Without parent pointers, the directory parent pointer online scrub
code can detect a dotdot entry pointing to a parent directory that
doesn't have a link back to the child directory, and the file link
count checker can detect a file that isn't pointed to by any directory
in the filesystem.
If such a file has a positive link count, the file is an orphan.

With parent pointers, directories can be rebuilt by scanning parent
pointers and parent pointers can be rebuilt by scanning directories.
This should reduce the incidence of files ending up in
``/lost+found``.

When orphans are found, they should be reconnected to the directory
tree.
Offline fsck solves the problem by creating a directory
``/lost+found`` to serve as an orphanage, and linking orphan files
into the orphanage by using the inumber as the name.
Reparenting a file to the orphanage does not reset any of its
permissions or ACLs.

This process is more involved in the kernel than it is in userspace.
The directory and file link count repair setup functions must use the
regular VFS mechanisms to create the orphanage directory with all the
necessary security attributes and dentry cache entries, just like a
regular directory tree modification.

Orphaned files are adopted by the orphanage as follows:

1. Call ``xrep_orphanage_try_create`` at the start of the scrub setup
   function to try to ensure that the lost and found directory
   actually exists.
   This also attaches the orphanage directory to the scrub context.

2. If the decision is made to reconnect a file, take the IOLOCK of
   both the orphanage and the file being reattached.
   The ``xrep_orphanage_iolock_two`` function follows the inode
   locking strategy discussed earlier.

3. Use ``xrep_adoption_trans_alloc`` to reserve resources to the
   repair transaction.

4. Call ``xrep_orphanage_compute_name`` to compute the new name in the
   orphanage.

5. If the adoption is going to happen, call ``xrep_adoption_reparent``
   to reparent the orphaned file into the lost and found and
   invalidate the dentry cache.

6. Call ``xrep_adoption_finish`` to commit any filesystem updates,
   release the orphanage ILOCK, and clean the scrub transaction.
   Call ``xrep_adoption_commit`` to commit the updates and the scrub
   transaction.

7. If a runtime error happens, call ``xrep_adoption_cancel`` to
   release all resources.

The proposed patches are in the
`orphanage adoption
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
series.

6. Userspace Algorithms and Data Structures
===========================================

This section discusses the key algorithms and data structures of the
userspace program, ``xfs_scrub``, that provide the ability to drive
metadata checks and repairs in the kernel, verify file data, and look
for other potential problems.

.. _scrubcheck:

Checking Metadata
-----------------

Recall the :ref:`phases of fsck work<scrubphases>` outlined earlier.
That structure follows naturally from the data dependencies designed
into the filesystem from its beginnings in 1993.
In XFS, there are several groups of metadata dependencies:

a. Filesystem summary counts depend on consistency within the inode
   indices, the allocation group space btrees, and the realtime volume
   space information.

b. Quota resource counts depend on consistency within the quota file
   data forks, inode indices, inode records, and the forks of every
   file on the system.

c. The naming hierarchy depends on consistency within the directory
   and extended attribute structures.
   This includes file link counts.

d. Directories, extended attributes, and file data depend on
   consistency within the file forks that map directory and extended
   attribute data to physical storage media.

e. The file forks depend on consistency within inode records and the
   space metadata indices of the allocation groups and the realtime
   volume.
   This includes quota and realtime metadata files.

f. Inode records depend on consistency within the inode metadata
   indices.

g. Realtime space metadata depend on the inode records and data forks
   of the realtime metadata inodes.

h. The allocation group metadata indices (free space, inodes,
   reference count, and reverse mapping btrees) depend on consistency
   within the AG headers and between all the AG metadata btrees.

i. ``xfs_scrub`` depends on the filesystem being mounted and kernel
   support for online fsck functionality.

Therefore, a metadata dependency graph is a convenient way to schedule
checking operations in the ``xfs_scrub`` program:

- Phase 1 checks that the provided path maps to an XFS filesystem and
  detects the kernel's scrubbing abilities, which validates group (i).

- Phase 2 scrubs groups (g) and (h) in parallel using a threaded
  workqueue.

- Phase 3 scans inodes in parallel.
  For each inode, groups (f), (e), and (d) are checked, in that order.

- Phase 4 repairs everything in groups (i) through (d) so that phases
  5 and 6 may run reliably.

- Phase 5 starts by checking groups (b) and (c) in parallel before
  moving on to checking names.

- Phase 6 depends on groups (i) through (b) to find file data blocks
  to verify, to read them, and to report which blocks of which files
  are affected.

- Phase 7 checks group (a), having validated everything else.

Notice that the data dependencies between groups are enforced by the
structure of the program flow.

Parallel Inode Scans
--------------------

An XFS filesystem can easily contain hundreds of millions of inodes.
Given that XFS targets installations with large high-performance
storage, it is desirable to scrub inodes in parallel to minimize
runtime, particularly if the program has been invoked manually from a
command line.
This requires careful scheduling to keep the threads as evenly loaded
as possible.

Early iterations of the ``xfs_scrub`` inode scanner naively created a
single workqueue and scheduled a single workqueue item per AG.
Each workqueue item walked the inode btree (with ``XFS_IOC_INUMBERS``)
to find inode chunks and then called bulkstat (``XFS_IOC_BULKSTAT``)
to gather enough information to construct file handles.
The file handle was then passed to a function to generate scrub items
for each metadata object of each inode.
This simple algorithm leads to thread balancing problems in phase 3 if
the filesystem contains one AG with a few large sparse files and the
rest of the AGs contain many smaller files.
The inode scan dispatch function was not sufficiently granular; it
should have been dispatching at the level of individual inodes, or,
to constrain memory consumption, inode btree records.

Thanks to Dave Chinner, bounded workqueues in userspace enable
``xfs_scrub`` to avoid this problem with ease by adding a second
workqueue.
Just like before, the first workqueue is seeded with one workqueue
item per AG, and it uses INUMBERS to find inode btree chunks.
The second workqueue, however, is configured with an upper bound on
the number of items that can be waiting to be run.
Each inode btree chunk found by the first workqueue's workers is
queued to the second workqueue, and it is this second workqueue that
queries BULKSTAT, creates a file handle, and passes it to a function
to generate scrub items for each metadata object of each inode.
If the second workqueue is too full, the workqueue add function blocks
the first workqueue's workers until the backlog eases.
This doesn't completely solve the balancing problem, but reduces it
enough to move on to more pressing issues.
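
The bounded handoff between the two workqueues can be modeled with a
small ring buffer whose enqueue operation blocks while the buffer is
full.
This is an illustrative pthreads sketch of the backpressure mechanism,
not the actual workqueue implementation in ``xfs_scrub``; the mutex
and condition variables are assumed to be initialized with the usual
``PTHREAD_*_INITIALIZER`` macros before use:

.. code-block:: c

        /* Bounded producer/consumer handoff modeling the second
         * workqueue's backpressure; illustrative only. */
        #include <pthread.h>

        #define BOUND 64  /* max inode btree chunks awaiting bulkstat */

        struct bounded_queue {
            pthread_mutex_t lock;
            pthread_cond_t  not_full;
            pthread_cond_t  not_empty;
            void           *items[BOUND];
            unsigned int    head, tail, count;
        };

        /* Called by the per-AG INUMBERS workers; blocks while the
         * queue is full, which throttles the first workqueue. */
        void bq_push(struct bounded_queue *q, void *item)
        {
            pthread_mutex_lock(&q->lock);
            while (q->count == BOUND)
                pthread_cond_wait(&q->not_full, &q->lock);
            q->items[q->tail] = item;
            q->tail = (q->tail + 1) % BOUND;
            q->count++;
            pthread_cond_signal(&q->not_empty);
            pthread_mutex_unlock(&q->lock);
        }

        /* Called by the BULKSTAT workers that turn inode btree chunks
         * into scrub items. */
        void *bq_pop(struct bounded_queue *q)
        {
            void *item;

            pthread_mutex_lock(&q->lock);
            while (q->count == 0)
                pthread_cond_wait(&q->not_empty, &q->lock);
            item = q->items[q->head];
            q->head = (q->head + 1) % BOUND;
            q->count--;
            pthread_cond_signal(&q->not_full);
            pthread_mutex_unlock(&q->lock);
            return item;
        }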

The proposed patchsets are the scrub
`performance tweaks
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
and the
`inode scan rebalance
<https://git.kernel.org/pub/scm/linux/kernel/...>`_
series.
5101                                                  
5102 .. _scrubrepair:                                 
5103                                                  
5104 Scheduling Repairs                               
5105 ------------------                               
5106                                                  
5107 During phase 2, corruptions and inconsistenci    
5108 inode btree are repaired immediately, because    
5109 functioning of the inode indices to find inod    
5110 Failed repairs are rescheduled to phase 4.       
5111 Problems reported in any other space metadata    
5112 Optimization opportunities are always deferre    
5113 origin.                                          
5114                                                  
5115 During phase 3, corruptions and inconsistenci    
5116 file's metadata are repaired immediately if a    
5117 during phase 2.                                  
5118 Repairs that fail or cannot be repaired immed    
5119                                                  
5120 In the original design of ``xfs_scrub``, it w    
5121 so infrequent that the ``struct xfs_scrub_met    
5122 communicate with the kernel could also be use    
5123 schedule repairs.                                
5124 With recent increases in the number of optimi    
5125 filesystem object, it became much more memory    
5126 repairs for a given filesystem object with a     
5127 Each repair item represents a single lockable    
5128 individual inodes, or a class of summary info    
5129                                                  
5130 Phase 4 is responsible for scheduling a lot o    
5131 manner as is practical.                          
5132 The :ref:`data dependencies <scrubcheck>` out    
5133 means that ``xfs_scrub`` must try to complete    
5134 phase 2 before trying repair work scheduled b    
5135 The repair process is as follows:                

1. Start a round of repair with a workqueue and enough workers to keep the
   CPUs as busy as the user desires.

   a. For each repair item queued by phase 2,

      i.   Ask the kernel to repair everything listed in the repair item for a
           given filesystem object.

      ii.  Make a note if the kernel made any progress in reducing the number
           of repairs needed for this object.

      iii. If the object no longer requires repairs, revalidate all metadata
           associated with this object.
           If the revalidation succeeds, drop the repair item.
           If not, requeue the item for more repairs.

   b. If any repairs were made, jump back to 1a.

   c. For each repair item queued by phase 3,

      i.   Ask the kernel to repair everything listed in the repair item for a
           given filesystem object.

      ii.  Make a note if the kernel made any progress in reducing the number
           of repairs needed for this object.

      iii. If the object no longer requires repairs, revalidate all metadata
           associated with this object.
           If the revalidation succeeds, drop the repair item.
           If not, requeue the item for more repairs.

   d. If any repairs were made, jump back to 1c.

2. If step 1 made any repair progress of any kind, jump back to step 1 to
   start another round of repair.

3. If there are items left to repair, run them all serially one more time.
   Complain if the repairs were not successful, since this is the last chance
   to repair anything.
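
The loop below sketches the three steps above in C.  The helper names and the
``scrub_ctx`` fields are hypothetical stand-ins; the real ``xfs_scrub`` code
spreads the per-item work across a workqueue and only serializes the final
pass:

.. code-block:: c

	#include <stdbool.h>

	struct item_list;	/* opaque queue of repair items */

	struct scrub_ctx {
		struct item_list	*phase2_items;
		struct item_list	*phase3_items;
	};

	/* Returns true if any repair item in the list made progress. */
	bool repair_item_list(struct scrub_ctx *ctx, struct item_list *items);

	/* Last-chance serial pass; complains about anything left over. */
	void repair_item_list_serially(struct scrub_ctx *ctx,
				       struct item_list *items);

	static void
	repair_everything(struct scrub_ctx *ctx)
	{
		bool	progress;

		do {
			progress = false;

			/* Steps 1a-1b: retry phase 2 items while they shrink. */
			while (repair_item_list(ctx, ctx->phase2_items))
				progress = true;

			/* Steps 1c-1d: then retry phase 3 items. */
			while (repair_item_list(ctx, ctx->phase3_items))
				progress = true;
		} while (progress);	/* step 2 */

		/* Step 3: one final serial pass over the leftovers. */
		repair_item_list_serially(ctx, ctx->phase2_items);
		repair_item_list_serially(ctx, ctx->phase3_items);
	}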

Corruptions and inconsistencies encountered during phases 5 and 7 are
repaired immediately.
Corrupt file data blocks reported by phase 6 cannot be recovered by the
filesystem.

The proposed patchsets are the
`repair warning improvements
<https://git.kernel.org/pub/scm/linux/kernel/
refactoring of the
`repair data dependency
<https://git.kernel.org/pub/scm/linux/kernel/
and
`object tracking
<https://git.kernel.org/pub/scm/linux/kernel/
and the
`repair scheduling
<https://git.kernel.org/pub/scm/linux/kernel/
improvement series.

Checking Names for Confusable Unicode Sequences
-----------------------------------------------

If ``xfs_scrub`` succeeds in validating the filesystem metadata by the end of
phase 4, it moves on to phase 5, which checks for suspicious looking names in
the filesystem.
These names consist of the filesystem label, names in directory entries, and
the names of extended attributes.
Like most Unix filesystems, XFS imposes the sparest of constraints on the
contents of a name:

- Slashes and null bytes are not allowed in directory entries.

- Null bytes are not allowed in userspace-visible extended attributes.

- Null bytes are not allowed in the filesystem label.

Directory entries and attribute keys store the length of the name explicitly
ondisk, which means that nulls are not name terminators.
For this section, the term "naming domain" refers to any place where names are
presented together -- all the names in a directory, or all the attributes of a
file.

Although the Unix naming constraints are very permissive, the reality of most
modern-day Linux systems is that programs work with Unicode character code
points to support international languages.
These programs typically encode those code points in UTF-8 when interfacing
with the C library because the kernel expects null-terminated names.
In the common case, therefore, names found in an XFS filesystem are actually
UTF-8 encoded Unicode data.

To maximize its expressiveness, the Unicode standard defines separate code
points for various characters that render similarly or identically in writing
systems around the world.
For example, the character "Cyrillic Small Letter A" U+0430 "а" often renders
identically to "Latin Small Letter A" U+0061 "a".

The standard also permits characters to be constructed in multiple ways --
either by using a defined code point, or by combining one code point with
various combining marks.
For example, the character "Angstrom Sign" U+212B "Å" can also be expressed
as "Latin Capital Letter A" U+0041 "A" followed by "Combining Ring Above"
U+030A "◌̊".
Both sequences render identically.

Like the standards that preceded it, Unicode also defines various control
characters to alter the presentation of text.
For example, the character "Right-to-Left Override" U+202E can trick some
programs into rendering "moo\\xe2\\x80\\xaegnp.txt" as "mootxt.png".
A second category of rendering problems involves whitespace characters.
If the character "Zero Width Space" U+200B is encountered in a file name, the
name will render identically to a name that does not have the zero width
space.

If two names within a naming domain have different byte sequences but render
identically, a user may be confused by it.
The kernel, in its indifference to upper level encoding schemes, permits this.
Most filesystem drivers persist the byte sequence names that are given to them
by the VFS.

Techniques for detecting confusable names are explained in great detail in
sections 4 and 5 of the
`Unicode Security Mechanisms <https://unicode.org/reports/tr39/>`_
document.
When ``xfs_scrub`` detects UTF-8 encoding in use on a system, it uses the
Unicode normalization form NFD in conjunction with the confusable name
detection component of
`libicu <https://github.com/unicode-org/icu>`_
to identify names within a directory or within a file's extended attributes
that could be confused for each other.
Names are also checked for control characters, non-rendering characters, and
mixing of bidirectional characters.
All of these potential issues are reported to the system administrator during
phase 5.
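
A minimal demonstration of the skeleton technique from the Unicode Security
Mechanisms document, using libicu's C spoof-checker API, appears below.
This is an illustrative sketch only, not the actual name-checking code in
``xfs_scrub``:

.. code-block:: c

	#include <string.h>
	#include <unicode/uspoof.h>

	/* Return 1 if two UTF-8 names would render confusably alike. */
	static int
	names_confusable(const char *name1, const char *name2)
	{
		UErrorCode	err = U_ZERO_ERROR;
		USpoofChecker	*sc;
		char		skel1[256], skel2[256];
		int		ret = 0;

		sc = uspoof_open(&err);
		if (U_FAILURE(err))
			return 0;

		/*
		 * Compute each name's skeleton; per UTS #39, two names
		 * are confusable iff their skeletons are identical.
		 * ICU calls are no-ops once err holds a failure code.
		 */
		uspoof_getSkeletonUTF8(sc, 0, name1, -1, skel1,
				sizeof(skel1), &err);
		uspoof_getSkeletonUTF8(sc, 0, name2, -1, skel2,
				sizeof(skel2), &err);
		if (U_SUCCESS(err) && !strcmp(skel1, skel2))
			ret = 1;

		uspoof_close(sc);
		return ret;
	}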

Media Verification of File Data Extents
---------------------------------------

The system administrator can elect to initiate a media scan of all file data
blocks.
This scan runs after validation of all filesystem metadata (except for the
summary counters) as phase 6.
The scan starts by calling ``FS_IOC_GETFSMAP`` to scan the filesystem space
map to find areas that are allocated to file data fork extents.
Gaps between data fork extents that are smaller than 64k are treated as if
they were data fork extents to reduce the command setup overhead.
When the space map scan accumulates a region larger than 32MB, a media
verification request is sent to the disk as a directio read of the raw block
device.

If the verification read fails, ``xfs_scrub`` retries with single-block reads
to narrow the failure down to the specific region of the media, which is
recorded.
When it has finished issuing verification requests, it again uses the space
mapping ioctl to map the recorded media errors back to metadata structures
and report what has been lost.
For media errors in blocks owned by files, parent pointers can be used to
construct file paths from inode numbers for user-friendly reporting.
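
The sketch below shows the core of such a scan: walk the space map with
``FS_IOC_GETFSMAP`` (the ``fsmap_advance`` helper comes from
``linux/fsmap.h``) and media-verify each file data extent with direct reads.
The 1MB read size is illustrative, and the gap coalescing and single-block
retry passes described above are omitted:

.. code-block:: c

	#include <stdlib.h>
	#include <limits.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fsmap.h>

	/* Hypothetical helper: read back one extent to check the media. */
	static void
	verify_extent(int diskfd, unsigned long long phys,
			unsigned long long len)
	{
		void	*buf;

		if (posix_memalign(&buf, 4096, 1U << 20))
			return;
		while (len) {
			size_t	count = len < (1U << 20) ? len : 1U << 20;

			if (pread(diskfd, buf, count, phys) < 0)
				break;	/* media error; retry per block */
			phys += count;
			len -= count;
		}
		free(buf);
	}

	static void
	media_scan(int fsfd, int diskfd)	/* diskfd opened O_DIRECT */
	{
		struct fsmap_head	*head;
		struct fsmap		*p;
		unsigned int		i;

		head = calloc(1, fsmap_sizeof(128));
		head->fmh_count = 128;

		/* Query from the start to the end of every device. */
		head->fmh_keys[1].fmr_device = UINT_MAX;
		head->fmh_keys[1].fmr_flags = UINT_MAX;
		head->fmh_keys[1].fmr_physical = ULLONG_MAX;
		head->fmh_keys[1].fmr_owner = ULLONG_MAX;
		head->fmh_keys[1].fmr_offset = ULLONG_MAX;

		while (!ioctl(fsfd, FS_IOC_GETFSMAP, head) &&
		       head->fmh_entries) {
			for (i = 0; i < head->fmh_entries; i++) {
				p = &head->fmh_recs[i];

				/* Skip free space, metadata, xattr blocks. */
				if (p->fmr_flags & (FMR_OF_SPECIAL_OWNER |
						    FMR_OF_EXTENT_MAP |
						    FMR_OF_ATTR_FORK))
					continue;

				/* GETFSMAP units are bytes. */
				verify_extent(diskfd, p->fmr_physical,
						p->fmr_length);
			}
			fsmap_advance(head);	/* resume after last record */
		}
		free(head);
	}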

7. Conclusion and Future Work
=============================

It is hoped that the reader has followed the designs laid out in this
document and now has some familiarity with how XFS performs online rebuilding
of its metadata indices, and how filesystem users can interact with that
functionality.
Although the scope of this work is daunting, it is hoped that this guide will
make it easier for code readers to understand what has been built, for whom
it has been built, and why.
Please feel free to contact the XFS mailing list with questions.

XFS_IOC_EXCHANGE_RANGE
----------------------

As discussed earlier, a second frontend to the atomic file mapping exchange
mechanism is a new ioctl call that userspace programs can use to commit
updates to files atomically.
This frontend has been out for review for several years now, though the
necessary refinements to online repair and lack of customer demand mean that
the proposal has not been pushed very hard.

File Content Exchanges with Regular User Files
``````````````````````````````````````````````

As mentioned earlier, XFS has long had the ability to swap extents between
files, which is used almost exclusively by ``xfs_fsr`` to defragment files.
The earliest form of this was the fork swap mechanism, where the entire
contents of data forks could be exchanged between two files by exchanging the
raw bytes in each inode fork's immediate area.
When XFS v5 came along with self-describing metadata, this old mechanism grew
some log support to continue rewriting the owner fields of BMBT blocks during
log recovery.
When the reverse mapping btree was later added to XFS, the only way to
maintain the consistency of the fork mappings with the reverse mapping index
was to develop an iterative mechanism that used deferred bmap and rmap
operations to swap mappings one at a time.
This mechanism is identical to steps 2-3 from the procedure above except for
the new tracking items, because the atomic file mapping exchange mechanism is
an iteration of an existing mechanism and not something totally novel.
For the narrow case of file defragmentation, the file contents must be
identical, so the recovery guarantees are not much of a gain.

Atomic file content exchanges are much more flexible than the existing
swapext implementations because they can guarantee that the caller never sees
a mix of old and new contents even after a crash, and they can operate on two
arbitrary file fork ranges.
The extra flexibility enables several new use cases:

- **Atomic commit of file writes**: A userspace process opens a file that it
  wants to update.
  Next, it opens a temporary file and calls the file clone operation to
  reflink the first file's contents into the temporary file.
  Writes to the original file should instead be written to the temporary
  file.
  Finally, the process calls the atomic file mapping exchange system call
  (``XFS_IOC_EXCHANGE_RANGE``) to exchange the file contents, thereby
  committing all of the updates to the original file, or none of them.
  A sketch of this flow appears after this list.

.. _exchrange_if_unchanged:

- **Transactional file updates**: The same mechanism as above, but the caller
  only wants the commit to occur if the original file's contents have not
  changed.
  To make this happen, the calling process snapshots the file modification
  and change timestamps of the original file before reflinking its data to
  the temporary file.
  When the program is ready to commit the changes, it passes the timestamps
  into the kernel as arguments to the atomic file mapping exchange system
  call.
  The kernel only commits the changes if the provided timestamps match the
  original file.
  A new ioctl (``XFS_IOC_COMMIT_RANGE``) is provided to perform this.

- **Emulation of atomic block device writes**: Export a block device with a
  logical sector size matching the filesystem block size to force all writes
  to be aligned to the filesystem block size.
  Stage all writes to a temporary file, and when that is complete, call the
  atomic file mapping exchange system call with a flag to indicate that holes
  in the temporary file should be ignored.
  This emulates an atomic device write in software, and can support arbitrary
  scattered writes.
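
The following sketch of the atomic commit flow assumes the
``struct xfs_exchange_range`` UAPI found in recent kernels' ``xfs_fs.h``;
flag names should be double-checked against the headers, and all error
handling is omitted:

.. code-block:: c

	#define _GNU_SOURCE		/* O_TMPFILE */
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>		/* FICLONE */
	#include <xfs/xfs_fs.h>		/* XFS_IOC_EXCHANGE_RANGE */

	static int
	commit_file_update(const char *dir, const char *path)
	{
		struct xfs_exchange_range	xchg = { };
		int				origfd, tmpfd;

		origfd = open(path, O_RDWR);
		tmpfd = open(dir, O_TMPFILE | O_RDWR, 0600);

		/* Share the original contents with the temporary file. */
		ioctl(tmpfd, FICLONE, origfd);

		/* Stage the updates in the temporary file. */
		pwrite(tmpfd, "new data", 8, 0);
		fsync(tmpfd);

		/*
		 * Exchange all mappings from offset 0 to EOF, atomically
		 * committing the staged writes to the original file.
		 */
		xchg.file1_fd = tmpfd;
		xchg.flags = XFS_EXCHANGE_RANGE_TO_EOF;
		return ioctl(origfd, XFS_IOC_EXCHANGE_RANGE, &xchg);
	}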

Vectorized Scrub
----------------

As it turns out, the :ref:`refactoring <scrubrepair>` of repair items
mentioned earlier was a catalyst for enabling a vectorized scrub system call.
Since 2018, the cost of making a kernel call has increased considerably on
some systems to mitigate the effects of speculative execution attacks.
This incentivizes program authors to make as few system calls as possible to
reduce the number of times an execution path crosses a security boundary.

With vectorized scrub, userspace pushes to the kernel the identity of a
filesystem object, a list of scrub types to run against that object, and a
simple representation of the data dependencies between the selected scrub
types.
The kernel executes as much of the caller's plan as it can until it hits a
dependency that cannot be satisfied due to a corruption, and tells userspace
how much was accomplished.
It is hoped that ``io_uring`` will pick up enough of this functionality that
online fsck can use that instead of adding a separate vectored scrub system
call to XFS.
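
The declarations below illustrate the shape of such a vectored interface,
paraphrased from the patchsets listed next; the struct and field names are
illustrative and may not match whatever UAPI is eventually settled on:

.. code-block:: c

	#include <linux/types.h>

	/* One scrub subitem to run against the object. */
	struct scrub_vec {
		__u32	sv_type;	/* XFS_SCRUB_TYPE_* of this item */
		__u32	sv_flags;	/* scrub flags in, results out */
		__s32	sv_ret;		/* errno result for this item */
		__u32	sv_reserved;	/* must be zero */
	};

	/*
	 * One call carries the object's identity plus an array of
	 * sv_nr vectors; the kernel runs them in order and reports
	 * per-vector results, so one boundary crossing covers many
	 * scrub requests.
	 */
	struct scrub_vec_head {
		__u64	svh_ino;	/* inode number, if applicable */
		__u32	svh_gen;	/* inode generation */
		__u32	svh_agno;	/* AG number, if applicable */
		__u32	svh_flags;
		__u16	svh_rest_us;	/* pause between vectors, in us */
		__u16	svh_nr;		/* number of vectors */
	};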

The relevant patchsets are the
`kernel vectorized scrub
<https://git.kernel.org/pub/scm/linux/kernel/
and
`userspace vectorized scrub
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Quality of Service Targets for Scrub
------------------------------------

One serious shortcoming of the online fsck code is that the amount of time
that it can spend in the kernel holding resource locks is basically
unbounded.
Userspace is allowed to send a fatal signal to the process, which will cause
``xfs_scrub`` to exit when it reaches a good stopping point, but there's no
way for userspace to provide a time budget to the kernel.
Given that the scrub codebase has helpers to detect fatal signals, it would
not be too much work to allow userspace to specify a timeout for a
scrub/repair operation and abort the operation if it exceeds budget.
However, most repair functions have the property that once they begin to
touch ondisk metadata, the operation cannot be cancelled cleanly, after which
a QoS timeout is no longer useful.
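
Pending a kernel-side budget, the existing fatal-signal behavior can at least
bound the program's total runtime from userspace, as in the sketch below (the
one-hour budget and mount point are illustrative, and a real wrapper would
notice an early exit instead of always sleeping out the whole budget):

.. code-block:: c

	#include <signal.h>
	#include <unistd.h>
	#include <sys/wait.h>

	int
	main(void)
	{
		pid_t	pid = fork();

		if (pid == 0) {
			/* Child: run the scrub. */
			execlp("xfs_scrub", "xfs_scrub", "/mnt",
					(char *)NULL);
			_exit(1);
		}

		/*
		 * Parent: after the budget expires, ask the child to
		 * stop at its next good stopping point.
		 */
		sleep(3600);
		kill(pid, SIGTERM);
		waitpid(pid, NULL, 0);
		return 0;
	}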

Defragmenting Free Space
------------------------

Over the years, many XFS users have requested the creation of a program to
clear a portion of the physical storage underlying a filesystem so that it
becomes a contiguous chunk of free space.
Call this free space defragmenter ``clearspace`` for short.

The first piece the ``clearspace`` program needs is the ability to read the
reverse mapping index from userspace.
This already exists in the form of the ``FS_IOC_GETFSMAP`` ioctl.
The second piece it needs is a new fallocate mode
(``FALLOC_FL_MAP_FREE_SPACE``) that allocates the free space in a region and
maps it to a file.
Call this file the "space collector" file.
The third piece is the ability to force an online repair.

To clear all the metadata out of a portion of physical storage, clearspace
uses the new fallocate map-freespace call to map any free space in that
region to the space collector file.
Next, clearspace finds all metadata blocks in that region by way of
``GETFSMAP`` and issues forced repair requests on the data structure.
This often results in the metadata being rebuilt somewhere that is not being
cleared.
After each relocation, clearspace calls the "map free space" function again
to collect any newly freed space in the region being cleared.

To clear all the file data out of a portion of the physical storage,
clearspace uses the FSMAP information to find relevant file data blocks.
Having identified a good target, it uses the ``FICLONERANGE`` call on that
part of the file to try to share the physical space with a dummy file.
Cloning the extent means that the original owners cannot overwrite the
contents; any changes will be written somewhere else via copy-on-write.
Clearspace makes its own copy of the frozen extent in an area that is not
being cleared, and uses ``FIDEDUPERANGE`` (or the :ref:`atomic file content
exchanges <exchrange_if_unchanged>` feature) to change the target file's data
extent mapping away from the area being cleared.
When all other mappings have been moved, clearspace reflinks the space into
the space collector file so that it becomes unavailable.
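
The freeze-and-remap step can be sketched with ioctls that exist today, as
below; offsets and lengths are in bytes, error handling is omitted, and the
``FALLOC_FL_MAP_FREE_SPACE`` piece of the proposal is not shown:

.. code-block:: c

	#include <stdint.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>

	/*
	 * Freeze @len bytes of @victimfd at @off by sharing them with a
	 * dummy file; the owner's future writes now go elsewhere via CoW.
	 */
	static int
	freeze_extent(int victimfd, uint64_t off, uint64_t len, int dummyfd)
	{
		struct file_clone_range	fcr = {
			.src_fd		= victimfd,
			.src_offset	= off,
			.src_length	= len,
			.dest_offset	= off,
		};

		return ioctl(dummyfd, FICLONERANGE, &fcr);
	}

	/*
	 * After copying the frozen extent to @copyfd (outside the region
	 * being cleared), switch the victim's mapping over to the copy.
	 */
	static int
	remap_away(int copyfd, uint64_t copyoff, int victimfd, uint64_t off,
		   uint64_t len)
	{
		struct file_dedupe_range	*dr;
		int				ret;

		dr = calloc(1, sizeof(*dr) +
				sizeof(struct file_dedupe_range_info));
		dr->src_offset = copyoff;
		dr->src_length = len;
		dr->dest_count = 1;
		dr->info[0].dest_fd = victimfd;
		dr->info[0].dest_offset = off;

		/* Dedupe only remaps if both ranges hold identical data. */
		ret = ioctl(copyfd, FIDEDUPERANGE, dr);
		if (!ret && dr->info[0].status != FILE_DEDUPE_RANGE_SAME)
			ret = -1;
		free(dr);
		return ret;
	}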

There are further optimizations that could apply to the above algorithm.
To clear a piece of physical storage that has a high sharing factor, it is
strongly desirable to retain this sharing factor.
In fact, these extents should be moved first to maximize the sharing factor
after the operation completes.
To make this work smoothly, clearspace needs a new ioctl
(``FS_IOC_GETREFCOUNTS``) to report reference count information to userspace.
With the refcount information exposed, clearspace can quickly find the
longest, most shared data extents in the filesystem, and target them first.

**Future Work Question**: How might the filesystem move inode chunks?

*Answer*: To move inode chunks, Dave Chinner constructed a prototype program
that creates a new file with the old contents and then locklessly runs around
the filesystem updating directory entries.
The operation cannot complete if the filesystem goes down in the middle.
That problem isn't totally insurmountable: create an inode remapping table
hidden behind a jump label, and a log item that tracks the kernel walking the
filesystem to update directory entries.
The trouble is, the kernel can't do anything about open files, since it
cannot revoke them.

**Future Work Question**: Can static keys be used to minimize the cost of
supporting ``revoke()`` on XFS files?

*Answer*: Yes.
Until the first revocation, the bailout code need not be in the call path at
all.

The relevant patchsets are the
`kernel freespace defrag
<https://git.kernel.org/pub/scm/linux/kernel/
and
`userspace freespace defrag
<https://git.kernel.org/pub/scm/linux/kernel/
series.

Shrinking Filesystems
---------------------

Removing the end of the filesystem ought to be a simple matter of evacuating
the data and metadata at the end of the filesystem, and handing the freed
space to the shrink code.
That requires an evacuation of the space at the end of the filesystem, which
is a use of free space defragmentation!