.. SPDX-License-Identifier: GPL-2.0
.. _xfs_online_fsck_design:

..
        Mapping of heading styles within this document:
        Heading 1 uses "====" above and below
        Heading 2 uses "===="
        Heading 3 uses "----"
        Heading 4 uses "````"
        Heading 5 uses "^^^^"
        Heading 6 uses "~~~~"
        Heading 7 uses "...."

        Sections are manually numbered because apparently that's what everyone
        does in the kernel.

======================
XFS Online Fsck Design
======================

This document captures the design of the online filesystem check feature for
XFS.
The purpose of this document is threefold:

- To help kernel distributors understand exactly what the XFS online fsck
  feature is, and issues about which they should be aware.

- To help people reading the code to familiarize themselves with the relevant
  concepts and design points before they start digging into the code.

- To help developers maintaining the system by capturing the reasons
  supporting higher level decision making.

As the online fsck code is merged, the links in this document to topic branches
will be replaced with links to code.

This document is licensed under the terms of the GNU General Public License,
v2.
The primary author is Darrick J. Wong.

This design document is split into seven parts.
Part 1 defines what fsck tools are and the motivations for writing a new one.
Parts 2 and 3 present a high level overview of how the online fsck process
works and how it is tested to ensure correct functionality.
Part 4 discusses the user interface and the intended usage modes of the new
program.
Parts 5 and 6 show off the high level components and how they fit together, and
then present case studies of how each repair function actually works.
Part 7 sums up what has been discussed so far and speculates about what else
might be built atop online fsck.

.. contents:: Table of Contents
   :local:
1. What is a Filesystem Check?
==============================

A Unix filesystem has four main responsibilities:

- Provide a hierarchy of names through which application programs can associate
  arbitrary blobs of data for any length of time,

- Virtualize physical storage media across those names,

- Retrieve the named data blobs at any time, and

- Examine resource usage.

Metadata directly supporting these functions (e.g. files, directories, space
mappings) are sometimes called primary metadata.
Secondary metadata (e.g. reverse mapping and directory parent pointers) support
operations internal to the filesystem, such as internal consistency checking
and reorganization.
Summary metadata, as the name implies, condense information contained in
primary metadata for performance reasons.

The filesystem check (fsck) tool examines all the metadata in a filesystem
to look for errors.
In addition to looking for obvious metadata corruptions, fsck also
cross-references different types of metadata records with each other to look
for inconsistencies.
People do not like losing data, so most fsck tools also contain some ability
to correct any problems found.
As a word of caution -- the primary goal of most Linux fsck tools is to restore
the filesystem metadata to a consistent state, not to maximize the data
recovered.
That precedent will not be challenged here.

Filesystems of the 20th century generally lacked any redundancy in the ondisk
format, which means that fsck can only respond to errors by erasing files until
errors are no longer detected.
More recent filesystem designs contain enough redundancy in their metadata that
it is now possible to regenerate data structures when non-catastrophic errors
occur; this capability aids both strategies.

+--------------------------------------------------------------------------+
| **Note**:                                                                |
+--------------------------------------------------------------------------+
| System administrators avoid data loss by increasing the number of        |
| separate storage systems through the creation of backups; and they avoid |
| downtime by increasing the redundancy of each storage system through the |
| creation of RAID arrays.                                                 |
| fsck tools address only the first problem.                               |
+--------------------------------------------------------------------------+

TLDR; Show Me the Code!
-----------------------

Code is posted to the kernel.org git trees as follows:
`kernel changes <https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-symlink>`_,
`userspace changes <https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-media-scan-service>`_, and
`QA test changes <https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=repair-dirs>`_.
Each kernel patchset adding an online repair function will use the same branch
name across the kernel, xfsprogs, and fstests git repos.

Existing Tools
--------------

The online fsck tool described here will be the third tool in the history of
XFS (on Linux) to check and repair filesystems.
Two programs precede it:

The first program, ``xfs_check``, was created as part of the XFS debugger
(``xfs_db``) and can only be used with unmounted filesystems.
It walks all metadata in the filesystem looking for inconsistencies in the
metadata, though it lacks any ability to repair what it finds.
Due to its high memory requirements and inability to repair things, this
program is now deprecated and will not be discussed further.

The second program, ``xfs_repair``, was created to be faster and more robust
than the first program.
Like its predecessor, it can only be used with unmounted filesystems.
It uses extent-based in-memory data structures to reduce memory consumption,
and tries to schedule readahead IO appropriately to reduce I/O waiting time
while it scans the metadata of the entire filesystem.
The most important feature of this tool is its ability to respond to
inconsistencies in file metadata and the directory tree by erasing things as
needed to eliminate problems.
Space usage metadata are rebuilt from the observed file metadata.

Problem Statement
-----------------

The current XFS tools leave several problems unsolved:

1. **User programs** suddenly **lose access** to the filesystem when unexpected
   shutdowns occur as a result of silent corruptions in the metadata.
   These occur **unpredictably** and often without warning.

2. **Users** experience a **total loss of service** during the recovery period
   after an **unexpected shutdown** occurs.

3. **Users** experience a **total loss of service** if the filesystem is taken
   offline to **look for problems** proactively.

4. **Data owners** cannot **check the integrity** of their stored data without
   reading all of it.
   This may expose them to substantial billing costs when a linear media scan
   performed by the storage system administrator might suffice.

5. **System administrators** cannot **schedule** a maintenance window to deal
   with corruptions if they **lack the means** to assess filesystem health
   while the filesystem is online.

6. **Fleet monitoring tools** cannot **automate periodic checks** of filesystem
   health when doing so requires **manual intervention** and downtime.

7. **Users** can be tricked into **doing things they do not desire** when
   malicious actors **exploit quirks of Unicode** to place misleading names
   in directories.

Given this definition of the problems to be solved and the actors who would
benefit, the proposed solution is a third fsck tool that acts on a running
filesystem.

This new third program has three components: an in-kernel facility to check
metadata, an in-kernel facility to repair metadata, and a userspace driver
program to drive fsck activity on a live filesystem.
``xfs_scrub`` is the name of the driver program.
The rest of this document presents the goals and use cases of the new fsck
tool, describes its major design points in connection to those goals, and
discusses the similarities and differences with existing tools.

+--------------------------------------------------------------------------+
| **Note**:                                                                |
+--------------------------------------------------------------------------+
| Throughout this document, the existing offline fsck tool can also be     |
| referred to by its current name "``xfs_repair``".                        |
| The userspace driver program for the new online fsck tool can be         |
| referred to as "``xfs_scrub``".                                          |
| The kernel portion of online fsck that validates metadata is called      |
| "online scrub", and the portion of the kernel that fixes metadata is     |
| called "online repair".                                                  |
+--------------------------------------------------------------------------+

The naming hierarchy is broken up into objects known as directories and files,
and the physical space is split into pieces known as allocation groups.
Sharding enables better performance on highly parallel systems and helps to
contain the damage when corruptions occur.
The division of the filesystem into principal objects (allocation groups and
inodes) means that there are ample opportunities to perform targeted checks and
repairs on a subset of the filesystem.

While this is going on, other parts continue processing IO requests.
Even if a piece of filesystem metadata can only be regenerated by scanning the
entire system, the scan can still be done in the background while other file
operations continue.

In summary, online fsck takes advantage of resource sharding and redundant
metadata to enable targeted checking and repair operations while the system
is running.
This capability will be coupled to automatic system management so that
autonomous self-healing of XFS maximizes service availability.

2. Theory of Operation
======================

Because it is necessary for online fsck to lock and scan live metadata objects,
online fsck consists of three separate code components.
The first is the userspace driver program ``xfs_scrub``, which is responsible
for identifying individual metadata items, scheduling work items for them,
reacting to the outcomes appropriately, and reporting results to the system
administrator.
The second and third are in the kernel, which implements functions to check
and repair each type of online fsck work item.

+------------------------------------------------------------------+
| **Note**:                                                        |
+------------------------------------------------------------------+
| For brevity, this document shortens the phrase "online fsck work |
| item" to "scrub item".                                           |
+------------------------------------------------------------------+

Scrub item types are delineated in a manner consistent with the Unix design
philosophy, which is to say that each item should handle one aspect of a
metadata structure, and handle it well.

Scope
-----

In principle, online fsck should be able to check and to repair everything that
the offline fsck program can handle.
However, online fsck cannot be running 100% of the time, which means that
latent errors may creep in after a scrub completes.
If these errors cause the next mount to fail, offline fsck is the only
solution.
This limitation means that maintenance of the offline fsck tool will continue.
A second limitation of online fsck is that it must follow the same resource
sharing and lock acquisition rules as the regular filesystem.
This means that scrub cannot take *any* shortcuts to save time, because doing
so could lead to concurrency problems.
In other words, online fsck is not a complete replacement for offline fsck, and
a complete run of online fsck may take longer than offline fsck.
However, both of these limitations are acceptable tradeoffs to satisfy the
different motivations of online fsck, which are to **minimize system downtime**
and to **increase predictability of operation**.

.. _scrubphases:

Phases of Work
--------------

The userspace driver program ``xfs_scrub`` splits the work of checking and
repairing an entire filesystem into seven phases.
Each phase concentrates on checking specific types of scrub items and depends
on the success of all previous phases.
The seven phases are as follows:

1. Collect geometry information about the mounted filesystem and computer,
   discover the online fsck capabilities of the kernel, and open the
   underlying storage devices.

2. Check allocation group metadata, all realtime volume metadata, and all quota
   files.
   Each metadata structure is scheduled as a separate scrub item.
   If corruption is found in the inode header or inode btree and ``xfs_scrub``
   is permitted to perform repairs, then those scrub items are repaired to
   prepare for phase 3.
   Repairs are implemented by using the information in the scrub item to
   resubmit the kernel scrub call with the repair flag enabled; this is
   discussed in the next section.
   Optimizations and all other repairs are deferred to phase 4.

3. Check all metadata of every file in the filesystem.
   Each metadata structure is also scheduled as a separate scrub item.
   If repairs are needed and ``xfs_scrub`` is permitted to perform repairs,
   and there were no problems detected during phase 2, then those scrub items
   are repaired immediately.
   Optimizations, deferred repairs, and unsuccessful repairs are deferred to
   phase 4.

4. All remaining repairs and scheduled optimizations are performed during this
   phase, if the caller permits them.
   Before starting repairs, the summary counters are checked and any necessary
   repairs are performed so that subsequent repairs will not fail the resource
   reservation step due to wildly incorrect summary counters.
   Unsuccessful repairs are requeued as long as forward progress on repairs is
   made somewhere in the filesystem.
   Free space in the filesystem is trimmed at the end of phase 4 if the
   filesystem is clean.

5. By the start of this phase, all primary and secondary filesystem metadata
   must be correct.
   Summary counters such as the free space counts and quota resource counts
   are checked and corrected.
   Directory entry names and extended attribute names are checked for
   suspicious entries such as control characters or confusing Unicode sequences
   appearing in names.

6. If the caller asks for a media scan, read all allocated and written data
   file extents in the filesystem.
   The ability to use hardware-assisted data file integrity checking is new
   to online fsck; neither of the previous tools has this capability.
   If media errors occur, they will be mapped to the owning files and reported.

7. Re-check the summary counters and present the caller with a summary of
   space usage and file counts.

This allocation of responsibilities will be :ref:`revisited <scrubcheck>`
later in this document.

Steps for Each Scrub Item
-------------------------

The kernel scrub code uses a three-step strategy for checking and repairing
the one aspect of a metadata object represented by a scrub item:

1. The scrub item of interest is checked for corruptions; opportunities for
   optimization; and for values that are directly controlled by the system
   administrator but look suspicious.
   If the item is not corrupt or does not need optimization, resources are
   released and the positive scan results are returned to userspace.
   If the item is corrupt or could be optimized but the caller does not permit
   this, resources are released and the negative scan results are returned to
   userspace.
   Otherwise, the kernel moves on to the second step.

2. The repair function is called to rebuild the data structure.
   Repair functions generally choose to rebuild a structure from other metadata
   rather than try to salvage the existing structure.
   If the repair fails, the scan results from the first step are returned to
   userspace.
   Otherwise, the kernel moves on to the third step.

3. In the third step, the kernel runs the same checks over the new metadata
   item to assess the efficacy of the repairs.
   The results of the reassessment are returned to userspace.
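
Userspace drives this loop through the ``XFS_IOC_SCRUB_METADATA`` ioctl.
The sketch below shows roughly how a driver program might check a single scrub
item and, if necessary, resubmit it with the repair flag enabled, assuming the
uapi definitions from ``xfs_fs.h``; it is a minimal illustration, not the
actual ``xfs_scrub`` code, which handles many more outcome flags, item types,
and retry policies.

.. code-block:: c

   /* Hypothetical driver-side sketch of the loop described above. */
   #include <string.h>
   #include <sys/ioctl.h>
   #include <xfs/xfs.h>    /* struct xfs_scrub_metadata and scrub flags */

   static int scrub_one_item(int fd, __u32 type)
   {
       struct xfs_scrub_metadata sm;

       memset(&sm, 0, sizeof(sm));
       sm.sm_type = type;    /* e.g. XFS_SCRUB_TYPE_INOBT */
       /* Per-AG items also set sm_agno; file items set sm_ino/sm_gen. */

       /* Step 1: check the item; the outcome comes back in sm_flags. */
       if (ioctl(fd, XFS_IOC_SCRUB_METADATA, &sm))
           return -1;
       if (!(sm.sm_flags & (XFS_SCRUB_OFLAG_CORRUPT |
                            XFS_SCRUB_OFLAG_PREEN)))
           return 0;    /* nothing to fix or optimize */

       /* Step 2: resubmit the same item with the repair flag set.
        * Step 3 happens in the kernel, which re-checks the new
        * structure and returns the reassessment in sm_flags. */
       sm.sm_flags = XFS_SCRUB_IFLAG_REPAIR;
       if (ioctl(fd, XFS_IOC_SCRUB_METADATA, &sm))
           return -1;
       return (sm.sm_flags & XFS_SCRUB_OFLAG_CORRUPT) ? -1 : 0;
   }

The file descriptor can be any open file on the target filesystem;
``xfs_scrub`` typically uses the mountpoint.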

Classification of Metadata
--------------------------

Each type of metadata object (and therefore each type of scrub item) is
classified as follows:

Primary Metadata
````````````````

Metadata structures in this category should be most familiar to filesystem
users either because they are directly created by the user or they index
objects created by the user.
Most filesystem objects fall into this class:

- Free space and reference count information

- Inode records and indexes

- Storage mapping information for file data

- Directories

- Extended attributes

- Symbolic links

- Quota limits

Scrub obeys the same rules as regular filesystem accesses for resource and lock
acquisition.

Primary metadata objects are the simplest for scrub to process.
The principal filesystem object (either an allocation group or an inode) that
owns the item being scrubbed is locked to guard against concurrent updates.
The check function examines every record associated with the type for obvious
errors and cross-references healthy records against other metadata to look for
inconsistencies.
Repairs for this class of scrub item are simple, since the repair function
starts by holding all the resources acquired in the previous step.
The repair function scans available metadata as needed to record all the
observations needed to complete the structure.
Next, it stages the observations in a new ondisk structure and commits it
atomically to complete the repair.
Finally, the storage from the old data structure is carefully reaped.
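
In outline, every primary-metadata repair follows the same gather, stage,
commit, and reap pattern.
The skeleton below uses hypothetical helper names purely to illustrate the
shape of that pattern; the real repair functions live under ``fs/xfs/scrub/``
and are considerably more involved.

.. code-block:: c

   /* All names below are illustrative, not the kernel's actual API. */
   struct xrep_sketch;    /* repair state: locks, staging area, etc. */

   int sketch_gather(struct xrep_sketch *rx);  /* scan other metadata */
   int sketch_stage(struct xrep_sketch *rx);   /* format new structure */
   int sketch_commit(struct xrep_sketch *rx);  /* atomic switch-over */
   void sketch_reap(struct xrep_sketch *rx);   /* free the old blocks */

   int sketch_rebuild_primary(struct xrep_sketch *rx)
   {
       int error;

       /* The owning AG or inode is already locked by the check step. */
       error = sketch_gather(rx);
       if (error)
           return error;
       error = sketch_stage(rx);
       if (error)
           return error;
       error = sketch_commit(rx);
       if (error)
           return error;
       sketch_reap(rx);
       return 0;
   }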

Because ``xfs_scrub`` locks a primary object for the duration of the repair,
this is effectively an offline repair operation performed on a subset of the
filesystem.
This minimizes the complexity of the repair code because it is not necessary to
handle concurrent updates from other threads, nor is it necessary to access
any other part of the filesystem.
As a result, indexed structures can be rebuilt very quickly, and programs
trying to access the damaged structure will be blocked until repairs complete.
The only infrastructure needed by the repair code is the staging area for
observations and a means to write new structures to disk.
Despite these limitations, the advantage that online repair holds is clear:
targeted work on individual shards of the filesystem avoids total loss of
service.

This mechanism is described in section 2.1 ("Off-Line Algorithm") of
V. Srinivasan and M. J. Carey, `"Performance of On-Line Index Construction
Algorithms" <https://minds.wisconsin.edu/bitstream/handle/1793/59524/TR1047.pdf>`_,
*Extending Database Technology*, pp. 293-309, 1992.

Most primary metadata repair functions stage their intermediate results in an
in-memory array prior to formatting the new ondisk structure, which is very
similar to the list-based algorithm discussed in section 2.3 ("List-Based
Algorithms") of Srinivasan.
However, any data structure builder that maintains a resource lock for the
duration of the repair is *always* an offline algorithm.

.. _secondary_metadata:

Secondary Metadata
``````````````````

Metadata structures in this category reflect records found in primary metadata,
but are only needed for online fsck or for reorganization of the filesystem.

Secondary metadata include:

- Reverse mapping information

- Directory parent pointers

This class of metadata is difficult for scrub to process because scrub attaches
to the secondary object but needs to check primary metadata, which runs counter
to the usual order of resource acquisition.
Frequently, this means that full filesystem scans are necessary to rebuild the
metadata.
Check functions can be limited in scope to reduce runtime.
Repairs, however, require a full scan of primary metadata, which can take a
long time to complete.
Under these conditions, ``xfs_scrub`` cannot lock resources for the entire
duration of the repair.

Instead, repair functions set up an in-memory staging structure to store
observations.
Depending on the requirements of the specific repair function, the staging
index will either have the same format as the ondisk structure or a design
specific to that repair function.
The next step is to release all locks and start the filesystem scan.
When the repair scanner needs to record an observation, the staging data are
locked long enough to apply the update.
While the filesystem scan is in progress, the repair function hooks the
filesystem so that it can apply pending filesystem updates to the staging
information.
Once the scan is done, the owning object is re-locked, the live data is used to
write a new ondisk structure, and the repairs are committed atomically.
The hooks are disabled and the staging area is freed.
Finally, the storage from the old data structure is carefully reaped.

Introducing concurrency helps online repair avoid various locking problems, but
comes at a high cost to code complexity.
Live filesystem code has to be hooked so that the repair function can observe
updates in progress.
The staging area has to become a fully functional parallel structure so that
updates can be merged from the hooks.
Finally, the hook, the filesystem scan, and the inode locking model must be
sufficiently well integrated that a hook event can decide if a given update
should be applied to the staging structure.

In theory, the scrub implementation could apply these same techniques for
primary metadata, but doing so would make it massively more complex and less
performant.
Programs attempting to access the damaged structures are not blocked from
operation, which may cause application failure or an unplanned filesystem
shutdown.

Inspiration for the secondary metadata repair strategy was drawn from section
2.4 of Srinivasan above, and sections 2 ("NSF: Index Build Without Side-File")
and 3.1.1 ("Duplicate Key Insert Problem") in C. Mohan, `"Algorithms for
Creating Indexes for Very Large Tables Without Quiescing Updates"
<https://dl.acm.org/doi/10.1145/130283.130337>`_, 1992.

The sidecar index mentioned above bears some resemblance to the side file
method mentioned in Srinivasan and Mohan.
Their method consists of an index builder that extracts relevant record data to
build the new structure as quickly as possible; and an auxiliary structure that
captures all updates that would be committed to the index by other threads were
the new index already online.
After the index building scan finishes, the updates recorded in the side file
are applied to the new index.
To avoid conflicts between the index builder and other writer threads, the
builder maintains a publicly visible cursor that tracks the progress of the
scan through the record space.
To avoid duplication of work between the side file and the index builder, side
file updates are elided when the record ID for the update is greater than the
cursor position within the record ID space.
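
The elision rule is simple enough to state in code.
The sketch below, with hypothetical names, shows the decision a live-update
hook would make: updates at or behind the scan cursor must be recorded in the
side file, while updates ahead of it can be skipped because the scan itself
will observe them.

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   struct scan_state {
       uint64_t cursor;    /* highest record ID already scanned */
   };

   /* Called from a live-update hook: record this update in the
    * side file, or let the scan pick it up later? */
   static bool side_file_wants_update(const struct scan_state *ss,
                                      uint64_t rec_id)
   {
       /* Records at or behind the cursor were already scanned,
        * so the side file must capture this change. */
       return rec_id <= ss->cursor;
   }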

To minimize changes to the rest of the codebase, XFS online repair keeps the
replacement index hidden until it's completely ready to go.
In other words, there is no attempt to expose the keyspace of the new index
while repair is running.
The complexity of such an approach would be very high and perhaps more
appropriate to building *new* indices.

**Future Work Question**: Can the full scan and live update code used to
facilitate a repair also be used to implement a comprehensive check?

*Answer*: In theory, yes. Check would be much stronger if each scrub function
employed these live scans to build a shadow copy of the metadata and then
compared the shadow records to the ondisk records.
However, doing that is a fair amount more work than what the checking functions
do now, because the live scans and hooks were developed much later.
Employing them for checking would also increase the runtime of those scrub
functions.

Summary Information
```````````````````

Metadata structures in this last category summarize the contents of primary
metadata records.
These are often used to speed up resource usage queries, and are many times
smaller than the primary metadata which they represent.

Examples of summary information include:

- Summary counts of free space and inodes

- File link counts from directories

- Quota resource usage counts

Check and repair require full filesystem scans, but resource and lock
acquisition follow the same paths as regular filesystem accesses.

The superblock summary counters have special requirements due to the underlying
implementation of the incore counters, and will be treated separately.
Check and repair of the other types of summary counters (quota resource counts
and file link counts) employ the same filesystem scanning and hooking
techniques as outlined above, but because the underlying data are sets of
integer counters, the staging data need not be a fully functional mirror of the
ondisk structure.

Inspiration for quota and file link count repair strategies was drawn from
sections 2.12 ("Online Index Operations") through 2.14 ("Incremental View
Maintenance") of G. Graefe, `"Concurrent Queries and Updates in Summary Views
and Their Indexes"
<http://www.odbms.org/wp-content/uploads/2014/06/Increment-locks.pdf>`_, 2011.

Since quotas are non-negative integer counts of resource usage, online
quotacheck can use the incremental view deltas described in section 2.14 to
track pending changes to the block and inode usage counts in each transaction,
and commit those changes to a dquot side file when the transaction commits.
Delta tracking is necessary for dquots because the index builder scans inodes,
whereas the data structure being rebuilt is an index of dquots.
Link count checking combines the view deltas and commit step into one because
it sets attributes of the objects being scanned instead of writing them to a
separate data structure.
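
To make the delta-tracking idea concrete, the sketch below (all names
hypothetical) accumulates the block and inode count changes made by one
transaction and folds them into the shadow dquot only at commit time, so that
an aborted transaction leaves no trace in the side file.

.. code-block:: c

   #include <stdint.h>

   struct shadow_dquot {
       int64_t bcount;     /* observed block usage */
       int64_t icount;     /* observed inode usage */
   };

   struct dquot_delta {
       int64_t dbcount;    /* blocks +/- in this transaction */
       int64_t dicount;    /* inodes +/- in this transaction */
   };

   /* Record a change made while the transaction is running. */
   static void delta_record(struct dquot_delta *d, int64_t blocks,
                            int64_t inodes)
   {
       d->dbcount += blocks;
       d->dicount += inodes;
   }

   /* Fold the deltas into the shadow dquot at transaction commit;
    * on abort, the deltas are simply discarded. */
   static void delta_commit(struct shadow_dquot *sd,
                            const struct dquot_delta *d)
   {
       sd->bcount += d->dbcount;
       sd->icount += d->dicount;
   }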

Each online fsck function will be discussed in case studies later in this
document.

Risk Management
---------------

During the development of online fsck, several risk factors were identified
that may make the feature unsuitable for certain distributors and users.
Steps can be taken to mitigate or eliminate those risks, though at a cost to
functionality.

- **Decreased performance**: Adding metadata indices to the filesystem
  increases the time cost of persisting changes to disk, and the reverse space
  mapping and directory parent pointers are no exception.
  System administrators who require the maximum performance can disable the
  reverse mapping features at format time, though this choice dramatically
  reduces the ability of online fsck to find inconsistencies and repair them.

- **Incorrect repairs**: As with all software, there might be defects in the
  software that result in incorrect repairs being written to the filesystem.
  Systematic fuzz testing (detailed in the next section) is employed by the
  authors to find bugs early, but it might not catch everything.
  The kernel build system provides Kconfig options (``CONFIG_XFS_ONLINE_SCRUB``
  and ``CONFIG_XFS_ONLINE_REPAIR``) to enable distributors to choose not to
  accept this risk.
  The xfsprogs build system has a configure option (``--enable-scrub=no``) that
  disables building of the ``xfs_scrub`` binary, though this is not a risk
  mitigation if the kernel functionality remains enabled.

- **Inability to repair**: Sometimes, a filesystem is too badly damaged to be
  repairable.
  If the keyspaces of several metadata indices overlap in some manner but a
  coherent narrative cannot be formed from records collected, then the repair
  fails.
  To reduce the chance that a repair will fail with a dirty transaction and
  render the filesystem unusable, the online repair functions have been
  designed to stage and validate all new records before committing the new
  structure.

- **Misbehavior**: Online fsck requires many privileges -- raw IO to block
  devices, opening files by handle, ignoring Unix discretionary access control,
  and the ability to perform administrative changes.
  Running this automatically in the background scares people, so the systemd
  background service is configured to run with only the privileges required.
  Obviously, this cannot address certain problems like the kernel crashing or
  deadlocking, but it should be sufficient to prevent the scrub process from
  escaping and reconfiguring the system.
  The cron job does not have this protection.

- **Fuzz Kiddiez**: There are many people now who seem to think that running
  automated fuzz testing of ondisk artifacts to find mischievous behavior and
  spraying exploit code onto the public mailing list for instant zero-day
  disclosure is somehow of some social benefit.
  In the view of this author, the benefit is realized only when the fuzz
  operators help to **fix** the flaws, but this opinion apparently is not
  widely shared among security "researchers".
  The XFS maintainers' continuing ability to manage these events presents an
  ongoing risk to the stability of the development process.
  Automated testing should front-load some of the risk while the feature is
  considered EXPERIMENTAL.

Many of these risks are inherent to software programming.
Despite this, it is hoped that this new functionality will prove useful in
reducing unexpected downtime.

3. Testing Plan
===============

As stated before, fsck tools have three main goals:

1. Detect inconsistencies in the metadata;

2. Eliminate those inconsistencies; and

3. Minimize further loss of data.

Demonstrations of correct operation are necessary to build users' confidence
that the software behaves within expectations.
Unfortunately, it was not really feasible to perform regular exhaustive testing
of every aspect of a fsck tool until the introduction of low-cost virtual
machines with high-IOPS storage.
With ample hardware availability in mind, the testing strategy for the online
fsck project involves differential analysis against the existing fsck tools and
systematic testing of every attribute of every type of metadata object.
Testing can be split into four major categories, as discussed below.

Integrated Testing with fstests
-------------------------------

The primary goal of any free software QA effort is to make testing as
inexpensive and widespread as possible to maximize the scaling advantages of
the community.
In other words, testing should maximize the breadth of filesystem configuration
scenarios and hardware setups.
This improves code quality by enabling the authors of online fsck to find and
fix bugs early, and helps developers of new features to find integration
issues earlier in their development effort.

The Linux filesystem community shares a common QA testing suite,
`fstests <https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git/>`_, for
functional and regression testing.
Even before development work began on online fsck, fstests (when run on XFS)
would run both the ``xfs_check`` and ``xfs_repair -n`` commands on the test and
scratch filesystems between each test.
This provides a level of assurance that the kernel and the fsck tools stay in
alignment about what constitutes consistent metadata.
During development of the online checking code, fstests was modified to run
``xfs_scrub -n`` between each test to ensure that the new checking code
produces the same results as the two existing fsck tools.

To start development of online repair, fstests was modified to run
``xfs_repair`` to rebuild the filesystem's metadata indices between tests.
This ensures that offline repair does not crash, leave a corrupt filesystem
after it exits, or trigger complaints from the online check.
This also established a baseline for what can and cannot be repaired offline.
To complete the first phase of development of online repair, fstests was
modified to be able to run ``xfs_scrub`` in a "force rebuild" mode.
This enables a comparison of the effectiveness of online repair as compared to
the existing offline repair tools.

General Fuzz Testing of Metadata Blocks
---------------------------------------

XFS benefits greatly from having a very robust debugging tool, ``xfs_db``.

Before development of online fsck even began, a set of fstests was created
to test the rather common fault that entire metadata blocks get corrupted.
This required the creation of fstests library code that can create a filesystem
containing every possible type of metadata object.
Next, individual test cases were created to create a test filesystem, identify
a single block of a specific type of metadata object, trash it with the
existing ``blocktrash`` command in ``xfs_db``, and test the reaction of a
particular metadata validation strategy.

This earlier test suite enabled XFS developers to test the ability of the
in-kernel validation functions and the ability of the offline fsck tool to
detect and eliminate the inconsistent metadata.
This part of the test suite was extended to cover online fsck in exactly the
same manner.

In other words, for a given fstests filesystem configuration:

* For each metadata object existing on the filesystem:

  * Write garbage to it

  * Test the reactions of:

    1. The kernel verifiers to stop obviously bad metadata
    2. Offline repair (``xfs_repair``) to detect and fix
    3. Online repair (``xfs_scrub``) to detect and fix
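
The shape of these tests can be illustrated with a short userspace sketch.
This is *not* the fstests implementation -- the real tests drive ``xfs_db``
and its ``blocktrash`` command from shell code -- and the device path and
block geometry below are hypothetical; it merely demonstrates the "overwrite
an entire metadata block and observe the reaction" idea:

.. code-block:: c

    /* Illustrative stand-in for ``xfs_db blocktrash``: overwrite one
     * filesystem block on an unmounted scratch device with garbage. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        const off_t  victim_fsbno = 12345;  /* hypothetical victim block */
        const size_t blocksize = 4096;      /* hypothetical block size */
        char         garbage[4096];
        size_t       i;
        int          fd;

        /* Fill the buffer with pseudorandom garbage. */
        for (i = 0; i < blocksize; i++)
            garbage[i] = random();

        fd = open("/dev/sdf", O_WRONLY);    /* scratch device only! */
        if (fd < 0 || pwrite(fd, garbage, blocksize,
                             victim_fsbno * blocksize) < 0) {
            perror("trash block");
            return 1;
        }
        close(fd);

        /* Now mount the filesystem and observe whether the verifiers,
         * xfs_repair, and xfs_scrub notice and recover from the damage. */
        return 0;
    }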

Targeted Fuzz Testing of Metadata Records
-----------------------------------------

The testing plan for online fsck includes extending the existing fs testing
infrastructure to provide a much more powerful facility: targeted fuzz testing
of every metadata field of every metadata object in the filesystem.
``xfs_db`` can modify every field of every metadata structure in every
block in the filesystem to simulate the effects of memory corruption and
software bugs.
Given that fstests already contains the ability to create a filesystem
containing every metadata format known to the filesystem, ``xfs_db`` can be
used to perform exhaustive fuzz testing!

For a given fstests filesystem configuration:

* For each metadata object existing on the filesystem...

  * For each record inside that metadata object...

    * For each field inside that record...

      * For each conceivable type of transformation that can be applied to a
        bit field (see the sketch below)...

        1. Clear all bits
        2. Set all bits
        3. Toggle the most significant bit
        4. Toggle the middle bit
        5. Toggle the least significant bit
        6. Add a small quantity
        7. Subtract a small quantity
        8. Randomize the contents

        * ...test the reactions of:

          1. The kernel verifiers to stop obviously bad metadata
          2. Offline checking (``xfs_repair -n``)
          3. Offline repair (``xfs_repair``)
          4. Online checking (``xfs_scrub -n``)
          5. Online repair (``xfs_scrub``)
          6. Both repair tools (``xfs_scrub`` and then ``xfs_repair`` if
             online repair doesn't succeed)

This is quite the combinatoric explosion!

Fortunately, having this much test coverage makes it easy for XFS developers to
check the responses of XFS' fsck tools.
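
To make the eight transformations concrete, here is a hedged sketch of how a
fuzzer might apply them to a field up to 64 bits wide.
The verbs mirror the list above; the enum, function, and constants are
invented for illustration:

.. code-block:: c

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical encoding of the eight fuzz verbs described above. */
    enum fuzzverb {
        FUZZ_ZEROES,    /* 1. clear all bits */
        FUZZ_ONES,      /* 2. set all bits */
        FUZZ_FIRSTBIT,  /* 3. toggle the most significant bit */
        FUZZ_MIDDLEBIT, /* 4. toggle the middle bit */
        FUZZ_LASTBIT,   /* 5. toggle the least significant bit */
        FUZZ_ADD,       /* 6. add a small quantity */
        FUZZ_SUB,       /* 7. subtract a small quantity */
        FUZZ_RANDOM,    /* 8. randomize the contents */
    };

    /* Apply one transformation to a field that is @nbits wide. */
    static uint64_t fuzz_field(uint64_t val, unsigned int nbits,
                               enum fuzzverb v)
    {
        uint64_t mask = nbits >= 64 ? ~0ULL : (1ULL << nbits) - 1;

        switch (v) {
        case FUZZ_ZEROES:
            return 0;
        case FUZZ_ONES:
            return mask;
        case FUZZ_FIRSTBIT:
            return val ^ (1ULL << (nbits - 1));
        case FUZZ_MIDDLEBIT:
            return val ^ (1ULL << (nbits / 2));
        case FUZZ_LASTBIT:
            return val ^ 1;
        case FUZZ_ADD:
            return (val + 3) & mask;
        case FUZZ_SUB:
            return (val - 3) & mask;
        case FUZZ_RANDOM:
            return (((uint64_t)random() << 32) ^ random()) & mask;
        }
        return val;
    }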

Since the introduction of the fuzz testing framework, these tests have been
used to discover incorrect repair code and missing functionality for entire
classes of metadata objects in ``xfs_repair``.
The enhanced testing was used to finalize the deprecation of ``xfs_check`` by
confirming that ``xfs_repair`` could detect at least as many corruptions as
the older tool.

These tests have been very valuable for ``xfs_scrub`` in the same ways -- they
allow the online fsck developers to compare online fsck against offline fsck,
and they enable XFS developers to find deficiencies in the code base.

Proposed patchsets include
`general fuzzer improvements
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=fuzzer-improvements>`_,
`fuzzing baselines
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=fuzz-baseline>`_,
and `improvements in fuzz testing comprehensiveness
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=more-fuzz-testing>`_.

Stress Testing
--------------

A requirement unique to online fsck is the ability to operate on a filesystem
concurrently with regular workloads.
Although it is of course impossible to run ``xfs_scrub`` with *zero* observable
impact on the running system, the online repair code should never introduce
inconsistencies into the filesystem metadata, and regular workloads should
never notice resource starvation.
To verify that these conditions are being met, fstests has been enhanced in
the following ways:

* For each scrub item type, create a test to exercise checking that item type
  while running ``fsstress``.
* For each scrub item type, create a test to exercise repairing that item type
  while running ``fsstress``.
* Race ``fsstress`` and ``xfs_scrub -n`` to ensure that checking the whole
  filesystem doesn't cause problems.
* Race ``fsstress`` and ``xfs_scrub`` in force-rebuild mode to ensure that
  force-repairing the whole filesystem doesn't cause problems.
* Race ``xfs_scrub`` in check and force-repair mode against ``fsstress`` while
  freezing and thawing the filesystem.
* Race ``xfs_scrub`` in check and force-repair mode against ``fsstress`` while
  remounting the filesystem read-only and read-write.
* The same, but running ``fsx`` instead of ``fsstress``. (Not done yet?)

Success is defined by the ability to run all of these tests without observing
any unexpected filesystem shutdowns due to corrupted metadata, kernel hang
check warnings, or any other sort of mischief.

Proposed patchsets include `general stress testing
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=race-scrub-and-mount-state-changes>`_
and the `evolution of existing per-function stress testing
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=refactor-scrub-stress>`_.

4. User Interface
=================

The primary user of online fsck is the system administrator, just like offline
repair.
Online fsck presents two modes of operation to administrators: a foreground
CLI process for online fsck on demand, and a background service that performs
autonomous checking and repair.

Checking on Demand
------------------

For administrators who want the absolute freshest information about the
metadata in a filesystem, ``xfs_scrub`` can be run as a foreground process on
a command line.
The program checks every piece of metadata in the filesystem while the
administrator waits for the results to be reported, just like the existing
``xfs_repair`` tool.
Both tools share a ``-n`` option to perform a read-only scan, and a ``-v``
option to increase the verbosity of the information reported.

A new feature of ``xfs_scrub`` is the ``-x`` option, which employs the error
correction capabilities of the hardware to check data file contents.
The media scan is not enabled by default because it may dramatically increase
program runtime and consume a lot of bandwidth on older storage hardware.

The output of a foreground invocation is captured in the system log.

The ``xfs_scrub_all`` program walks the list of mounted filesystems and
initiates ``xfs_scrub`` for each of them in parallel.
It serializes scans for any filesystems that resolve to the same top level
kernel block device to prevent resource overconsumption.

Background Service
------------------

To reduce the workload of system administrators, the ``xfs_scrub`` package
provides a suite of `systemd <https://systemd.io/>`_ timers and services that
run online fsck automatically on weekends by default.
The background service configures scrub to run with as little privilege as
possible, the lowest CPU and IO priority, and in a CPU-constrained single
threaded mode.
This can be tuned by the systemd administrator at any time to suit the latency
and throughput requirements of customer workloads.

The output of the background service is also captured in the system log.
If desired, reports of failures (either due to inconsistencies or mere runtime
errors) can be emailed automatically by setting the ``EMAIL_ADDR`` environment
variable in the following service files:

* ``xfs_scrub_fail@.service``
* ``xfs_scrub_media_fail@.service``
* ``xfs_scrub_all_fail.service``

The decision to enable the background scan is left to the system administrator.
This can be done by enabling either of the following services:

* ``xfs_scrub_all.timer`` on systemd systems
* ``xfs_scrub_all.cron`` on non-systemd systems

This automatic weekly scan is configured out of the box to perform an
additional media scan of all file data once per month.
This is less foolproof than, say, storing file data block checksums, but much
more performant if application software provides its own integrity checking,
redundancy can be provided elsewhere above the filesystem, or the storage
device's integrity guarantees are deemed sufficient.

The systemd unit file definitions have been subjected to a security audit
(as of systemd 249) to ensure that the xfs_scrub processes have as little
access to the rest of the system as possible.
This was performed via ``systemd-analyze security``, after which privileges
were restricted to the minimum required, sandboxing and system call filtering
were set up to the maximal extent possible, and access to the filesystem tree
was restricted to the minimum needed to start the program and access the
filesystem being scanned.
The service definition files restrict CPU usage to 80% of one CPU core, and
apply as nice of a priority to IO and CPU scheduling as possible.
This measure was taken to minimize delays in the rest of the system.
No such hardening has been performed for the cron job.

Proposed patchset:
`Enabling the xfs_scrub background service
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-media-scan-service>`_.

Health Reporting
----------------

XFS caches a summary of each filesystem's health status in memory.
The information is updated whenever ``xfs_scrub`` is run, or whenever
inconsistencies are detected in the filesystem metadata during regular
operations.
System administrators should use the ``health`` command of ``xfs_spaceman`` to
retrieve this information in a human-readable format.
If problems have been observed, the administrator can schedule a reduced
service window to run the online repair tool to correct the problem.
Failing that, the administrator can decide to schedule a maintenance window to
run the traditional offline repair tool to correct the problem.

**Future Work Question**: Should the health reporting integrate with the new
inotify fs error notification system?
Would it be helpful for sysadmins to have a daemon to listen for corruption
notifications and initiate a repair?

*Answer*: These questions remain unanswered, but should be a part of the
conversation with early adopters and potential downstream users of XFS.

Proposed patchsets include
`wiring up health reports to correction returns
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=corruption-health-reports>`_
and
`preservation of sickness info during memory reclaim
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=indirect-health-reporting>`_.

5. Kernel Algorithms and Data Structures
========================================

This section discusses the key algorithms and data structures of the kernel
code that provide the ability to check and repair metadata while the system
is running.
The first chapters in this section reveal the pieces that provide the
foundation for checking metadata.
The remainder of this section presents the mechanisms through which XFS
regenerates itself.

Self Describing Metadata
------------------------

Starting with XFS version 5 in 2012, XFS updated the format of nearly every
ondisk block header to record a magic number, a checksum, a universally
"unique" identifier (UUID), an owner code, the ondisk address of the block,
and a log sequence number.
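
As a rough model of those fields -- not the actual ondisk layout, which varies
by structure and is defined in ``fs/xfs/libxfs/`` -- a v5 self-describing
block header carries information like this:

.. code-block:: c

    /* Simplified illustration of a v5 self-describing block header;
     * the real ondisk structures differ per metadata type. */
    struct example_v5_block_header {
        __be32    magic;   /* magic number for this block type */
        __be32    crc;     /* CRC32c of the entire block */
        uuid_t    uuid;    /* UUID of the owning filesystem */
        __be64    owner;   /* inode or AG that owns this block */
        __be64    blkno;   /* ondisk address of this block */
        __be64    lsn;     /* log sequence number of the last write */
    };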

When loading a block buffer from disk, the magic number, UUID, owner, and
ondisk address confirm that the retrieved block matches the specific owner of
the current filesystem, and that the information contained in the block is
supposed to be found at the ondisk address.
The first three components enable checking tools to disregard alleged metadata
that doesn't belong to the filesystem, and the fourth component enables the
filesystem to detect lost writes.

Whenever a filesystem operation modifies a block, the change is submitted
to the log as part of a transaction.
The log then processes these transactions, marking them done once they are
safely persisted to storage.
The logging code maintains the checksum and the log sequence number of the last
transactional update.
Checksums are useful for detecting torn writes and other discrepancies that can
be introduced between the computer and its storage devices.
Sequence number tracking enables log recovery to avoid applying out of date
log updates to the filesystem.

These two features improve overall runtime resiliency by providing a means for
the filesystem to detect obvious corruption when reading metadata blocks from
disk, but these buffer verifiers cannot provide any consistency checking
between metadata structures.

For more information, please see the documentation for
Documentation/filesystems/xfs/xfs-self-describing-metadata.rst

Reverse Mapping
---------------

The original design of XFS (circa 1993) is an improvement upon 1980s Unix
filesystem design.
In those days, storage density was expensive, CPU time was scarce, and
excessive seek time could kill performance.
For performance reasons, filesystem authors were reluctant to add redundancy to
the filesystem, even at the cost of data integrity.
Filesystem designers in the early 21st century chose different strategies to
increase internal redundancy -- either storing nearly identical copies of
metadata, or more space-efficient encoding techniques.

For XFS, a different redundancy strategy was chosen to modernize the design:
a secondary space usage index that maps allocated disk extents back to their
owners.
By adding a new index, the filesystem retains most of its ability to scale
well to heavily threaded workloads involving large datasets, since the primary
file metadata (the directory tree, the file block map, and the allocation
groups) remain unchanged.
Like any system that improves redundancy, the reverse-mapping feature increases
overhead costs for space mapping activities.
However, it has two critical advantages: first, the reverse index is key to
enabling online fsck and other requested functionality such as free space
defragmentation, better media failure reporting, and filesystem shrinking.
Second, the different ondisk storage format of the reverse mapping btree
defeats device-level deduplication because the filesystem requires real
redundancy.

+--------------------------------------------------------------------------+
| **Sidebar**:                                                             |
+--------------------------------------------------------------------------+
| A criticism of adding the secondary index is that it does nothing to     |
| improve the robustness of user data storage itself.                      |
| This is a valid point, but adding a new index for file data block        |
| checksums increases write amplification by turning data overwrites into  |
| copy-writes, which age the filesystem prematurely.                       |
| In keeping with thirty years of precedent, users who want file data      |
| integrity can supply as powerful a solution as they require.             |
| As for metadata, the complexity of adding a new secondary index of space |
| usage is much less than adding volume management and storage device      |
| mirroring to XFS itself.                                                 |
| Perfection of RAID and volume management are best left to existing       |
| layers in the kernel.                                                    |
+--------------------------------------------------------------------------+

The information captured in a reverse space mapping record is as follows:

.. code-block:: c

    struct xfs_rmap_irec {
        xfs_agblock_t    rm_startblock;   /* extent start block */
        xfs_extlen_t     rm_blockcount;   /* extent length */
        uint64_t         rm_owner;        /* extent owner */
        uint64_t         rm_offset;       /* offset within the owner */
        unsigned int     rm_flags;        /* state flags */
    };

The first two fields capture the location and size of the physical space,
in units of filesystem blocks.
The owner field tells scrub which metadata structure or file inode has been
assigned this space.
For space allocated to files, the offset field tells scrub where the space was
mapped within the file fork.
Finally, the flags field provides extra information about the space usage --
is this an attribute fork extent?  A file mapping btree extent?  Or an
unwritten data extent?

Online filesystem checking judges the consistency of each primary metadata
record by comparing its information against all other space indices.
The reverse mapping index plays a key role in the consistency checking process
because it contains a centralized alternate copy of all space allocation
information.
Program runtime and ease of resource acquisition are the only real limits to
what online checking can consult.
For example, a file data extent mapping can be checked against:

* The absence of an entry in the free space information.
* The absence of an entry in the inode index.
* The absence of an entry in the reference count data if the file is not
  marked as having shared extents.
* The correspondence of an entry in the reverse mapping information.

There are several observations to make about reverse mapping indices:

1. Reverse mappings can provide a positive affirmation of correctness if any of
   the above primary metadata are in doubt.
   The checking code for most primary metadata follows a path similar to the
   one outlined above.

2. Proving the consistency of secondary metadata with the primary metadata is
   difficult because that requires a full scan of all primary space metadata,
   which is very time intensive.
   For example, checking a reverse mapping record for a file extent mapping
   btree block requires locking the file and searching the entire btree to
   confirm the block.
   Instead, scrub relies on rigorous cross-referencing during the primary space
   mapping structure checks.

3. Consistency scans must use non-blocking lock acquisition primitives if the
   required locking order is not the same order used by regular filesystem
   operations.
   For example, if the filesystem normally takes a file ILOCK before taking
   the AGF buffer lock but scrub wants to take a file ILOCK while holding
   an AGF buffer lock, scrub cannot block on that second acquisition.
   This means that forward progress during this part of a scan of the reverse
   mapping data cannot be guaranteed if system load is heavy.

In summary, reverse mappings play a key role in reconstruction of primary
metadata.
The details of how these records are staged, written to disk, and committed
into the filesystem are covered in subsequent sections.

Checking and Cross-Referencing
------------------------------

The first step of checking a metadata structure is to examine every record
contained within the structure and its relationship with the rest of the
system.
XFS contains multiple layers of checking to try to prevent inconsistent
metadata from wreaking havoc on the system.
Each of these layers contributes information that helps the kernel to make
five decisions about the health of a metadata structure:

- Is a part of this structure obviously corrupt (``XFS_SCRUB_OFLAG_CORRUPT``) ?
- Is this structure inconsistent with the rest of the system
  (``XFS_SCRUB_OFLAG_XCORRUPT``) ?
- Is there so much damage around the filesystem that cross-referencing is not
  possible (``XFS_SCRUB_OFLAG_XFAIL``) ?
- Can the structure be optimized to improve performance or reduce the size of
  metadata (``XFS_SCRUB_OFLAG_PREEN``) ?
- Does the structure contain data that is not inconsistent but deserves review
  by the system administrator (``XFS_SCRUB_OFLAG_WARNING``) ?

The following sections describe how the metadata scrubbing process works.

Metadata Buffer Verification
````````````````````````````

The lowest layer of metadata protection in XFS is the metadata verifiers built
into the buffer cache.
These functions perform inexpensive internal consistency checking of the block
itself, and answer these questions:

- Does the block belong to this filesystem?

- Does the block belong to the structure that asked for the read?
  This assumes that metadata blocks only have one owner, which is always true
  in XFS.

- Is the type of data stored in the block within a reasonable range of what
  scrub is expecting?

- Does the physical location of the block match the location it was read from?

- Does the block checksum match the data?

The scope of the protections here is very limited -- verifiers can only
establish that the filesystem code is reasonably free of gross corruption bugs
and that the storage system is reasonably competent at retrieval.
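
Expressed as code, a read verifier answers those questions roughly as follows.
This sketch reuses the simplified header shown earlier; ``example_crc32c_ok``
and the function itself are invented for illustration, and the real verifiers
are the buffer ops tables in ``fs/xfs/libxfs/``:

.. code-block:: c

    /* Sketch of a buffer read verifier; all helpers are invented. */
    static bool example_verify_block(struct example_v5_block_header *hdr,
                                     void *block, size_t blocksize,
                                     uint64_t daddr_read_from,
                                     const uuid_t *fs_uuid,
                                     uint32_t want_magic)
    {
        /* Does the block belong to the structure that asked? */
        if (be32_to_cpu(hdr->magic) != want_magic)
            return false;

        /* Does the block belong to this filesystem? */
        if (memcmp(&hdr->uuid, fs_uuid, sizeof(*fs_uuid)))
            return false;

        /* Does the physical location match where it was read from? */
        if (be64_to_cpu(hdr->blkno) != daddr_read_from)
            return false;

        /* Does the block checksum match the data? */
        return example_crc32c_ok(block, blocksize);
    }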

Corruption problems observed at runtime cause the generation of health reports,
failed system calls, and in the extreme case, filesystem shutdowns if the
corrupt metadata force the cancellation of a dirty transaction.

Every online fsck scrubbing function is expected to read every ondisk metadata
block of a structure in the course of checking the structure.
Corruption problems observed during a check are immediately reported to
userspace as corruption; during a cross-reference, they are reported as a
failure to cross-reference once the full examination is complete.
Reads satisfied by a buffer already in cache (and hence already verified)
bypass these checks.

Internal Consistency Checks
```````````````````````````

After the buffer cache, the next level of metadata protection is the internal
record verification code built into the filesystem.
These checks are split between the buffer verifiers, the in-filesystem users of
the buffer cache, and the scrub code itself, depending on the amount of higher
level context required.
The scope of checking is still internal to the block.
These higher level checking functions answer these questions:

- Does the type of data stored in the block match what scrub is expecting?

- Does the block belong to the owning structure that asked for the read?

- If the block contains records, do the records fit within the block?

- If the block tracks internal free space information, is it consistent with
  the record areas?

- Are the records contained inside the block free of obvious corruptions?

Record checks in this category are more rigorous and more time-intensive.
For example, block pointers and inumbers are checked to ensure that they point
within the dynamically allocated parts of an allocation group and within
the filesystem.
Names are checked for invalid characters, and flags are checked for invalid
combinations.
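
The block pointer check, for instance, amounts to a pair of range comparisons.
In this hedged sketch, ``first_free_agbno`` and the helper name are invented
for illustration (the static superblock and AG header blocks occupy the start
of each allocation group), though ``struct xfs_mount`` and ``sb_agblocks``
are real:

.. code-block:: c

    /* Sketch: does an AG block number point within the dynamically
     * allocated part of its allocation group? */
    static bool example_agbno_is_valid(struct xfs_mount *mp,
                                       xfs_agblock_t agbno,
                                       xfs_agblock_t first_free_agbno)
    {
        /* The superblock and AG headers are statically allocated. */
        if (agbno < first_free_agbno)
            return false;

        /* Pointers past the end of the AG are garbage. */
        if (agbno >= mp->m_sb.sb_agblocks)
            return false;

        return true;
    }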

Other record attributes are checked for sensible values.
Btree records spanning an interval of the btree keyspace are checked for
correct order and lack of mergeability (except for file fork mappings).
For performance reasons, regular code may skip some of these checks unless
debugging is enabled or a write is about to occur.
Scrub functions, of course, must check all possible problems.

Validation of Userspace-Controlled Record Attributes
````````````````````````````````````````````````````

Various pieces of filesystem metadata are directly controlled by userspace.
Because of this nature, validation work cannot be more precise than checking
that a value is within the possible range.
These fields include:

- Superblock fields controlled by mount options
- Filesystem labels
- File timestamps
- File permissions
- File size
- File flags
- Names present in directory entries, extended attribute keys, and filesystem
  labels
- Extended attribute key namespaces
- Extended attribute values
- File data block contents
- Quota limits
- Quota timer expiration (if resource usage exceeds the soft limit)

Cross-Referencing Space Metadata
````````````````````````````````

After internal block checks, the next higher level of checking is
cross-referencing records between metadata structures.
For regular runtime code, the cost of these checks is considered to be
prohibitively expensive, but as scrub is dedicated to rooting out
inconsistencies, it must pursue all avenues of inquiry.
The exact set of cross-referencing checks is highly dependent on the context
of the data structure being checked.

The XFS btree code has keyspace scanning functions that online fsck uses to
cross reference one structure with another.
Specifically, scrub can scan the key space of an index to determine if that
keyspace is fully, sparsely, or not at all mapped to records.
For the reverse mapping btree, it is possible to mask parts of the key for the
purposes of performing a keyspace scan so that scrub can decide if the rmap
btree contains records mapping a certain extent of physical space without the
sparseness of the rest of the rmap keyspace getting in the way.

Btree blocks undergo the following checks before cross-referencing:

- Does the type of data stored in the block match what scrub is expecting?

- Does the block belong to the owning structure that asked for the read?

- Do the records fit within the block?

- Are the records contained inside the block free of obvious corruptions?

- Are the name hashes in the correct order?

- Do node pointers within the btree point to valid block addresses for the type
  of btree?

- Do child pointers point towards the leaves?

- Do sibling pointers point across the same level?

- For each node block record, does the record key accurately reflect the
  contents of the child block?

Space allocation records are cross-referenced as follows:

1. Any space mentioned by any metadata structure is cross-referenced as
   follows:

   - Does the reverse mapping index list only the appropriate owner as the
     owner of each block?

   - Are none of the blocks claimed as free space?

   - If these aren't file data blocks, are none of the blocks claimed as space
     shared by different owners?

2. Btree blocks are cross-referenced as follows:

   - Everything in class 1 above.

   - If there's a parent node block, do the keys listed for this block match
     the keyspace of this block?

   - Do the sibling pointers point to valid blocks?  Of the same level?

   - Do the child pointers point to valid blocks?  Of the next level down?

3. Free space btree records are cross-referenced as follows:

   - Everything in class 1 and 2 above.

   - Does the reverse mapping index list no owners of this space?

   - Is this space not claimed by the inode index for inodes?

   - Is it not mentioned by the reference count index?

   - Is there a matching record in the other free space btree?

4. Inode btree records are cross-referenced as follows:

   - Everything in class 1 and 2 above.

   - Is there a matching record in the free inode btree?

   - Do cleared bits in the holemask correspond with inode clusters?

   - Do set bits in the freemask correspond with inode records with zero link
     count?

5. Inode records are cross-referenced as follows:

   - Everything in class 1.

   - Do all the fields that summarize information about the file forks actually
     match those forks?

   - Does each inode with zero link count correspond to a record in the free
     inode btree?

6. File fork space mapping records are cross-referenced as follows:

   - Everything in class 1 and 2 above.

   - Is this space not mentioned by the inode btrees?

   - If this is a CoW fork mapping, does it correspond to a CoW entry in the
     reference count btree?

7. Reference count records are cross-referenced as follows:

   - Everything in class 1 and 2 above.

   - Within the space subkeyspace of the rmap btree (that is to say, all
     records mapped to a particular space extent and ignoring the owner info),
     are there the same number of reverse mapping records for each block as the
     reference count record claims?
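
As a hedged illustration of class 3 above, cross-referencing one free space
record might look like the sketch below.
The context structure, the keyspace scan helpers, and their three-way answer
are all invented names; the real checks live in ``fs/xfs/scrub/``:

.. code-block:: c

    /* Sketch of cross-referencing a free space record; every helper
     * here is illustrative, not the kernel API. */
    enum keyspace_fill {
        KEYSPACE_EMPTY,     /* no records map this range */
        KEYSPACE_SPARSE,    /* some of the range is mapped */
        KEYSPACE_FULL,      /* the entire range is mapped */
    };

    static int example_xref_free_extent(struct example_scrub_ctx *sc,
                                        xfs_agblock_t bno, xfs_extlen_t len)
    {
        /* Does the reverse mapping index list no owners of this space? */
        if (example_rmap_scan(sc, bno, len) != KEYSPACE_EMPTY)
            return example_set_corrupt(sc);

        /* Is this space not claimed by the inode index? */
        if (example_inobt_scan(sc, bno, len) != KEYSPACE_EMPTY)
            return example_set_corrupt(sc);

        /* Is it not mentioned by the reference count index? */
        if (example_refcount_scan(sc, bno, len) != KEYSPACE_EMPTY)
            return example_set_corrupt(sc);

        /* Is there a matching record in the other free space btree? */
        if (example_other_allocbt_scan(sc, bno, len) != KEYSPACE_FULL)
            return example_set_corrupt(sc);

        return 0;
    }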

Proposed patchsets are the series to find gaps in
`refcount btree
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-detect-refcount-gaps>`_,
`inode btree
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-detect-inobt-gaps>`_, and
`rmap btree
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-detect-rmapbt-gaps>`_ records;
to find
`mergeable records
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-detect-mergeable-records>`_;
and to
`improve cross referencing with rmap
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-strengthen-rmap-checking>`_
before starting a repair.

Checking Extended Attributes
````````````````````````````

Extended attributes implement a key-value store that enables fragments of data
to be attached to any file.
Both the kernel and userspace can access the keys and values, subject to
namespace and privilege restrictions.
Most typically these fragments are metadata about the file -- origins, security
contexts, user-supplied labels, indexing information, etc.

Names can be as long as 255 bytes and can exist in several different
namespaces.
Values can be as large as 64KB.
A file's extended attributes are stored in blocks mapped by the attr fork.
The mappings point to leaf blocks, remote value blocks, or dabtree blocks.
Block 0 in the attribute fork is always the top of the structure, but otherwise
each of the three types of blocks can be found at any offset in the attr fork.
Leaf blocks contain attribute key records that point to the name and the value.
Names are always stored elsewhere in the same leaf block.
Values that are less than 3/4 the size of a filesystem block are also stored
elsewhere in the same leaf block.
Remote value blocks contain values that are too large to fit inside a leaf.
If the leaf information exceeds a single filesystem block, a dabtree (also
rooted at block 0) is created to map hashes of the attribute names to leaf
blocks in the attr fork.

Checking an extended attribute structure is not so straightforward due to the
lack of separation between attr blocks and index blocks.
Scrub must read each block mapped by the attr fork and ignore the non-leaf
blocks:

1. Walk the dabtree in the attr fork (if present) to ensure that there are no
   irregularities in the blocks or dabtree mappings that do not point to
   attr leaf blocks.

2. Walk the blocks of the attr fork looking for leaf blocks.
   For each entry inside a leaf:

   a. Validate that the name does not contain invalid characters (see the
      sketch after these steps).

   b. Read the attr value.
      This performs a named lookup of the attr name to ensure the correctness
      of the dabtree.
      If the value is stored in a remote block, this also validates the
      integrity of the remote value block.
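
Here is a minimal sketch of the name validation in step 2a.
For illustration it assumes that a valid name is non-empty, no longer than 255
bytes, and free of NUL bytes; the authoritative rules live in the scrub code:

.. code-block:: c

        #include <stdbool.h>
        #include <stddef.h>

        static bool attr_name_is_valid(const unsigned char *name, size_t namelen)
        {
                size_t i;

                if (namelen == 0 || namelen > 255)
                        return false;
                for (i = 0; i < namelen; i++) {
                        if (name[i] == 0)
                                return false;
                }
                return true;
        }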

Checking and Cross-Referencing Directories
``````````````````````````````````````````

The filesystem directory tree is a directed acyclic graph structure, with files
constituting the nodes, and directory entries (dirents) constituting the edges.
Directories are a special type of file containing a set of mappings from a
255-byte sequence (name) to an inumber.
These are called directory entries, or dirents for short.
Each directory file must have exactly one directory pointing to the file.
A root directory points to itself.
Directory entries point to files of any type.
Each non-directory file may have multiple directories pointing to it.

In XFS, directories are implemented as a file containing up to three 32GB
partitions.
The first partition contains directory entry data blocks.
Each data block contains variable-sized records associating a user-provided
name with an inumber and, optionally, a file type.
If the directory entry data grows beyond one block, the second partition (which
exists as post-EOF extents) is populated with a block containing free space
information and an index that maps hashes of the dirent names to directory data
blocks in the first partition.
This makes directory name lookups very fast.
If this second partition grows beyond one block, the third partition is
populated with a linear array of free space information for faster
expansions.
If the free space has been separated and the second partition grows again
beyond one block, then a dabtree is used to map hashes of dirent names to
directory data blocks.

Checking a directory is pretty straightforward:

1. Walk the dabtree in the second partition (if present) to ensure that there
   are no irregularities in the blocks or dabtree mappings that do not point to
   dirent blocks.

2. Walk the blocks of the first partition looking for directory entries.
   Each dirent is checked as follows (a sketch of these checks appears after
   this list):

   a. Does the name contain no invalid characters?

   b. Does the inumber correspond to an actual, allocated inode?

   c. Does the child inode have a nonzero link count?

   d. If a file type is included in the dirent, does it match the type of the
      inode?

   e. If the child is a subdirectory, does the child's dotdot pointer point
      back to the parent?

   f. If the directory has a second partition, perform a named lookup of the
      dirent name to ensure the correctness of the dabtree.

3. Walk the free space list in the third partition (if present) to ensure that
   the free spaces it describes are really unused.
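
A sketch of the per-dirent checks in step 2 might look like the following.
The context structure and every helper name here are hypothetical stand-ins
for the scrub code; each helper is assumed to return 0 for "consistent" or a
negative errno:

.. code-block:: c

        #include <stdbool.h>
        #include <stdint.h>

        struct dir_scrub;       /* hypothetical directory scrub context */

        int check_name_valid(const unsigned char *name, unsigned int namelen);
        int check_inumber_allocated(struct dir_scrub *ds, uint64_t ino);
        int check_child_nlink_nonzero(struct dir_scrub *ds, uint64_t ino);
        int check_ftype_matches(struct dir_scrub *ds, uint64_t ino, uint8_t ftype);
        int check_subdir_dotdot(struct dir_scrub *ds, uint64_t ino);
        int check_hash_lookup(struct dir_scrub *ds, const unsigned char *name,
                              unsigned int namelen);

        static int check_dirent(struct dir_scrub *ds, const unsigned char *name,
                                unsigned int namelen, uint64_t ino, uint8_t ftype,
                                bool is_subdir, bool has_hash_index)
        {
                int error;

                error = check_name_valid(name, namelen);                /* 2a */
                if (!error)
                        error = check_inumber_allocated(ds, ino);       /* 2b */
                if (!error)
                        error = check_child_nlink_nonzero(ds, ino);     /* 2c */
                if (!error)
                        error = check_ftype_matches(ds, ino, ftype);    /* 2d */
                if (!error && is_subdir)
                        error = check_subdir_dotdot(ds, ino);           /* 2e */
                if (!error && has_hash_index)
                        error = check_hash_lookup(ds, name, namelen);   /* 2f */
                return error;
        }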

Checking operations involving :ref:`parents <dirparent>` and
:ref:`file link counts <nlinks>` are discussed in more detail in later
sections.

Checking Directory/Attribute Btrees
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

As stated in previous sections, the directory/attribute btree (dabtree) index
maps user-provided names to improve lookup times by avoiding linear scans.
Internally, it maps a 32-bit hash of the name to a block offset within the
appropriate file fork.

The internal structure of a dabtree closely resembles the btrees that record
fixed-size metadata records -- each dabtree block contains a magic number, a
checksum, sibling pointers, a UUID, a tree level, and a log sequence number.
The format of leaf and node records is the same -- each entry points to the
next level down in the hierarchy, with dabtree node records pointing to dabtree
leaf blocks, and dabtree leaf records pointing to non-dabtree blocks elsewhere
in the fork.

Checking and cross-referencing the dabtree is very similar to what is done for
space btrees:

- Does the type of data stored in the block match what scrub is expecting?

- Does the block belong to the owning structure that asked for the read?

- Do the records fit within the block?

- Are the records contained inside the block free of obvious corruptions?

- Are the name hashes in the correct order?  (A sketch of this check follows
  the list.)

- Do node pointers within the dabtree point to valid fork offsets for dabtree
  blocks?

- Do leaf pointers within the dabtree point to valid fork offsets for directory
  or attr leaf blocks?

- Do child pointers point towards the leaves?

- Do sibling pointers point across the same level?

- For each dabtree node record, does the record key accurately reflect the
  contents of the child dabtree block?

- For each dabtree leaf record, does the record key accurately reflect the
  contents of the directory or attr block?
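
As an example, the hash ordering rule can be verified with one pass over the
entries of a dabtree block.
The entry layout below is a simplification for illustration, not the ondisk
format; note that equal hash values are allowed because names can collide:

.. code-block:: c

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        /* Simplified dabtree entry: a name hash and the block it points to. */
        struct dab_entry {
                uint32_t hashval;
                uint32_t before;
        };

        /* Hashes in a dabtree block must be in non-decreasing order. */
        static bool dab_hashes_in_order(const struct dab_entry *ents, size_t nr)
        {
                size_t i;

                for (i = 1; i < nr; i++) {
                        if (ents[i - 1].hashval > ents[i].hashval)
                                return false;
                }
                return true;
        }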

Cross-Referencing Summary Counters
``````````````````````````````````

XFS maintains three classes of summary counters: available resources, quota
resource usage, and file link counts.

In theory, the amount of available resources (data blocks, inodes, realtime
extents) can be found by walking the entire filesystem.
This would make for very slow reporting, so a transactional filesystem can
maintain summaries of this information in the superblock.
Cross-referencing these values against the filesystem metadata should be a
simple matter of walking the free space and inode metadata in each AG and the
realtime bitmap, but there are complications that will be discussed in
:ref:`more detail <fscounters>` later.

:ref:`Quota usage <quotacheck>` and :ref:`file link count <nlinks>`
checking are sufficiently complicated to warrant separate sections.
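
Setting aside those complications, a naive cross-reference of the free data
block counter would look something like this sketch, in which the context
structure and all three accessors are hypothetical:

.. code-block:: c

        #include <stdbool.h>
        #include <stdint.h>

        struct fs_context;      /* hypothetical scrub context */

        uint64_t sb_free_blocks(const struct fs_context *fs);
        uint64_t ag_free_blocks(const struct fs_context *fs, uint32_t agno);
        uint32_t ag_count(const struct fs_context *fs);

        /* Recompute free data blocks from per-AG metadata and compare. */
        static bool free_block_counter_is_ok(const struct fs_context *fs)
        {
                uint64_t freeblks = 0;
                uint32_t agno;

                for (agno = 0; agno < ag_count(fs); agno++)
                        freeblks += ag_free_blocks(fs, agno);

                return freeblks == sb_free_blocks(fs);
        }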

Post-Repair Reverification
``````````````````````````

After performing a repair, the checking code is run a second time to validate
the new structure, and the results of the health assessment are recorded
internally and returned to the calling process.
This step is critical for enabling system administrators to monitor the status
of the filesystem and the progress of any repairs.
For developers, it is a useful means to judge the efficacy of error detection
and correction in the online and offline checking tools.

Eventual Consistency vs. Online Fsck
------------------------------------

Complex operations can make modifications to multiple per-AG data structures
with a chain of transactions.
These chains, once committed to the log, are restarted during log recovery if
the system crashes while processing the chain.
Because the AG header buffers are unlocked between transactions within a chain,
online checking must coordinate with chained operations that are in progress to
avoid incorrectly detecting inconsistencies due to pending chains.
Furthermore, online repair must not run when operations are pending because
the metadata are temporarily inconsistent with each other, and rebuilding is
not possible.

Only online fsck has this requirement of total consistency of AG metadata, and
such checking should be relatively rare as compared to filesystem change
operations.
Online fsck coordinates with transaction chains as follows:

* For each AG, maintain a count of intent items targeting that AG.
  The count should be bumped whenever a new item is added to the chain.
  The count should be dropped when the filesystem has locked the AG header
  buffers and finished the work.

* When online fsck wants to examine an AG, it should lock the AG header
  buffers to quiesce all transaction chains that want to modify that AG.
  If the count is zero, proceed with the checking operation.
  If it is nonzero, cycle the buffer locks to allow the chain to make forward
  progress.

This may lead to online fsck taking a long time to complete, but regular
filesystem updates take precedence over background checking activity.
Details about the discovery of this situation are presented in the
:ref:`next section <chain_coordination>`, and details about the solution
are presented :ref:`after that<intent_drains>`.

.. _chain_coordination:

Discovery of the Problem
````````````````````````

Midway through the development of online scrubbing, the fsstress tests
uncovered a misinteraction between online fsck and compound transaction chains
created by other writer threads that resulted in false reports of metadata
inconsistency.
The root cause of these reports is the eventual consistency model introduced by
the expansion of deferred work items and compound transaction chains when
reverse mapping and reflink were introduced.

Originally, transaction chains were added to XFS to avoid deadlocks when
unmapping space from files.
Deadlock avoidance rules require that AGs only be locked in increasing order,
which makes it impossible (say) to use a single transaction to free a space
extent in AG 7 and then try to free a now superfluous block mapping btree block
in AG 3.
To avoid these kinds of deadlocks, XFS creates Extent Freeing Intent (EFI) log
items to commit to freeing some space in one transaction while deferring the
actual metadata updates to a fresh transaction.
The transaction sequence looks like this:

1. The first transaction contains a physical update to the file's block mapping
   structures to remove the mapping from the btree blocks.
   It then attaches to the in-memory transaction an action item to schedule
   deferred freeing of space.
   Concretely, each transaction maintains a list of ``struct
   xfs_defer_pending`` objects, each of which maintains a list of ``struct
   xfs_extent_free_item`` objects.
   Returning to the example above, the action item tracks the freeing of both
   the unmapped space from AG 7 and the block mapping btree (BMBT) block from
   AG 3.
   Deferred frees recorded in this manner are committed in the log by creating
   an EFI log item from the ``struct xfs_extent_free_item`` object and
   attaching the log item to the transaction.
   When the log is persisted to disk, the EFI item is written into the ondisk
   transaction record.
   EFIs can list up to 16 extents to free, all sorted in AG order.

2. The second transaction contains a physical update to the free space btrees
   of AG 3 to release the former BMBT block and a second physical update to the
   free space btrees of AG 7 to release the unmapped file space.
   Observe that the physical updates are resequenced in the correct order
   when possible.
   Attached to the transaction is an extent free done (EFD) log item.
   The EFD contains a pointer to the EFI logged in transaction #1 so that log
   recovery can tell if the EFI needs to be replayed.

If the system goes down after transaction #1 is written back to the filesystem
but before #2 is committed, a scan of the filesystem metadata would show
inconsistent filesystem metadata because there would not appear to be any owner
of the unmapped space.
Happily, log recovery corrects this inconsistency for us -- when recovery finds
an intent log item but does not find a corresponding intent done item, it will
reconstruct the incore state of the intent item and finish it.
In the example above, the log must replay both frees described in the recovered
EFI to complete the recovery phase.

There are subtleties to XFS' transaction chaining strategy to consider:

* Log items must be added to a transaction in the correct order to prevent
  conflicts with principal objects that are not held by the transaction.
  In other words, all per-AG metadata updates for an unmapped block must be
  completed before the last update to free the extent, and extents should not
  be reallocated until that last update commits to the log.

* AG header buffers are released between each transaction in a chain.
  This means that other threads can observe an AG in an intermediate state,
  but as long as the first subtlety is handled, this should not affect the
  correctness of filesystem operations.

* Unmounting the filesystem flushes all pending work to disk, which means that
  offline fsck never sees the temporary inconsistencies caused by deferred
  work item processing.

In this manner, XFS employs a form of eventual consistency to avoid deadlocks
and increase parallelism.
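
The intent/intent-done matching that log recovery performs can be modeled in a
few lines of code.
This toy model flattens the log into an array of typed records and replays any
intent that lacks a matching done item; everything about the real ondisk log
format and the typed intent payloads is elided:

.. code-block:: c

        #include <stdbool.h>
        #include <stddef.h>

        enum rec_type { INTENT, INTENT_DONE };

        struct log_rec {
                enum rec_type   type;
                int             id;     /* for INTENT_DONE, the intent it finishes */
        };

        void replay_intent(int id);     /* hypothetical replay hook */

        /* Any intent without a matching done item must be replayed. */
        static void recover(const struct log_rec *log, size_t nr)
        {
                size_t i, j;

                for (i = 0; i < nr; i++) {
                        bool done = false;

                        if (log[i].type != INTENT)
                                continue;
                        for (j = i + 1; j < nr; j++) {
                                if (log[j].type == INTENT_DONE &&
                                    log[j].id == log[i].id) {
                                        done = true;
                                        break;
                                }
                        }
                        if (!done)
                                replay_intent(log[i].id);
                }
        }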

During the design phase of the reverse mapping and reflink features, it was
decided that it was impractical to cram all the reverse mapping updates for a
single filesystem change into a single transaction because a single file
mapping operation can explode into many small updates:

* The block mapping update itself
* A reverse mapping update for the block mapping update
* Fixing the freelist
* A reverse mapping update for the freelist fix

* A shape change to the block mapping btree
* A reverse mapping update for the btree update
* Fixing the freelist (again)
* A reverse mapping update for the freelist fix

* An update to the reference counting information
* A reverse mapping update for the refcount update
* Fixing the freelist (a third time)
* A reverse mapping update for the freelist fix

* Freeing any space that was unmapped and not owned by any other file
* Fixing the freelist (a fourth time)
* A reverse mapping update for the freelist fix

* Freeing the space used by the block mapping btree
* Fixing the freelist (a fifth time)
* A reverse mapping update for the freelist fix

Free list fixups are not usually needed more than once per AG per transaction
chain, but it is theoretically possible if space is very tight.
For copy-on-write updates this is even worse, because this must be done once to
remove the space from a staging area and again to map it into the file!

To deal with this explosion in a calm manner, XFS expands its use of deferred
work items to cover most reverse mapping updates and all refcount updates.
This reduces the worst case size of transaction reservations by breaking the
work into a long chain of small updates, which increases the degree of eventual
consistency in the system.
Again, this generally isn't a problem because XFS orders its deferred work
items carefully to avoid resource reuse conflicts between unsuspecting threads.

However, online fsck changes the rules -- remember that although physical
updates to per-AG structures are coordinated by locking the buffers for AG
headers, buffer locks are dropped between transactions.
Once scrub acquires resources and takes locks for a data structure, it must do
all the validation work without releasing the lock.
If the main lock for a space btree is an AG header buffer lock, scrub may have
interrupted another thread that is midway through finishing a chain.
For example, if a thread performing a copy-on-write has completed a reverse
mapping update but not the corresponding refcount update, the two AG btrees
will appear inconsistent to scrub, and an incorrect observation of corruption
will be recorded.
If a repair is attempted in this state, the results will be catastrophic!

Several other solutions to this problem were evaluated upon discovery of this
flaw and rejected:

1. Add a higher level lock to allocation groups and require writer threads to
   acquire the higher level lock in AG order before making any changes.
   This would be very difficult to implement in practice because it is
   difficult to determine which locks need to be obtained, and in what order,
   without simulating the entire operation.
   Performing a dry run of a file operation to discover necessary locks would
   make the filesystem very slow.

2. Make the deferred work coordinator code aware of consecutive intent items
   targeting the same AG and have it hold the AG header buffers locked across
   the transaction roll between updates.
   This would introduce a lot of complexity into the coordinator since it is
   only loosely coupled with the actual deferred work items.
   It would also fail to solve the problem because deferred work items can
   generate new deferred subtasks, but all subtasks must be complete before
   work can start on a new sibling task.

3. Teach online fsck to walk all transactions waiting for whichever lock(s)
   protect the data structure being scrubbed to look for pending operations.
   The checking and repair operations must factor these pending operations into
   the evaluations being performed.
   This solution is a nonstarter because it is *extremely* invasive to the main
   filesystem.

.. _intent_drains:

Intent Drains
`````````````

Online fsck uses an atomic intent item counter and lock cycling to coordinate
with transaction chains.
There are two key properties to the drain mechanism.
First, the counter is incremented when a deferred work item is *queued* to a
transaction, and it is decremented after the associated intent done log item is
*committed* to another transaction.
The second property is that deferred work can be added to a transaction without
holding an AG header lock, but per-AG work items cannot be marked done without
locking that AG header buffer to log the physical updates and the intent done
log item.
The first property enables scrub to yield to running transaction chains, which
is an explicit deprioritization of online fsck to benefit file operations.
The second property of the drain is key to the correct coordination of scrub,
since scrub will always be able to decide if a conflict is possible.

For regular filesystem code, the drain works as follows:

1. Call the appropriate subsystem function to add a deferred work item to a
   transaction.

2. The function calls ``xfs_defer_drain_bump`` to increase the counter.

3. When the deferred item manager wants to finish the deferred work item, it
   calls ``->finish_item`` to complete it.

4. The ``->finish_item`` implementation logs some changes and calls
   ``xfs_defer_drain_drop`` to decrease the sloppy counter and wake up any
   threads waiting on the drain.

5. The subtransaction commits, which unlocks the resource associated with the
   intent item.

For scrub, the drain works as follows:

1. Lock the resource(s) associated with the metadata being scrubbed.
   For example, a scan of the refcount btree would lock the AGI and AGF header
   buffers.

2. If the counter is zero (``xfs_defer_drain_busy`` returns false), there are
   no chains in progress and the operation may proceed.

3. Otherwise, release the resources grabbed in step 1.

4. Wait for the intent counter to reach zero (``xfs_defer_drain_intents``),
   then go back to step 1 unless a signal has been caught.

To avoid polling in step 4, the drain provides a waitqueue for scrub threads to
be woken up whenever the intent count drops to zero.
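
A userspace model of the drain may make the two properties easier to see.
This sketch substitutes a mutex and condition variable for the kernel's atomic
counter and waitqueue, and the function names merely echo the
``xfs_defer_drain_*`` helpers described above:

.. code-block:: c

        #include <pthread.h>
        #include <stdbool.h>

        struct drain {
                pthread_mutex_t lock;
                pthread_cond_t  empty;
                unsigned int    count;
        };

        /* An intent item was queued to a transaction. */
        static void drain_bump(struct drain *d)
        {
                pthread_mutex_lock(&d->lock);
                d->count++;
                pthread_mutex_unlock(&d->lock);
        }

        /* The matching intent done item committed; wake any waiters. */
        static void drain_drop(struct drain *d)
        {
                pthread_mutex_lock(&d->lock);
                if (--d->count == 0)
                        pthread_cond_broadcast(&d->empty);
                pthread_mutex_unlock(&d->lock);
        }

        /* Scrub: are any chains still in progress? */
        static bool drain_busy(struct drain *d)
        {
                pthread_mutex_lock(&d->lock);
                bool busy = d->count > 0;
                pthread_mutex_unlock(&d->lock);
                return busy;
        }

        /* Scrub: sleep until the counter reaches zero. */
        static void drain_wait(struct drain *d)
        {
                pthread_mutex_lock(&d->lock);
                while (d->count > 0)
                        pthread_cond_wait(&d->empty, &d->lock);
                pthread_mutex_unlock(&d->lock);
        }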

The proposed patchset is the
`scrub intent drain series
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-drain-intents>`_.

.. _jump_labels:

Static Keys (aka Jump Label Patching)
`````````````````````````````````````

Online fsck for XFS separates the regular filesystem from the checking and
repair code as much as possible.
However, there are a few parts of online fsck (such as the intent drains, and
later, live update hooks) where it is useful for the online fsck code to know
what's going on in the rest of the filesystem.
Since it is not expected that online fsck will be constantly running in the
background, it is very important to minimize the runtime overhead imposed by
these hooks when online fsck is compiled into the kernel but not actively
running on behalf of userspace.
Taking locks in the hot path of a writer thread to access a data structure only
to find that no further action is necessary is expensive -- on the author's
computer, this has an overhead of 40-50ns per access.
Fortunately, the kernel supports dynamic code patching, which enables XFS to
replace a static branch to hook code with ``nop`` sleds when online fsck isn't
running.
This sled has an overhead of however long it takes the instruction decoder to
skip past the sled, which seems to be on the order of less than 1ns and
does not access memory outside of instruction fetching.

When online fsck enables the static key, the sled is replaced with an
unconditional branch to call the hook code.
The switchover is quite expensive (~22000ns) but is paid entirely by the
program that invoked online fsck, and can be amortized if multiple threads
enter online fsck at the same time, or if multiple filesystems are being
checked at the same time.
Changing the branch direction requires taking the CPU hotplug lock, and since
CPU initialization requires memory allocation, online fsck must be careful not
to change a static key while holding any locks or resources that could be
accessed in the memory reclaim paths.
To minimize contention on the CPU hotplug lock, care should be taken not to
enable or disable static keys unnecessarily.

Because static keys are intended to minimize hook overhead for regular
filesystem operations when xfs_scrub is not running, the intended usage
patterns are as follows:

- The hooked part of XFS should declare a static-scoped static key that
  defaults to false (see the sketch after this list).
  The ``DEFINE_STATIC_KEY_FALSE`` macro takes care of this.
  The static key itself should be declared as a ``static`` variable.

- When deciding to invoke code that's only used by scrub, the regular
  filesystem should call the ``static_branch_unlikely`` predicate to avoid the
  scrub-only hook code if the static key is not enabled.

- The regular filesystem should export helper functions that call
  ``static_branch_inc`` to enable and ``static_branch_dec`` to disable the
  static key.
  Wrapper functions make it easy to compile out the relevant code if the kernel
  distributor turns off online fsck at build time.

- Scrub functions wanting to turn on scrub-only XFS functionality should call
  the ``xchk_fsgates_enable`` from the setup function to enable a specific
  hook.
  This must be done before obtaining any resources that are used by memory
  reclaim.
  Callers had better be sure they really need the functionality gated by the
  static key; the ``TRY_HARDER`` flag is useful here.
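
Putting the first three bullets together yields a pattern like the following
sketch.
The hook and key names are made up for illustration, though
``DEFINE_STATIC_KEY_FALSE``, ``static_branch_unlikely``, ``static_branch_inc``,
and ``static_branch_dec`` are the standard kernel jump label interfaces:

.. code-block:: c

        #include <linux/jump_label.h>

        void example_scrub_hook(void *arg);     /* hypothetical hook body */

        /* Defaults to false, so the branch below is a nop sled. */
        static DEFINE_STATIC_KEY_FALSE(example_scrub_hook_key);

        /* Hot path: call the hook only when a scrubber has enabled it. */
        static inline void example_hook(void *arg)
        {
                if (static_branch_unlikely(&example_scrub_hook_key))
                        example_scrub_hook(arg);
        }

        /* Wrappers exported to scrub for flipping the gate. */
        void example_hook_enable(void)
        {
                static_branch_inc(&example_scrub_hook_key);
        }

        void example_hook_disable(void)
        {
                static_branch_dec(&example_scrub_hook_key);
        }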

Online scrub has resource acquisition helpers (e.g. ``xchk_perag_lock``) to
handle locking AGI and AGF buffers for all scrubber functions.
If it detects a conflict between scrub and the running transactions, it will
try to wait for intents to complete.
If the caller of the helper has not enabled the static key, the helper will
return -EDEADLOCK, which should result in the scrub being restarted with the
``TRY_HARDER`` flag set.
The scrub setup function should detect that flag, enable the static key, and
try the scrub again.
Scrub teardown disables all static keys obtained by ``xchk_fsgates_enable``.
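
The retry protocol might be sketched as follows, with hypothetical function
and flag names standing in for the scrub dispatch code:

.. code-block:: c

        #include <errno.h>

        #define TRY_HARDER      (1U << 0)       /* hypothetical flag bit */

        struct scrub_context {
                unsigned int    flags;
        };

        /* Hypothetical setup/check/teardown hooks. */
        int example_scrub_setup(struct scrub_context *sc);
        int example_scrub_metadata(struct scrub_context *sc);
        void example_scrub_teardown(struct scrub_context *sc);

        /* Restart with TRY_HARDER if a needed static key was not enabled. */
        int example_run_scrubber(struct scrub_context *sc)
        {
                int error;

        retry:
                error = example_scrub_setup(sc);
                if (!error)
                        error = example_scrub_metadata(sc);
                example_scrub_teardown(sc);

                if (error == -EDEADLOCK && !(sc->flags & TRY_HARDER)) {
                        sc->flags |= TRY_HARDER;
                        goto retry;
                }
                return error;
        }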

For more information, please see the kernel documentation of
Documentation/staging/static-keys.rst.

.. _xfile:

Pageable Kernel Memory
----------------------

Some online checking functions work by scanning the filesystem to build a
shadow copy of an ondisk metadata structure in memory and comparing the two
copies.
For online repair to rebuild a metadata structure, it must compute the record
set that will be stored in the new structure before it can persist that new
structure to disk.
Ideally, repairs complete with a single atomic commit that introduces
a new data structure.
To meet these goals, the kernel needs to collect a large amount of information
in a place that doesn't require the correct operation of the filesystem.

Kernel memory isn't suitable because:

* Allocating a contiguous region of memory to create a C array is very
  difficult, especially on 32-bit systems.

* Linked lists of records introduce double pointer overhead which is very high
  and eliminate the possibility of indexed lookups.

* Kernel memory is pinned, which can drive the system into OOM conditions.

* The system might not have sufficient memory to stage all the information.

At any given time, online fsck does not need to keep the entire record set in
memory, which means that individual records can be paged out if necessary.
Continued development of online fsck demonstrated that the ability to perform
indexed data storage would also be very useful.
Fortunately, the Linux kernel already has a facility for byte-addressable and
pageable storage: tmpfs.
In-kernel graphics drivers (most notably i915) take advantage of tmpfs files
to store intermediate data that doesn't need to be in memory at all times, so
that usage precedent is already established.
Hence, the ``xfile`` was born!

+--------------------------------------------------------------------------+
| **Historical Sidebar**:                                                  |
+--------------------------------------------------------------------------+
| The first edition of online repair inserted records into a new btree as  |
| it found them, which failed because the filesystem could shut down with  |
| a half-built data structure that would be live after recovery finished.  |
|                                                                          |
| The second edition solved the half-rebuilt structure problem by storing  |
| everything in memory, but frequently ran the system out of memory.       |
|                                                                          |
| The third edition solved the OOM problem by using linked lists, but the  |
| memory overhead of the list pointers was extreme.                        |
+--------------------------------------------------------------------------+

xfile Access Models
```````````````````

A survey of the intended uses of xfiles suggested these use cases:

1. Arrays of fixed-sized records (space management btrees, directory and
   extended attribute entries)

2. Sparse arrays of fixed-sized records (quotas and link counts)

3. Large binary objects (BLOBs) of variable sizes (directory and extended
   attribute names and values)

4. Staging btrees in memory (reverse mapping btrees)

5. Arbitrary contents (realtime space management)

To support the first four use cases, high level data structures wrap the xfile
to share functionality between online fsck functions.
The rest of this section discusses the interfaces that the xfile presents to
four of those five higher level data structures.
The fifth use case is discussed in the :ref:`realtime summary <rtsummary>` case
study.

XFS is very record-based, which suggests that the ability to load and store
complete records is important.
To support these cases, a pair of ``xfile_load`` and ``xfile_store`` functions
are provided to read and persist objects into an xfile; these treat any error
as an out of memory error.
For online repair, squashing error conditions in this manner is an acceptable
behavior because the only reaction is to abort the operation back to userspace.
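
A sketch of the intended idiom follows, assuming that both functions take a
buffer, a length, and a position, and return 0 or a negative errno; the record
type here is hypothetical, and the authoritative declarations live in the
xfile header:

.. code-block:: c

        #include <stddef.h>
        #include <stdint.h>

        struct xfile;   /* the xfile object described above */

        /* Assumed shape of the load/store interface. */
        int xfile_load(struct xfile *xf, void *buf, size_t count, long long pos);
        int xfile_store(struct xfile *xf, const void *buf, size_t count,
                        long long pos);

        /* A hypothetical fixed-size shadow record staged during repair. */
        struct shadow_rec {
                uint64_t        startblock;
                uint64_t        blockcount;
        };

        /* Persist a record at the offset computed from its index. */
        static int shadow_rec_store(struct xfile *xf, uint64_t index,
                                    const struct shadow_rec *rec)
        {
                return xfile_store(xf, rec, sizeof(*rec),
                                   index * sizeof(struct shadow_rec));
        }

        /* Read it back; any failure is reported as an ENOMEM-style error. */
        static int shadow_rec_load(struct xfile *xf, uint64_t index,
                                   struct shadow_rec *rec)
        {
                return xfile_load(xf, rec, sizeof(*rec),
                                  index * sizeof(struct shadow_rec));
        }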

However, no discussion of file access idioms is complete without answering the
question, "But what about mmap?"
It is convenient to access storage directly with pointers, just like userspace
code does with regular memory.
Online fsck must not drive the system into OOM conditions, which means that
xfiles must be responsive to memory reclamation.
tmpfs can only push a pagecache folio to the swap cache if the folio is neither
pinned nor locked, which means the xfile must not pin too many folios.

Short term direct access to xfile contents is done by locking the pagecache
folio and mapping it into kernel address space.
Programmatic access (e.g. pread and pwrite) uses this mechanism.
Folio locks are not supposed to be held for long periods of time, so long
term direct access to xfile contents is done by bumping the folio refcount,
mapping it into kernel address space, and dropping the folio lock.
These long term users *must* be responsive to memory reclaim by hooking into
the shrinker infrastructure to know when to release folios.

The ``xfile_get_folio`` and ``xfile_put_folio`` functions are provided to
retrieve the (locked) folio that backs part of an xfile and to release it.
The only users of these folio lease functions are the xfarray
:ref:`sorting<xfarray_sort>` algorithms and the :ref:`in-memory
btrees<xfbtree>`.

xfile Access Coordination
`````````````````````````

For security reasons, xfiles must be owned privately by the kernel.
They are marked ``S_PRIVATE`` to prevent interference from the security system,
must never be mapped into process file descriptor tables, and their pages must
never be mapped into userspace processes.

To avoid locking recursion issues with the VFS, all accesses to the shmfs file
are performed by manipulating the page cache directly.
xfile writers call the ``->write_begin`` and ``->write_end`` functions of the
xfile's address space to grab writable pages, copy the caller's buffer into the
page, and release the pages.
xfile readers call ``shmem_read_mapping_page_gfp`` to grab pages directly
before copying the contents into the caller's buffer.
In other words, xfiles ignore the VFS read and write code paths to avoid
having to create a dummy ``struct kiocb`` and to avoid taking inode and
freeze locks.
tmpfs cannot be frozen, and xfiles must not be exposed to userspace.

If an xfile is shared between threads to stage repairs, the caller must provide
its own locks to coordinate access.
For example, if a scrub function stores scan results in an xfile and needs
other threads to provide updates to the scanned data, the scrub function must
provide a lock for all threads to share.

.. _xfarray:

Arrays of Fixed-Sized Records
`````````````````````````````

In XFS, each type of indexed space metadata (free space, inodes, reference
counts, file fork space, and reverse mappings) consists of a set of fixed-size
records indexed with a classic B+ tree.
Directories have a set of fixed-size dirent records that point to the names,
and extended attributes have a set of fixed-size attribute keys that point to
names and values.
Quota counters and file link counters index records with numbers.
During a repair, scrub needs to stage new records during the gathering step and
retrieve them during the btree building step.

.. _xfarray:

Arrays of Fixed-Sized Records
`````````````````````````````

In XFS, each type of indexed space metadata (free space, inodes, reference
counts, file fork space, and reverse mappings) consists of a set of fixed-size
records indexed with a classic B+ tree.
Directories have a set of fixed-size dirent records that point to the names,
and extended attributes have a set of fixed-size attribute keys that point to
names and values.
Quota counters and file link counters index records with numbers.
During a repair, scrub needs to stage new records during the gathering step and
retrieve them during the btree building step.

Although this requirement can be satisfied by calling the read and write
methods of the xfile directly, it is simpler for callers if a higher level
abstraction takes care of computing array offsets, provides iterator functions,
and deals with sparse records and sorting.
The ``xfarray`` abstraction presents a linear array for fixed-size records atop
the byte-accessible xfile.

.. _xfarray_access_patterns:

Array Access Patterns
^^^^^^^^^^^^^^^^^^^^^

Array access patterns in online fsck tend to fall into three categories.
Iteration of records is assumed to be necessary for all cases and will be
covered in the next section.

The first type of caller handles records that are indexed by position.
Gaps may exist between records, and a record may be updated multiple times
during the collection step.
In other words, these callers want a sparse linearly addressed table file.
The typical use cases are quota records or file link count records.
Access to array elements is performed programmatically via ``xfarray_load`` and
``xfarray_store`` functions, which wrap the similarly-named xfile functions to
provide loading and storing of array elements at arbitrary array indices.
Gaps are defined to be null records, and null records are defined to be a
sequence of all zero bytes.
Null records are detected by calling ``xfarray_element_is_null``.
They are created either by calling ``xfarray_unset`` to null out an existing
record or by never storing anything to an array index.
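
For illustration, a sketch of this first access pattern follows; the record
layout and helper are invented, and the error handling (e.g. for indices beyond
the end of the array) is simplified:

.. code-block:: c

	/* Invented record: the staged link count for one inode. */
	struct xrep_nlink_rec {
		__u32			links;
	};

	/* Bump the staged link count for inumber "ino". */
	static int
	xrep_nlinks_bump(struct xfarray *array, xfs_ino_t ino)
	{
		struct xrep_nlink_rec	rec;
		int			error;

		/* A gap is assumed to load as a null record of zeroes. */
		error = xfarray_load(array, ino, &rec);
		if (error)
			return error;

		rec.links++;
		return xfarray_store(array, ino, &rec);
	}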

The second type of caller handles records that are not indexed by position
and do not require multiple updates to a record.
The typical use case here is rebuilding space btrees and key/value btrees.
These callers can add records to the array without caring about array indices
via the ``xfarray_append`` function, which stores a record at the end of the
array.
For callers that require records to be presentable in a specific order (e.g.
rebuilding btree data), the ``xfarray_sort`` function can arrange the sorted
records; this function will be covered later.

The third type of caller is a bag, which is useful for counting records.
The typical use case here is constructing space extent reference counts from
reverse mapping information.
Records can be put in the bag in any order and removed at any time; uniqueness
of records is left to callers.
The ``xfarray_store_anywhere`` function is used to insert a record in any
null record slot in the bag, and the ``xfarray_unset`` function removes a
record from the bag.

The proposed patchset is the
`big in-memory array
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=big-array>`_.

Iterating Array Elements
^^^^^^^^^^^^^^^^^^^^^^^^

Most users of the xfarray require the ability to iterate the records stored in
the array.
Callers can probe every possible array index with the following:

.. code-block:: c

	xfarray_idx_t i;
	foreach_xfarray_idx(array, i) {
	    xfarray_load(array, i, &rec);

	    /* do something with rec */
	}

All users of this idiom must be prepared to handle null records or must already
know that there aren't any.

For xfarray users that want to iterate a sparse array, the ``xfarray_iter``
function ignores indices in the xfarray that have never been written to by
calling ``xfile_seek_data`` (which internally uses ``SEEK_DATA``) to skip areas
of the array that are not populated with memory pages.
Once it finds a page, it will skip the zeroed areas of the page.

.. code-block:: c

	xfarray_idx_t i = XFARRAY_CURSOR_INIT;
	while ((ret = xfarray_iter(array, &i, &rec)) == 1) {
	    /* do something with rec */
	}

.. _xfarray_sort:

Sorting Array Elements
^^^^^^^^^^^^^^^^^^^^^^

During the fourth demonstration of online repair, a community reviewer remarked
that for performance reasons, online repair ought to load batches of records
into btree record blocks instead of inserting records into a new btree one at a
time.
The btree insertion code in XFS is responsible for maintaining correct ordering
of the records, so naturally the xfarray must also support sorting the record
set prior to bulk loading.

Case Study: Sorting xfarrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The sorting algorithm used in the xfarray is actually a combination of adaptive
quicksort and a heapsort subalgorithm in the spirit of
`Sedgewick <https://algs4.cs.princeton.edu/23quicksort/>`_ and
`pdqsort <https://github.com/orlp/pdqsort>`_, with customizations for the Linux
kernel.
To sort records in a reasonably short amount of time, ``xfarray`` takes
advantage of the binary subpartitioning offered by quicksort, but it also uses
heapsort to hedge against performance collapse if the chosen quicksort pivots
are poor.
Both algorithms are (in general) O(n * lg(n)), but there is a wide performance
gulf between the two implementations.

The Linux kernel already contains a reasonably fast implementation of heapsort.
It only operates on regular C arrays, which limits the scope of its usefulness.
There are two key places where the xfarray uses it:

* Sorting any record subset backed by a single xfile page.

* Loading a small number of xfarray records from potentially disparate parts
  of the xfarray into a memory buffer, and sorting the buffer.

In other words, ``xfarray`` uses heapsort to constrain the nested recursion of
quicksort, thereby mitigating quicksort's worst runtime behavior.
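
The kernel heapsort mentioned above is the ``sort()`` helper from
``<linux/sort.h>``.
A sketch of the second usage follows, with an invented record type and
comparator:

.. code-block:: c

	#include <linux/sort.h>

	/* Invented fixed-size record type for these examples. */
	struct xrep_rec {
		__u64			key;
		/* ... payload ... */
	};

	static int
	xrep_rec_cmp(const void *a, const void *b)
	{
		const struct xrep_rec	*ra = a;
		const struct xrep_rec	*rb = b;

		if (ra->key < rb->key)
			return -1;
		if (ra->key > rb->key)
			return 1;
		return 0;
	}

	/* Heapsort a scratch buffer of records loaded from the xfarray. */
	static void
	xrep_sort_scratch(struct xrep_rec *recs, size_t nr)
	{
		sort(recs, nr, sizeof(struct xrep_rec), xrep_rec_cmp, NULL);
	}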

Choosing a quicksort pivot is a tricky business.
A good pivot splits the set to sort in half, leading to the divide and conquer
behavior that is crucial to O(n * lg(n)) performance.
A poor pivot barely splits the subset at all, leading to O(n\ :sup:`2`)
runtime.
The xfarray sort routine tries to avoid picking a bad pivot by sampling nine
records into a memory buffer and using the kernel heapsort to identify the
median of the nine.

Most modern quicksort implementations employ Tukey's "ninther" to select a
pivot from a classic C array.
Typical ninther implementations pick three unique triads of records, sort each
of the triads, and then sort the middle value of each triad to determine the
ninther value.
As stated previously, however, xfile accesses are not entirely cheap.
It turned out to be much more performant to read the nine elements into a
memory buffer, run the kernel's in-memory heapsort on the buffer, and choose
the 4th element (the median) of that buffer as the pivot.
Tukey's ninthers are described in J. W. Tukey, `The ninther, a technique for
low-effort robust (resistant) location in large samples`, in *Contributions to
Survey Sampling and Applied Statistics*, edited by H. David, (Academic Press,
1978), pp. 251–257.

The partitioning of quicksort is fairly textbook -- rearrange the record
subset around the pivot, then set up the current and next stack frames to
sort with the larger and the smaller halves of the pivot, respectively.
This keeps the stack space requirements to log2(record count).

As a final performance optimization, the hi and lo scanning phase of quicksort
keeps examined xfile pages mapped in the kernel for as long as possible to
reduce map/unmap cycles.
Surprisingly, this reduces overall sort runtime by nearly half again after
accounting for the application of heapsort directly onto xfile pages.
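
Reusing the invented record type and comparator from the previous sketch, the
pivot selection described above reduces to a few lines (indexing is zero-based,
so element 4 of nine is the median):

.. code-block:: c

	/*
	 * Pick a quicksort pivot: heapsort nine records sampled from the
	 * xfarray, then take the median.  The caller has already loaded
	 * scratch[0..8] from (possibly disparate) xfarray indices.
	 */
	static const struct xrep_rec *
	xrep_pick_pivot(struct xrep_rec *scratch)
	{
		sort(scratch, 9, sizeof(struct xrep_rec), xrep_rec_cmp, NULL);
		return &scratch[4];	/* the median of the nine samples */
	}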

.. _xfblob:

Blob Storage
````````````

Extended attributes and directories add an additional requirement for staging
records: arbitrary byte sequences of finite length.
Each directory entry record needs to store the entry name,
and each extended attribute needs to store both the attribute name and value.
The names, keys, and values can consume a large amount of memory, so the
``xfblob`` abstraction was created to simplify management of these blobs
atop an xfile.

Blob arrays provide ``xfblob_load`` and ``xfblob_store`` functions to retrieve
and persist objects.
The store function returns a magic cookie for every object that it persists.
Later, callers provide this cookie to ``xfblob_load`` to recall the object.
The ``xfblob_free`` function frees a specific blob, and the ``xfblob_truncate``
function frees them all because compaction is not needed.

The details of repairing directories and extended attributes will be discussed
in a subsequent section about atomic extent swapping.
However, it should be noted that these repair functions only use blob storage
to cache a small number of entries before adding them to a temporary ondisk
file, which is why compaction is not required.
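
A sketch of the cookie idiom follows; the ``xfblob_cookie`` type and the exact
argument order are assumptions for illustration:

.. code-block:: c

	/* Stage a name and remember the cookie that recalls it. */
	static int
	xrep_stage_name(struct xfblob *blobs, const char *name,
			uint32_t namelen, xfblob_cookie *cookie)
	{
		/* The cookie is the only handle to the stored blob. */
		return xfblob_store(blobs, cookie, name, namelen);
	}

	/* Later, load the name back for insertion into the temp file. */
	static int
	xrep_recall_name(struct xfblob *blobs, xfblob_cookie cookie,
			 char *buf, uint32_t buflen)
	{
		return xfblob_load(blobs, cookie, buf, buflen);
	}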

The proposed patchset is at the start of the
`extended attribute repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-xattrs>`_ series.

.. _xfbtree:

In-Memory B+Trees
`````````````````

The chapter about :ref:`secondary metadata<secondary_metadata>` mentioned that
checking and repairing of secondary metadata commonly requires coordination
between a live metadata scan of the filesystem and writer threads that are
updating that metadata.
Keeping the scan data up to date requires the ability to propagate metadata
updates from the filesystem into the data being collected by the scan.
This *can* be done by appending concurrent updates into a separate log file and
applying them before writing the new metadata to disk, but this leads to
unbounded memory consumption if the rest of the system is very busy.
Another option is to skip the side-log and commit live updates from the
filesystem directly into the scan data, which trades more overhead for a lower
maximum memory requirement.
In both cases, the data structure holding the scan results must support indexed
access to perform well.

Given that indexed lookups of scan data are required for both strategies,
online fsck employs the second strategy of committing live updates directly
into scan data.
Because xfarrays are not indexed and do not enforce record ordering, they
are not suitable for this task.
Conveniently, however, XFS has a library to create and maintain ordered reverse
mapping records: the existing rmap btree code!
If only there were a means to create one in memory.

Recall that the :ref:`xfile <xfile>` abstraction represents memory pages as a
regular file, which means that the kernel can create byte or block addressable
virtual address spaces at will.
The XFS buffer cache specializes in abstracting IO to block-oriented address
spaces, which means that adaptation of the buffer cache to interface with
xfiles enables reuse of the entire btree library.
Btrees built atop an xfile are collectively known as ``xfbtrees``.
The next few sections describe how they actually work.

The proposed patchset is the
`in-memory btree
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=in-memory-btrees>`_
series.

Using xfiles as a Buffer Cache Target
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Two modifications are necessary to support xfiles as a buffer cache target.
The first is to make it possible for the ``struct xfs_buftarg`` structure to
host the ``struct xfs_buf`` rhashtable, because normally those are held by a
per-AG structure.
The second change is to modify the buffer ``ioapply`` function to "read" cached
pages from the xfile and "write" cached pages back to the xfile.
Multiple access to individual buffers is controlled by the ``xfs_buf`` lock,
since the xfile does not provide any locking on its own.
With this adaptation in place, users of the xfile-backed buffer cache use
exactly the same APIs as users of the disk-backed buffer cache.
The separation between xfile and buffer cache implies higher memory usage since
they do not share pages, but this property could some day enable transactional
updates to an in-memory btree.
Today, however, it simply eliminates the need for new code.

Space Management with an xfbtree
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Space management for an xfile is very simple -- each btree block is one memory
page in size.
These blocks use the same header format as an on-disk btree, but the in-memory
block verifiers ignore the checksums, assuming that xfile memory is no more
corruption-prone than regular DRAM.
Reusing existing code here is more important than absolute memory efficiency.

The very first block of an xfile backing an xfbtree contains a header block.
The header describes the owner, height, and the block number of the root
xfbtree block.

To allocate a btree block, use ``xfile_seek_data`` to find a gap in the file.
If there are no gaps, create one by extending the length of the xfile.
Preallocate space for the block with ``xfile_prealloc``, and hand back the
location.
To free an xfbtree block, use ``xfile_discard`` (which internally uses
``FALLOC_FL_PUNCH_HOLE``) to remove the memory page from the xfile.
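
A sketch of this allocation strategy follows, assuming that the xfile helpers
mirror the semantics of ``lseek`` and ``fallocate``; the real prototypes may
differ:

.. code-block:: c

	/* Find or create a hole to hold one new xfbtree block. */
	static int
	xfbtree_block_alloc(struct xfile *xf, loff_t isize, loff_t *blockoff)
	{
		loff_t			pos = 0;

		/* Skip forward over populated pages to the first gap. */
		while (pos < isize && xfile_seek_data(xf, pos) == pos)
			pos += PAGE_SIZE;

		/*
		 * Reserve memory for the block; if no gap was found, pos is
		 * now beyond isize and the file grows by one block.
		 */
		*blockoff = pos;
		return xfile_prealloc(xf, pos, PAGE_SIZE);
	}

	/* Freeing a block punches out its backing page. */
	static void
	xfbtree_block_free(struct xfile *xf, loff_t blockoff)
	{
		xfile_discard(xf, blockoff, PAGE_SIZE);
	}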

Populating an xfbtree
^^^^^^^^^^^^^^^^^^^^^

An online fsck function that wants to create an xfbtree should proceed as
follows:

1. Call ``xfile_create`` to create an xfile.

2. Call ``xfs_alloc_memory_buftarg`` to create a buffer cache target structure
   pointing to the xfile.

3. Pass the buffer cache target, buffer ops, and other information to
   ``xfbtree_create`` to write an initial tree header and root block to the
   xfile.
   Each btree type should define a wrapper that passes necessary arguments to
   the creation function.
   For example, rmap btrees define ``xfs_rmapbt_mem_create`` to take care of
   all the necessary details for callers.
   A ``struct xfbtree`` object will be returned.

4. Pass the xfbtree object to the btree cursor creation function for the
   btree type.
   Following the example above, ``xfs_rmapbt_mem_cursor`` takes care of this
   for callers.

5. Pass the btree cursor to the regular btree functions to make queries against
   and to update the in-memory btree.
   For example, a btree cursor for an rmap xfbtree can be passed to the
   ``xfs_rmap_*`` functions just like any other btree cursor.
   See the :ref:`next section<xfbtree_commit>` for information on dealing with
   xfbtree updates that are logged to a transaction.

6. When finished, delete the btree cursor, destroy the xfbtree object, free the
   buffer target, and destroy the xfile to release all resources.
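
A condensed sketch of steps 1 through 6 follows.
The prototypes are assumptions -- the steps above name the functions but not
their exact signatures -- and error unwinding is omitted for brevity:

.. code-block:: c

	/* Sketch: build, use, and tear down an in-memory rmap btree. */
	static int
	xrep_rmap_xfbtree_demo(struct xfs_mount *mp, struct xfs_trans *tp)
	{
		struct xfs_rmap_irec	rmap = {
			.rm_owner	= XFS_RMAP_OWN_FS,
			.rm_blockcount	= 1,
		};
		struct xfile		*xfile;
		struct xfs_buftarg	*btp;
		struct xfbtree		*xfbt;
		struct xfs_btree_cur	*cur;
		int			error;

		error = xfile_create("rmap demo", 0, &xfile);	   /* step 1 */
		error = xfs_alloc_memory_buftarg(mp, xfile, &btp); /* step 2 */
		error = xfs_rmapbt_mem_create(mp, btp, &xfbt);	   /* step 3 */

		cur = xfs_rmapbt_mem_cursor(mp, tp, xfbt);	   /* step 4 */
		error = xfs_rmap_map_raw(cur, &rmap);		   /* step 5 */
		xfs_btree_del_cursor(cur, error);

		/* Step 6: teardown, in reverse order of construction. */
		xfbtree_destroy(xfbt);
		xfs_free_buftarg(btp);
		xfile_destroy(xfile);
		return error;
	}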

.. _xfbtree_commit:

Committing Logged xfbtree Buffers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Although it is a clever hack to reuse the rmap btree code to handle the staging
structure, the ephemeral nature of the in-memory btree block storage presents
some challenges of its own.
The XFS transaction manager must not commit buffer log items for buffers backed
by an xfile because the log format does not understand updates for devices
other than the data device.
An ephemeral xfbtree probably will not exist by the time the AIL checkpoints
log transactions back into the filesystem, and certainly won't exist during
log recovery.
For these reasons, any code updating an xfbtree in transaction context must
remove the buffer log items from the transaction and write the updates into the
backing xfile before committing or cancelling the transaction.

The ``xfbtree_trans_commit`` and ``xfbtree_trans_cancel`` functions implement
this functionality as follows:

1. Find each buffer log item whose buffer targets the xfile.

2. Record the dirty/ordered status of the log item.

3. Detach the log item from the buffer.

4. Queue the buffer to a special delwri list.

5. Clear the transaction dirty flag if the only dirty log items were the ones
   that were detached in step 3.

6. Submit the delwri list to commit the changes to the xfile, if the updates
   are being committed.

After removing xfile logged buffers from the transaction in this manner, the
transaction can be committed or cancelled.
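
A sketch of the filtering in steps 1 through 4 follows; the field and flag
names below exist in XFS, but this walk is a simplification for illustration,
not the actual implementation:

.. code-block:: c

	/* Sketch: detach xfile-backed buffer log items from a transaction. */
	static void
	xfbtree_detach_bufs(struct xfs_trans *tp, struct xfs_buftarg *btp,
			    struct list_head *delwri)
	{
		struct xfs_log_item	*lip, *n;

		list_for_each_entry_safe(lip, n, &tp->t_items, li_trans) {
			struct xfs_buf_log_item	*bli;
			struct xfs_buf		*bp;

			if (lip->li_type != XFS_LI_BUF)
				continue;
			bli = container_of(lip, struct xfs_buf_log_item,
					   bli_item);
			bp = bli->bli_buf;
			if (bp->b_target != btp)	/* step 1 */
				continue;

			/* Steps 2-3: note state, detach from transaction. */
			list_del_init(&lip->li_trans);
			bp->b_log_item = NULL;

			/* Step 4: queue for writeback into the xfile. */
			xfs_buf_delwri_queue(bp, delwri);
		}
	}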

Bulk Loading of Ondisk B+Trees
------------------------------

As mentioned previously, early iterations of online repair built new btree
structures by creating a new btree and adding observations individually.
Loading a btree one record at a time had the slight advantage of not requiring
the incore records to be sorted prior to commit, but was very slow and leaked
blocks if the system went down during a repair.
Loading records one at a time also meant that repair could not control the
loading factor of the blocks in the new btree.

Fortunately, the venerable ``xfs_repair`` tool had a more efficient means for
rebuilding a btree index from a collection of records -- bulk btree loading.
This was implemented rather inefficiently code-wise, since ``xfs_repair``
had separate copy-pasted implementations for each btree type.

To prepare for online fsck, each of the four bulk loaders was studied, notes
were taken, and the four were refactored into a single generic btree bulk
loading mechanism.
Those notes in turn have been refreshed and are presented below.

Geometry Computation
````````````````````

The zeroth step of bulk loading is to assemble the entire record set that will
be stored in the new btree, and sort the records.
Next, call ``xfs_btree_bload_compute_geometry`` to compute the shape of the
btree from the record set, the type of btree, and any load factor preferences.
This information is required for resource reservation.

First, the geometry computation computes the minimum and maximum records that
will fit in a leaf block from the size of a btree block and the size of the
block header.
Roughly speaking, the maximum number of records is::

        maxrecs = (block_size - header_size) / record_size

The XFS design specifies that btree blocks should be merged when possible,
which means the minimum number of records is half of maxrecs::

        minrecs = maxrecs / 2

The next variable to determine is the desired loading factor.
This must be at least minrecs and no more than maxrecs.
Choosing minrecs is undesirable because it wastes half the block.
Choosing maxrecs is also undesirable because adding a single record to each
newly rebuilt leaf block will cause a tree split, which causes a noticeable
drop in performance immediately afterwards.
The default loading factor was chosen to be 75% of maxrecs, which provides a
reasonably compact structure without any immediate split penalties::

        default_load_factor = (maxrecs + minrecs) / 2

If space is tight, the loading factor will be set to maxrecs to try to avoid
running out of space::

        leaf_load_factor = enough space ? default_load_factor : maxrecs

Load factor is computed for btree node blocks using the combined size of the
btree key and pointer as the record size::

        maxrecs = (block_size - header_size) / (key_size + ptr_size)
        minrecs = maxrecs / 2
        node_load_factor = enough space ? default_load_factor : maxrecs

Once that's done, the number of leaf blocks required to store the record set
can be computed as::

        leaf_blocks = ceil(record_count / leaf_load_factor)

The number of node blocks needed to point to the next level down in the tree
is computed as::

        n_blocks = (n == 0 ? leaf_blocks : node_blocks[n])
        node_blocks[n + 1] = ceil(n_blocks / node_load_factor)

The entire computation is performed recursively until the current level only
needs one block.
The resulting geometry is as follows:

- For AG-rooted btrees, this level is the root level, so the height of the new
  tree is ``level + 1`` and the space needed is the summation of the number of
  blocks on each level.

- For inode-rooted btrees where the records in the top level do not fit in the
  inode fork area, the height is ``level + 2``, the space needed is the
  summation of the number of blocks on each level, and the inode fork points to
  the root block.

- For inode-rooted btrees where the records in the top level can be stored in
  the inode fork area, then the root block can be stored in the inode, the
  height is ``level + 1``, and the space needed is one less than the summation
  of the number of blocks on each level.
  This only becomes relevant when non-bmap btrees gain the ability to root in
  an inode, which is a future patchset and only included here for completeness.
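
To make the computation concrete, here is a worked example with invented
numbers: 4096-byte blocks, a 56-byte header, 16-byte records, 8 bytes of key
plus pointer, and 10,000 records; all values are illustrative only::

        maxrecs             = (4096 - 56) / 16  = 252
        minrecs             = 252 / 2           = 126
        default_load_factor = (252 + 126) / 2   = 189
        leaf_blocks         = ceil(10000 / 189) = 53
        node_maxrecs        = (4096 - 56) / 8   = 505
        node_load_factor    = (505 + 252) / 2   = 378
        node_blocks[1]      = ceil(53 / 378)    = 1

The level above the leaves fits in a single block, so it becomes the root: an
AG-rooted btree built this way would be two levels high and need 54 blocks.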

.. _newbt:

Reserving New B+Tree Blocks
```````````````````````````

Once repair knows the number of blocks needed for the new btree, it allocates
those blocks using the free space information.
Each reserved extent is tracked separately by the btree builder state data.
To improve crash resilience, the reservation code also logs an Extent Freeing
Intent (EFI) item in the same transaction as each space allocation and attaches
its in-memory ``struct xfs_extent_free_item`` object to the space reservation.
If the system goes down, log recovery will use the unfinished EFIs to free the
unused space, leaving the filesystem unchanged.

Each time the btree builder claims a block for the btree from a reserved
extent, it updates the in-memory reservation to reflect the claimed space.
Block reservation tries to allocate as much contiguous space as possible to
reduce the number of EFIs in play.

While repair is writing these new btree blocks, the EFIs created for the space
reservations pin the tail of the ondisk log.
It's possible that other parts of the system will remain busy and push the head
of the log towards the pinned tail.
To avoid livelocking the filesystem, the EFIs must not pin the tail of the log
for too long.
To alleviate this problem, the dynamic relogging capability of the deferred ops
mechanism is reused here to commit a transaction at the log head containing an
EFD for the old EFI and a new EFI at the head.
This enables the log to release the old EFI to keep the log moving forwards.

EFIs have a role to play during the commit and reaping phases; please see the
next section and the section about :ref:`reaping<reaping>` for more details.

Proposed patchsets are the
`bitmap rework
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-bitmap-rework>`_
and the
`preparation for bulk loading btrees
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-prep-for-bulk-loading>`_.


Writing the New Tree
````````````````````

This part is pretty simple -- the btree builder (``xfs_btree_bload``) claims
a block from the reserved list, writes the new btree block header, fills the
rest of the block with records, and adds the new leaf block to a list of
written blocks::

    ┌────┐
    │leaf│
    │RRR │
    └────┘

Sibling pointers are set every time a new block is added to the level::

    ┌────┐ ┌────┐ ┌────┐ ┌────┐
    │leaf│→│leaf│→│leaf│→│leaf│
    │RRR │←│RRR │←│RRR │←│RRR │
    └────┘ └────┘ └────┘ └────┘

When it finishes writing the record leaf blocks, it moves on to the node
blocks.
To fill a node block, it walks each block in the next level down in the tree
to compute the relevant keys and write them into the parent node::

    ┌────┐       ┌────┐
    │node│──────→│node│
    │PP  │←──────│PP  │
    └────┘       └────┘
      ↙   ↘       ↙   ↘
    ┌────┐ ┌────┐ ┌────┐ ┌────┐
    │leaf│→│leaf│→│leaf│→│leaf│
    │RRR │←│RRR │←│RRR │←│RRR │
    └────┘ └────┘ └────┘ └────┘

When it reaches the root level, it is ready to commit the new btree!::

        ┌─────────┐
        │  root   │
        │   PP    │
        └─────────┘
          ↙       ↘
    ┌────┐       ┌────┐
    │node│──────→│node│
    │PP  │←──────│PP  │
    └────┘       └────┘
      ↙   ↘       ↙   ↘
    ┌────┐ ┌────┐ ┌────┐ ┌────┐
    │leaf│→│leaf│→│leaf│→│leaf│
    │RRR │←│RRR │←│RRR │←│RRR │
    └────┘ └────┘ └────┘ └────┘

The first step to commit the new btree is to persist the btree blocks to disk
synchronously.
This is a little complicated because a new btree block could have been freed
in the recent past, so the builder must use ``xfs_buf_delwri_queue_here`` to
remove the (stale) buffer from the AIL list before it can write the new blocks
to disk.
Blocks are queued for IO using a delwri list and written in one large batch
with ``xfs_buf_delwri_submit``.

Once the new blocks have been persisted to disk, control returns to the
individual repair function that called the bulk loader.
The repair function must log the location of the new root in a transaction,
clean up the space reservations that were made for the new btree, and reap the
old metadata blocks:

1. Commit the location of the new btree root.

2. For each incore reservation:

   a. Log Extent Freeing Done (EFD) items for all the space that was consumed
      by the btree builder.  The new EFDs must point to the EFIs attached to
      the reservation to prevent log recovery from freeing the new blocks.

   b. For unclaimed portions of incore reservations, create a regular deferred
      extent free work item to free the unused space later in the
      transaction chain.

   c. The EFDs and EFIs logged in steps 2a and 2b must not overrun the
      reservation of the committing transaction.
      If the btree loading code suspects this might be about to happen, it must
      call ``xrep_defer_finish`` to clear out the deferred work and obtain a
      fresh transaction.

3. Clear out the deferred work a second time to finish the commit and clean
   the repair transaction.

The transaction rolling in steps 2c and 3 represents a weakness in the repair
algorithm, because a log flush and a crash before the end of the reap step can
result in space leaking.
Online repair functions minimize the chances of this occurring by using very
large transactions, which each can accommodate many thousands of block freeing
instructions.
Repair moves on to reaping the old blocks, which will be presented in a
subsequent :ref:`section<reaping>` after a few case studies of bulk loading.
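
Tying the geometry computation to the writing phase, a repair function might
drive the builder roughly like this (a sketch: ``xrep_get_records`` and
``xrep_claim_block`` are invented callbacks, the ``struct xfs_btree_bload``
members shown are assumptions based on the description above, and error
handling is trimmed):

.. code-block:: c

	/*
	 * Sketch: bulk load a new btree from records staged in an xfarray.
	 * The first callback copies sorted records out of the xfarray; the
	 * second hands out blocks from the incore reservations.
	 */
	static int
	xrep_build_new_btree(struct xfs_btree_cur *cur, uint64_t nr_records,
			     void *priv)
	{
		struct xfs_btree_bload	bload = {
			.get_records	= xrep_get_records,
			.claim_block	= xrep_claim_block,
		};
		int			error;

		/* Compute geometry so that blocks can be reserved. */
		error = xfs_btree_bload_compute_geometry(cur, &bload,
				nr_records);
		if (error)
			return error;

		/* ... reserve bload.nr_blocks blocks and log EFIs ... */

		/* Claim blocks, write the new btree, set the new root. */
		return xfs_btree_bload(cur, &bload, priv);
	}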

Case Study: Rebuilding the Inode Index
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The high level process to rebuild the inode index btree is:

1. Walk the reverse mapping records to generate ``struct xfs_inobt_rec``
   records from the inode chunk information and a bitmap of the old inode btree
   blocks.

2. Append the records to an xfarray in inode order.

3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
   of blocks needed for the inode btree.
   If the free space inode btree is enabled, call it again to estimate the
   geometry of the finobt.

4. Allocate the number of blocks computed in the previous step.

5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
   generate the internal node blocks.
   If the free space inode btree is enabled, call it again to load the finobt.

6. Commit the location of the new btree root block(s) to the AGI.

7. Reap the old btree blocks using the bitmap created in step 1.

Details are as follows.

The inode btree maps inumbers to the ondisk location of the associated
inode records, which means that the inode btrees can be rebuilt from the
reverse mapping information.
Reverse mapping records with an owner of ``XFS_RMAP_OWN_INOBT`` mark the
location of the old inode btree blocks.
Each reverse mapping record with an owner of ``XFS_RMAP_OWN_INODES`` marks the
location of at least one inode cluster buffer.
A cluster is the smallest number of ondisk inodes that can be allocated or
freed in a single transaction; it is never smaller than 1 fs block or 4 inodes.

For the space represented by each inode cluster, ensure that there are no
records in the free space btrees nor any records in the reference count btree.
If there are, the space metadata inconsistencies are reason enough to abort the
operation.
Otherwise, read each cluster buffer to check that its contents appear to be
ondisk inodes and to decide if the file is allocated
(``xfs_dinode.i_mode != 0``) or free (``xfs_dinode.i_mode == 0``).
Accumulate the results of successive inode cluster buffer reads until there is
enough information to fill a single inode chunk record, which is 64 consecutive
numbers in the inumber keyspace.
If the chunk is sparse, the chunk record may include holes.

Once the repair function accumulates one chunk's worth of data, it calls
``xfarray_append`` to add the inode btree record to the xfarray.
This xfarray is walked twice during the btree creation step -- once to populate
the inode btree with all inode chunk records, and a second time to populate the
free inode btree with records for chunks that have free non-sparse inodes.
The number of records for the inode btree is the number of xfarray records,
but the record count for the free inode btree has to be computed as inode chunk
records are stored in the xfarray.
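
For instance, the append step might look like the following sketch; the
accumulator structure is invented, and the use of the incore record type is an
assumption:

.. code-block:: c

	/* Invented accumulator for one 64-inode chunk's observations. */
	struct xrep_ibt_chunk {
		xfs_agino_t		agino;		/* first inumber */
		uint64_t		freemask;	/* which are free */
		uint16_t		holemask;	/* sparse holes */
		uint8_t			count;		/* inodes present */
		uint8_t			freecount;	/* free inodes seen */
	};

	/* Flush a fully assembled chunk record to the staging xfarray. */
	static int
	xrep_ibt_stash_chunk(struct xfarray *records,
			     const struct xrep_ibt_chunk *chunk)
	{
		struct xfs_inobt_rec_incore	rec = {
			.ir_startino	= chunk->agino,
			.ir_holemask	= chunk->holemask,
			.ir_count	= chunk->count,
			.ir_freecount	= chunk->freecount,
			.ir_free	= chunk->freemask,
		};

		/* Records arrive in inode order for the later bulk load. */
		return xfarray_append(records, &rec);
	}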

The proposed patchset is the
`AG btree repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-ag-btrees>`_
series.

Case Study: Rebuilding the Space Reference Counts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Reverse mapping records are used to rebuild the reference count information.
Reference counts are required for correct operation of copy on write for shared
file data.
Imagine the reverse mapping entries as rectangles representing extents of
physical blocks, and that the rectangles can be laid down to allow them to
overlap each other.
From the diagram below, it is apparent that a reference count record must start
or end wherever the height of the stack changes.
In other words, the record emission stimulus is level-triggered::

        █    ███
        ██ █████ ████   ███        ██████
       ██ ████ ███████████ ████ █████████
       ████████████████████████████████ ███████████
       ^ ^  ^^ ^^    ^ ^^ ^^^  ^^^^  ^ ^^ ^  ^  ^
       2 1  23 21    3 43 234  2123  1 01 2  3  0

The ondisk reference count btree does not store the refcount == 0 cases because
the free space btree already records which blocks are free.
Extents being used to stage copy-on-write operations should be the only records
with refcount == 1.
Single-owner file blocks aren't recorded in either the free space or the
reference count btrees.

The high level process to rebuild the reference count btree is:

1. Walk the reverse mapping records to generate ``struct xfs_refcount_irec``
   records for any space having more than one reverse mapping and add them to
   the xfarray.
   Any records owned by ``XFS_RMAP_OWN_COW`` are also added to the xfarray
   because these are extents allocated to stage a copy on write operation and
   are tracked in the refcount btree.

   Use any records owned by ``XFS_RMAP_OWN_REFC`` to create a bitmap of old
   refcount btree blocks.

2. Sort the records in physical extent order, putting the CoW staging extents
   at the end of the xfarray.
   This matches the sorting order of records in the refcount btree.

3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
   of blocks needed for the new tree.

4. Allocate the number of blocks computed in the previous step.

5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
   generate the internal node blocks.

6. Commit the location of the new btree root block to the AGF.

7. Reap the old btree blocks using the bitmap created in step 1.

Details are as follows; the same algorithm is used by ``xfs_repair`` to
generate refcount information from reverse mapping records.
- Until the reverse mapping btree runs out of records:

  - Retrieve the next record from the btree and put it in a bag.

  - Collect all records with the same starting block from the btree and put
    them in the bag.

  - While the bag isn't empty:

    - Among the mappings in the bag, compute the lowest block number where the
      reference count changes.
      This position will be either the starting block number of the next
      unprocessed reverse mapping or the next block after the shortest mapping
      in the bag.

    - Remove all mappings from the bag that end at this position.

    - Collect all reverse mappings that start at this position from the btree
      and put them in the bag.

    - If the size of the bag changed and is greater than one, create a new
      refcount record associating the block number range that we just walked
      with the size of the bag.

The bag-like structure in this case is a type 2 xfarray as discussed in the
:ref:`xfarray access patterns<xfarray_access_patterns>` section.
Reverse mappings are added to the bag using ``xfarray_store_anywhere`` and
removed via ``xfarray_unset``.
Bag members are examined through ``xfarray_iter`` loops.
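To make the loop structure concrete, here is a sketch of the emission
algorithm.
The ``bag_*`` and ``rmaps_*`` helpers are hypothetical stand-ins for the type 2
xfarray operations named above, and ``emit_refcount_record`` is likewise
illustrative.

.. code-block:: c

    /* Sketch of the level-triggered refcount record emission loop. */
    while (rmaps_have_records(cur)) {
        struct xfs_rmap_irec rmap = rmaps_next(cur);
        xfs_agblock_t bno = rmap.rm_startblock;

        /* Seed the bag with all mappings sharing this starting block. */
        bag_add(bag, &rmap);
        while (rmaps_peek_start(cur) == bno)
            bag_add(bag, rmaps_next(cur));

        while (!bag_is_empty(bag)) {
            /* Lowest block number where the stack height changes. */
            xfs_agblock_t next = min(rmaps_peek_start(cur),
                                     bag_shortest_end(bag));
            uint64_t height = bag_size(bag);

            bag_remove_ending_at(bag, next);
            while (rmaps_peek_start(cur) == next)
                bag_add(bag, rmaps_next(cur));

            /*
             * The stack was "height" mappings tall from bno to next,
             * so emit a refcount record if that space is shared.
             */
            if (bag_size(bag) != height && height > 1)
                emit_refcount_record(bno, next - bno, height);
            bno = next;
        }
    }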
The proposed patchset is the
`AG btree repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-ag-btrees>`_
series.

Case Study: Rebuilding File Fork Mapping Indices
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The high level process to rebuild a data/attr fork mapping btree is:

1. Walk the reverse mapping records to generate ``struct xfs_bmbt_rec``
   records from the reverse mapping records for that inode and fork.
   Append these records to an xfarray.
   Compute the bitmap of the old bmap btree blocks from the ``BMBT_BLOCK``
   records.

2. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
   of blocks needed for the new tree.

3. Sort the records in file offset order.

4. If the extent records would fit in the inode fork immediate area, commit the
   records to that immediate area and skip to step 8.

5. Allocate the number of blocks computed in step 2.

6. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
   generate the internal node blocks.

7. Commit the new btree root block to the inode fork immediate area.

8. Reap the old btree blocks using the bitmap created in step 1.

There are some complications here:
First, it's possible to move the fork offset to adjust the sizes of the
immediate areas if the data and attr forks are not both in BMBT format.
Second, if there are sufficiently few fork mappings, it may be possible to use
EXTENTS format instead of BMBT, which may require a conversion.
Third, the incore extent map must be reloaded carefully to avoid disturbing
any delayed allocation extents.

The proposed patchset is the
`file mapping repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-file-mappings>`_
series.

.. _reaping:

Reaping Old Metadata Blocks
---------------------------

Whenever online fsck builds a new data structure to replace one that is
suspect, there is a question of how to find and dispose of the blocks that
belonged to the old structure.
The laziest method of course is not to deal with them at all, but this slowly
leads to service degradations as space leaks out of the filesystem.
Hopefully, someone will schedule a rebuild of the free space information to
plug all those leaks.
Offline repair rebuilds all space metadata after recording the usage of
the files and directories that it decides not to clear, hence it can build new
structures in the discovered free space and avoid the question of reaping.
As part of a repair, online fsck relies heavily on the reverse mapping records
to find space that is owned by the corresponding rmap owner yet truly free.
Cross referencing rmap records with other rmap records is necessary because
there may be other data structures that also think they own some of those
blocks (e.g. crosslinked trees).
Permitting the block allocator to hand them out again will not push the system
towards consistency.

For space metadata, the process of finding extents to dispose of generally
follows this format:

1. Create a bitmap of space used by data structures that must be preserved.
   The space reservations used to create the new metadata can be used here if
   the same rmap owner code is used to denote all of the objects being rebuilt.

2. Survey the reverse mapping data to create a bitmap of space owned by the
   same ``XFS_RMAP_OWN_*`` number for the metadata that is being preserved.

3. Use the bitmap disunion operator to subtract (1) from (2).
   The remaining set bits represent candidate extents that could be freed.
   The process moves on to step 4 below.

Repairs for file-based metadata such as extended attributes, directories,
symbolic links, quota files and realtime bitmaps are performed by building a
new structure attached to a temporary file and swapping the forks.
Afterward, the mappings in the old file fork are the candidate blocks for
disposal.

The process for disposing of old extents is as follows:

4. For each candidate extent, count the number of reverse mapping records for
   the first block in that extent that do not have the same rmap owner for the
   data structure being repaired.

   - If zero, the block has a single owner and can be freed.

   - If not, the block is part of a crosslinked structure and must not be
     freed.
5. Starting with the next block in the extent, figure out how many more blocks
   have the same zero/nonzero other owner status as that first block.

6. If the region is crosslinked, delete the reverse mapping entry for the
   structure being repaired and move on to the next region.

7. If the region is to be freed, mark any corresponding buffers in the buffer
   cache as stale to prevent log writeback.

8. Free the region and move on.

However, there is one complication to this procedure.
Transactions are of finite size, so the reaping process must be careful to roll
the transactions to avoid overruns.
Overruns come from two sources:

a. EFIs logged on behalf of space that is no longer occupied

b. Log items for buffer invalidations

This is also a window in which a crash during the reaping process can leak
blocks.
As stated earlier, online repair functions use very large transactions to
minimize the chances of this occurring.

The proposed patchset is the
`preparation for bulk loading btrees
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-prep-for-bulk-loading>`_
series.

Case Study: Reaping After a Regular Btree Repair
````````````````````````````````````````````````

Old reference count and inode btrees are the easiest to reap because they have
rmap records with special owner codes: ``XFS_RMAP_OWN_REFC`` for the refcount
btree, and ``XFS_RMAP_OWN_INOBT`` for the inode and free inode btrees.
Creating a list of extents to reap the old btree blocks is quite simple,
conceptually:

1. Lock the relevant AGI/AGF header buffers to prevent allocation and frees.

2. For each reverse mapping record with an rmap owner corresponding to the
   metadata structure being rebuilt, set the corresponding range in a bitmap.

3. Walk the current data structures that have the same rmap owner.
   For each block visited, clear that range in the above bitmap.

4. Each set bit in the bitmap represents a block that could be a block from the
   old data structures and hence is a candidate for reaping.
   In other words, ``(rmap_records_owned_by & ~blocks_reachable_by_walk)``
   are the blocks that might be freeable, as the sketch below illustrates.
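Expressed with the scrub bitmap helpers, the computation might look like the
sketch below, which assumes that the ``xbitmap`` functions behave as their
names suggest.
The iteration helpers and ``reap_extents`` are hypothetical, the inobt owner
is chosen purely for illustration, and error handling is elided.

.. code-block:: c

    /*
     * Sketch: compute (rmap_records_owned_by & ~blocks_reachable_by_walk)
     * for an inode btree repair.
     */
    struct xbitmap owned;      /* step 2: all space with this rmap owner */
    struct xbitmap reachable;  /* step 3: blocks visited during the walk */

    xbitmap_init(&owned);
    xbitmap_init(&reachable);

    for_each_rmap_with_owner(sc, XFS_RMAP_OWN_INOBT, rec)
        xbitmap_set(&owned, rec->rm_startblock, rec->rm_blockcount);

    for_each_inobt_block(sc, agbno)
        xbitmap_set(&reachable, agbno, 1);

    /* Subtract the reachable blocks; the leftovers might be freeable. */
    xbitmap_disunion(&owned, &reachable);
    reap_extents(sc, &owned);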
If it is possible to maintain the AGF lock throughout the repair (which is the
common case), then step 2 can be performed at the same time as the reverse
mapping record walk that creates the records for the new btree.

Case Study: Rebuilding the Free Space Indices
`````````````````````````````````````````````

The high level process to rebuild the free space indices is:

1. Walk the reverse mapping records to generate ``struct xfs_alloc_rec_incore``
   records from the gaps in the reverse mapping btree.

2. Append the records to an xfarray.

3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
   of blocks needed for each new tree.

4. Allocate the number of blocks computed in the previous step from the free
   space information collected.

5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
   generate the internal node blocks for the free space by length index.
   Call it again for the free space by block number index.

6. Commit the locations of the new btree root blocks to the AGF.

7. Reap the old btree blocks by looking for space that is not recorded by the
   reverse mapping btree, the new free space btrees, or the AGFL.

Repairing the free space btrees has three key complications over a regular
btree repair:

First, free space is not explicitly tracked in the reverse mapping records.
Hence, the new free space records must be inferred from gaps in the physical
space component of the keyspace of the reverse mapping btree.
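A sketch of that gap inference follows; the idea is to emit a free space
record for every hole between the end of one reverse mapping and the start of
the next.
The ``for_each_rmap_record`` iteration helper and the ``free_records``
xfarray handle are hypothetical, and error handling and the final gap at the
end of the AG are elided.

.. code-block:: c

    /* Sketch: synthesize free space records from rmap keyspace gaps. */
    xfs_agblock_t next_agbno = 0;
    int error;

    for_each_rmap_record(sc, rec) {
        if (rec->rm_startblock > next_agbno) {
            /* The range [next_agbno, rm_startblock) is unclaimed. */
            struct xfs_alloc_rec_incore arec = {
                .ar_startblock = next_agbno,
                .ar_blockcount = rec->rm_startblock - next_agbno,
            };

            error = xfarray_append(free_records, &arec);
        }

        /* Mappings can overlap, so the cursor only moves forward. */
        next_agbno = max(next_agbno,
                         rec->rm_startblock + rec->rm_blockcount);
    }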
Second, free space repairs cannot use the common btree reservation code because
new blocks are reserved out of the free space btrees.
This is impossible when repairing the free space btrees themselves.
However, repair holds the AGF buffer lock for the duration of the free space
index reconstruction, so it can use the collected free space information to
supply the blocks for the new free space btrees.
It is not necessary to back each reserved extent with an EFI because the new
free space btrees are constructed in what the ondisk filesystem thinks is
unowned space.
However, if reserving blocks for the new btrees from the collected free space
information changes the number of free space records, repair must re-estimate
the new free space btree geometry with the new record count until the
reservation is sufficient.
As part of committing the new btrees, repair must ensure that reverse mappings
are created for the reserved blocks and that unused reserved blocks are
inserted into the free space btrees.
Deferred rmap and freeing operations are used to ensure that this transition
is atomic, similar to the other btree repair functions.

Third, finding the blocks to reap after the repair is not overly
straightforward.
Blocks for the free space btrees and the reverse mapping btrees are supplied by
the AGFL.
Blocks put onto the AGFL have reverse mapping records with the owner
``XFS_RMAP_OWN_AG``.
This ownership is retained when blocks move from the AGFL into the free space
btrees or the reverse mapping btrees.
When repair walks reverse mapping records to synthesize free space records, it
creates a bitmap (``ag_owner_bitmap``) of all the space claimed by
``XFS_RMAP_OWN_AG`` records.
The repair context maintains a second bitmap corresponding to the rmap btree
blocks and the AGFL blocks (``rmap_agfl_bitmap``).
When the walk is complete, the bitmap disunion operation ``(ag_owner_bitmap &
~rmap_agfl_bitmap)`` computes the extents that are used by the old free space
btrees.
These blocks can then be reaped using the methods outlined above.

The proposed patchset is the
`AG btree repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-ag-btrees>`_
series.

.. _rmap_reap:

Case Study: Reaping After Repairing Reverse Mapping Btrees
``````````````````````````````````````````````````````````

Old reverse mapping btrees are less difficult to reap after a repair.
As mentioned in the previous section, blocks on the AGFL, the two free space
btree blocks, and the reverse mapping btree blocks all have reverse mapping
records with ``XFS_RMAP_OWN_AG`` as the owner.
The full process of gathering reverse mapping records and building a new btree
is described in the case study of
:ref:`live rebuilds of rmap data <rmap_repair>`, but a crucial point from that
discussion is that the new rmap btree will not contain any records for the old
rmap btree, nor will the old btree blocks be tracked in the free space btrees.
The list of candidate reaping blocks is computed by setting the bits
corresponding to the gaps in the new rmap btree records, and then clearing the
bits corresponding to extents in the free space btrees and the current AGFL
blocks.
The resulting extents, ``(new_rmapbt_gaps & ~(agfl | bnobt_records))``, are
reaped using the methods outlined above.

The rest of the process of rebuilding the reverse mapping btree is discussed
in a separate :ref:`case study<rmap_repair>`.

The proposed patchset is the
`AG btree repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-ag-btrees>`_
series.
Case Study: Rebuilding the AGFL
```````````````````````````````

The allocation group free block list (AGFL) is repaired as follows:

1. Create a bitmap for all the space that the reverse mapping data claims is
   owned by ``XFS_RMAP_OWN_AG``.

2. Subtract the space used by the two free space btrees and the rmap btree.

3. Subtract any space that the reverse mapping data claims is owned by any
   other owner, to avoid re-adding crosslinked blocks to the AGFL.

4. Once the AGFL is full, reap any leftover blocks.

5. The next operation to fix the freelist will right-size the list.

See `fs/xfs/scrub/agheader_repair.c <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/xfs/scrub/agheader_repair.c>`_ for more details.

Inode Record Repairs
--------------------

Inode records must be handled carefully, because they have both ondisk records
("dinodes") and an in-memory ("cached") representation.
There is a very high potential for cache coherency issues if online fsck is not
careful to access the ondisk metadata *only* when the ondisk metadata is so
badly damaged that the filesystem cannot load the in-memory representation.
When online fsck wants to open a damaged file for scrubbing, it must use
specialized resource acquisition functions that return either the in-memory
representation *or* a lock on whichever object is necessary to prevent any
update to the ondisk location.

The only repairs that should be made to the ondisk inode buffers are whatever
is necessary to get the in-core structure loaded.
This means fixing whatever is caught by the inode cluster buffer and inode fork
verifiers, and retrying the ``iget`` operation.
If the second ``iget`` fails, the repair has failed.

Once the in-memory representation is loaded, repair can lock the inode and can
subject it to comprehensive checks, repairs, and optimizations.
Most inode attributes are easy to check and constrain, or are user-controlled
arbitrary bit patterns; these are both easy to fix.
Dealing with the data and attr fork extent counts and the file block counts is
more complicated, because computing the correct value requires traversing the
forks, or if that fails, leaving the fields invalid and waiting for the fork
fsck functions to run.

The proposed patchset is the
`inode
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-inodes>`_
repair series.

Quota Record Repairs
--------------------

Similar to inodes, quota records ("dquots") also have both ondisk records and
an in-memory representation, and hence are subject to the same cache coherency
issues.
Somewhat confusingly, both are known as dquots in the XFS codebase.

The only repairs that should be made to the ondisk quota record buffers are
whatever is necessary to get the in-core structure loaded.
Once the in-memory representation is loaded, the only attributes needing
checking are obviously bad limits and timer values.

Quota usage counters are checked, repaired, and discussed separately in the
section about :ref:`live quotacheck <quotacheck>`.

The proposed patchset is the
`quota
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-quota>`_
repair series.

.. _fscounters:

Freezing to Fix Summary Counters
--------------------------------

Filesystem summary counters track availability of filesystem resources such
as free blocks, free inodes, and allocated inodes.
This information could be compiled by walking the free space and inode indexes,
but this is a slow process, so XFS maintains a copy in the ondisk superblock
that should reflect the ondisk metadata, at least when the filesystem has been
unmounted cleanly.
For performance reasons, XFS also maintains incore copies of those counters,
which are key to enabling resource reservations for active transactions.
Writer threads reserve the worst-case quantities of resources from the
incore counter and give back whatever they don't use at commit time.
It is therefore only necessary to serialize on the superblock when the
superblock is being committed to disk.

The lazy superblock counter feature introduced in XFS v5 took this even further
by training log recovery to recompute the summary counters from the AG headers,
which eliminated the need for most transactions even to touch the superblock.
The only time XFS commits the summary counters is at filesystem unmount.
To reduce contention even further, the incore counter is implemented as a
percpu counter, which means that each CPU is allocated a batch of blocks from a
global incore counter and can satisfy small allocations from the local batch.

The high-performance nature of the summary counters makes it difficult for
online fsck to check them, since there is no way to quiesce a percpu counter
while the system is running.
Although online fsck can read the filesystem metadata to compute the correct
values of the summary counters, there's no way to hold the value of a percpu
counter stable, so it's quite possible that the counter will be out of date by
the time the walk is complete.
Earlier versions of online scrub would return to userspace with an incomplete
scan flag, but this is not a satisfying outcome for a system administrator.
For repairs, the in-memory counters must be stabilized while walking the
filesystem metadata to get an accurate reading and install it in the percpu
counter.

To satisfy this requirement, online fsck must prevent other programs in the
system from initiating new writes to the filesystem, it must disable background
garbage collection threads, and it must wait for existing writer programs to
exit the kernel.
Once that has been established, scrub can walk the AG free space indexes, the
inode btrees, and the realtime bitmap to compute the correct value of all
four summary counters.
This is very similar to a filesystem freeze, though not all of the pieces are
necessary:

- The final freeze state is set one higher than ``SB_FREEZE_COMPLETE`` to
  prevent other threads from thawing the filesystem, or other scrub threads
  from initiating another fscounters freeze.

- It does not quiesce the log.

With this code in place, it is now possible to pause the filesystem for just
long enough to check and correct the summary counters.
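The following sketch shows how one counter might be corrected under this
special freeze.
The ``xchk_fscounters_freeze``, ``xchk_fscounters_thaw``, and
``compute_free_blocks`` helpers are hypothetical names based on the
description above; the percpu counter calls are the standard kernel
primitives.

.. code-block:: c

    /* Sketch: fix the free block counter while writers are stalled. */
    uint64_t fdblocks;
    int error;

    error = xchk_fscounters_freeze(sc);  /* SB_FREEZE_COMPLETE + 1 */
    if (error)
        return error;

    /* Walk the free space btrees to total up the free blocks... */
    error = compute_free_blocks(sc, &fdblocks);

    /* ...and install the value in the incore percpu counter. */
    if (!error && percpu_counter_sum(&mp->m_fdblocks) != fdblocks)
        percpu_counter_set(&mp->m_fdblocks, fdblocks);

    xchk_fscounters_thaw(sc);
    return error;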
+--------------------------------------------------------------------------+
| **Historical Sidebar**:                                                  |
+--------------------------------------------------------------------------+
| The initial implementation used the actual VFS filesystem freeze        |
| mechanism to quiesce filesystem activity.                                |
| With the filesystem frozen, it is possible to resolve the counter values |
| with exact precision, but there are many problems with calling the VFS   |
| methods directly:                                                        |
|                                                                          |
| - Other programs can unfreeze the filesystem without our knowledge.      |
|   This leads to incorrect scan results and incorrect repairs.            |
|                                                                          |
| - Adding an extra lock to prevent others from thawing the filesystem     |
|   required the addition of a ``->freeze_super`` function to wrap         |
|   ``freeze_fs()``.                                                       |
|   This in turn caused other subtle problems because it turns out that    |
|   the VFS ``freeze_super`` and ``thaw_super`` functions can drop the     |
|   last reference to the VFS superblock, and any subsequent access        |
|   becomes a UAF bug!                                                     |
|   This can happen if the filesystem is unmounted while the underlying    |
|   block device has frozen the filesystem.                                |
|   This problem could be solved by grabbing extra references to the       |
|   superblock, but it felt suboptimal given the other inadequacies of     |
|   this approach.                                                         |
|                                                                          |
| - The log need not be quiesced to check the summary counters, but a VFS  |
|   freeze initiates one anyway.                                           |
|   This adds unnecessary runtime to live fscounter fsck operations.       |
|                                                                          |
| - Quiescing the log means that XFS flushes the (possibly incorrect)      |
|   counters to disk as part of cleaning the log.                          |
|                                                                          |
| - A bug in the VFS meant that freeze could complete even when            |
|   sync_filesystem fails to flush the filesystem and returns an error.    |
|   This bug was fixed in Linux 5.17.                                      |
+--------------------------------------------------------------------------+

The proposed patchset is the
`summary counter cleanup
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-fscounters>`_
series.

Full Filesystem Scans
---------------------

Certain types of metadata can only be checked by walking every file in the
entire filesystem to record observations and comparing the observations against
what's recorded on disk.
Like every other type of online repair, repairs are made by writing those
observations to disk in a replacement structure and committing it atomically.
However, it is not practical to shut down the entire filesystem to examine
hundreds of billions of files because the downtime would be excessive.
Therefore, online fsck must build the infrastructure to manage a live scan of
all the files in the filesystem.
There are two questions that need to be solved to perform a live walk:

- How does scrub manage the scan while it is collecting data?

- How does the scan keep abreast of changes being made to the system by other
  threads?

.. _iscan:
Coordinated Inode Scans
```````````````````````

In the original Unix filesystems of the 1970s, each directory entry contained
an index number (*inumber*) which was used as an index into an ondisk array
(*itable*) of fixed-size records (*inodes*) describing a file's attributes and
its data block mapping.
This system is described by J. Lions, `"inode (5659)"
<http://www.lemis.com/grog/Documentation/Lions/>`_ in *Lions' Commentary on
UNIX, 6th Edition*, (Dept. of Computer Science, the University of New South
Wales, November 1977), pp. 18-2; and later by D. Ritchie and K. Thompson,
`"Implementation of the File System"
<https://archive.org/details/bstj57-6-1905/page/n8/mode/1up>`_, from *The UNIX
Time-Sharing System*, (The Bell System Technical Journal, July 1978), pp.
1913-4.

XFS retains most of this design, except now inumbers are search keys over all
the space in the data section of the filesystem.
They form a continuous keyspace that can be expressed as a 64-bit integer,
though the inodes themselves are sparsely distributed within the keyspace.
Scans proceed in a linear fashion across the inumber keyspace, starting from
``0x0`` and ending at ``0xFFFFFFFFFFFFFFFF``.
Naturally, a scan through a keyspace requires a scan cursor object to track the
scan progress.
Because this keyspace is sparse, this cursor contains two parts.
The first part of this scan cursor object tracks the inode that will be
examined next; call this the examination cursor.
Somewhat less obviously, the scan cursor object must also track which parts of
the keyspace have already been visited, which is critical for deciding if a
concurrent filesystem update needs to be incorporated into the scan data.
Call this the visited inode cursor.
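A minimal sketch of such a two-part cursor is shown below; the actual
structure in the patchset carries additional locking and coordination state.

.. code-block:: c

    struct xchk_iscan {
        /* Examination cursor: the inode to be examined next. */
        xfs_ino_t cursor_ino;

        /*
         * Visited cursor: all inumbers at or below this point have
         * been visited, so concurrent updates down here must be fed
         * into the scan's observations.
         */
        xfs_ino_t visited_ino;
    };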
Advancing the scan cursor is a multi-step process encapsulated in
``xchk_iscan_iter``:

1. Lock the AGI buffer of the AG containing the inode pointed to by the visited
   inode cursor.
   This guarantees that inodes in this AG cannot be allocated or freed while
   advancing the cursor.

2. Use the per-AG inode btree to look up the next inumber after the one that
   was just visited, since it may not be keyspace adjacent.

3. If there are no more inodes left in this AG:

   a. Move the examination cursor to the point of the inumber keyspace that
      corresponds to the start of the next AG.

   b. Adjust the visited inode cursor to indicate that it has "visited" the
      last possible inode in the current AG's inode keyspace.
      XFS inumbers are segmented, so the cursor needs to be marked as having
      visited the entire keyspace up to just before the start of the next AG's
      inode keyspace.

   c. Unlock the AGI and return to step 1 if there are unexamined AGs in the
      filesystem.

   d. If there are no more AGs to examine, set both cursors to the end of the
      inumber keyspace.
      The scan is now complete.

4. Otherwise, there is at least one more inode to scan in this AG:

   a. Move the examination cursor ahead to the next inode marked as allocated
      by the inode btree.

   b. Adjust the visited inode cursor to point to the inode just prior to where
      the examination cursor is now.
      Because the scanner holds the AGI buffer lock, no inodes could have been
      created in the part of the inode keyspace that the visited inode cursor
      just advanced.

5. Get the incore inode for the inumber of the examination cursor.
   By maintaining the AGI buffer lock until this point, the scanner knows that
   it was safe to advance the examination cursor across the entire keyspace,
   and that it has stabilized this next inode so that it cannot disappear from
   the filesystem until the scan releases the incore inode.

6. Drop the AGI lock and return the incore inode to the caller.
Online fsck functions scan all files in the filesystem as follows (see the
sketch after this list):

1. Start a scan by calling ``xchk_iscan_start``.

2. Advance the scan cursor (``xchk_iscan_iter``) to get the next inode.
   If one is provided:

   a. Lock the inode to prevent updates during the scan.

   b. Scan the inode.

   c. While still holding the inode lock, adjust the visited inode cursor
      (``xchk_iscan_mark_visited``) to point to this inode.

   d. Unlock and release the inode.

3. Call ``xchk_iscan_teardown`` to complete the scan.
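In code, the loop might be structured as follows.
The ``xchk_iscan_*`` prototypes are assumptions based on the steps above,
``scan_inode`` is a hypothetical per-file worker, and ``xchk_irele`` is the
scrub-aware release helper discussed later in this document.

.. code-block:: c

    /* Sketch of a full-filesystem scan built on the coordinated scan. */
    xchk_iscan_start(sc, &iscan);

    while ((error = xchk_iscan_iter(&iscan, &ip)) == 1) {
        xfs_ilock(ip, XFS_ILOCK_EXCL);   /* no updates during the scan */

        error = scan_inode(sc, ip);
        if (!error)
            xchk_iscan_mark_visited(&iscan, ip);

        xfs_iunlock(ip, XFS_ILOCK_EXCL);
        xchk_irele(sc, ip);
        if (error)
            break;
    }

    xchk_iscan_teardown(&iscan);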
There are subtleties with the inode cache that complicate grabbing the incore
inode for the caller.
Obviously, it is an absolute requirement that the inode metadata be consistent
enough to load it into the inode cache.
Second, if the incore inode is stuck in some intermediate state, the scan
coordinator must release the AGI and push the main filesystem to get the inode
back into a loadable state.

The proposed patches are the
`inode scanner
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-iscan>`_
series.
The first user of the new functionality is the
`online quotacheck
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-quotacheck>`_
series.

Inode Management
````````````````

In regular filesystem code, references to allocated XFS incore inodes are
always obtained (``xfs_iget``) outside of transaction context because the
creation of the incore context for an existing file does not require metadata
updates.
However, it is important to note that references to incore inodes obtained as
part of file creation must be performed in transaction context because the
filesystem must ensure the atomicity of the ondisk inode btree index updates
and the initialization of the actual ondisk inode.

References to incore inodes are always released (``xfs_irele``) outside of
transaction context because there are a handful of activities that might
require ondisk updates:

- The VFS may decide to kick off writeback as part of a ``DONTCACHE`` inode
  release.

- Speculative preallocations need to be unreserved.

- An unlinked file may have lost its last reference, in which case the entire
  file must be inactivated, which involves releasing all of its resources in
  the ondisk metadata and freeing the inode.

These activities are collectively called inode inactivation.
Inactivation has two parts -- the VFS part, which initiates writeback on all
dirty file pages, and the XFS part, which cleans up XFS-specific information
and frees the inode if it was unlinked.
If the inode is unlinked (or unconnected after a file handle operation), the
kernel drops the inode into the inactivation machinery immediately.

During normal operation, resource acquisition for an update follows this order
to avoid deadlocks; a sketch follows the list:

1. Inode reference (``iget``).

2. Filesystem freeze protection, if repairing (``mnt_want_write_file``).

3. Inode ``IOLOCK`` (VFS ``i_rwsem``) lock to control file IO.

4. Inode ``MMAPLOCK`` (page cache ``invalidate_lock``) lock for operations that
   can update page cache mappings.

5. Log feature enablement.

6. Transaction log space grant.

7. Space on the data and realtime devices for the transaction.

8. Incore dquot references, if a file is being repaired.
   Note that they are not locked, merely acquired.

9. Inode ``ILOCK`` for file metadata updates.

10. AG header buffer locks / Realtime metadata inode ILOCK.

11. Realtime metadata buffer locks, if applicable.

12. Extent mapping btree blocks, if applicable.
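Here is a sketch of that ordering for a repair that updates a single file,
using the usual kernel and XFS primitives.
Stages 5 and 10 through 12 are omitted, error handling is elided, and
``resblks`` stands for the worst-case block reservation of the repair.

.. code-block:: c

    error = xfs_iget(mp, NULL, ino, 0, 0, &ip);       /* 1: inode ref */
    error = mnt_want_write_file(file);                /* 2: freeze protection */
    xfs_ilock(ip, XFS_IOLOCK_EXCL);                   /* 3: IOLOCK */
    xfs_ilock(ip, XFS_MMAPLOCK_EXCL);                 /* 4: MMAPLOCK */
    error = xfs_trans_alloc(mp, &M_RES(mp)->tr_itruncate,
                            resblks, 0, 0, &tp);      /* 6 and 7: log, space */
    error = xfs_qm_dqattach(ip);                      /* 8: dquot references */
    xfs_ilock(ip, XFS_ILOCK_EXCL);                    /* 9: ILOCK */
    xfs_trans_ijoin(tp, ip, 0);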
Resources are often released in the reverse order, though this is not required.
However, online fsck differs from regular XFS operations because it may examine
an object that normally is acquired in a later stage of the locking order, and
then decide to cross-reference the object with an object that is acquired
earlier in the order.
The next few sections detail the specific ways in which online fsck takes care
to avoid deadlocks.

iget and irele During a Scrub
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An inode scan performed on behalf of a scrub operation runs in transaction
context, and possibly with resources already locked and bound to it.
This isn't much of a problem for ``iget`` since it can operate in the context
of an existing transaction, as long as all of the bound resources are acquired
before the inode reference in the regular filesystem.

When the VFS ``iput`` function is given a linked inode with no other
references, it normally puts the inode on an LRU list in the hope that it can
save time if another process re-opens the file before the system runs out
of memory and frees it.
Filesystem callers can short-circuit the LRU process by setting a ``DONTCACHE``
flag on the inode to cause the kernel to try to drop the inode into the
inactivation machinery immediately.

In the past, inactivation was always done from the process that dropped the
inode, which was a problem for scrub because scrub may already hold a
transaction, and XFS does not support nesting transactions.
On the other hand, if there is no scrub transaction, it is desirable to drop
otherwise unused inodes immediately to avoid polluting caches.
To capture these nuances, the online fsck code has a separate ``xchk_irele``
function to set or clear the ``DONTCACHE`` flag to get the required release
behavior.
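A sketch of such a helper appears below.
It is modeled on the description above rather than on the final code; the
``sc->tp`` test stands in for "a scrub transaction is running", and the VFS
calls are the standard ``I_DONTCACHE`` primitives.

.. code-block:: c

    void xchk_irele(struct xfs_scrub *sc, struct xfs_inode *ip)
    {
        if (sc->tp) {
            /* Don't inactivate now; that would nest transactions. */
            spin_lock(&VFS_I(ip)->i_lock);
            VFS_I(ip)->i_state &= ~I_DONTCACHE;
            spin_unlock(&VFS_I(ip)->i_lock);
        } else {
            /* No transaction: evict unused inodes immediately. */
            d_mark_dontcache(VFS_I(ip));
        }

        xfs_irele(ip);
    }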

Proposed patchsets include fixing
`scrub iget usage
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-iget-fixes>`_ and
`dir iget usage
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-dir-iget-fixes>`_.

.. _ilocking:

Locking Inodes
^^^^^^^^^^^^^^

In regular filesystem code, the VFS and XFS will acquire multiple IOLOCK locks
in a well-known order: parent → child when updating the directory tree, and in
numerical order of the addresses of their ``struct inode`` object otherwise.
For regular files, the MMAPLOCK can be acquired after the IOLOCK to stop page
faults.
If two MMAPLOCKs must be acquired, they are acquired in numerical order of the
addresses of their ``struct address_space`` objects.
Due to the structure of existing filesystem code, IOLOCKs and MMAPLOCKs must be
acquired before transactions are allocated.
If two ILOCKs must be acquired, they are acquired in inumber order.

Inode lock acquisition must be done carefully during a coordinated inode scan.
Online fsck cannot abide these conventions, because for a directory tree
scanner, the scrub process holds the IOLOCK of the file being scanned and it
needs to take the IOLOCK of the file at the other end of the directory link.
If the directory tree is corrupt because it contains a cycle, ``xfs_scrub``
cannot use the regular inode locking functions and avoid becoming trapped in an
ABBA deadlock.

Solving both of these problems is straightforward -- any time online fsck
needs to take a second lock of the same class, it uses trylock to avoid an ABBA
deadlock.
If the trylock fails, scrub drops all inode locks and uses trylock loops to
(re)acquire all necessary resources.
Trylock loops enable scrub to check for pending fatal signals, which is how
scrub avoids deadlocking the filesystem or becoming an unresponsive process.
However, trylock loops mean that online fsck must be prepared to measure the
resource being scrubbed before and after the lock cycle to detect changes and
react accordingly.
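
The shape of such a loop might look like the fragment below.
``xchk_should_terminate`` is the scrub helper that checks for fatal signals;
the surrounding declarations (``sc``, ``ip1``, ``ip2``, ``error``) and the
exact loop structure are assumed context for this sketch.

.. code-block:: c

    /*
     * Sketch of a trylock loop.  Scrub already holds ip1's ILOCK and
     * wants ip2's ILOCK as well; real callers use scrub-specific
     * helpers and revalidate their observations after each cycle.
     */
    while (true) {
            if (xfs_ilock_nowait(ip2, XFS_ILOCK_EXCL))
                    break;          /* got both locks */

            /* Drop everything to break the ABBA cycle. */
            xfs_iunlock(ip1, XFS_ILOCK_EXCL);

            /* Bail out instead of spinning or deadlocking forever. */
            if (xchk_should_terminate(sc, &error))
                    return error;

            delay(1);
            xfs_ilock(ip1, XFS_ILOCK_EXCL);

            /*
             * The resource may have changed while unlocked; earlier
             * measurements must be re-checked before they are trusted.
             */
    }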

.. _dirparent:

Case Study: Finding a Directory Parent
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Consider the directory parent pointer repair code as an example.
Online fsck must verify that the dotdot dirent of a directory points up to a
parent directory, and that the parent directory contains exactly one dirent
pointing down to the child directory.
Fully validating this relationship (and repairing it if possible) requires a
walk of every directory on the filesystem while holding the child locked, and
while updates to the directory tree are being made.
The coordinated inode scan provides a way to walk the filesystem without the
possibility of missing an inode.
The child directory is kept locked to prevent updates to the dotdot dirent, but
if the scanner fails to lock a parent, it can drop and relock both the child
and the prospective parent.
If the dotdot entry changes while the directory is unlocked, then a move or
rename operation must have changed the child's parentage, and the scan can
exit early.

The proposed patchset is the
`directory repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-dirs>`_
series.

.. _fshooks:

Filesystem Hooks
````````````````

The second piece of support that online fsck functions need during a full
filesystem scan is the ability to stay informed about updates being made by
other threads in the filesystem, since comparisons against the past are useless
in a dynamic environment.
Two pieces of Linux kernel infrastructure enable online fsck to monitor regular
filesystem operations: filesystem hooks and :ref:`static keys<jump_labels>`.

Filesystem hooks convey information about an ongoing filesystem operation to a
downstream consumer.
In this case, the downstream consumer is always an online fsck function.
Because multiple fsck functions can run in parallel, online fsck uses the Linux
notifier call chain facility to dispatch updates to any number of interested
fsck processes.
Call chains are a dynamic list, which means that they can be configured at
run time.
Because these hooks are private to the XFS module, the information passed along
contains exactly what the checking function needs to update its observations.

The current implementation of XFS hooks uses SRCU notifier chains to reduce the
impact to highly threaded workloads.
Regular blocking notifier chains use a rwsem and seem to have a much lower
overhead for single-threaded applications.
However, it may turn out that blocking chains combined with static keys are
more performant; more study is needed here.

The following pieces are necessary to hook a certain point in the filesystem:

- A ``struct xfs_hooks`` object must be embedded in a convenient place such as
  a well-known incore filesystem object.

- Each hook must define an action code and a structure containing more context
  about the action.

- Hook providers should provide appropriate wrapper functions and structs
  around the ``xfs_hooks`` and ``xfs_hook`` objects to take advantage of type
  checking to ensure correct usage.

- A callsite in the regular filesystem code must be chosen to call
  ``xfs_hooks_call`` with the action code and data structure.
  This place should be adjacent to (and not earlier than) the place where
  the filesystem update is committed to the transaction.
  In general, when the filesystem calls a hook chain, it should be able to
  handle sleeping and should not be vulnerable to memory reclaim or locking
  recursion.
  However, the exact requirements are very dependent on the context of the
  hook caller and the callee.

- The online fsck function should define a structure to hold scan data, a lock
  to coordinate access to the scan data, and a ``struct xfs_hook`` object.
  The scanner function and the regular filesystem code must acquire resources
  in the same order; see the next section for details.

- The online fsck code must contain a C function to catch the hook action code
  and data structure.
  If the object being updated has already been visited by the scan, then the
  hook information must be applied to the scan data.

- Prior to unlocking inodes to start the scan, online fsck must call
  ``xfs_hooks_setup`` to initialize the ``struct xfs_hook``, and
  ``xfs_hooks_add`` to enable the hook.

- Online fsck must call ``xfs_hooks_del`` to disable the hook once the scan is
  complete.

The number of hooks should be kept to a minimum to reduce complexity.
Static keys are used to reduce the overhead of filesystem hooks to nearly
zero when online fsck is not running.
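
As an illustration of the provider-side pieces listed above, a hook for
directory entry updates might look like the sketch below.
The action codes and context structure are invented for this example; only
``struct xfs_hooks`` and ``xfs_hooks_call`` are names taken from the text, and
the ``m_example_dirent_hooks`` mount field is hypothetical.

.. code-block:: c

    /* Hypothetical action codes for a dirent update hook. */
    enum xfs_example_dirent_op {
            XFS_EXAMPLE_DIRENT_ADD,
            XFS_EXAMPLE_DIRENT_REMOVE,
    };

    /* Hypothetical context describing the update in progress. */
    struct xfs_example_dirent_ctx {
            struct xfs_inode        *dp;    /* directory being updated */
            struct xfs_inode        *ip;    /* child file */
            int                     delta;  /* change in link count */
    };

    /*
     * Type-checked wrapper, called adjacent to (and not earlier than)
     * the place where the update is committed to the transaction.
     */
    static inline void
    xfs_example_dirent_hook(
            struct xfs_mount                *mp,
            enum xfs_example_dirent_op      op,
            struct xfs_example_dirent_ctx   *ctx)
    {
            xfs_hooks_call(&mp->m_example_dirent_hooks, op, ctx);
    }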

.. _liveupdate:

Live Updates During a Scan
``````````````````````````

The code paths of the online fsck scanning code and the :ref:`hooked<fshooks>`
filesystem code look like this::

                    other program
                          ↓
                    inode lock ←────────────────────┐
                          ↓                         │
                    AG header lock                  │
                          ↓                         │
                    filesystem function             │
                          ↓                         │
                    notifier call chain             │    same
                          ↓                         ├─── inode
                    scrub hook function             │    lock
                          ↓                         │
                    scan data mutex ←──┐    same    │
                          ↓            ├─── scan    │
                    update scan data   │    lock    │
                          ↑            │            │
                    scan data mutex ←──┘            │
                          ↑                         │
                    inode lock ←────────────────────┘
                          ↑
                    scrub function
                          ↑
                    inode scanner
                          ↑
                    xfs_scrub

These rules must be followed to ensure correct interactions between the
checking code and the code making an update to the filesystem:

- Prior to invoking the notifier call chain, the filesystem function being
  hooked must acquire the same lock that the scrub scanning function acquires
  to scan the inode.

- The scanning function and the scrub hook function must coordinate access to
  the scan data by acquiring a lock on the scan data.

- Scrub hook functions must not add the live update information to the scan
  observations unless the inode being updated has already been scanned.
  The scan coordinator has a helper predicate (``xchk_iscan_want_live_update``)
  for this.

- Scrub hook functions must not change the caller's state, including the
  transaction that it is running.
  They must not acquire any resources that might conflict with the filesystem
  function being hooked.

- The hook function can abort the inode scan to avoid breaking the other rules.
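
Put together, a scrub hook function that obeys these rules might look like the
following sketch, reusing the hypothetical context struct from the provider
sketch above and assuming that the hook object embeds a ``notifier_block``;
the scan-data container and its fields are likewise hypothetical.

.. code-block:: c

    /* Sketch of a rule-abiding scrub hook function. */
    static int
    xchk_example_hook(
            struct notifier_block   *nb,
            unsigned long           action,
            void                    *data)
    {
            struct xfs_example_dirent_ctx   *ctx = data;
            struct xchk_example_scan        *scan;

            scan = container_of(nb, struct xchk_example_scan, hook.nb);

            /* Rule: ignore updates to objects not yet visited. */
            if (!xchk_iscan_want_live_update(&scan->iscan, ctx->ip->i_ino))
                    return NOTIFY_DONE;

            /* Rule: coordinate access to the scan data with a lock. */
            mutex_lock(&scan->lock);
            /* ...apply ctx->delta to the shadow observation here... */
            mutex_unlock(&scan->lock);

            /* Rule: never alter the hooked caller's state or transaction. */
            return NOTIFY_DONE;
    }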

The inode scan APIs are pretty simple:

- ``xchk_iscan_start`` starts a scan.

- ``xchk_iscan_iter`` grabs a reference to the next inode in the scan or
  returns zero if there is nothing left to scan.

- ``xchk_iscan_want_live_update`` decides if an inode has already been visited
  in the scan.
  This is critical for hook functions to decide if they need to update the
  in-memory scan information.

- ``xchk_iscan_mark_visited`` marks an inode as having been visited in the
  scan.

- ``xchk_iscan_teardown`` finishes the scan.

This functionality is also a part of the
`inode scanner
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-iscan>`_
series.
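
A coordinated scan built from these APIs reduces to a simple loop, sketched
below; the signatures are simplified, and everything apart from the
``xchk_iscan_*`` calls (the visitor logic in particular) is a placeholder.

.. code-block:: c

    /* Sketch of a coordinated inode scan loop; arguments simplified. */
    static int
    xchk_example_scan_inodes(
            struct xfs_scrub        *sc,
            struct xchk_iscan       *iscan)
    {
            struct xfs_inode        *ip;
            int                     error;

            xchk_iscan_start(sc, 0, 0, iscan);

            while ((error = xchk_iscan_iter(iscan, &ip)) == 1) {
                    /* ...lock ip and record observations in shadow data... */

                    /* Live updates now apply to this inode. */
                    xchk_iscan_mark_visited(iscan, ip);
                    xfs_irele(ip);
            }

            xchk_iscan_teardown(iscan);
            return error;
    }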

.. _quotacheck:

Case Study: Quota Counter Checking
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is useful to compare the mount time quotacheck code to the online repair
quotacheck code.
Mount time quotacheck does not have to contend with concurrent operations, so
it does the following:

1. Make sure the ondisk dquots are in good enough shape that all the incore
   dquots will actually load, and zero the resource usage counters in the
   ondisk buffer.

2. Walk every inode in the filesystem.
   Add each file's resource usage to the incore dquot.

3. Walk each incore dquot.
   If the incore dquot is not being flushed, add the ondisk buffer backing the
   incore dquot to a delayed write (delwri) list.

4. Write the buffer list to disk.

Like most online fsck functions, online quotacheck can't write to regular
filesystem objects until the newly collected metadata reflect all filesystem
state.
Therefore, online quotacheck records file resource usage to a shadow dquot
index implemented with a sparse ``xfarray``, and only writes to the real dquots
once the scan is complete.
Handling transactional updates is tricky because quota resource usage updates
are handled in phases to minimize contention on dquots:

1. The inodes involved are joined and locked to a transaction.

2. For each dquot attached to the file:

   a. The dquot is locked.

   b. A quota reservation is added to the dquot's resource usage.
      The reservation is recorded in the transaction.

   c. The dquot is unlocked.

3. Changes in actual quota usage are tracked in the transaction.

4. At transaction commit time, each dquot is examined again:

   a. The dquot is locked again.

   b. Quota usage changes are logged and unused reservation is given back to
      the dquot.

   c. The dquot is unlocked.

For online quotacheck, hooks are placed in steps 2 and 4.
The step 2 hook creates a shadow version of the transaction dquot context
(``dqtrx``) that operates in a similar manner to the regular code.
The step 4 hook commits the shadow ``dqtrx`` changes to the shadow dquots.
Notice that both hooks are called with the inode locked, which is how the
live update coordinates with the inode scanner.

The quotacheck scan looks like this:

1. Set up a coordinated inode scan.

2. For each inode returned by the inode scan iterator:

   a. Grab and lock the inode.

   b. Determine that inode's resource usage (data blocks, inode counts,
      realtime blocks) and add that to the shadow dquots for the user, group,
      and project ids associated with the inode.
      A sketch of this step follows this list.

   c. Unlock and release the inode.

3. For each dquot in the system:

   a. Grab and lock the dquot.

   b. Check the dquot against the shadow dquots created by the scan and updated
      by the live hooks.
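
Step 2b above might look like the sketch below.
The ``xchk_example_*`` names and the record layout are hypothetical stand-ins
for the sparse ``xfarray`` records described earlier; only the dquot type
constants and the inode accessors are standard kernel names.

.. code-block:: c

    /* Hypothetical: bump the shadow record for one dquot id. */
    int xchk_example_dqadd(struct xchk_example_scan *scan, xfs_dqtype_t type,
                           uint32_t id, uint64_t bcount, uint64_t rtbcount);

    /* Sketch of accumulating one file's usage into the shadow dquots. */
    static int
    xchk_example_account_inode(
            struct xchk_example_scan        *scan,
            struct xfs_inode                *ip,
            uint64_t                        bcount,
            uint64_t                        rtbcount)
    {
            int                             error;

            mutex_lock(&scan->lock);

            /* One update each for the user, group, and project dquots. */
            error = xchk_example_dqadd(scan, XFS_DQTYPE_USER,
                            i_uid_read(VFS_I(ip)), bcount, rtbcount);
            if (!error)
                    error = xchk_example_dqadd(scan, XFS_DQTYPE_GROUP,
                                    i_gid_read(VFS_I(ip)), bcount, rtbcount);
            if (!error)
                    error = xchk_example_dqadd(scan, XFS_DQTYPE_PROJ,
                                    ip->i_projid, bcount, rtbcount);

            mutex_unlock(&scan->lock);
            return error;
    }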

Live updates are key to being able to walk every quota record without
needing to hold any locks for a long duration.
If repairs are desired, the real and shadow dquots are locked and their
resource counts are set to the values in the shadow dquot.

The proposed patchset is the
`online quotacheck
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-quotacheck>`_
series.

.. _nlinks:

Case Study: File Link Count Checking
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File link count checking also uses live update hooks.
The coordinated inode scanner is used to visit all directories on the
filesystem, and per-file link count records are stored in a sparse ``xfarray``
indexed by inumber.
During the scanning phase, each entry in a directory generates observation
data as follows:

1. If the entry is a dotdot (``'..'``) entry of the root directory, the
   directory's parent link count is bumped because the root directory's dotdot
   entry is self referential.

2. If the entry is a dotdot entry of a subdirectory, the parent's backref
   count is bumped.

3. If the entry is neither a dot nor a dotdot entry, the target file's parent
   count is bumped.

4. If the target is a subdirectory, the parent's child link count is bumped.

A crucial point to understand about how the link count inode scanner interacts
with the live update hooks is that the scan cursor tracks which *parent*
directories have been scanned.
In other words, the live updates ignore any update about ``A → B`` when A has
not been scanned, even if B has been scanned.
Furthermore, a subdirectory A with a dotdot entry pointing back to B is
accounted as a backref counter in the shadow data for A, since child dotdot
entries affect the parent's link count.
Live update hooks are carefully placed in all parts of the filesystem that
create, change, or remove directory entries, since those operations involve
bumplink and droplink.
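
The observation counters just described reduce to a tiny record and a pair of
checks, sketched here with hypothetical names; the correctness rule itself is
spelled out in the paragraph that follows.

.. code-block:: c

    /* Hypothetical shadow link count record for one file. */
    struct xchk_example_nlink {
            uint32_t        parents;        /* dirents pointing to this file */
            uint32_t        child_subdirs;  /* subdirectories inside it */
            uint32_t        backrefs;       /* dotdot entries pointing back */
    };

    /* Compare the shadow data against the ondisk link count. */
    static bool
    xchk_example_nlink_ok(
            const struct xchk_example_nlink *obs,
            uint32_t                        ondisk_nlink)
    {
            /* Correct link count: parents plus child subdirectories. */
            if (ondisk_nlink != obs->parents + obs->child_subdirs)
                    return false;

            /* Each child's dotdot entry should point back exactly once. */
            if (obs->backrefs != obs->child_subdirs)
                    return false;

            return true;
    }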

For any file, the correct link count is the number of parents plus the number
of child subdirectories.
Non-directories never have children of any kind.
The backref information is used to detect inconsistencies in the number of
links pointing to child subdirectories and the number of dotdot entries
pointing back.

After the scan completes, the link count of each file can be checked by locking
both the inode and the shadow data, and comparing the link counts.
A second coordinated inode scan cursor is used for comparisons.
Live updates are key to being able to walk every inode without needing to hold
any locks between inodes.
If repairs are desired, the inode's link count is set to the value in the
shadow information.
If no parents are found, the file must be :ref:`reparented <orphanage>` to the
orphanage to prevent the file from being lost forever.

The proposed patchset is the
`file link count repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-nlinks>`_
series.

.. _rmap_repair:

Case Study: Rebuilding Reverse Mapping Records
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Most repair functions follow the same pattern: lock filesystem resources,
walk the surviving ondisk metadata looking for replacement metadata records,
and use an :ref:`in-memory array <xfarray>` to store the gathered observations.
The primary advantage of this approach is the simplicity and modularity of the
repair code -- code and data are entirely contained within the scrub module,
do not require hooks in the main filesystem, and are usually the most efficient
in memory use.
A secondary advantage of this repair approach is atomicity -- once the kernel
decides a structure is corrupt, no other threads can access the metadata until
the kernel finishes repairing and revalidating the metadata.

For repairs going on within a shard of the filesystem, these advantages
outweigh the delays inherent in locking the shard while repairing parts of the
shard.
Unfortunately, repairs to the reverse mapping btree cannot use the "standard"
btree repair strategy because it must scan every space mapping of every fork of
every file in the filesystem, and the filesystem cannot stop.
Therefore, rmap repair foregoes atomicity between scrub and repair.
It combines a :ref:`coordinated inode scanner <iscan>`, :ref:`live update hooks
<liveupdate>`, and an :ref:`in-memory rmap btree <xfbtree>` to complete the
scan for reverse mapping records.

1. Set up an xfbtree to stage rmap records.

2. While holding the locks on the AGI and AGF buffers acquired during the
   scrub, generate reverse mappings for all AG metadata: inodes, btrees, CoW
   staging extents, and the internal log.

3. Set up an inode scanner.

4. Hook into rmap updates for the AG being repaired so that the live scan data
   can receive updates to the rmap btree from the rest of the filesystem during
   the file scan.

5. For each space mapping found in either fork of each file scanned,
   decide if the mapping matches the AG of interest (see the sketch after
   this list).
   If so:

   a. Create a btree cursor for the in-memory btree.

   b. Use the rmap code to add the record to the in-memory btree.

   c. Use the :ref:`special commit function <xfbtree_commit>` to write the
      xfbtree changes to the xfile.

6. For each live update received via the hook, decide if the owner has already
   been scanned.
   If so, apply the live update into the scan data:

   a. Create a btree cursor for the in-memory btree.

   b. Replay the operation into the in-memory btree.

   c. Use the :ref:`special commit function <xfbtree_commit>` to write the
      xfbtree changes to the xfile.
      This is performed with an empty transaction to avoid changing the
      caller's state.

7. When the inode scan finishes, create a new scrub transaction and relock the
   two AG headers.

8. Compute the new btree geometry using the number of rmap records in the
   shadow btree, like all other btree rebuilding functions.

9. Allocate the number of blocks computed in the previous step.

10. Perform the usual btree bulk loading and commit to install the new rmap
    btree.

11. Reap the old rmap btree blocks as discussed in the case study about how
    to :ref:`reap after rmap btree repair <rmap_reap>`.

12. Free the xfbtree now that it is not needed.

The proposed patchset is the
`rmap repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-rmap-btree>`_
series.
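
Step 5's "matches the AG of interest" test is a straightforward conversion of
the mapping's start block, sketched here.
``XFS_FSB_TO_AGNO`` is the standard XFS conversion macro; the function name is
hypothetical, and hole/unwritten handling and the realtime case are elided.

.. code-block:: c

    /* Sketch of the step 5 filter: does this mapping touch the AG? */
    static bool
    xrep_example_rmap_interesting(
            struct xfs_mount                *mp,
            const struct xfs_bmbt_irec      *irec,
            xfs_agnumber_t                  agno)
    {
            return XFS_FSB_TO_AGNO(mp, irec->br_startblock) == agno;
    }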

Staging Repairs with Temporary Files on Disk
--------------------------------------------

XFS stores a substantial amount of metadata in file forks: directories,
extended attributes, symbolic link targets, free space bitmaps and summary
information for the realtime volume, and quota records.
File forks map 64-bit logical file fork space extents to physical storage space
extents, similar to how a memory management unit maps 64-bit virtual addresses
to physical memory addresses.
Therefore, file-based tree structures (such as directories and extended
attributes) use blocks mapped in the file fork offset address space that point
to other blocks mapped within that same address space, and file-based linear
structures (such as bitmaps and quota records) compute array element offsets in
the file fork offset address space.

Because file forks can consume as much space as the entire filesystem, repairs
cannot be staged in memory, even when a paging scheme is available.
Therefore, online repair of file-based metadata creates a temporary file in
the XFS filesystem, writes a new structure at the correct offsets into the
temporary file, and atomically exchanges all file fork mappings (and hence the
fork contents) to commit the repair.
Once the repair is complete, the old fork can be reaped as necessary; if the
system goes down during the reap, the iunlink code will delete the blocks
during log recovery.

**Note**: All space usage and inode indices in the filesystem *must* be
consistent to use a temporary file safely!
This dependency is the reason why online repair can only use pageable kernel
memory to stage ondisk space usage information.

Exchanging metadata file mappings with a temporary file requires the owner
field of the block headers to match the file being repaired and not the
temporary file.
The directory, extended attribute, and symbolic link functions were all
modified to allow callers to specify owner numbers explicitly.

There is a downside to the reaping process -- if the system crashes during the
reap phase and the fork extents are crosslinked, the iunlink processing will
fail because freeing space will find the extra reverse mappings and abort.

Temporary files created for repair are similar to ``O_TMPFILE`` files created
by userspace.
They are not linked into a directory and the entire file will be reaped when
the last reference to the file is lost.
The key differences are that these files must have no access permission outside
the kernel at all, they must be specially marked to prevent them from being
opened by handle, and they must never be linked into the directory tree.

+--------------------------------------------------------------------------+
| **Historical Sidebar**:                                                  |
+--------------------------------------------------------------------------+
| In the initial iteration of file metadata repair, the damaged metadata   |
| blocks would be scanned for salvageable data; the extents in the file    |
| fork would be reaped; and then a new structure would be built in its     |
| place.                                                                   |
| This strategy did not survive the introduction of the atomic repair      |
| requirement expressed earlier in this document.                          |
|                                                                          |
| The second iteration explored building a second structure at a high      |
| offset in the fork from the salvage data, reaping the old extents, and   |
| using a ``COLLAPSE_RANGE`` operation to slide the new extents into       |
| place.                                                                   |
|                                                                          |
| This had many drawbacks:                                                 |
|                                                                          |
| - Array structures are linearly addressed, and the regular filesystem    |
|   codebase does not have the concept of a linear offset that could be    |
|   applied to the record offset computation to build an alternate copy.   |
|                                                                          |
| - Extended attributes are allowed to use the entire attr fork offset     |
|   address space.                                                         |
|                                                                          |
| - Even if repair could build an alternate copy of a data structure in a  |
|   different part of the fork address space, the atomic repair commit     |
|   requirement means that online repair would have to be able to perform  |
|   a log assisted ``COLLAPSE_RANGE`` operation to ensure that the old     |
|   structure was completely replaced.                                     |
|                                                                          |
| - A crash after construction of the secondary tree but before the range  |
|   collapse would leave unreachable blocks in the file fork.              |
|   This would likely confuse things further.                              |
|                                                                          |
| - Reaping blocks after a repair is not a simple operation, and           |
|   initiating a reap operation from a restarted range collapse operation  |
|   during log recovery is daunting.                                       |
|                                                                          |
| - Directory entry blocks and quota records record the file fork offset   |
|   in the header area of each block.                                      |
|   An atomic range collapse operation would have to rewrite this part of  |
|   each block header.                                                     |
|   Rewriting a single field in block headers is not a huge problem, but   |
|   it's something to be aware of.                                         |
|                                                                          |
| - Each block in a directory or extended attributes btree index contains  |
|   sibling and child block pointers.                                      |
|   Were the atomic commit to use a range collapse operation, each block   |
|   would have to be rewritten very carefully to preserve the graph        |
|   structure.                                                             |
|   Doing this as part of a range collapse means rewriting a large number  |
|   of blocks repeatedly, which is not conducive to quick repairs.         |
|                                                                          |
| This led to the introduction of temporary file staging.                  |
+--------------------------------------------------------------------------+

Using a Temporary File
``````````````````````

Online repair code should use the ``xrep_tempfile_create`` function to create a
temporary file inside the filesystem.
This allocates an inode, marks the in-core inode private, and attaches it to
the scrub context.
These files are hidden from userspace, may not be added to the directory tree,
and must be kept private.

Temporary files only use two inode locks: the IOLOCK and the ILOCK.
The MMAPLOCK is not needed here, because there must not be page faults from
userspace for data fork blocks.
The usage patterns of these two locks are the same as for any other XFS file --
access to file data is controlled via the IOLOCK, and access to file metadata
is controlled via the ILOCK.
Locking helpers are provided so that the temporary file and its lock state can
be cleaned up by the scrub context.
To comply with the nested locking strategy laid out in the :ref:`inode
locking<ilocking>` section, it is recommended that scrub functions use the
``xrep_tempfile_ilock*_nowait`` lock helpers.

Data can be written to a temporary file by two means:

1. ``xrep_tempfile_copyin`` can be used to set the contents of a regular
   temporary file from an xfile.

2. The regular directory, symbolic link, and extended attribute functions can
   be used to write to the temporary file.

Once a good copy of a data file has been constructed in a temporary file, it
must be conveyed to the file being repaired, which is the topic of the next
section.

The proposed patches are in the
`repair temporary files
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-tempfiles>`_
series.
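
The overall staging flow might look like the sketch below.
The function names are the ones mentioned in this document, but their
signatures here are simplified guesses, the mode argument and offsets are
illustrative, and error unwinding is elided.

.. code-block:: c

    /* Sketch of the temporary-file staging flow; signatures simplified. */
    int
    xrep_example_stage(
            struct xfs_scrub        *sc,
            struct xfile            *new_contents,
            loff_t                  pos,
            uint64_t                len)
    {
            int                     error;

            /* Create a private, unlinked temporary file. */
            error = xrep_tempfile_create(sc, S_IFREG);
            if (error)
                    return error;

            /* Fill it with the new structure built in the xfile. */
            error = xrep_tempfile_copyin(sc, new_contents, pos, len);
            if (error)
                    return error;

            /*
             * Atomically exchange the fork mappings with the file being
             * repaired; this is the subject of the next section.
             */
            return xrep_tempexch_contents(sc);
    }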

Logged File Content Exchanges
-----------------------------

Once repair builds a temporary file with a new data structure written into
it, it must commit the new changes into the existing file.
It is not possible to swap the inumbers of two files, so instead the new
metadata must replace the old.
This suggests the need for the ability to swap extents, but the existing extent
swapping code used by the file defragmenting tool ``xfs_fsr`` is not sufficient
for online repair because:

a. When the reverse-mapping btree is enabled, the swap code must keep the
   reverse mapping information up to date with every exchange of mappings.
   Therefore, it can only exchange one mapping per transaction, and each
   transaction is independent.

b. Reverse-mapping is critical for the operation of online fsck, so the old
   defragmentation code (which swapped entire extent forks in a single
   operation) is not useful here.

c. Defragmentation is assumed to occur between two files with identical
   contents.
   For this use case, an incomplete exchange will not result in a user-visible
   change in file contents, even if the operation is interrupted.

d. Online repair needs to swap the contents of two files that are by definition
   *not* identical.
   For directory and xattr repairs, the user-visible contents might be the
   same, but the contents of individual blocks may be very different.

e. Old blocks in the file may be cross-linked with another structure and must
   not reappear if the system goes down mid-repair.

These problems are overcome by creating a new deferred operation and a new type
of log intent item to track the progress of an operation to exchange two file
ranges.
The new exchange operation type chains together the same transactions used by
the reverse-mapping extent swap code, but records intermediate progress in the
log so that operations can be restarted after a crash.
This new functionality is called the file contents exchange (xfs_exchrange)
code.
The underlying implementation exchanges file fork mappings (xfs_exchmaps).
The new log item records the progress of the exchange to ensure that once an
exchange begins, it will always run to completion, even if there are
interruptions.
The new ``XFS_SB_FEAT_INCOMPAT_EXCHRANGE`` incompatible feature flag in the
superblock protects these new log item records from being replayed on old
kernels.

The proposed patchset is the
`file contents exchange
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=atomic-file-updates>`_
series.

+--------------------------------------------------------------------------+
| **Sidebar: Using Log-Incompatible Feature Flags**                        |
+--------------------------------------------------------------------------+
| Starting with XFS v5, the superblock contains a                          |
| ``sb_features_log_incompat`` field to indicate that the log contains     |
| records that might not be readable by all kernels that could mount this  |
| filesystem.                                                              |
| In short, log incompat features protect the log contents against kernels |
| that will not understand the contents.                                   |
| Unlike the other superblock feature bits, log incompat bits are          |
| ephemeral because an empty (clean) log does not need protection.         |
| The log cleans itself after its contents have been committed into the    |
| filesystem, either as part of an unmount or because the system is        |
| otherwise idle.                                                          |
| Because upper level code can be working on a transaction at the same     |
| time that the log cleans itself, it is necessary for upper level code to |
| communicate to the log when it is going to use a log incompatible        |
| feature.                                                                 |
|                                                                          |
| The log coordinates access to incompatible features through the use of   |
| one ``struct rw_semaphore`` for each feature.                            |
| The log cleaning code tries to take this rwsem in exclusive mode to      |
| clear the bit; if the lock attempt fails, the feature bit remains set.   |
| The code supporting a log incompat feature should create wrapper         |
| functions to obtain the log feature and call                             |
| ``xfs_add_incompat_log_feature`` to set the feature bits in the primary  |
| superblock.                                                              |
| The superblock update is performed transactionally, so the wrapper to    |
| obtain log assistance must be called just prior to the creation of the   |
| transaction that uses the functionality.                                 |
| For a file operation, this step must happen after taking the IOLOCK      |
| and the MMAPLOCK, but before allocating the transaction.                 |
| When the transaction is complete, the ``xlog_drop_incompat_feat``        |
| function is called to release the feature.                               |
| The feature bit will not be cleared from the superblock until the log    |
| becomes clean.                                                           |
|                                                                          |
| Log-assisted extended attribute updates and file content exchanges both  |
| use log incompat features and provide convenience wrappers around the    |
| functionality.                                                           |
+--------------------------------------------------------------------------+

Mechanics of a Logged File Content Exchange
```````````````````````````````````````````

Exchanging contents between file forks is a complex task.
The goal is to exchange all file fork mappings between two file fork offset
ranges.
There are likely to be many extent mappings in each fork, and the edges of
the mappings aren't necessarily aligned.
Furthermore, there may be other updates that need to happen after the exchange,
such as exchanging file sizes, inode flags, or conversion of fork data to local
format.
This is roughly the format of the new deferred mapping exchange work item:

.. code-block:: c

    struct xfs_exchmaps_intent {
            /* Inodes participating in the operation. */
            struct xfs_inode        *xmi_ip1;
            struct xfs_inode        *xmi_ip2;

            /* File offset range information. */
            xfs_fileoff_t           xmi_startoff1;
            xfs_fileoff_t           xmi_startoff2;
            xfs_filblks_t           xmi_blockcount;

            /* Set these file sizes after the operation, unless negative. */
            xfs_fsize_t             xmi_isize1;
            xfs_fsize_t             xmi_isize2;

            /* XFS_EXCHMAPS_* log operation flags */
            uint64_t                xmi_flags;
    };

The new log intent item contains enough information to track two logical fork
offset ranges: ``(inode1, startoff1, blockcount)`` and ``(inode2, startoff2,
blockcount)``.
Each step of an exchange operation exchanges the largest file range mapping
possible from one file to the other.
After each step in the exchange operation, the two startoff fields are
incremented and the blockcount field is decremented to reflect the progress
made.
The flags field captures behavioral parameters such as exchanging attr fork
mappings instead of the data fork and other work to be done after the exchange.
The two isize fields are used to exchange the file sizes at the end of the
operation if the file data fork is the target of the exchange.
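
Before walking through the numbered steps, it may help to see the cursor
arithmetic in isolation.
The sketch below models how one step advances the work item (see steps 3i and
3j below); the helper name is hypothetical, and hole skipping is why the
advance is computed from ``map1`` rather than assumed equal to the exchanged
length.

.. code-block:: c

    /* Sketch of one advance of the exchange cursor (steps 3i-3j). */
    static void
    xfs_example_exchmaps_advance(
            struct xfs_exchmaps_intent      *xmi,
            const struct xfs_bmbt_irec      *map1)
    {
            xfs_filblks_t                   advanced;

            /* Step 3a may have skipped holes, so count from the start. */
            advanced = map1->br_startoff + map1->br_blockcount -
                            xmi->xmi_startoff1;

            xmi->xmi_startoff1 += advanced;
            xmi->xmi_startoff2 += advanced;
            xmi->xmi_blockcount -= advanced;
    }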

When the exchange is initiated, the sequence of operations is as follows:

1. Create a deferred work item for the file mapping exchange.
   At the start, it should contain the entirety of the file ranges to be
   exchanged.

2. Call ``xfs_defer_finish`` to process the exchange.
   This is encapsulated in ``xrep_tempexch_contents`` for scrub operations.
   This will log a mapping exchange intent item to the transaction for the
   deferred mapping exchange work item.

3. Until ``xmi_blockcount`` of the deferred mapping exchange work item is
   zero,

   a. Read the block maps of both file ranges starting at ``xmi_startoff1``
      and ``xmi_startoff2``, respectively, and compute the longest extent
      that can be exchanged in a single step.
      This is the minimum of the two ``br_blockcount`` values in the
      mappings.
      Keep advancing through the file forks until at least one of the
      mappings contains written blocks.
      Mutual holes, unwritten extents, and extent mappings to the same
      physical space are not exchanged.

      For the next few steps, this document will refer to the mapping that
      came from file 1 as "map1", and the mapping that came from file 2 as
      "map2".

   b. Create a deferred block mapping update to unmap map1 from file 1.

   c. Create a deferred block mapping update to unmap map2 from file 2.

   d. Create a deferred block mapping update to map map1 into file 2.

   e. Create a deferred block mapping update to map map2 into file 1.

   f. Log the block, quota, and extent count updates for both files.

   g. Extend the ondisk size of either file if necessary.

   h. Log a mapping exchange done log item for the mapping exchange intent
      log item that was read at the start of step 3.

   i. Compute the amount of file range that has just been covered.
      This quantity is ``(map1.br_startoff + map1.br_blockcount -
      xmi_startoff1)``, because step 3a could have skipped holes.

   j. Increase the starting offsets of ``xmi_startoff1`` and
      ``xmi_startoff2`` by the number of blocks computed in the previous
      step, and decrease ``xmi_blockcount`` by the same quantity.
      This advances the cursor.

   k. Log a new mapping exchange intent log item reflecting the advanced
      state of the work item.

   l. Return the proper error code (EAGAIN) to the deferred operation manager
      to inform it that there is more work to be done.
      The operation manager completes the deferred work in steps 3b-3e before
      moving back to the start of step 3.

4. Perform any post-processing.
   This will be discussed in more detail in subsequent sections.
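
Steps 3a and 3i-3j above are cursor arithmetic.
The following standalone sketch models that arithmetic with invented types;
the real implementation operates on ``struct xfs_bmbt_irec`` mappings inside
a transaction.

.. code-block:: c

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t xfs_fileoff_t;
    typedef uint64_t xfs_filblks_t;

    struct mapping {                /* stand-in for struct xfs_bmbt_irec */
        xfs_fileoff_t   br_startoff;
        xfs_filblks_t   br_blockcount;
    };

    struct exchmaps_cursor {        /* cursor fields of the intent item */
        xfs_fileoff_t   xmi_startoff1;
        xfs_fileoff_t   xmi_startoff2;
        xfs_filblks_t   xmi_blockcount;
    };

    /* Step 3a: the longest exchangeable run is the shorter of the maps. */
    static xfs_filblks_t
    exchange_length(const struct mapping *map1, const struct mapping *map2)
    {
        return map1->br_blockcount < map2->br_blockcount ?
                map1->br_blockcount : map2->br_blockcount;
    }

    /* Steps 3i-3j: count the range covered, including any holes skipped
     * by step 3a, then advance both offsets and shrink the remaining work. */
    static void
    advance_cursor(struct exchmaps_cursor *xmi, const struct mapping *map1)
    {
        xfs_filblks_t covered;

        covered = map1->br_startoff + map1->br_blockcount -
                  xmi->xmi_startoff1;
        xmi->xmi_startoff1 += covered;
        xmi->xmi_startoff2 += covered;
        xmi->xmi_blockcount -= covered;
    }

    int main(void)
    {
        struct exchmaps_cursor xmi = { 0, 0, 100 };
        struct mapping map1 = { 10, 30 }, map2 = { 10, 50 };

        map1.br_blockcount = exchange_length(&map1, &map2);
        advance_cursor(&xmi, &map1);
        printf("startoff1=%llu startoff2=%llu remaining=%llu\n",
                (unsigned long long)xmi.xmi_startoff1,
                (unsigned long long)xmi.xmi_startoff2,
                (unsigned long long)xmi.xmi_blockcount);
        return 0;
    }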

If the filesystem goes down in the middle of an operation, log recovery will
find the most recent unfinished mapping exchange log intent item and restart
from there.
This is how atomic file mapping exchanges guarantee that an outside observer
will either see the old broken structure or the new one, and never a mishmash
of both.

Preparation for File Content Exchanges
``````````````````````````````````````

There are a few things that need to be taken care of before initiating an
atomic file mapping exchange operation.
First, regular files require the page cache to be flushed to disk before the
operation begins, and direct I/O writes to be quiesced.
Like any filesystem operation, file mapping exchanges must determine the
maximum amount of disk space and quota that can be consumed on behalf of both
files in the operation, and reserve that quantity of resources to avoid an
unrecoverable out of space failure once it starts dirtying metadata.
The preparation step scans the ranges of both files to estimate:

- Data device blocks needed to handle the repeated updates to the fork
  mappings.
- Change in data and realtime block counts for both files.
- Increase in quota usage for both files, if the two files do not share the
  same set of quota ids.
- The number of extent mappings that will be added to each file.
- Whether or not there are partially written realtime extents.
  User programs must never be able to access a realtime file extent that maps
  to different extents on the realtime volume, which could happen if the
  operation fails to run to completion.

The need for precise estimation increases the run time of the exchange
operation, but it is very important to maintain correct accounting.
The filesystem must not run completely out of free space, nor can the mapping
exchange ever add more extent mappings to a fork than it can support.
Regular users are required to abide by the quota limits, though metadata
repairs may exceed quota to resolve inconsistent metadata elsewhere.
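
As a rough illustration of why the estimate matters, this sketch models the
extent count overflow check implied by the paragraph above: an exchange must
be refused if a fork could end up with more mappings than its extent counter
can represent.
The limit and the worst-case growth factor here are placeholders, not the
ondisk format's actual values.

.. code-block:: c

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder; the real limit depends on the ondisk inode format. */
    #define MAX_FORK_EXTENTS        ((1ULL << 32) - 1)

    struct fork_estimate {
        uint64_t    nextents;       /* mappings currently in the fork */
        uint64_t    nr_incoming;    /* mappings the exchange will add */
    };

    /*
     * Assume the worst case: every incoming mapping lands in the middle
     * of an existing mapping and splits it, adding two records.
     */
    static bool fork_can_hold_exchange(const struct fork_estimate *fe)
    {
        return fe->nextents + 2 * fe->nr_incoming <= MAX_FORK_EXTENTS;
    }

    int main(void)
    {
        struct fork_estimate fe = { .nextents = 1000, .nr_incoming = 50 };

        printf("exchange allowed: %d\n", fork_can_hold_exchange(&fe));
        return 0;
    }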

Special Features for Exchanging Metadata File Contents
``````````````````````````````````````````````````````

Extended attributes, symbolic links, and directories can set the fork format
to "local" and treat the fork as a literal area for data storage.
Metadata repairs must take extra steps to support these cases:

- If both forks are in local format and the fork areas are large enough, the
  exchange is performed by copying the incore fork contents, logging both
  forks, and committing.
  The atomic file mapping exchange mechanism is not necessary, since this can
  be done with a single transaction.

- If both forks map blocks, then the regular atomic file mapping exchange is
  used.

- Otherwise, only one fork is in local format.
  The contents of the local format fork are converted to a block to perform
  the exchange.
  The conversion to block format must be done in the same transaction that
  logs the initial mapping exchange intent log item.
  The regular atomic mapping exchange is used to exchange the file fork
  mappings.
  Special flags are set on the exchange operation so that the transaction can
  be rolled one more time to convert the second file's fork back to local
  format so that the second file will be ready to go as soon as the ILOCK is
  dropped.
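
The three bullets above amount to a three-way decision.
A sketch of that decision logic follows, with invented names:

.. code-block:: c

    #include <stdbool.h>

    enum fork_format { FORK_LOCAL, FORK_BLOCKS };

    enum exch_strategy {
        EXCH_INCORE_COPY,   /* copy incore contents, log both forks, commit */
        EXCH_MAPPINGS,      /* regular atomic file mapping exchange */
        EXCH_CONVERT_FIRST, /* convert a local fork to a block, then exchange */
    };

    static enum exch_strategy
    pick_strategy(enum fork_format f1, enum fork_format f2,
                  bool areas_large_enough)
    {
        if (f1 == FORK_LOCAL && f2 == FORK_LOCAL && areas_large_enough)
            return EXCH_INCORE_COPY;  /* one transaction, no intent items */
        if (f1 == FORK_BLOCKS && f2 == FORK_BLOCKS)
            return EXCH_MAPPINGS;
        /* At least one local fork must be converted to a block first. */
        return EXCH_CONVERT_FIRST;
    }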

Extended attributes and directories stamp the owning inode into every block,
but the buffer verifiers do not actually check the inode number!
Although there is no verification, it is still important to maintain
referential integrity, so prior to performing the mapping exchange, online
repair builds every block in the new data structure with the owner field of
the file being repaired.

After a successful exchange operation, the repair operation must reap the old
fork blocks by processing each fork mapping through the standard :ref:`file
extent reaping <reaping>` mechanism that is done post-repair.
If the filesystem should go down during the reap part of the repair, the
iunlink processing at the end of recovery will free both the temporary file
and whatever blocks were not reaped.
However, this iunlink processing omits the cross-link detection of online
repair, and is not completely foolproof.

Exchanging Temporary File Contents
``````````````````````````````````

To repair a metadata file, online repair proceeds as follows:

1. Create a temporary repair file.

2. Use the staging data to write out new contents into the temporary repair
   file.
   The same fork must be written to as is being repaired.

3. Commit the scrub transaction, since the exchange resource estimation step
   must be completed before transaction reservations are made.

4. Call ``xrep_tempexch_trans_alloc`` to allocate a new scrub transaction
   with the appropriate resource reservations, locks, and fill out a
   ``struct xfs_exchmaps_req`` with the details of the exchange operation.

5. Call ``xrep_tempexch_contents`` to exchange the contents.

6. Commit the transaction to complete the repair.

.. _rtsummary:

Case Study: Repairing the Realtime Summary File
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In the "realtime" section of an XFS filesystem, free space is tracked via a
bitmap, similar to Unix FFS.
Each bit in the bitmap represents one realtime extent, which is a multiple of
the filesystem block size between 4KiB and 1GiB in size.
The realtime summary file indexes the number of free extents of a given size
to the offset of the block within the realtime free space bitmap where those
free extents begin.
In other words, the summary file helps the allocator find free extents by
length, similar to what the free space by count (cntbt) btree does for the
data section.

The summary file itself is a flat file (with no block headers or checksums!)
partitioned into ``log2(total rt extents)`` sections containing enough 32-bit
counters to match the number of blocks in the rt bitmap.
Each counter records the number of free extents that start in that bitmap
block and can satisfy a power-of-two allocation request.

To check the summary file against the bitmap:

1. Take the ILOCK of both the realtime bitmap and summary files.

2. For each free space extent recorded in the bitmap:

   a. Compute the position in the summary file that contains a counter that
      represents this free extent.

   b. Read the counter from the xfile.

   c. Increment it, and write it back to the xfile.

3. Compare the contents of the xfile against the ondisk file.
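
A userspace model of the counter update in step 2: each free extent of
``len`` realtime extents starting in bitmap block ``bbno`` increments the
counter at ``(log2(len), bbno)``.
The flat array layout below is illustrative; the real summary file stores its
32-bit counters in the sections described above.

.. code-block:: c

    #include <stdint.h>
    #include <stdio.h>

    static unsigned int log2_u64(uint64_t len)
    {
        unsigned int ret = 0;

        while (len >>= 1)
            ret++;
        return ret;
    }

    /* xfile_counters is the incore copy being built; rbmblocks is the
     * number of blocks in the realtime bitmap file. */
    static void account_free_extent(uint32_t *xfile_counters,
                                    uint64_t rbmblocks, uint64_t bbno,
                                    uint64_t len_rtx)
    {
        xfile_counters[log2_u64(len_rtx) * rbmblocks + bbno]++;
    }

    int main(void)
    {
        uint32_t counters[4 * 2] = { 0 }; /* 4 log2 levels x 2 bitmap blocks */

        account_free_extent(counters, 2, /* bbno */ 1, /* len */ 8);
        printf("counter[level 3][block 1] = %u\n", counters[3 * 2 + 1]);
        return 0;
    }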

To repair the summary file, write the xfile contents into the temporary file
and use atomic file mapping exchange to commit the new contents.
The temporary file is then reaped.

The proposed patchset is the
`realtime summary repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-rtsummary>`_
series.

Case Study: Salvaging Extended Attributes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In XFS, extended attributes are implemented as a namespaced name-value store.
Values are limited in size to 64KiB, but there is no limit on the number of
names.
The attribute fork is unpartitioned, which means that the root of the
attribute structure is always in logical block zero, but attribute leaf
blocks, dabtree index blocks, and remote value blocks are intermixed.
Attribute leaf blocks contain variable-sized records that associate
user-provided names with the user-provided values.
Values larger than a block are allocated separate extents and written there.
If the leaf information expands beyond a single block, a directory/attribute
btree (``dabtree``) is created to map hashes of attribute names to entries
for fast lookup.

Salvaging extended attributes is done as follows:

1. Walk the attr fork mappings of the file being repaired to find the
   attribute leaf blocks.
   When one is found,

   a. Walk the attr leaf block to find candidate keys.
      When one is found,

      1. Check the name for problems, and ignore the name if there are any.

      2. Retrieve the value.
         If that succeeds, add the name and value to the staging xfarray and
         xfblob.

2. If the memory usage of the xfarray and xfblob exceeds a certain amount of
   memory or there are no more attr fork blocks to examine, unlock the file
   and add the staged extended attributes to the temporary file.

3. Use atomic file mapping exchange to exchange the new and old extended
   attribute structures.
   The old attribute blocks are now attached to the temporary file.

4. Reap the temporary file.
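
Steps 1-2 above describe a stash-and-flush loop that bounds kernel memory
usage.
A schematic of that control flow follows, with invented names and an
arbitrary threshold:

.. code-block:: c

    #include <stddef.h>

    #define SALVAGE_FLUSH_THRESHOLD (16UL << 20)  /* illustrative: 16 MiB */

    struct salvage_state {
        size_t  bytes_staged;   /* xfarray + xfblob memory usage */
    };

    static void flush_staged_attrs(struct salvage_state *ss)
    {
        /* Unlock the file, copy the staged names and values into the
         * temporary file, then reacquire locks and revalidate. */
        ss->bytes_staged = 0;
    }

    static void stage_attr(struct salvage_state *ss, size_t namelen,
                           size_t valuelen)
    {
        ss->bytes_staged += namelen + valuelen;
        if (ss->bytes_staged >= SALVAGE_FLUSH_THRESHOLD)
            flush_staged_attrs(ss);
    }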

The proposed patchset is the
`extended attribute repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-xattrs>`_
series.

Fixing Directories
------------------

Fixing directories is difficult with currently available filesystem features,
since directory entries are not redundant.
The offline repair tool scans all inodes to find files with nonzero link
count, and then it scans all directories to establish parentage of those
linked files.
Damaged files and directories are zapped, and files with no parent are
moved to the ``/lost+found`` directory.
It does not try to salvage anything.

The best that online repair can do at this time is to read directory data
blocks and salvage any dirents that look plausible, correct link counts, and
move orphans back into the directory tree.
The salvage process is discussed in the case study at the end of this section.
The :ref:`file link count fsck <nlinks>` code takes care of fixing link counts
and moving orphans to the ``/lost+found`` directory.

Case Study: Salvaging Directories
`````````````````````````````````

Unlike extended attributes, directory blocks are all the same size, so
salvaging directories is straightforward:

1. Find the parent of the directory.
   If the dotdot entry is readable, try to confirm that the alleged parent
   has a child entry pointing back to the directory being repaired.
   Otherwise, walk the filesystem to find it.

2. Walk the first partition of the data fork of the directory to find the
   directory entry data blocks.
   When one is found,

   a. Walk the directory data block to find candidate entries.
      When an entry is found:

      i. Check the name for problems, and ignore the name if there are any.

      ii. Retrieve the inumber and grab the inode.
          If that succeeds, add the name, inode number, and file type to the
          staging xfarray and xfblob.

3. If the memory usage of the xfarray and xfblob exceeds a certain amount of
   memory or there are no more directory data blocks to examine, unlock the
   directory and add the staged dirents into the temporary directory.
   Truncate the staging files.

4. Use atomic file mapping exchange to exchange the new and old directory
   structures.
   The old directory blocks are now attached to the temporary file.

5. Reap the temporary file.

**Future Work Question**: Should repair revalidate the dentry cache when
rebuilding a directory?

*Answer*: Yes, it should.

In theory it is necessary to scan all dentry cache entries for a directory to
ensure that one of the following applies:

1. The cached dentry reflects an ondisk dirent in the new directory.

2. The cached dentry no longer has a corresponding ondisk dirent in the new
   directory and the dentry can be purged from the cache.

3. The cached dentry no longer has an ondisk dirent but the dentry cannot be
   purged.
   This is the problem case.

Unfortunately, the current dentry cache design doesn't provide a means to walk
every child dentry of a specific directory, which makes this a hard problem.
There is no known solution.

The proposed patchset is the
`directory repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-dirs>`_
series.

Parent Pointers
```````````````

A parent pointer is a piece of file metadata that enables a user to locate
the file's parent directory without having to traverse the directory tree
from the root.
Without them, reconstruction of directory trees is hindered in much the same
way that the historic lack of reverse space mapping information once hindered
reconstruction of filesystem space metadata.
The parent pointer feature, however, makes total directory reconstruction
possible.

XFS parent pointers contain the information needed to identify the
corresponding directory entry in the parent directory.
In other words, child files use extended attributes to store pointers to
parents in the form ``(dirent_name) → (parent_inum, parent_gen)``.
The directory checking process can be strengthened to ensure that the target
of each dirent also contains a parent pointer pointing back to the dirent.
Likewise, each parent pointer can be checked by ensuring that the target of
each parent pointer is a directory and that it contains a dirent matching
the parent pointer.
Both online and offline repair can use this strategy.

+--------------------------------------------------------------------------+
| **Historical Sidebar**:                                                  |
+--------------------------------------------------------------------------+
| Directory parent pointers were first proposed as an XFS feature more    |
| than a decade ago by SGI.                                               |
| Each link from a parent directory to a child file is mirrored with an   |
| extended attribute in the child that could be used to identify the      |
| parent directory.                                                       |
| Unfortunately, this early implementation had major shortcomings and was |
| never merged into Linux XFS:                                            |
|                                                                         |
| 1. The XFS codebase of the late 2000s did not have the infrastructure   |
|    to enforce strong referential integrity in the directory tree.      |
|    It did not guarantee that a change in a forward link would always   |
|    be followed up with the corresponding change to the reverse links.  |
|                                                                         |
| 2. Referential integrity was not integrated into offline repair.        |
|    Checking and repairs were performed on mounted filesystems without  |
|    taking any kernel or inode locks to coordinate access.              |
|    It is not clear how this actually worked properly.                  |
| 3. The extended attribute did not record the name of the directory     |
|    entry in the parent, so the SGI parent pointer implementation       |
|    cannot be used to reconnect the directory tree.                     |
|                                                                         |
| 4. Extended attribute forks only support 65,536 extents, which means   |
|    that parent pointer attribute creation is likely to fail at some    |
|    point before the maximum file link count is achieved.               |
|                                                                         |
| The original parent pointer design was too unstable for something like |
| a file system repair to depend on.                                     |
| Allison Henderson, Chandan Babu, and Catherine Hoang are working on a  |
| second implementation that solves all shortcomings of the first.       |
| During 2022, Allison introduced log intent items to track physical     |
| manipulations of the extended attribute structures.                    |
| This solves the referential integrity problem by making it possible to |
| commit a dirent update and a parent pointer update in the same         |
| transaction.                                                           |
| Chandan increased the maximum extent counts of both data and attribute |
| forks, thereby ensuring that the extended attribute structure can grow |
| to handle the maximum hardlink count of any file.                      |
|                                                                         |
| For this second effort, the ondisk parent pointer format as originally |
| proposed was ``(parent_inum, parent_gen, dirent_pos) → (dirent_name)``.|
| The format was changed during development to eliminate the requirement |
| of repair tools needing to ensure that the ``dirent_pos`` field always |
| matched when reconstructing a directory.                               |
|                                                                         |
| There were a few other ways to have solved that problem:               |
|                                                                         |
| 1. The field could be designated advisory, since the other three       |
|    values are sufficient to find the entry in the parent.              |
|    However, this makes indexed key lookup impossible while repairs are |
|    ongoing.                                                            |
|                                                                         |
| 2. We could allow creating directory entries at specified offsets,     |
|    which solves the referential integrity problem but runs the risk    |
|    that dirent creation will fail due to conflicts with the free space |
|    in the directory.                                                   |
|                                                                         |
|    These conflicts could be resolved by appending the directory entry  |
|    and amending the xattr code to support updating an xattr key and    |
|    reindexing the dabtree, though this would have to be performed with |
|    the parent directory still locked.                                  |
|                                                                         |
| 3. Same as above, but remove the old parent pointer entry and add a    |
|    new one atomically.                                                 |
|                                                                         |
| 4. Change the ondisk xattr format to                                   |
|    ``(parent_inum, name) → (parent_gen)``, which would provide the     |
|    attr name uniqueness that we require, without forcing repair code   |
|    to update the dirent position.                                      |
|    Unfortunately, this requires changes to the xattr code to support   |
|    attr names as long as 263 bytes.                                    |
|                                                                         |
| 5. Change the ondisk xattr format to ``(parent_inum, hash(name)) →     |
|    (name, parent_gen)``.                                               |
|    If the hash is sufficiently resistant to collisions (e.g. sha256)   |
|    then this should provide the attr name uniqueness that we require.  |
|    Names shorter than 247 bytes could be stored directly.              |
|                                                                         |
| 6. Change the ondisk xattr format to ``(dirent_name) → (parent_ino,    |
|    parent_gen)``.  This format doesn't require any of the complicated  |
|    nested name hashing of the previous suggestions.  However, it was   |
|    discovered that multiple hardlinks to the same inode with the same  |
|    filename caused performance problems with hashed attr lookups, so   |
|    the parent inumber is now xor'd into the hash index.                |
|                                                                         |
| In the end, it was decided that solution #6 was the simplest and the   |
| most performant.  A new hash function was designed for parent          |
| pointers.                                                              |
+--------------------------------------------------------------------------+
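
Solution #6 in the sidebar xors the parent inumber into the attr hash index
so that multiple hardlinks to the same file under the same name spread across
hash buckets.
The sketch below illustrates the idea only; the rotate-and-xor name hash
imitates the style of ``xfs_da_hashname`` but is not the hash function that
was actually designed for parent pointers.

.. code-block:: c

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef uint64_t xfs_ino_t;

    static uint32_t name_hash(const char *name, size_t len)
    {
        uint32_t hash = 0;

        while (len--)   /* rotate-left-7 and xor, per character */
            hash = *name++ ^ ((hash << 7) | (hash >> 25));
        return hash;
    }

    /* Mixing in the parent inumber separates hardlinks that share a name. */
    static uint32_t parent_pointer_hash(xfs_ino_t parent_ino, const char *name)
    {
        return name_hash(name, strlen(name)) ^ (uint32_t)parent_ino;
    }

    int main(void)
    {
        printf("%#x\n", parent_pointer_hash(128, "file.txt"));
        printf("%#x\n", parent_pointer_hash(129, "file.txt"));
        return 0;
    }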

Case Study: Repairing Directories with Parent Pointers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Directory rebuilding uses a :ref:`coordinated inode scan <iscan>` and
a :ref:`directory entry live update hook <liveupdate>` as follows:

1. Set up a temporary directory for generating the new directory structure,
   an xfblob for storing entry names, and an xfarray for stashing the fixed
   size fields involved in a directory update: ``(child inumber, add vs.
   remove, name cookie, ftype)``.

2. Set up an inode scanner and hook into the directory entry code to receive
   updates on directory operations.

3. For each parent pointer found in each file scanned, decide if the parent
   pointer references the directory of interest.
   If so:

   a. Stash the parent pointer name and an addname entry for this dirent in
      the xfblob and xfarray, respectively.

   b. When finished scanning that file or the kernel memory consumption
      exceeds a threshold, flush the stashed updates to the temporary
      directory.

4. For each live directory update received via the hook, decide if the child
   has already been scanned.
   If so:

   a. Stash the parent pointer name and an addname or removename entry for
      this dirent update in the xfblob and xfarray, respectively.
      We cannot write directly to the temporary directory because hook
      functions are not allowed to modify filesystem metadata.
      Instead, we stash updates in the xfarray and rely on the scanner thread
      to apply the stashed updates to the temporary directory.

5. When the scan is complete, replay any stashed updates in the xfarray.

6. When the scan is complete, atomically exchange the contents of the
   temporary directory and the directory being repaired.
   The temporary directory now contains the damaged directory structure.

7. Reap the temporary directory.
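
One plausible layout for the fixed size xfarray record named in step 1,
paired with the name bytes that live in the xfblob under ``name_cookie``;
this struct is illustrative, not the actual incore format:

.. code-block:: c

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t xfs_ino_t;

    struct dir_update_record {
        xfs_ino_t   child_ino;      /* child inumber */
        uint64_t    name_cookie;    /* xfblob cookie for the dirent name */
        uint8_t     ftype;          /* dirent file type */
        bool        add;            /* add vs. remove */
    };

Replaying the stash in insertion order reproduces the same sequence of
directory updates that the scanner and the live hooks observed.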

The proposed patchset is the
`parent pointers directory repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=pptrs-online-dir-repair>`_
series.

Case Study: Repairing Parent Pointers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Online reconstruction of a file's parent pointer information works similarly
to directory reconstruction:

1. Set up a temporary file for generating a new extended attribute structure,
   an xfblob for storing parent pointer names, and an xfarray for stashing
   the fixed size fields involved in a parent pointer update: ``(parent
   inumber, parent generation, add vs. remove, name cookie)``.

2. Set up an inode scanner and hook into the directory entry code to receive
   updates on directory operations.

3. For each directory entry found in each directory scanned, decide if the
   dirent references the file of interest.
   If so:

   a. Stash the dirent name and an addpptr entry for this parent pointer in
      the xfblob and xfarray, respectively.

   b. When finished scanning the directory or the kernel memory consumption
      exceeds a threshold, flush the stashed updates to the temporary file.

4. For each live directory update received via the hook, decide if the parent
   has already been scanned.
   If so:

   a. Stash the dirent name and an addpptr or removepptr entry for this
      dirent update in the xfblob and xfarray, respectively.
      We cannot write parent pointers directly to the temporary file because
      hook functions are not allowed to modify filesystem metadata.
      Instead, we stash updates in the xfarray and rely on the scanner thread
      to apply the stashed parent pointer updates to the temporary file.

5. When the scan is complete, replay any stashed updates in the xfarray.

6. Copy all non-parent pointer extended attributes to the temporary file.

7. When the scan is complete, atomically exchange the extended attribute
   forks of the temporary file and the file being repaired.
   The temporary file now contains the damaged extended attribute structure.

8. Reap the temporary file.

The proposed patchset is the
`parent pointers repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=pptrs-online-parent-repair>`_
series.

Digression: Offline Checking of Parent Pointers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Examining parent pointers in offline repair works differently because corrupt
files are erased long before directory tree connectivity checks are
performed.
Parent pointer checks are therefore a second pass to be added to the existing
connectivity checks:

1. After the set of surviving files has been established (i.e. phase 6),
   walk the surviving directories of each AG in the filesystem.
   This is already performed as part of the connectivity checks.

2. For each directory entry found,

   a. If the name has already been stored in the xfblob, then use that cookie
      and skip the next step.

   b. Otherwise, record the name in an xfblob, and remember the xfblob
      cookie.
      Unique mappings are critical for

      1. Deduplicating names to reduce memory usage, and

      2. Creating a stable sort key for the parent pointer indexes so that
         the parent pointer validation described below will work.

   c. Store ``(child_ag_inum, parent_inum, parent_gen, name_hash, name_len,
      name_cookie)`` tuples in a per-AG in-memory slab.
      The ``name_hash`` referenced in this section is the regular directory
      entry name hash, not the specialized one used for parent pointer
      xattrs.

3. For each AG in the filesystem,

   a. Sort the per-AG tuple set in order of ``child_ag_inum``,
      ``parent_inum``, ``name_hash``, and ``name_cookie``.
      Having a single ``name_cookie`` for each ``name`` is critical for
      handling the uncommon case of a directory containing multiple
      hardlinks to the same file where all the names hash to the same value.

   b. For each inode in the AG,

      1. Scan the inode for parent pointers.
         For each parent pointer found,

         a. Validate the ondisk parent pointer.
            If validation fails, move on to the next parent pointer in the
            file.

         b. If the name has already been stored in the xfblob, then use that
            cookie and skip the next step.

         c. Record the name in a per-file xfblob, and remember the xfblob
            cookie.

         d. Store ``(parent_inum, parent_gen, name_hash, name_len,
            name_cookie)`` tuples in a per-file slab.

      2. Sort the per-file tuples in order of ``parent_inum``,
         ``name_hash``, and ``name_cookie``.

      3. Position one slab cursor at the start of the inode's records in the
         per-AG tuple slab.
         This should be trivial since the per-AG tuples are in child inumber
         order.

      4. Position a second slab cursor at the start of the per-file tuple
         slab.

      5. Iterate the two cursors in lockstep, comparing the ``parent_ino``,
         ``name_hash``, and ``name_cookie`` fields of the records under each
         cursor:

         a. If the per-AG cursor is at a lower point in the keyspace than
            the per-file cursor, then the per-AG cursor represents a missing
            parent pointer.
            Add the parent pointer to the inode and advance the per-AG
            cursor.

         b. If the per-file cursor is at a lower point in the keyspace than
            the per-AG cursor, then the per-file cursor represents a
            dangling parent pointer.
            Remove the parent pointer from the inode and advance the
            per-file cursor.

         c. Otherwise, both cursors point at the same parent pointer.
            Update the parent_gen component if necessary.
            Advance both cursors.

4. Move on to examining link counts, as we do today.
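
Step 5 above is a classic lockstep merge of two sorted lists.
This standalone sketch, with invented types, shows how comparing
``(parent_ino, name_hash, name_cookie)`` tuples classifies each record as
missing, dangling, or matching:

.. code-block:: c

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct pptr_rec {               /* invented record layout */
        uint64_t    parent_ino;
        uint32_t    name_hash;
        uint64_t    name_cookie;
    };

    static int pptr_cmp(const struct pptr_rec *a, const struct pptr_rec *b)
    {
        if (a->parent_ino != b->parent_ino)
            return a->parent_ino < b->parent_ino ? -1 : 1;
        if (a->name_hash != b->name_hash)
            return a->name_hash < b->name_hash ? -1 : 1;
        if (a->name_cookie != b->name_cookie)
            return a->name_cookie < b->name_cookie ? -1 : 1;
        return 0;
    }

    /* Both slabs must already be sorted by the three-field key. */
    static void reconcile(const struct pptr_rec *ag, size_t nr_ag,
                          const struct pptr_rec *file, size_t nr_file)
    {
        size_t i = 0, j = 0;

        while (i < nr_ag || j < nr_file) {
            int cmp;

            if (i == nr_ag)
                cmp = 1;            /* only per-file records left */
            else if (j == nr_file)
                cmp = -1;           /* only per-AG records left */
            else
                cmp = pptr_cmp(&ag[i], &file[j]);

            if (cmp < 0) {          /* step 5a: missing pptr */
                puts("add parent pointer to inode");
                i++;
            } else if (cmp > 0) {   /* step 5b: dangling pptr */
                puts("remove parent pointer from inode");
                j++;
            } else {                /* step 5c: match */
                puts("update parent_gen if necessary");
                i++;
                j++;
            }
        }
    }

    int main(void)
    {
        struct pptr_rec ag[] = { { 128, 0xaa, 1 }, { 130, 0xbb, 2 } };
        struct pptr_rec file[] = { { 128, 0xaa, 1 }, { 129, 0xcc, 3 } };

        reconcile(ag, 2, file, 2);
        return 0;
    }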

The proposed patchset is the
`offline parent pointers repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=pptrs-repair>`_
series.

Rebuilding directories from parent pointers in offline repair is very
challenging because xfs_repair currently uses a single-pass scan of the
filesystem during phases 3 and 4 to decide which files are corrupt enough to
be zapped.
This scan would have to be converted into a multi-pass scan:

1. The first pass of the scan zaps corrupt inodes, forks, and attributes
   much as it does now.
   Corrupt directories are noted but not zapped.

2. The next pass records parent pointers pointing to the directories noted
   as being corrupt in the first pass.
   This second pass may have to happen after the phase 4 scan for duplicate
   blocks, if phase 4 is also capable of zapping directories.

3. The third pass resets corrupt directories to an empty shortform directory.
   Free space metadata has not been ensured yet, so repair cannot yet use the
   directory building code in libxfs.

4. At the start of phase 6, space metadata have been rebuilt.
   Use the parent pointer information recorded during step 2 to reconstruct
   the dirents and add them to the now-empty directories.

This code has not yet been constructed.

.. _dirtree:

Case Study: Directory Tree Structure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

As mentioned earlier, the filesystem directory tree is supposed to be a
directed acyclic graph structure.
However, each node in this graph is a separate file, and each file has its
own locks, which makes validating the tree qualities difficult.
Fortunately, non-directories are allowed to have multiple parents and cannot
have children, so only directories need to be scanned.
Directories typically constitute 5-10% of the files in a filesystem, which
reduces the amount of work dramatically.

If the directory tree could be frozen, it would be easy to discover cycles
and disconnected regions by running a depth (or breadth) first search
downwards from the root directory and marking a bitmap for each directory
found.
At any point in the walk, trying to set an already set bit means there is a
cycle.
After the scan completes, XORing the marked inode bitmap with the inode
allocation bitmap reveals disconnected inodes.
However, one of online repair's design goals is to avoid locking the entire
filesystem unless it's absolutely necessary.
Directory tree updates can move subtrees across the scanner wavefront on a
live filesystem, so the bitmap algorithm cannot be applied.

Directory parent pointers enable an incremental approach to validation of
the tree structure.
Instead of using one thread to scan the entire filesystem, multiple threads
can walk from individual subdirectories upwards towards the root.
For this to work, all directory entries and parent pointers must be
internally consistent, each directory entry must have a parent pointer, and
the link counts of all directories must be correct.
Each scanner thread must be able to take the IOLOCK of an alleged parent
directory while holding the IOLOCK of the child directory to prevent either
directory from being moved within the tree.
This is not possible since the VFS does not take the IOLOCK of a child
subdirectory when moving that subdirectory, so instead the scanner stabilizes
the parent -> child relationship by taking the ILOCKs and installing a dirent
update hook to detect changes.

The scanning process uses a dirent hook to detect changes to the directories
mentioned in the scan data.
The scan works as follows:

1. For each subdirectory in the filesystem,

   a. For each parent pointer of that subdirectory,

      1. Create a path object for that parent pointer, and mark the
         subdirectory inode number in the path object's bitmap.

      2. Record the parent pointer name and inode number in the path object.

      3. If the alleged parent is the subdirectory being scrubbed, the path
         is a cycle.
         Mark the path for deletion and repeat step 1a with the next
         subdirectory parent pointer.

      4. Try to mark the alleged parent inode number in a bitmap in the path
         object.
         If the bit is already set, then there is a cycle in the directory
         tree.
         Mark the path as a cycle and repeat step 1a with the next
         subdirectory parent pointer.

      5. Load the alleged parent.
         If the alleged parent is not a linked directory, abort the scan
         because the parent pointer information is inconsistent.

      6. For each parent pointer of this alleged ancestor directory,

         a. Record the parent pointer name and inode number in the path
            object if no parent has been set for that level.

         b. If an ancestor has more than one parent, mark the path as
            corrupt.
            Repeat step 1a with the next subdirectory parent pointer.

         c. Repeat steps 1a3-1a6 for the ancestor identified in step 1a6a.
            This repeats until the directory tree root is reached or no
            parents are found.

      7. If the walk terminates at the root directory, mark the path as ok.

      8. If the walk terminates without reaching the root, mark the path as
         disconnected.

2. If the directory entry update hook triggers, check all paths already
   found by the scan.
   If the entry matches part of a path, mark that path and the scan stale.
   When the scanner thread sees that the scan has been marked stale, it
   deletes all scan data and starts over.

Repairing the directory tree works as follows:

1. Walk each path of the target subdirectory.

   a. Corrupt paths and cycle paths are counted as suspect.

   b. Paths already marked for deletion are counted as bad.

   c. Paths that reached the root are counted as good.

2. If the subdirectory is either the root directory or has zero link count,
   delete all incoming directory entries in the immediate parents.
   Repairs are complete.

3. If the subdirectory has exactly one path, set the dotdot entry to the
   parent and exit.

4. If the subdirectory has at least one good path, delete all the other
   incoming directory entries in the immediate parents.

5. If the subdirectory has no good paths and more than one suspect path,
   delete all the other incoming directory entries in the immediate parents.

6. If the subdirectory has zero paths, attach it to the lost and found.
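
The repair steps above reduce to a decision over the path counts gathered in
step 1.
A sketch of that decision table follows, with invented names:

.. code-block:: c

    #include <stdbool.h>

    enum dirtree_action {
        DIRTREE_DELETE_ALL_ENTRIES, /* step 2: root or zero link count */
        DIRTREE_SET_DOTDOT,         /* step 3: exactly one path */
        DIRTREE_PRUNE_TO_GOOD,      /* step 4: keep a good path */
        DIRTREE_PRUNE_TO_SUSPECT,   /* step 5: keep one suspect path */
        DIRTREE_ADOPT,              /* step 6: attach to lost and found */
        DIRTREE_NOOP,               /* nothing specified for this case */
    };

    struct dirtree_counts {
        unsigned int    good;       /* paths that reached the root */
        unsigned int    suspect;    /* corrupt and cycle paths */
        unsigned int    bad;        /* paths marked for deletion */
        bool            is_root;
        bool            zero_nlink;
    };

    static enum dirtree_action
    dirtree_decide(const struct dirtree_counts *dc)
    {
        unsigned int nr_paths = dc->good + dc->suspect + dc->bad;

        if (dc->is_root || dc->zero_nlink)
            return DIRTREE_DELETE_ALL_ENTRIES;
        if (nr_paths == 1)
            return DIRTREE_SET_DOTDOT;
        if (dc->good > 0)
            return DIRTREE_PRUNE_TO_GOOD;
        if (dc->suspect > 1)
            return DIRTREE_PRUNE_TO_SUSPECT;
        if (nr_paths == 0)
            return DIRTREE_ADOPT;
        return DIRTREE_NOOP;
    }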

The proposed patches are in the
`directory tree repair
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-directory-tree>`_
series.

.. _orphanage:

The Orphanage
-------------

Filesystems present files as a directed, and hopefully acyclic, graph.
In other words, a tree.
The root of the filesystem is a directory, and each entry in a directory
points downwards either to more subdirectories or to non-directory files.
Unfortunately, a disruption in the directory graph pointers results in a
disconnected graph, which makes files impossible to access via regular path
resolution.

Without parent pointers, the directory parent pointer online scrub code can
detect a dotdot entry pointing to a parent directory that doesn't have a link
back to the child directory and the file link count checker can detect a file
that isn't pointed to by any directory in the filesystem.
If such a file has a positive link count, the file is an orphan.

With parent pointers, directories can be rebuilt by scanning parent pointers
and parent pointers can be rebuilt by scanning directories.
This should reduce the incidence of files ending up in ``/lost+found``.

When orphans are found, they should be reconnected to the directory tree.
Offline fsck solves the problem by creating a directory ``/lost+found`` to
serve as an orphanage, and linking orphan files into the orphanage by using
the inumber as the name.
Reparenting a file to the orphanage does not reset any of its permissions or
ACLs.

This process is more involved in the kernel than it is in userspace.
The directory and file link count repair setup functions must use the regular
VFS mechanisms to create the orphanage directory with all the necessary
security attributes and dentry cache entries, just like a regular directory
tree modification.

Orphaned files are adopted by the orphanage as follows:

1. Call ``xrep_orphanage_try_create`` at the start of the scrub setup
   function to try to ensure that the lost and found directory actually
   exists.
   This also attaches the orphanage directory to the scrub context.

2. If the decision is made to reconnect a file, take the IOLOCK of both the
   orphanage and the file being reattached.
   The ``xrep_orphanage_iolock_two`` function follows the inode locking
   strategy discussed earlier.

3. Use ``xrep_adoption_trans_alloc`` to reserve resources to the repair
   transaction.

4. Call ``xrep_orphanage_compute_name`` to compute the new name in the
   orphanage.

5. If the adoption is going to happen, call ``xrep_adoption_reparent`` to
   reparent the orphaned file into the lost and found and invalidate the
   dentry cache.

6. Call ``xrep_adoption_finish`` to commit any filesystem updates, release
   the orphanage ILOCK, and clean the scrub transaction.
   Call ``xrep_adoption_commit`` to commit the updates and the scrub
   transaction.

7. If a runtime error happens, call ``xrep_adoption_cancel`` to release all
   resources.
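
Both offline and online fsck name an adopted file after its inumber, as
described above.
A trivial model of the name computation follows; ``orphanage_compute_name``
here is an invented stand-in for ``xrep_orphanage_compute_name``:

.. code-block:: c

    #include <inttypes.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t xfs_ino_t;

    static void orphanage_compute_name(char *name, size_t len, xfs_ino_t ino)
    {
        snprintf(name, len, "%" PRIu64, ino);
    }

    int main(void)
    {
        char name[21];  /* enough for a 64-bit inumber plus NUL */

        orphanage_compute_name(name, sizeof(name), 3149211);
        printf("/lost+found/%s\n", name);
        return 0;
    }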

Orphaned files are adopted by the orphanage as follows:

1. Call ``xrep_orphanage_try_create`` at the start of the scrub setup
   function to try to ensure that the lost and found directory actually
   exists.
   This also attaches the orphanage directory to the scrub context.

2. If the decision is made to reconnect a file, take the IOLOCK of both the
   orphanage and the file being reattached.
   The ``xrep_orphanage_iolock_two`` function follows the inode locking
   strategy discussed earlier.

3. Use ``xrep_adoption_trans_alloc`` to reserve resources to the repair
   transaction.

4. Call ``xrep_orphanage_compute_name`` to compute the new name in the
   orphanage.

5. If the adoption is going to happen, call ``xrep_adoption_reparent`` to
   reparent the orphaned file into the lost and found and invalidate the
   dentry cache.

6. Call ``xrep_adoption_finish`` to commit any filesystem updates, release
   the orphanage ILOCK, and clean the scrub transaction.  Call
   ``xrep_adoption_commit`` to commit the updates and the scrub transaction.

7. If a runtime error happens, call ``xrep_adoption_cancel`` to release all
   resources.

The proposed patches are in the
`orphanage adoption
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-orphanage>`_
series.
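
The sequence above maps naturally onto an error-unwinding pattern.
The sketch below strings the named helpers together; the argument lists and
types are guesses for illustration only, so consult the patch series for the
real signatures:

.. code-block:: c

   /*
    * Illustrative control flow for adopting one orphan, assuming the
    * orphanage was already created (step 1) and both IOLOCKs are held
    * (step 2).  Not the actual kernel code.
    */
   static int try_adopt_orphan(struct xfs_scrub *sc,
                               struct xrep_adoption *adopt,
                               struct xfs_name *xname)
   {
           int error;

           /* Step 3: reserve resources to the repair transaction. */
           error = xrep_adoption_trans_alloc(sc, adopt);
           if (error)
                   return error;

           /* Step 4: pick a name (the inumber) for the lost+found entry. */
           error = xrep_orphanage_compute_name(adopt, xname);
           if (error)
                   goto out_cancel;

           /* Step 5: wire the file into the orphanage directory. */
           error = xrep_adoption_reparent(adopt, xname);
           if (error)
                   goto out_cancel;

           /* Step 6: finish filesystem updates, then commit everything. */
           error = xrep_adoption_finish(sc, adopt);
           if (error)
                   goto out_cancel;
           return xrep_adoption_commit(sc, adopt);

   out_cancel:
           /* Step 7: unwind reservations and locks on failure. */
           xrep_adoption_cancel(adopt, error);
           return error;
   }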

6. Userspace Algorithms and Data Structures
===========================================

This section discusses the key algorithms and data structures of the
userspace program, ``xfs_scrub``, that provide the ability to drive metadata
checks and repairs in the kernel, verify file data, and look for other
potential problems.

.. _scrubcheck:

Checking Metadata
-----------------

Recall the :ref:`phases of fsck work<scrubphases>` outlined earlier.
That structure follows naturally from the data dependencies designed into the
filesystem from its beginnings in 1993.
In XFS, there are several groups of metadata dependencies:

a. Filesystem summary counts depend on consistency within the inode indices,
   the allocation group space btrees, and the realtime volume space
   information.

b. Quota resource counts depend on consistency within the quota file data
   forks, inode indices, inode records, and the forks of every file on the
   system.

c. The naming hierarchy depends on consistency within the directory and
   extended attribute structures.
   This includes file link counts.

d. Directories, extended attributes, and file data depend on consistency
   within the file forks that map directory and extended attribute data to
   physical storage media.

e. The file forks depend on consistency within inode records and the space
   metadata indices of the allocation groups and the realtime volume.
   This includes quota and realtime metadata files.

f. Inode records depend on consistency within the inode metadata indices.

g. Realtime space metadata depend on the inode records and data forks of the
   realtime metadata inodes.

h. The allocation group metadata indices (free space, inodes, reference
   count, and reverse mapping btrees) depend on consistency within the AG
   headers and between all the AG metadata btrees.

i. ``xfs_scrub`` depends on the filesystem being mounted and kernel support
   for online fsck functionality.

Therefore, a metadata dependency graph is a convenient way to schedule
checking operations in the ``xfs_scrub`` program:

- Phase 1 checks that the provided path maps to an XFS filesystem and detects
  the kernel's scrubbing abilities, which validates group (i).

- Phase 2 scrubs groups (g) and (h) in parallel using a threaded workqueue.

- Phase 3 scans inodes in parallel.
  For each inode, groups (f), (e), and (d) are checked, in that order.

- Phase 4 repairs everything in groups (i) through (d) so that phases 5 and 6
  may run reliably.

- Phase 5 starts by checking groups (b) and (c) in parallel before moving on
  to checking names.

- Phase 6 depends on groups (i) through (b) to find file data blocks to
  verify, to read them, and to report which blocks of which files are
  affected.

- Phase 7 checks group (a), having validated everything else.

Notice that the data dependencies between groups are enforced by the
structure of the program flow.
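
One way to see that the phase ordering is just a topological sort of the
dependency graph is to encode the mapping as a table.
The encoding below is purely illustrative -- ``xfs_scrub`` does not store its
schedule this way:

.. code-block:: c

   /* Dependency groups (a)-(i) from the list above, as bit flags. */
   enum {
           G_A = 1 << 0,   /* summary counts */
           G_B = 1 << 1,   /* quota resource counts */
           G_C = 1 << 2,   /* naming hierarchy */
           G_D = 1 << 3,   /* dirs, xattrs, file data */
           G_E = 1 << 4,   /* file forks */
           G_F = 1 << 5,   /* inode records */
           G_G = 1 << 6,   /* realtime space metadata */
           G_H = 1 << 7,   /* AG metadata indices */
           G_I = 1 << 8,   /* mounted fs + kernel support */
   };

   /* Groups validated by each checking phase, in dependency order. */
   static const unsigned int phase_validates[] = {
           [1] = G_I,
           [2] = G_G | G_H,
           [3] = G_F | G_E | G_D,
           /* phase 4 repairs groups (i) through (d) */
           [5] = G_B | G_C,
           /* phase 6 reads file data; depends on (i) through (b) */
           [7] = G_A,
   };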

Parallel Inode Scans
--------------------

An XFS filesystem can easily contain hundreds of millions of inodes.
Given that XFS targets installations with large high-performance storage,
it is desirable to scrub inodes in parallel to minimize runtime, particularly
if the program has been invoked manually from a command line.
This requires careful scheduling to keep the threads as evenly loaded as
possible.

Early iterations of the ``xfs_scrub`` inode scanner naïvely created a single
workqueue and scheduled a single workqueue item per AG.
Each workqueue item walked the inode btree (with ``XFS_IOC_INUMBERS``) to
find inode chunks and then called bulkstat (``XFS_IOC_BULKSTAT``) to gather
enough information to construct file handles.
The file handle was then passed to a function to generate scrub items for
each metadata object of each inode.
This simple algorithm leads to thread balancing problems in phase 3 if the
filesystem contains one AG with a few large sparse files and the rest of the
AGs contain many smaller files.
The inode scan dispatch function was not sufficiently granular; it should
have been dispatching at the level of individual inodes, or, to constrain
memory consumption, inode btree records.

Thanks to Dave Chinner, bounded workqueues in userspace enable ``xfs_scrub``
to avoid this problem with ease by adding a second workqueue.
Just like before, the first workqueue is seeded with one workqueue item per
AG, and it uses INUMBERS to find inode btree chunks.
The second workqueue, however, is configured with an upper bound on the
number of items that can be waiting to be run.
Each inode btree chunk found by the first workqueue's workers is queued to
the second workqueue, and it is this second workqueue that queries BULKSTAT,
creates a file handle, and passes it to a function to generate scrub items
for each metadata object of each inode.
If the second workqueue is too full, the workqueue add function blocks the
first workqueue's workers until the backlog eases.
This doesn't completely solve the balancing problem, but reduces it enough to
move on to more pressing issues.
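
The essential mechanism is that enqueueing to a full bounded queue blocks the
*producer*.
Here is a minimal sketch of that handoff using a counting semaphore; the
``bounded_wq`` type is invented for illustration, and the real implementation
lives in the workqueue code in xfsprogs:

.. code-block:: c

   #include <semaphore.h>

   /* A workqueue whose backlog is capped by a counting semaphore. */
   struct bounded_wq {
           sem_t   slots;  /* initialized to the maximum queue depth */
           /* ... queue storage and worker threads elided ... */
   };

   /*
    * Called by the per-AG INUMBERS workers for each inode btree record.
    * When all slots are taken, sem_wait() blocks this producer, which
    * throttles the first workqueue until the BULKSTAT workers catch up.
    */
   static void queue_inode_chunk(struct bounded_wq *wq, void *chunk_rec)
   {
           sem_wait(&wq->slots);
           /*
            * Enqueue chunk_rec here; the consuming worker calls
            * sem_post(&wq->slots) when it finishes the item.
            */
   }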

The proposed patchsets are the scrub
`performance tweaks
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-performance-tweaks>`_
and the
`inode scan rebalance
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-iscan-rebalance>`_
series.

.. _scrubrepair:

Scheduling Repairs
------------------

During phase 2, corruptions and inconsistencies reported in any AGI header or
inode btree are repaired immediately, because phase 3 relies on proper
functioning of the inode indices to find inodes to scan.
Failed repairs are rescheduled to phase 4.
Problems reported in any other space metadata are deferred to phase 4.
Optimization opportunities are always deferred to phase 4, no matter their
origin.

During phase 3, corruptions and inconsistencies reported in any part of a
file's metadata are repaired immediately if all space metadata were validated
during phase 2.
Repairs that fail or cannot be repaired immediately are scheduled for
phase 4.

In the original design of ``xfs_scrub``, it was thought that repairs would be
so infrequent that the ``struct xfs_scrub_metadata`` objects used to
communicate with the kernel could also be used as the primary object to
schedule repairs.
With recent increases in the number of optimizations possible for a given
filesystem object, it became much more memory-efficient to track all eligible
repairs for a given filesystem object with a single repair item.
Each repair item represents a single lockable object -- AGs, metadata files,
individual inodes, or a class of summary information.

Phase 4 is responsible for scheduling a lot of repair work in as quick a
manner as is practical.
The :ref:`data dependencies <scrubcheck>` outlined earlier still apply, which
means that ``xfs_scrub`` must try to complete the repair work scheduled by
phase 2 before trying repair work scheduled by phase 3.
The repair process is as follows:

1. Start a round of repair with a workqueue and enough workers to keep the
   CPUs as busy as the user desires.

   a. For each repair item queued by phase 2,

      i.   Ask the kernel to repair everything listed in the repair item for
           a given filesystem object.

      ii.  Make a note if the kernel made any progress in reducing the number
           of repairs needed for this object.

      iii. If the object no longer requires repairs, revalidate all metadata
           associated with this object.
           If the revalidation succeeds, drop the repair item.
           If not, requeue the item for more repairs.

   b. If any repairs were made, jump back to 1a to retry all the phase 2
      items.

   c. For each repair item queued by phase 3,

      i.   Ask the kernel to repair everything listed in the repair item for
           a given filesystem object.

      ii.  Make a note if the kernel made any progress in reducing the number
           of repairs needed for this object.

      iii. If the object no longer requires repairs, revalidate all metadata
           associated with this object.
           If the revalidation succeeds, drop the repair item.
           If not, requeue the item for more repairs.

   d. If any repairs were made, jump back to 1c to retry all the phase 3
      items.

2. If step 1 made any repair progress of any kind, jump back to step 1 to
   start another round of repair.

3. If there are items left to repair, run them all serially one more time.
   Complain if the repairs were not successful, since this is the last chance
   to repair anything.
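
Condensed into code, the scheduler is a pair of nested retry loops that keep
going as long as anything improves.
The sketch below is an illustration only; the types and helpers are invented,
and the real logic (including the per-item progress accounting) lives in the
``xfs_scrub`` repair code:

.. code-block:: c

   #include <stdbool.h>

   struct scrub_ctx;

   /* Returns true if any item in the list was repaired this pass. */
   bool repair_item_list(struct scrub_ctx *ctx, int which_phase);

   static void phase4_repair(struct scrub_ctx *ctx)
   {
           bool any_progress;

           do {
                   any_progress = false;

                   /* Retry phase 2 items until they stop improving... */
                   while (repair_item_list(ctx, 2))
                           any_progress = true;

                   /* ...then do the same for the phase 3 items. */
                   while (repair_item_list(ctx, 3))
                           any_progress = true;
           } while (any_progress);

           /*
            * Step 3: last chance.  Run the leftovers one more time
            * (serially, in the real program) and complain about any
            * item that still cannot be repaired.
            */
           repair_item_list(ctx, 2);
           repair_item_list(ctx, 3);
   }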

Corruptions and inconsistencies encountered during phases 5 and 7 are
repaired immediately.
Corrupt file data blocks reported by phase 6 cannot be recovered by the
filesystem.

The proposed patchsets are the
`repair warning improvements
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-better-repair-warnings>`_,
refactoring of the
`repair data dependency
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-repair-data-deps>`_
and
`object tracking
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-object-tracking>`_,
and the
`repair scheduling
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-repair-scheduling>`_
improvement series.

Checking Names for Confusable Unicode Sequences
-----------------------------------------------

If ``xfs_scrub`` succeeds in validating the filesystem metadata by the end of
phase 4, it moves on to phase 5, which checks for suspicious looking names in
the filesystem.
These names consist of the filesystem label, names in directory entries, and
the names of extended attributes.
Like most Unix filesystems, XFS imposes the sparest of constraints on the
contents of a name:

- Slashes and null bytes are not allowed in directory entries.

- Null bytes are not allowed in userspace-visible extended attributes.

- Null bytes are not allowed in the filesystem label.

Directory entries and attribute keys store the length of the name explicitly
ondisk, which means that nulls are not name terminators.
For this section, the term "naming domain" refers to any place where names
are presented together -- all the names in a directory, or all the attributes
of a file.

Although the Unix naming constraints are very permissive, the reality of most
modern-day Linux systems is that programs work with Unicode character code
points to support international languages.
These programs typically encode those code points in UTF-8 when interfacing
with the C library because the kernel expects null-terminated names.
In the common case, therefore, names found in an XFS filesystem are actually
UTF-8 encoded Unicode data.

To maximize its expressiveness, the Unicode standard defines separate control
points for various characters that render similarly or identically in writing
systems around the world.
For example, the character "Cyrillic Small Letter A" U+0430 "а" often renders
identically to "Latin Small Letter A" U+0061 "a".

The standard also permits characters to be constructed in multiple ways --
either by using a defined code point, or by combining one code point with
various combining marks.
For example, the character "Angstrom Sign" U+212B "Å" can also be expressed
as "Latin Capital Letter A" U+0041 "A" followed by "Combining Ring Above"
U+030A "◌̊".
Both sequences render identically.

Like the standards that preceded it, Unicode also defines various control
characters to alter the presentation of text.
For example, the character "Right-to-Left Override" U+202E can trick some
programs into rendering "moo\\xe2\\x80\\xaegnp.txt" as "mootxt.png".
A second category of rendering problems involves whitespace characters.
If the character "Zero Width Space" U+200B is encountered in a file name, the
name will render identically to a name that does not have the zero width
space.

If two names within a naming domain have different byte sequences but render
identically, a user may be confused by them.
The kernel, in its indifference to upper level encoding schemes, permits
this.
Most filesystem drivers persist the byte sequence names that are given to
them by the VFS.

Techniques for detecting confusable names are explained in great detail in
sections 4 and 5 of the
`Unicode Security Mechanisms <https://unicode.org/reports/tr39/>`_
document.
When ``xfs_scrub`` detects UTF-8 encoding in use on a system, it uses the
Unicode normalization form NFD in conjunction with the confusable name
detection component of
`libicu <https://github.com/unicode-org/icu>`_
to identify names within a directory or within a file's extended attributes
that could be confused for each other.
Names are also checked for control characters, non-rendering characters, and
mixing of bidirectional characters.
All of these potential issues are reported to the system administrator during
phase 5.
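
For a taste of what the confusable-name check involves, the sketch below asks
libicu's spoof checker whether two UTF-8 names could be confused for each
other.
This is a simplification: ``xfs_scrub`` computes an NFD skeleton once per
name and compares skeletons across the naming domain, rather than testing
every pair directly.
(Build against ICU, e.g. with ``-licuuc -licui18n``.)

.. code-block:: c

   #include <stdio.h>
   #include <unicode/uspoof.h>

   static int names_confusable(const char *name1, const char *name2)
   {
           UErrorCode      status = U_ZERO_ERROR;
           USpoofChecker   *sc = uspoof_open(&status);
           int32_t         bits = 0;

           if (U_SUCCESS(status))
                   bits = uspoof_areConfusableUTF8(sc, name1, -1,
                                                   name2, -1, &status);
           if (sc)
                   uspoof_close(sc);
           return U_SUCCESS(status) && bits != 0;
   }

   int main(void)
   {
           /* "Cyrillic Small Letter A" vs. "Latin Small Letter A" */
           if (names_confusable("\xd0\xb0pple.txt", "apple.txt"))
                   printf("these names render confusingly alike\n");
           return 0;
   }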

Media Verification of File Data Extents
---------------------------------------

The system administrator can elect to initiate a media scan of all file data
blocks.
This scan runs as phase 6, after validation of all filesystem metadata
(except for the summary counters).
The scan starts by calling ``FS_IOC_GETFSMAP`` to scan the filesystem space
map to find areas that are allocated to file data fork extents.
Gaps between data fork extents that are smaller than 64k are treated as if
they were data fork extents to reduce the command setup overhead.
When the space map scan accumulates a region larger than 32MB, a media
verification request is sent to the disk as a direct I/O read of the raw
block device.
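
A media verification request reduces to a direct read of the raw block device
covering the accumulated region.
Below is a minimal sketch of one such request; the 4096-byte alignment and
the reduced error handling are simplifying assumptions:

.. code-block:: c

   #define _GNU_SOURCE     /* O_DIRECT */
   #include <fcntl.h>
   #include <stdlib.h>
   #include <unistd.h>

   /* Directly read [start, start + len) from the raw block device. */
   static int verify_region(const char *blkdev, off_t start, size_t len)
   {
           void    *buf = NULL;
           ssize_t ret = -1;
           int     fd = open(blkdev, O_RDONLY | O_DIRECT);

           if (fd < 0)
                   return -1;

           /* O_DIRECT requires an aligned buffer (and aligned I/O). */
           if (posix_memalign(&buf, 4096, len) == 0) {
                   ret = pread(fd, buf, len, start);
                   free(buf);
           }
           close(fd);
           return ret == (ssize_t)len ? 0 : -1;
   }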

If the verification read fails, ``xfs_scrub`` retries with single-block reads
to narrow the failure down to the specific region of the media, and records
that region.
When it has finished issuing verification requests, it again uses the space
mapping ioctl to map the recorded media errors back to metadata structures
and report what has been lost.
For media errors in blocks owned by files, parent pointers can be used to
construct file paths from inode numbers for user-friendly reporting.

7. Conclusion and Future Work
=============================

It is hoped that the reader has followed the designs laid out in this
document and now has some familiarity with how XFS performs online rebuilding
of its metadata indices, and how filesystem users can interact with that
functionality.
Although the scope of this work is daunting, it is hoped that this guide will
make it easier for code readers to understand what has been built, for whom
it has been built, and why.
Please feel free to contact the XFS mailing list with questions.

XFS_IOC_EXCHANGE_RANGE
----------------------

As discussed earlier, a second frontend to the atomic file mapping exchange
mechanism is a new ioctl call that userspace programs can use to commit
updates to files atomically.
This frontend has been out for review for several years now, though the
necessary refinements to online repair and lack of customer demand mean that
the proposal has not been pushed very hard.

File Content Exchanges with Regular User Files
``````````````````````````````````````````````

As mentioned earlier, XFS has long had the ability to swap extents between
files, which is used almost exclusively by ``xfs_fsr`` to defragment files.
The earliest form of this was the fork swap mechanism, where the entire
contents of data forks could be exchanged between two files by exchanging the
raw bytes in each inode fork's immediate area.
When XFS v5 came along with self-describing metadata, this old mechanism grew
some log support to continue rewriting the owner fields of BMBT blocks during
log recovery.
When the reverse mapping btree was later added to XFS, the only way to
maintain the consistency of the fork mappings with the reverse mapping index
was to develop an iterative mechanism that used deferred bmap and rmap
operations to swap mappings one at a time.
This mechanism is identical to steps 2-3 from the procedure above except for
the new tracking items, because the atomic file mapping exchange mechanism is
an iteration of an existing mechanism and not something totally novel.
For the narrow case of file defragmentation, the file contents must be
identical, so the recovery guarantees are not much of a gain.

Atomic file content exchanges are much more flexible than the existing
swapext implementations because they can guarantee that the caller never sees
a mix of old and new contents even after a crash, and they can operate on two
arbitrary file fork ranges.
The extra flexibility enables several new use cases:

- **Atomic commit of file writes**: A userspace process opens a file that it
  wants to update.
  Next, it opens a temporary file and calls the file clone operation to
  reflink the first file's contents into the temporary file.
  Writes to the original file should instead be written to the temporary
  file.
  Finally, the process calls the atomic file mapping exchange system call
  (``XFS_IOC_EXCHANGE_RANGE``) to exchange the file contents, thereby
  committing all of the updates to the original file, or none of them.
  A sketch of this flow appears after this list.

.. _exchrange_if_unchanged:

- **Transactional file updates**: The same mechanism as above, but the caller
  only wants the commit to occur if the original file's contents have not
  changed.
  To make this happen, the calling process snapshots the file modification
  and change timestamps of the original file before reflinking its data to
  the temporary file.
  When the program is ready to commit the changes, it passes the timestamps
  into the kernel as arguments to the atomic file mapping exchange system
  call.
  The kernel only commits the changes if the provided timestamps match the
  original file.
  A new ioctl (``XFS_IOC_COMMIT_RANGE``) is provided to perform this.

- **Emulation of atomic block device writes**: Export a block device with a
  logical sector size matching the filesystem block size to force all writes
  to be aligned to the filesystem block size.
  Stage all writes to a temporary file, and when that is complete, call the
  atomic file mapping exchange system call with a flag to indicate that holes
  in the temporary file should be ignored.
  This emulates an atomic device write in software, and can support arbitrary
  scattered writes.
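
The first use case above reduces to a small amount of userspace code.
The sketch below assumes the uapi from the proposed patches -- the structure
layout, flag names, and calling convention may differ in the merged version,
so treat every identifier here as provisional:

.. code-block:: c

   #include <fcntl.h>
   #include <sys/ioctl.h>
   #include <xfs/xfs_fs.h> /* struct xfs_exchange_range (xfsprogs) */

   /*
    * Atomically commit staged contents: after this returns zero, the
    * file behind orig_fd has the temporary file's contents in the
    * exchanged range -- all of the updates or none of them.
    */
   static int commit_file_update(int orig_fd, int temp_fd)
   {
           struct xfs_exchange_range args = {
                   .file1_fd = temp_fd,    /* the staged contents */
                   /* both offsets zero: exchange from byte 0 onwards */
                   .flags = XFS_EXCHANGE_RANGE_TO_EOF |
                            XFS_EXCHANGE_RANGE_DSYNC,
           };

           /* Called on the file that receives the committed update. */
           return ioctl(orig_fd, XFS_IOC_EXCHANGE_RANGE, &args);
   }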

Vectorized Scrub
----------------

As it turns out, the :ref:`refactoring <scrubrepair>` of repair items
mentioned earlier was a catalyst for enabling a vectorized scrub system call.
Since 2018, the cost of making a kernel call has increased considerably on
some systems to mitigate the effects of speculative execution attacks.
This incentivizes program authors to make as few system calls as possible to
reduce the number of times an execution path crosses a security boundary.

With vectorized scrub, userspace pushes to the kernel the identity of a
filesystem object, a list of scrub types to run against that object, and a
simple representation of the data dependencies between the selected scrub
types.
The kernel executes as much of the caller's plan as it can until it hits a
dependency that cannot be satisfied due to a corruption, and tells userspace
how much was accomplished.
It is hoped that ``io_uring`` will pick up enough of this functionality that
online fsck can use that instead of adding a separate vectored scrub system
call to XFS.

The relevant patchsets are the
`kernel vectorized scrub
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=vectorized-scrub>`_
and
`userspace vectorized scrub
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=vectorized-scrub>`_
series.

Quality of Service Targets for Scrub
------------------------------------

One serious shortcoming of the online fsck code is that the amount of time
that it can spend in the kernel holding resource locks is basically
unbounded.
Userspace is allowed to send a fatal signal to the process which will cause
``xfs_scrub`` to exit when it reaches a good stopping point, but there's no
way for userspace to provide a time budget to the kernel.
Given that the scrub codebase has helpers to detect fatal signals, it
shouldn't be too much work to allow userspace to specify a timeout for a
scrub/repair operation and abort the operation if it exceeds budget.
However, most repair functions have the property that once they begin to
touch ondisk metadata, the operation cannot be cancelled cleanly, after which
a QoS timeout is no longer useful.

Defragmenting Free Space
------------------------

Over the years, many XFS users have requested the creation of a program to
clear a portion of the physical storage underlying a filesystem so that it
becomes a contiguous chunk of free space.
Call this free space defragmenter ``clearspace`` for short.

The first piece the ``clearspace`` program needs is the ability to read the
reverse mapping index from userspace.
This already exists in the form of the ``FS_IOC_GETFSMAP`` ioctl.
The second piece it needs is a new fallocate mode
(``FALLOC_FL_MAP_FREE_SPACE``) that allocates the free space in a region and
maps it to a file.
Call this file the "space collector" file.
The third piece is the ability to force an online repair.

To clear all the metadata out of a portion of physical storage, clearspace
uses the new fallocate map-freespace call to map any free space in that
region to the space collector file.
Next, clearspace finds all metadata blocks in that region by way of
``GETFSMAP`` and issues forced repair requests on the data structure.
This often results in the metadata being rebuilt somewhere that is not being
cleared.
After each relocation, clearspace calls the "map free space" function again
to collect any newly freed space in the region being cleared.

To clear all the file data out of a portion of the physical storage,
clearspace uses the FSMAP information to find relevant file data blocks.
Having identified a good target, it uses the ``FICLONERANGE`` call on that
part of the file to try to share the physical space with a dummy file.
Cloning the extent means that the original owners cannot overwrite the
contents; any changes will be written somewhere else via copy-on-write.
Clearspace makes its own copy of the frozen extent in an area that is not
being cleared, and uses ``FIDEDUPERANGE`` (or the :ref:`atomic file mapping
exchange <exchrange_if_unchanged>` feature) to change the target file's data
extent mapping away from the area being cleared.
When all other mappings have been moved, clearspace reflinks the space into
the space collector file so that it becomes unavailable.
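
The ``FS_IOC_GETFSMAP`` loop that drives both passes already has a stable
uapi.
A minimal sketch of walking every reverse mapping record on a filesystem
follows; error handling is trimmed, and the record processing is reduced to a
``printf``:

.. code-block:: c

   #include <stdio.h>
   #include <stdlib.h>
   #include <sys/ioctl.h>
   #include <linux/fsmap.h>

   #define NR_RECS 128

   static void dump_fsmap(int fd) /* fd: any file on the filesystem */
   {
           struct fsmap_head *head;

           head = calloc(1, sizeof(*head) + NR_RECS * sizeof(struct fsmap));
           head->fmh_count = NR_RECS;
           /* High key of all ones means "to the end of the space map". */
           head->fmh_keys[1].fmr_device = ~0U;
           head->fmh_keys[1].fmr_flags = ~0U;
           head->fmh_keys[1].fmr_physical = ~0ULL;
           head->fmh_keys[1].fmr_owner = ~0ULL;
           head->fmh_keys[1].fmr_offset = ~0ULL;

           while (ioctl(fd, FS_IOC_GETFSMAP, head) == 0 &&
                  head->fmh_entries > 0) {
                   for (unsigned int i = 0; i < head->fmh_entries; i++) {
                           struct fsmap *r = &head->fmh_recs[i];

                           printf("dev %u phys %llu len %llu owner %llu\n",
                                  r->fmr_device,
                                  (unsigned long long)r->fmr_physical,
                                  (unsigned long long)r->fmr_length,
                                  (unsigned long long)r->fmr_owner);
                   }
                   if (head->fmh_entries < head->fmh_count)
                           break;  /* ran out of records */
                   /* Continue after the last record returned. */
                   fsmap_advance(head);
           }
           free(head);
   }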

There are further optimizations that could apply to the above algorithm.
To clear a piece of physical storage that has a high sharing factor, it is
strongly desirable to retain this sharing factor.
In fact, these extents should be moved first to maximize sharing factor after
the operation completes.
To make this work smoothly, clearspace needs a new ioctl
(``FS_IOC_GETREFCOUNTS``) to report reference count information to userspace.
With the refcount information exposed, clearspace can quickly find the
longest, most shared data extents in the filesystem, and target them first.

**Future Work Question**: How might the filesystem move inode chunks?

*Answer*: To move inode chunks, Dave Chinner constructed a prototype program
that creates a new file with the old contents and then locklessly runs around
the filesystem updating directory entries.
The operation cannot complete if the filesystem goes down.
That problem isn't totally insurmountable: create an inode remapping table
hidden behind a jump label, and a log item that tracks the kernel walking the
filesystem to update directory entries.
The trouble is, the kernel can't do anything about open files, since it
cannot revoke them.

**Future Work Question**: Can static keys be used to minimize the cost of
supporting ``revoke()`` on XFS files?

*Answer*: Yes.
Until the first revocation, the bailout code need not be in the call path at
all.

The relevant patchsets are the
`kernel freespace defrag
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=defrag-freespace>`_
and
`userspace freespace defrag
<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=defrag-freespace>`_
series.

Shrinking Filesystems
---------------------

Removing the end of the filesystem ought to be a simple matter of evacuating
the data and metadata at the end of the filesystem, and handing the freed
space to the shrink code.
That requires an evacuation of the space at end of the filesystem, which is a
use of free space defragmentation!