.. SPDX-License-Identifier: GPL-2.0

==========================
RCU Torture Test Operation
==========================


CONFIG_RCU_TORTURE_TEST
=======================

The CONFIG_RCU_TORTURE_TEST config option is available for all RCU
implementations.  It creates an rcutorture kernel module that can
be loaded to run a torture test.  The test periodically outputs
status messages via printk(), which can be examined via the dmesg
command (perhaps grepping for "torture").  The test is started
when the module is loaded, and stops when the module is unloaded.

Module parameters are prefixed by "rcutorture." in
Documentation/admin-guide/kernel-parameters.txt.

Output
======

The statistics output is as follows::

	rcu-torture:--- Start of test: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
	rcu-torture: rtc:           (null) ver: 155441 tfle: 0 rta: 155441 rtaf: 8884 rtf: 155440 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 3055767
	rcu-torture: Reader Pipe:  727860534 34213 0 0 0 0 0 0 0 0 0
	rcu-torture: Reader Batch:  727877838 17003 0 0 0 0 0 0 0 0 0
	rcu-torture: Free-Block Circulation:  155440 155440 155440 155440 155440 155440 155440 155440 155440 155440 0
	rcu-torture:--- End of test: SUCCESS: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4

The command "dmesg | grep torture:" will extract this information on
most systems.  On more esoteric configurations, it may be necessary to
use other commands to access the output of the printk()s used by
the RCU torture test.  The printk()s use KERN_ALERT, so they should
be evident.  ;-)

The first and last lines show the rcutorture module parameters, and the
last line shows either "SUCCESS" or "FAILURE", based on rcutorture's
automatic determination as to whether RCU operated correctly.
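
For scripted use, the verdict on that last line can be checked directly
from dmesg.  The following is a minimal sketch, assuming the "End of test"
message shown above; the echoed messages are merely illustrative::

	#!/bin/sh

	# Check rcutorture's final verdict in the kernel log.
	if dmesg | grep -q 'torture:--- End of test: SUCCESS'
	then
		echo rcutorture reported SUCCESS
	else
		echo rcutorture reported FAILURE or did not complete
	fi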

The entries are as follows:

*	"rtc": The hexadecimal address of the structure currently visible
	to readers.

*	"ver": The number of times since boot that the RCU writer task
	has changed the structure visible to readers.

*	"tfle": If non-zero, indicates that the "torture freelist"
	containing structures to be placed into the "rtc" area is empty.
	This condition is important, since it can fool you into thinking
	that RCU is working when it is not. :-/

*	"rta": Number of structures allocated from the torture freelist.

*	"rtaf": Number of allocations from the torture freelist that have
	failed due to the list being empty.  It is not unusual for this
	to be non-zero, but it is bad for it to be a large fraction of
	the value indicated by "rta".

*	"rtf": Number of frees into the torture freelist.

*	"rtmbe": A non-zero value indicates that rcutorture believes that
	rcu_assign_pointer() and rcu_dereference() are not working
	correctly.  This value should be zero.

*	"rtbe": A non-zero value indicates that one of the rcu_barrier()
	family of functions is not working correctly.

*	"rtbke": rcutorture was unable to create the real-time kthreads
	used to force RCU priority inversion.  This value should be zero.

*	"rtbre": Although rcutorture successfully created the kthreads
	used to force RCU priority inversion, it was unable to set them
	to the real-time priority level of 1.  This value should be zero.

*	"rtbf": The number of times that RCU priority boosting failed
	to resolve RCU priority inversion.

*	"rtb": The number of times that rcutorture attempted to force
	an RCU priority inversion condition.  If you are testing RCU
	priority boosting via the "test_boost" module parameter, this
	value should be non-zero.

*	"nt": The number of times rcutorture ran RCU read-side code from
	within a timer handler.  This value should be non-zero only
	if you specified the "irqreader" module parameter.

*	"Reader Pipe": Histogram of "ages" of structures seen by readers.
	If any entries past the first two are non-zero, RCU is broken.
	And rcutorture prints the error flag string "!!!" to make sure
	you notice.  The age of a newly allocated structure is zero,
	it becomes one when removed from reader visibility, and is
	incremented once per grace period subsequently -- and is freed
	after passing through (RCU_TORTURE_PIPE_LEN-2) grace periods.

	The output displayed above was taken from a correctly working
	RCU.  If you want to see what it looks like when broken, break
	it yourself.  ;-)

*	"Reader Batch": Another histogram of "ages" of structures seen
	by readers, but in terms of counter flips (or batches) rather
	than in terms of grace periods.  The legal number of non-zero
	entries is again two.  The reason for this separate view is that
	it is sometimes easier to get the third entry to show up in the
	"Reader Batch" list than in the "Reader Pipe" list.

*	"Free-Block Circulation": Shows the number of torture structures
	that have reached a given point in the pipeline.  The first element
	should closely correspond to the number of structures allocated,
	the second to the number that have been removed from reader view,
	and all but the last remaining to the corresponding number of
	passes through a grace period.  The last entry should be zero,
	as it is only incremented if a torture structure's counter
	somehow gets incremented farther than it should.

Different implementations of RCU can provide implementation-specific
additional information.  For example, Tree SRCU provides the following
additional line::

	srcud-torture: Tree SRCU per-CPU(idx=0): 0(35,-21) 1(-4,24) 2(1,1) 3(-26,20) 4(28,-47) 5(-9,4) 6(-10,14) 7(-14,11) T(1,6)

This line shows the per-CPU counter state, in this case for Tree SRCU
using a dynamically allocated srcu_struct (hence "srcud-" rather than
"srcu-").  The numbers in parentheses are the values of the "old" and
"current" counters for the corresponding CPU.
The "idx" value maps the "old" and "current" values to the underlying
array, and is useful for debugging.  The final "T" entry contains the
totals of the counters.

Usage on Specific Kernel Builds
===============================

It is sometimes desirable to torture RCU on a specific kernel build,
for example, when preparing to put that kernel build into production.
In that case, the kernel should be built with CONFIG_RCU_TORTURE_TEST=m
so that the test can be started using modprobe and terminated using rmmod.

For example, the following script may be used to torture RCU::

	#!/bin/sh

	modprobe rcutorture
	sleep 3600
	rmmod rcutorture
	dmesg | grep torture:

The output can be manually inspected for the error flag of "!!!".
One could of course create a more elaborate script that automatically
checked for such errors.  The "rmmod" command forces a "SUCCESS",
"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed.  The first
two are self-explanatory, while the last indicates that while there
were no RCU failures, CPU-hotplug problems were detected.
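
One minimal sketch of such a slightly more elaborate script follows.
It simply searches the torture output for the "!!!" error flag described
above; the echoed pass/fail messages are illustrative and are not part of
rcutorture's own output::

	#!/bin/sh

	# Run rcutorture for an hour, then scan its output for the
	# "!!!" error flag.
	modprobe rcutorture
	sleep 3600
	rmmod rcutorture
	if dmesg | grep torture: | grep -q '!!!'
	then
		echo rcutorture detected errors
	else
		echo no rcutorture error flags found
	fi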


Usage on Mainline Kernels
=========================

When using rcutorture to test changes to RCU itself, it is often
necessary to build a number of kernels in order to test that change
across a broad range of combinations of the relevant Kconfig options
and of the relevant kernel boot parameters.  In this situation, use
of modprobe and rmmod can be quite time-consuming and error-prone.

Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh
script is available for mainline testing for x86, arm64, and
powerpc.  By default, it will run the series of tests specified by
tools/testing/selftests/rcutorture/configs/rcu/CFLIST, with each test
running for 30 minutes within a guest OS using a minimal userspace
supplied by an automatically generated initrd.  After the tests are
complete, the resulting build products and console output are analyzed
for errors and the results of the runs are summarized.

On larger systems, rcutorture testing can be accelerated by passing the
--cpus argument to kvm.sh.  For example, on a 64-CPU system, "--cpus 43"
would use up to 43 CPUs to run tests concurrently, which as of v5.4 would
complete all the scenarios in two batches, reducing the time to complete
from about eight hours to about one hour (not counting the time to build
the sixteen kernels).  The "--dryrun sched" argument will not run tests,
but rather tell you how the tests would be scheduled into batches.  This
can be useful when working out how many CPUs to specify in the --cpus
argument.
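
For example, the following minimal sketch shows how the default scenarios
would be batched on the 64-CPU system mentioned above; no tests are
actually run (the --cpus value of 43 simply repeats the earlier example)::

	tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 43 --dryrun sched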

Not all changes require that all scenarios be run.  For example, a change
to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
--configs argument to kvm.sh as follows:  "--configs 'SRCU-N SRCU-P'".
Large systems can run multiple copies of the full set of scenarios,
for example, a system with 448 hardware threads can run five instances
of the full set concurrently.  To make this happen::

	kvm.sh --cpus 448 --configs '5*CFLIST'

Alternatively, such a system can run 56 concurrent instances of a single
eight-CPU scenario::

	kvm.sh --cpus 448 --configs '56*TREE04'

Or 28 concurrent instances of each of two eight-CPU scenarios::

	kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'

Of course, each concurrent instance will use memory, which can be
limited using the --memory argument, which defaults to 512M.  Small
values for memory may require disabling the callback-flooding tests
using the --bootargs parameter discussed below.

Sometimes additional debugging is useful, and in such cases the --kconfig
parameter to kvm.sh may be used, for example, ``--kconfig 'CONFIG_KASAN=y'``.
In addition, there are the --gdb, --kasan, and --kcsan parameters.  Note
that --gdb limits you to one scenario per kvm.sh run and requires that you
have another window open from which to run ``gdb`` as instructed by the
script.

Kernel boot arguments can also be supplied, for example, to control
rcutorture's module parameters.  For example, to test a change to RCU's
CPU stall-warning code, use "--bootargs 'rcutorture.stall_cpu=30'".
This will of course result in the scripting reporting a failure, namely
the resulting RCU CPU stall warning.  As noted above, reducing memory may
require disabling rcutorture's callback-flooding tests::

	kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
		--bootargs 'rcutorture.fwd_progress=0'

Sometimes all that is needed is a full set of kernel builds.  This is
what the --buildonly parameter does.

The --duration parameter can override the default run time of 30 minutes.
For example, ``--duration 2d`` would run for two days, ``--duration 3h``
would run for three hours, ``--duration 5m`` would run for five minutes,
and ``--duration 45s`` would run for 45 seconds.  This last can be useful
for tracking down rare boot-time failures.

Finally, the --trust-make parameter allows each kernel build to reuse what
it can from the previous kernel build.  Please note that without the
--trust-make parameter, your tags files may be demolished.
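
Several of these parameters can of course be combined.  As a minimal
sketch (the scenario names, duration, and Kconfig option are merely
illustrative choices drawn from the examples above), the following would
do a three-hour KASAN-enabled run of two scenarios while reusing build
products where possible::

	kvm.sh --cpus 43 --duration 3h --configs 'TREE03 TREE04' \
		--kconfig 'CONFIG_KASAN=y' --trust-make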

There are additional more arcane arguments that are documented in the
source code of the kvm.sh script.

If a run contains failures, the number of buildtime and runtime failures
is listed at the end of the kvm.sh output, which you really should redirect
to a file.  The build products and console output of each run is kept in
tools/testing/selftests/rcutorture/res in timestamped directories.  A
given directory can be supplied to kvm-find-errors.sh in order to have
it cycle you through summaries of errors and full error logs.  For example::

	tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
		tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23

However, it is often more convenient to access the files directly.
Files pertaining to all scenarios in a run reside in the top-level
directory (2020.01.20-15.54.23 in the example above), while per-scenario
files reside in a subdirectory named after the scenario (for example,
"TREE04").  If a given scenario ran more than once (as in "--configs
'56*TREE04'" above), the directories corresponding to the second and
subsequent runs of that scenario include a sequence number, for example,
"TREE04.2", "TREE04.3", and so on.

The most frequently used file in the top-level directory is testid.txt.
If the test ran in a git repository, then this file contains the commit
that was tested and any uncommitted changes in diff format.

The most frequently used files in each per-scenario-run directory are:

.config:
	This file contains the Kconfig options.

Make.out:
	This contains build output for a specific scenario.

console.log:
	This contains the console output for a specific scenario.
	This file may be examined once the kernel has booted, but
	it might not exist if the build failed.

vmlinux:
	This contains the kernel, which can be useful with tools like
	objdump and gdb.
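
For example, assuming the run directory shown above, one might check a
given scenario's console output for the "!!!" error flag and then inspect
the corresponding kernel binary (the choice of TREE04 is merely
illustrative)::

	cd tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23/TREE04
	grep '!!!' console.log       # any hits indicate rcutorture-detected errors
	objdump -d vmlinux | less    # disassemble the corresponding kernel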

A number of additional files are available, but are less frequently used.
Many are intended for debugging of rcutorture itself or of its scripting.

As of v5.4, a successful run with the default set of scenarios produces
the following summary at the end of the run on a 12-CPU system::

	SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
	SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
	SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
	SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
	TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
	TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
	TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
	TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
	TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
	TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
	TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
	TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
	CPU count limited from 16 to 12
	TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
	TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
	TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
	CPU count limited from 16 to 12
	TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011


Repeated Runs
=============

Suppose that you are chasing down a rare boot-time failure.  Although you
could use kvm.sh, doing so will rebuild the kernel on each run.  If you
need (say) 1,000 runs to have confidence that you have fixed the bug,
these pointless rebuilds can become extremely annoying.

This is why kvm-again.sh exists.

Suppose that a previous kvm.sh run left its output in this directory::

	tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23

Then this run can be re-run without rebuilding as follows::

	kvm-again.sh tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23

A few of the original run's kvm.sh parameters may be overridden, perhaps
most notably --duration and --bootargs.  For example::

	kvm-again.sh tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23 \
		--duration 45s

would re-run the previous test, but for only 45 seconds, thus facilitating
tracking down the aforementioned rare boot-time failure.
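
Because kvm-again.sh reuses the existing build products, such a rare
failure can be hunted by simply looping over it.  The following is a
minimal sketch; the run count, duration, and results directory are merely
illustrative::

	#!/bin/sh

	# Re-run a previous short-duration test many times without
	# rebuilding the kernel.
	i=0
	while test $i -lt 1000
	do
		kvm-again.sh tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23 \
			--duration 45s
		i=`expr $i + 1`
	done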


Distributed Runs
================

Although kvm.sh is quite useful, its testing is confined to a single
system.  It is not all that hard to use your favorite framework to cause
(say) 5 instances of kvm.sh to run on your 5 systems, but this will very
likely unnecessarily rebuild kernels.  In addition, manually distributing
the desired rcutorture scenarios across the available systems can be
painstaking and error-prone.

And this is why the kvm-remote.sh script exists.

If the following command works::

	ssh system0 date

and if it also works for system1, system2, system3, system4, and system5,
and all of these systems have 64 CPUs, you can type::

	kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
		--cpus 64 --duration 8h --configs "5*CFLIST"

This will build each default scenario's kernel on the local system, then
spread each of five instances of each scenario over the systems listed,
running each scenario for eight hours.  At the end of the runs, the
results will be gathered, recorded, and printed.  Most of the parameters
that kvm.sh will accept can be passed to kvm-remote.sh, but the list of
systems must come first.

The kvm.sh ``--dryrun scenarios`` argument is useful for working out
how many scenarios may be run in one batch across a group of systems.

You can also re-run a previous remote run in a manner similar to kvm.sh::

	kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
		tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23-remote \
		--duration 24h

In this case, most of the kvm-again.sh parameters may be supplied following
the pathname of the old run-results directory.
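
Before launching a long remote run, it can be worth verifying that every
system in the list is reachable.  A minimal sketch, reusing the example
system names above::

	# Verify ssh connectivity to each of the systems listed above.
	for s in system0 system1 system2 system3 system4 system5
	do
		ssh $s date || echo "cannot reach $s"
	done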