MARKING SHARED-MEMORY ACCESSES
==============================

This document provides guidelines for marking intentionally concurrent
normal accesses to shared memory, that is "normal" as in accesses that do
not use read-modify-write atomic operations.  It also describes how to
document these accesses, both with comments and with special assertions
processed by the Kernel Concurrency Sanitizer (KCSAN).  This discussion
builds on an earlier LWN article [1] and Linux Foundation mentorship
session [2].


ACCESS-MARKING OPTIONS
======================

The Linux kernel provides the following access-marking options:

1.      Plain C-language accesses (unmarked), for example, "a = b;"

2.      Data-race marking, for example, "data_race(a = b);"

3.      READ_ONCE(), for example, "a = READ_ONCE(b);"
        The various forms of atomic_read() also fit in here.

4.      WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
        The various forms of atomic_set() also fit in here.

5.      __data_racy, for example, "int __data_racy a;"

6.      KCSAN's negative-marking assertions, ASSERT_EXCLUSIVE_ACCESS()
        and ASSERT_EXCLUSIVE_WRITER(), are described in the
        "ACCESS-DOCUMENTATION OPTIONS" section below.

These may be used in combination, as shown in this admittedly improbable
example:

        WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));

Neither plain C-language accesses nor data_race() (#1 and #2 above) place
any sort of constraint on the compiler's choice of optimizations [3].
In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
compiler's use of code-motion and common-subexpression optimizations.
Therefore, if a given access is involved in an intentional data race,
using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
preferable to data_race(), which in turn is usually preferable to plain
C-language accesses.  It is permissible to combine #2 and #3, for example,
data_race(READ_ONCE(a)), which will both restrict compiler optimizations
and disable KCSAN diagnostics.

KCSAN will complain about many types of data races involving plain
C-language accesses, but marking all accesses involved in a given data
race with one of data_race(), READ_ONCE(), or WRITE_ONCE() will prevent
KCSAN from complaining.  Of course, lack of KCSAN complaints does not
imply correct code.  Therefore, please take a thoughtful approach
when responding to KCSAN complaints.  Churning the code base with
ill-considered additions of data_race(), READ_ONCE(), and WRITE_ONCE()
is unhelpful.

In fact, the following sections describe situations where use of
data_race() and even plain C-language accesses is preferable to
READ_ONCE() and WRITE_ONCE().


Use of the data_race() Macro
----------------------------

Here are some situations where data_race() should be used instead of
READ_ONCE() and WRITE_ONCE():

1.      Data-racy loads from shared variables whose values are used only
        for diagnostic purposes.

2.      Data-racy reads whose values are checked against a marked reload.

3.      Reads whose values feed into error-tolerant heuristics.

4.      Writes setting values that feed into error-tolerant heuristics.


Data-Racy Reads for Approximate Diagnostics

Approximate diagnostics include lockdep reports, monitoring/statistics
(including /proc and /sys output), WARN*()/BUG*() checks whose return
values are ignored, and other situations where reads from shared variables
are not an integral part of the core concurrency design.

In fact, use of data_race() instead of READ_ONCE() for these diagnostic
reads can enable better checking of the remaining accesses implementing
the core concurrency design.  For example, suppose that the core design
prevents any non-diagnostic reads from shared variable x from running
concurrently with updates to x.  Then using plain C-language writes
to x allows KCSAN to detect reads from x from within regions of code
that fail to exclude the updates.  In this case, it is important to use
data_race() for the diagnostic reads because otherwise KCSAN would give
false-positive warnings about these diagnostic reads.
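
For example, the following sketch (the variable and function names are
hypothetical, not taken from actual kernel code) shows a counter that
the core design updates only under a lock, but that is also read
locklessly for an occasional log message:

        /* Hypothetical counter, incremented only while holding foo_lock. */
        unsigned long foo_event_count;

        void log_foo_events(void)
        {
                /* Diagnostic only: an occasional stale value is harmless. */
                pr_info("foo events: %lu\n", data_race(foo_event_count));
        }

Because the lock-protected updates remain plain C-language writes, KCSAN
can still flag buggy lockless accesses elsewhere, while the data_race()
keeps this diagnostic read from producing false positives.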

If it is necessary to both restrict compiler optimizations and disable
KCSAN diagnostics, use both data_race() and READ_ONCE(), for example,
data_race(READ_ONCE(a)).

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Data-Racy Reads That Are Checked Against Marked Reload

The values from some reads are not implicitly trusted.  They are instead
fed into some operation that checks the full value against a later marked
load from memory, which means that the occasional arbitrarily bogus value
is not a problem.  For example, if a bogus value is fed into cmpxchg(),
all that happens is that this cmpxchg() fails, which normally results
in a retry.  Unless the race condition that resulted in the bogus value
recurs, this retry will with high probability succeed, so no harm done.

However, please keep in mind that a data_race() load feeding into
a cmpxchg_relaxed() might still be subject to load fusing on some
architectures.  Therefore, it is best to capture the return value from
the failing cmpxchg() for the next iteration of the loop, an approach
that provides the compiler much less scope for mischievous optimizations.
Capturing the return value from cmpxchg() also saves a memory reference
in many cases.
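
The following sketch shows this pattern; it is essentially a simplified
preview of the xor_shift_foo() function in the "Checking Stress-Test
Race Coverage" section below, assuming a shared integer foo:

        int foo;

        int xor_foo(int mask)
        {
                int old, new, newold;

                newold = data_race(foo); /* Checked by cmpxchg(). */
                do {
                        old = newold;
                        new = old ^ mask;
                        /* On failure, cmpxchg() hands back foo's current value. */
                        newold = cmpxchg(&foo, old, new);
                } while (newold != old);
                return old;
        }

Only the initial racy load needs data_race(): every later iteration uses
the value returned by the failing cmpxchg(), so load fusing cannot cause
repeated failures.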

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Reads Feeding Into Error-Tolerant Heuristics

Values from some reads feed into heuristics that can tolerate occasional
errors.  Such reads can use data_race(), thus allowing KCSAN to focus on
the other accesses to the relevant shared variables.  But please note
that data_race() loads are subject to load fusing, which can result in
consistent errors, which in turn are quite capable of breaking heuristics.
Therefore, use of data_race() should be limited to cases where some other
code (such as a barrier() call) will force the occasional reload.
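
As an illustration, here is a minimal sketch (with hypothetical names;
real code would likely sleep or back off rather than spin) of a
data_race() load whose reload is forced by barrier():

        int busy_count;         /* Approximate count, updated by other CPUs. */

        void wait_for_quiet(void)
        {
                while (data_race(busy_count) > 0)
                        barrier();      /* Force reload of busy_count. */
        }

Without the barrier(), the compiler would be within its rights to fuse
the loads and spin forever on a single stale value.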

Note that this use case requires that the heuristic be able to handle
any possible error.  In contrast, if the heuristic might be fatally
confused by one or more of the possible erroneous values, use READ_ONCE()
instead of data_race().

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Writes Setting Values Feeding Into Error-Tolerant Heuristics

The values read into error-tolerant heuristics come from somewhere,
for example, from sysfs.  This means that some code in sysfs writes
to this same variable, and these writes can also use data_race().
After all, if the heuristic can tolerate the occasional bogus value
due to compiler-mangled reads, it can also tolerate the occasional
compiler-mangled write, at least assuming that the proper value is in
place once the write completes.
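
For example, a sysfs store function might update the variable feeding
such a heuristic as follows.  (This is a sketch: the attribute,
variable, and function names are hypothetical.)

        int foo_threshold;      /* Read via data_race() by a heuristic. */

        static ssize_t foo_threshold_store(struct kobject *kobj,
                                           struct kobj_attribute *attr,
                                           const char *buf, size_t count)
        {
                int val;

                if (kstrtoint(buf, 0, &val))
                        return -EINVAL;
                data_race(foo_threshold = val); /* Racy readers tolerate this. */
                return count;
        }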

Plain C-language stores can also be used for this use case.  However,
in kernels built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, this
will have the disadvantage of causing KCSAN to generate false positives
because KCSAN will have no way of knowing that the resulting data race
was intentional.


Use of Plain C-Language Accesses
--------------------------------

Here are some example situations where plain C-language accesses should
be used instead of READ_ONCE(), WRITE_ONCE(), and data_race():

1.      Accesses protected by mutual exclusion, including strict locking
        and sequence locking.

2.      Initialization-time and cleanup-time accesses.  This covers a
        wide variety of situations, including the uniprocessor phase of
        system boot, variables to be used by not-yet-spawned kthreads,
        structures not yet published to reference-counted or RCU-protected
        data structures, and the cleanup side of any of these situations.

3.      Per-CPU variables that are not accessed from other CPUs.

4.      Private per-task variables, including on-stack variables, some
        fields in the task_struct structure, and task-private heap data.

5.      Any other loads for which there is not supposed to be a concurrent
        store to that same variable.

6.      Any other stores for which there should be neither concurrent
        loads nor concurrent stores to that same variable.

        But note that KCSAN makes three explicit exceptions to this rule
        by default, refraining from flagging plain C-language stores:

        a.      No matter what.  You can override this default by building
                with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.

        b.      When the store writes the value already contained in
                that variable.  You can override this default by building
                with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.

        c.      When one of the stores is in an interrupt handler and
                the other in the interrupted code.  You can override this
                default by building with CONFIG_KCSAN_INTERRUPT_WATCHER=y.

Note that it is important to use plain C-language accesses in these cases,
because doing otherwise prevents KCSAN from detecting violations of your
code's synchronization rules.

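To illustrate case #3 in the above list, here is a sketch (with a
hypothetical per-CPU counter) of a plain C-language store to a per-CPU
variable that is never accessed by other CPUs:

        /* Only ever touched by its own CPU. */
        DEFINE_PER_CPU(int, foo_count);

        void reset_foo_count(void)
        {
                preempt_disable();      /* Stay on this CPU. */
                *this_cpu_ptr(&foo_count) = 0;  /* Plain store suffices. */
                preempt_enable();
        }
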

Use of __data_racy
------------------

Adding the __data_racy type qualifier to the declaration of a variable
causes KCSAN to treat all accesses to that variable as if they were
enclosed by data_race().  However, __data_racy does not affect the
compiler, though one could imagine hardened kernel builds treating the
__data_racy type qualifier as if it were the volatile keyword.

Note well that __data_racy is subject to the same pointer-declaration
rules as are other type qualifiers such as const and volatile.
For example:

        int __data_racy *p; // Pointer to data-racy data.
        int *__data_racy p; // Data-racy pointer to non-data-racy data.
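
As a quick sketch (the variable name is hypothetical), a diagnostic
counter could be declared __data_racy so that all of its accesses are
exempt from KCSAN reports without marking each one individually:

        int __data_racy foo_failures;   /* Approximate failure count. */

        void note_foo_failure(void)
        {
                /* No KCSAN report, but also no constraint on the compiler. */
                foo_failures++;
        }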


ACCESS-DOCUMENTATION OPTIONS
============================

It is important to comment marked accesses so that people reading your
code, yourself included, are reminded of the synchronization design.
However, it is even more important to comment plain C-language accesses
that are intentionally involved in data races.  Such comments are
needed to remind people reading your code, again, yourself included,
of how the compiler has been prevented from optimizing those accesses
into concurrency bugs.

It is also possible to tell KCSAN about your synchronization design.
For example, ASSERT_EXCLUSIVE_ACCESS(foo) tells KCSAN that any
concurrent access to variable foo by any other CPU is an error, even
if that concurrent access is marked with READ_ONCE().  In addition,
ASSERT_EXCLUSIVE_WRITER(foo) tells KCSAN that although it is OK for there
to be concurrent reads from foo from other CPUs, it is an error for some
other CPU to be concurrently writing to foo, even if that concurrent
write is marked with data_race() or WRITE_ONCE().
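
For instance, a variable written only under a lock but read locklessly
might document its design as follows (a condensed preview of the
"Lock-Protected Writes With Lockless Reads" example below):

        int foo;
        DEFINE_SPINLOCK(foo_lock);

        void update_foo(int newval)
        {
                spin_lock(&foo_lock);
                WRITE_ONCE(foo, newval);        /* Lockless readers expected. */
                ASSERT_EXCLUSIVE_WRITER(foo);   /* Any concurrent writer is a bug. */
                spin_unlock(&foo_lock);
        }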

Note that although KCSAN will call out data races involving either
ASSERT_EXCLUSIVE_ACCESS() or ASSERT_EXCLUSIVE_WRITER() on the one hand
and data_race() writes on the other, KCSAN will not report the location
of these data_race() writes.


EXAMPLES
========

As noted earlier, the goal is to prevent the compiler from destroying
your concurrent algorithm, to help the human reader, and to inform
KCSAN of aspects of your concurrency design.  This section looks at a
few examples showing how this can be done.


Lock Protection With Lockless Diagnostic Access
-----------------------------------------------

For example, suppose a shared variable "foo" is read only while a
reader-writer spinlock is read-held, written only while that same
spinlock is write-held, except that it is also read locklessly for
diagnostic purposes.  The code might look as follows:

        int foo;
        DEFINE_RWLOCK(foo_rwlock);

        void update_foo(int newval)
        {
                write_lock(&foo_rwlock);
                foo = newval;
                do_something(newval);
                write_unlock(&foo_rwlock);
        }

        int read_foo(void)
        {
                int ret;

                read_lock(&foo_rwlock);
                do_something_else();
                ret = foo;
                read_unlock(&foo_rwlock);
                return ret;
        }

        void read_foo_diagnostic(void)
        {
                pr_info("Current value of foo: %d\n", data_race(foo));
        }

The reader-writer lock prevents the compiler from introducing concurrency
bugs into any part of the main algorithm using foo, which means that
the accesses to foo within both update_foo() and read_foo() can (and
should) be plain C-language accesses.  One benefit of making them
plain C-language accesses is that KCSAN can detect any erroneous lockless
reads from or updates to foo.  The data_race() in read_foo_diagnostic()
tells KCSAN that data races are expected and should be silently
ignored.  This data_race() also tells the human reading the code that
read_foo_diagnostic() might sometimes return a bogus value.

If it is necessary to suppress compiler optimization and also detect
buggy lockless writes, read_foo_diagnostic() can be updated as follows:

        void read_foo_diagnostic(void)
        {
                pr_info("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
        }

Alternatively, given that KCSAN is to ignore all accesses in this function,
this function can be marked __no_kcsan and the data_race() can be dropped:

        void __no_kcsan read_foo_diagnostic(void)
        {
                pr_info("Current value of foo: %d\n", READ_ONCE(foo));
        }

However, in order for KCSAN to detect buggy lockless writes, your kernel
must be built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.  If you
need KCSAN to detect such a write even if that write did not change
the value of foo, you also need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
If you need KCSAN to detect such a write happening in an interrupt handler
running on the same CPU doing the legitimate lock-protected write, you
also need CONFIG_KCSAN_INTERRUPT_WATCHER=y.  With some or all of these
Kconfig options set properly, KCSAN can be quite helpful, although
it is not necessarily a full replacement for hardware watchpoints.
On the other hand, neither are hardware watchpoints a full replacement
for KCSAN, because it is not always easy to tell hardware watchpoints to
conditionally trap on accesses.


Lock-Protected Writes With Lockless Reads
-----------------------------------------

For another example, suppose a shared variable "foo" is updated only
while holding a spinlock, but is read locklessly.  The code might look
as follows:

        int foo;
        DEFINE_SPINLOCK(foo_lock);

        void update_foo(int newval)
        {
                spin_lock(&foo_lock);
                WRITE_ONCE(foo, newval);
                ASSERT_EXCLUSIVE_WRITER(foo);
                do_something(newval);
                spin_unlock(&foo_lock);
        }

        int read_foo(void)
        {
                do_something_else();
                return READ_ONCE(foo);
        }

Because foo is read locklessly, all accesses are marked.  The purpose
of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
concurrent write, whether marked or not.


Lock-Protected Writes With Heuristic Lockless Reads
---------------------------------------------------

For another example, suppose that the code can normally make use of
a per-data-structure lock, but there are times when a global lock
is required.  These times are indicated via a global flag.  The code
might look as follows, and is based loosely on nf_conntrack_lock(),
nf_conntrack_all_lock(), and nf_conntrack_all_unlock():

        bool global_flag;
        DEFINE_SPINLOCK(global_lock);
        struct foo {
                spinlock_t f_lock;
                int f_data;
        };

        /* All foo structures are in the following array. */
        int nfoo;
        struct foo *foo_array;

        void do_something_locked(struct foo *fp)
        {
                /* This works even if data_race() returns nonsense. */
                if (!data_race(global_flag)) {
                        spin_lock(&fp->f_lock);
                        if (!smp_load_acquire(&global_flag)) {
                                do_something(fp);
                                spin_unlock(&fp->f_lock);
                                return;
                        }
                        spin_unlock(&fp->f_lock);
                }
                spin_lock(&global_lock);
                /* global_lock held, thus global flag cannot be set. */
                spin_lock(&fp->f_lock);
                spin_unlock(&global_lock);
                /*
                 * global_flag might be set here, but begin_global()
                 * will wait for ->f_lock to be released.
                 */
                do_something(fp);
                spin_unlock(&fp->f_lock);
        }

        void begin_global(void)
        {
                int i;

                spin_lock(&global_lock);
                WRITE_ONCE(global_flag, true);
                for (i = 0; i < nfoo; i++) {
                        /*
                         * Wait for pre-existing local locks.  One at
                         * a time to avoid lockdep limitations.
                         */
                        spin_lock(&foo_array[i].f_lock);
                        spin_unlock(&foo_array[i].f_lock);
                }
        }

        void end_global(void)
        {
                smp_store_release(&global_flag, false);
                spin_unlock(&global_lock);
        }

All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.

If the value read from global_flag is false, then global_flag is
rechecked while holding ->f_lock, and if it is still false, the held
->f_lock prevents begin_global() from completing.  It is therefore safe
to invoke do_something().

Otherwise, if either value read from global_flag is true, then after
global_lock is acquired global_flag must be false.  The acquisition of
->f_lock will prevent any call to begin_global() from returning, which
means that it is safe to release global_lock and invoke do_something().

For this to work, only those foo structures in foo_array[] may be passed
to do_something_locked().  The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.

The smp_load_acquire() and smp_store_release() are required because
changes to a foo structure between calls to begin_global() and
end_global() are carried out without holding that structure's ->f_lock.
The smp_load_acquire() and smp_store_release() ensure that the next
invocation of do_something() from do_something_locked() will see those
changes.


Lockless Reads and Writes
-------------------------

For another example, suppose a shared variable "foo" is both read and
updated locklessly.  The code might look as follows:

        int foo;

        int update_foo(int newval)
        {
                int ret;

                ret = xchg(&foo, newval);
                do_something(newval);
                return ret;
        }

        int read_foo(void)
        {
                do_something_else();
                return READ_ONCE(foo);
        }

Because foo is accessed locklessly, all accesses are marked.  It does
not make sense to use ASSERT_EXCLUSIVE_WRITER() in this case because
there really can be concurrent lockless writers.  KCSAN would
flag any concurrent plain C-language reads from foo, and given
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, also any concurrent plain
C-language writes to foo.


Lockless Reads and Writes, But With Single-Threaded Initialization
------------------------------------------------------------------

For yet another example, suppose that foo is initialized in a
single-threaded manner, but that a number of kthreads are then created
that locklessly and concurrently access foo.  Some snippets of this code
might look as follows:

        int foo;

        void initialize_foo(int initval, int nkthreads)
        {
                int i;

                foo = initval;
                ASSERT_EXCLUSIVE_ACCESS(foo);
                for (i = 0; i < nkthreads; i++)
                        kthread_run(access_foo_concurrently, ...);
        }

        /* Called from access_foo_concurrently(). */
        int update_foo(int newval)
        {
                int ret;

                ret = xchg(&foo, newval);
                do_something(newval);
                return ret;
        }

        /* Also called from access_foo_concurrently(). */
        int read_foo(void)
        {
                do_something_else();
                return READ_ONCE(foo);
        }

The initialize_foo() uses a plain C-language write to foo because there
are not supposed to be concurrent accesses during initialization.  The
ASSERT_EXCLUSIVE_ACCESS() allows KCSAN to flag buggy concurrent unmarked
reads, and it further allows KCSAN to flag buggy concurrent writes, even
if: (1) those writes are marked or (2) the kernel was built with
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.


Checking Stress-Test Race Coverage
----------------------------------

When designing stress tests it is important to ensure that race conditions
of interest really do occur.  For example, consider the following code
fragment:

        int foo;

        int update_foo(int newval)
        {
                return xchg(&foo, newval);
        }

        int xor_shift_foo(int shift, int mask)
        {
                int old, new, newold;

                newold = data_race(foo); /* Checked by cmpxchg(). */
                do {
                        old = newold;
                        new = (old << shift) ^ mask;
                        newold = cmpxchg(&foo, old, new);
                } while (newold != old);
                return old;
        }

        int read_foo(void)
        {
                return READ_ONCE(foo);
        }

If it is possible for update_foo(), xor_shift_foo(), and read_foo() to be
invoked concurrently, the stress test should force this concurrency to
actually happen.  KCSAN can evaluate the stress test when the above code
is modified to read as follows:

        int foo;

        int update_foo(int newval)
        {
                ASSERT_EXCLUSIVE_ACCESS(foo);
                return xchg(&foo, newval);
        }

        int xor_shift_foo(int shift, int mask)
        {
                int old, new, newold;

                newold = data_race(foo); /* Checked by cmpxchg(). */
                do {
                        old = newold;
                        new = (old << shift) ^ mask;
                        ASSERT_EXCLUSIVE_ACCESS(foo);
                        newold = cmpxchg(&foo, old, new);
                } while (newold != old);
                return old;
        }

        int read_foo(void)
        {
                ASSERT_EXCLUSIVE_ACCESS(foo);
                return READ_ONCE(foo);
        }


If a given stress-test run does not result in KCSAN complaints from
each possible pair of ASSERT_EXCLUSIVE_ACCESS() invocations, the
stress test needs improvement.  If the stress test were to be evaluated
on a regular basis, it would be wise to place the above instances of
ASSERT_EXCLUSIVE_ACCESS() under #ifdef so that they do not result in
false positives when not evaluating the stress test.


REFERENCES
==========

[1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
    https://lwn.net/Articles/816854/

[2] "The Kernel Concurrency Sanitizer"
    https://www.linuxfoundation.org/webinars/the-kernel-concurrency-sanitizer

[3] "Who's afraid of a big bad optimizing compiler?"
    https://lwn.net/Articles/793253/