This document gives an overview of the categories of memory-ordering
operations provided by the Linux-kernel memory model (LKMM).


Categories of Ordering
======================

This section lists LKMM's three top-level categories of memory-ordering
operations in decreasing order of strength:

1.	Barriers (also known as "fences").  A barrier orders some or
	all of the CPU's prior operations against some or all of its
	subsequent operations.

2.	Ordered memory accesses.  These operations order themselves
	against some or all of the CPU's prior accesses or some or all
	of the CPU's subsequent accesses, depending on the subcategory
	of the operation.

3.	Unordered accesses, as the name indicates, have no ordering
	properties except to the extent that they interact with an
	operation in the previous categories.  This being the real world,
	some of these "unordered" operations provide limited ordering
	in some special situations.

Each of the above categories is described in more detail by one of the
following sections.


Barriers
========

Each of the following categories of barriers is described in its own
subsection below:

a.	Full memory barriers.

b.	Read-modify-write (RMW) ordering augmentation barriers.

c.	Write memory barrier.

d.	Read memory barrier.

e.	Compiler barrier.

Note well that many of these primitives generate absolutely no code
in kernels built with CONFIG_SMP=n.  Therefore, if you are writing
a device driver, which must correctly order accesses to a physical
device even in kernels built with CONFIG_SMP=n, please use the
ordering primitives provided for that purpose.  For example, instead of
smp_mb(), use mb().  See the "Linux Device Drivers" book or the
https://lwn.net/Articles/698014/ article for more information.


Full Memory Barriers
--------------------

The Linux-kernel primitives that provide full ordering include:

o	The smp_mb() full memory barrier.

o	Value-returning RMW atomic operations whose names do not end in
	_acquire, _release, or _relaxed.

o	RCU's grace-period primitives.

First, the smp_mb() full memory barrier orders all of the CPU's prior
accesses against all subsequent accesses from the viewpoint of all CPUs.
In other words, all CPUs will agree that any earlier action taken
by that CPU happened before any later action taken by that same CPU.
For example, consider the following:

	WRITE_ONCE(x, 1);
	smp_mb(); // Order store to x before load from y.
	r1 = READ_ONCE(y);

All CPUs will agree that the store to "x" happened before the load
from "y", as indicated by the comment.  And yes, please comment your
memory-ordering primitives.  It is surprisingly hard to remember their
purpose after even a few months.
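
Because all CPUs agree on this ordering, smp_mb() is often used in each
of two concurrently executing code paths.  The following sketch reuses
the hypothetical variables "x" and "y" (both initially zero) in the
classic store-buffering pattern, with cpu0() and cpu1() being purely
illustrative task functions:

	void cpu0(void)
	{
		WRITE_ONCE(x, 1);
		smp_mb(); // Order store to x before load from y.
		r0 = READ_ONCE(y);
	}

	void cpu1(void)
	{
		WRITE_ONCE(y, 1);
		smp_mb(); // Order store to y before load from x.
		r1 = READ_ONCE(x);
	}

If cpu0() and cpu1() execute concurrently, at least one of "r0" and "r1"
is guaranteed to have a final value of one.  Remove either smp_mb() and
both might well end up zero.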

Second, some RMW atomic operations provide full ordering.  These
operations include value-returning RMW atomic operations (that is, those
with non-void return types) whose names do not end in _acquire, _release,
or _relaxed.  Examples include atomic_add_return(), atomic_dec_and_test(),
cmpxchg(), and xchg().  Note that conditional RMW atomic operations such
as cmpxchg() are only guaranteed to provide ordering when they succeed.
When RMW atomic operations provide full ordering, they partition the
CPU's accesses into three groups:

1.	All code that executed prior to the RMW atomic operation.

2.	The RMW atomic operation itself.

3.	All code that executed after the RMW atomic operation.

All CPUs will agree that any operation in a given partition happened
before any operation in a higher-numbered partition.

In contrast, non-value-returning RMW atomic operations (that is, those
with void return types) do not guarantee any ordering whatsoever.  Nor do
value-returning RMW atomic operations whose names end in _relaxed.
Examples of the former include atomic_inc() and atomic_dec(),
while examples of the latter include atomic_cmpxchg_relaxed() and
atomic_xchg_relaxed().  Similarly, value-returning non-RMW atomic
operations such as atomic_read() do not guarantee full ordering, and
are covered in the later section on unordered operations.

Value-returning RMW atomic operations whose names end in _acquire or
_release provide limited ordering, and will be described later in this
document.

Finally, RCU's grace-period primitives provide full ordering.  These
primitives include synchronize_rcu(), synchronize_rcu_expedited(),
synchronize_srcu(), and so on.  However, these primitives have orders
of magnitude greater overhead than smp_mb(), atomic_xchg(), and so on.
Furthermore, RCU's grace-period primitives can only be invoked in
sleepable contexts.  Therefore, RCU's grace-period primitives are
typically instead used to provide ordering against RCU read-side critical
sections, as documented in their comment headers.  But of course if you
need a synchronize_rcu() to interact with readers, it costs you nothing
to also rely on its additional full-memory-barrier semantics.  Just please
carefully comment this, otherwise your future self will hate you.


RMW Ordering Augmentation Barriers
----------------------------------

As noted in the previous section, non-value-returning RMW operations
such as atomic_inc() and atomic_dec() guarantee no ordering whatsoever.
Nevertheless, a number of popular CPU families, including x86, provide
full ordering for these primitives.  One way to obtain full ordering on
all architectures is to add a call to smp_mb():

	WRITE_ONCE(x, 1);
	atomic_inc(&my_counter);
	smp_mb(); // Inefficient on x86!!!
	r1 = READ_ONCE(y);

This works, but the added smp_mb() adds needless overhead for
x86, on which atomic_inc() provides full ordering all by itself.
The smp_mb__after_atomic() primitive can be used instead:

	WRITE_ONCE(x, 1);
	atomic_inc(&my_counter);
	smp_mb__after_atomic(); // Order store to x before load from y.
	r1 = READ_ONCE(y);

The smp_mb__after_atomic() primitive emits code only on CPUs whose
atomic_inc() implementations do not guarantee full ordering, thus
incurring no unnecessary overhead on x86.  There are a number of
variations on the smp_mb__*() theme:

o	smp_mb__before_atomic(), which provides full ordering prior
	to an unordered RMW atomic operation.

o	smp_mb__after_atomic(), which, as shown above, provides full
	ordering subsequent to an unordered RMW atomic operation.

o	smp_mb__after_spinlock(), which provides full ordering subsequent
	to a successful spinlock acquisition.  Note that spin_lock() is
	always successful but spin_trylock() might not be.

o	smp_mb__after_srcu_read_unlock(), which provides full ordering
	subsequent to an srcu_read_unlock().

It is bad practice to place code between the smp_mb__*() primitive and
the operation whose ordering it is augmenting.  The reason is that the
ordering of this intervening code will differ from one CPU architecture
to another.
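
The smp_mb__before_atomic() primitive works the same way, but is placed
in front of the unordered RMW atomic operation.  As a sketch using the
same hypothetical variables "x", "y", and "my_counter" as above:

	WRITE_ONCE(x, 1);
	smp_mb__before_atomic(); // Order store to x before the atomic_inc()
				 // and the accesses following it.
	atomic_inc(&my_counter);
	r1 = READ_ONCE(y);

As with smp_mb__after_atomic(), this primitive emits code only on CPUs
whose atomic_inc() implementations do not already provide full ordering.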


Write Memory Barrier
--------------------

The Linux kernel's write memory barrier is smp_wmb().  If a CPU executes
the following code:

	WRITE_ONCE(x, 1);
	smp_wmb();
	WRITE_ONCE(y, 1);

Then any given CPU will see the write to "x" as having happened before
the write to "y".  However, you are usually better off using a release
store, as described in the "Release Operations" section below.

Note that smp_wmb() might fail to provide ordering for unmarked C-language
stores because profile-driven optimization could determine that the
value being overwritten is almost always equal to the new value.  Such a
compiler might then reasonably decide to transform "x = 1" and "y = 1"
as follows:

	if (x != 1)
		x = 1;
	smp_wmb(); // BUG: does not order the reads!!!
	if (y != 1)
		y = 1;

Therefore, if you need to use smp_wmb() with unmarked C-language writes,
you will need to make sure that none of the compilers used to build
the Linux kernel carry out this sort of transformation, both now and in
the future.


Read Memory Barrier
-------------------

The Linux kernel's read memory barrier is smp_rmb().  If a CPU executes
the following code:

	r0 = READ_ONCE(y);
	smp_rmb();
	r1 = READ_ONCE(x);

Then any given CPU will see the read from "y" as having preceded the read
from "x".  However, you are usually better off using an acquire load, as
described in the "Acquire Operations" section below.
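
An smp_rmb() is typically paired with an smp_wmb() executed by some
other CPU.  As a sketch combining the two examples above (with "x" and
"y" again both initially zero, and with producer() and consumer() being
purely illustrative task functions), this gives the classic
message-passing pattern:

	void producer(void)
	{
		WRITE_ONCE(x, 1);  // Write the data...
		smp_wmb();         // ...ordering it before the flag.
		WRITE_ONCE(y, 1);  // Set the flag.
	}

	void consumer(void)
	{
		r0 = READ_ONCE(y); // Read the flag...
		smp_rmb();         // ...ordering it before the data.
		r1 = READ_ONCE(x); // Read the data.
	}

If producer() and consumer() execute concurrently and r0's final value
is one, then r1's final value is also guaranteed to be one.  That said,
the release and acquire operations described later in this document
usually express this pattern more clearly.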


Compiler Barrier
----------------

The Linux kernel's compiler barrier is barrier().  This primitive
prohibits compiler code-motion optimizations that might move memory
references across the point in the code containing the barrier(), but
does not constrain hardware memory ordering.  For example, this can be
used to prevent the compiler from moving code across an infinite loop:

	WRITE_ONCE(x, 1);
	while (dontstop)
		barrier();
	r1 = READ_ONCE(y);

Without the barrier(), the compiler would be within its rights to move the
WRITE_ONCE() to follow the loop.  This code motion could be problematic
in the case where an interrupt handler terminates the loop.  Another way
to handle this is to use READ_ONCE() for the load of "dontstop".

Note that the barriers discussed previously use barrier() or its low-level
equivalent in their implementations.


Ordered Memory Accesses
=======================

The Linux kernel provides a wide variety of ordered memory accesses:

a.	Release operations.

b.	Acquire operations.

c.	RCU read-side ordering.

d.	Control dependencies.

Each of the above categories has its own section below.


Release Operations
------------------

Release operations include smp_store_release(), atomic_set_release(),
rcu_assign_pointer(), and value-returning RMW operations whose names
end in _release.  These operations order their own store against all
of the CPU's prior memory accesses.  Release operations often provide
improved readability and performance compared to explicit barriers.
For example, use of smp_store_release() saves a line compared to the
smp_wmb() example above:

	WRITE_ONCE(x, 1);
	smp_store_release(&y, 1);

More important, smp_store_release() makes it easier to connect up the
different pieces of the concurrent algorithm.  The variable stored to
by the smp_store_release(), in this case "y", will normally be used in
an acquire operation in other parts of the concurrent algorithm.

To see the performance advantages, suppose that the above example read
from "x" instead of writing to it.  Then an smp_wmb() could not guarantee
ordering, and an smp_mb() would be needed instead:

	r1 = READ_ONCE(x);
	smp_mb();
	WRITE_ONCE(y, 1);

But smp_mb() often incurs much higher overhead than does
smp_store_release(), which still provides the needed ordering of "x"
against "y".  On x86, the version using smp_store_release() might compile
to a simple load instruction followed by a simple store instruction.
In contrast, the smp_mb() compiles to an expensive instruction that
provides the needed ordering.

There is a wide variety of release operations:

o	Store operations, including not only the aforementioned
	smp_store_release(), but also atomic_set_release() and
	atomic_long_set_release().

o	RCU's rcu_assign_pointer() operation.  This is the same as
	smp_store_release() except that: (1) It takes the pointer to
	be assigned to instead of a pointer to that pointer, (2) It
	is intended to be used in conjunction with rcu_dereference()
	and similar rather than smp_load_acquire(), and (3) It checks
	for an RCU-protected pointer in "sparse" runs.

o	Value-returning RMW operations whose names end in _release,
	such as atomic_fetch_add_release() and cmpxchg_release().
	Note that release ordering is guaranteed only against the
	memory-store portion of the RMW operation, and not against the
	memory-load portion.  Note also that conditional operations such
	as cmpxchg_release() are only guaranteed to provide ordering
	when they succeed.
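
One especially common use of release stores is publishing a newly
initialized structure.  Here is a minimal sketch, assuming a
hypothetical "struct foo" with a single field "a" and a hypothetical
global pointer "gp" that readers access with smp_load_acquire():

	struct foo {
		int a;
	};
	struct foo *gp;

	void publish(struct foo *p)
	{
		p->a = 1;                  // Initialize the structure...
		smp_store_release(&gp, p); // ...then publish the pointer,
					   // ordering the initialization
					   // before the store to gp.
	}

A reader that picks up the pointer using smp_load_acquire(&gp) (or, in
RCU-protected code where rcu_assign_pointer() did the publishing, using
rcu_dereference()) is then guaranteed to see the initialized value of
p->a.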

As mentioned earlier, release operations are often paired with acquire
operations, which are the subject of the next section.


Acquire Operations
------------------

Acquire operations include smp_load_acquire(), atomic_read_acquire(),
and value-returning RMW operations whose names end in _acquire.  These
operations order their own load against all of the CPU's subsequent
memory accesses.  Acquire operations often provide improved performance
and readability compared to explicit barriers.  For example, use of
smp_load_acquire() saves a line compared to the smp_rmb() example above:

	r0 = smp_load_acquire(&y);
	r1 = READ_ONCE(x);

As with smp_store_release(), this also makes it easier to connect
the different pieces of the concurrent algorithm by looking for the
smp_store_release() that stores to "y".  In addition, smp_load_acquire()
improves upon smp_rmb() by ordering against subsequent stores as well
as against subsequent loads.

There are a couple of categories of acquire operations:

o	Load operations, including not only the aforementioned
	smp_load_acquire(), but also atomic_read_acquire() and
	atomic64_read_acquire().

o	Value-returning RMW operations whose names end in _acquire,
	such as atomic_xchg_acquire() and atomic_cmpxchg_acquire().
	Note that acquire ordering is guaranteed only against the
	memory-load portion of the RMW operation, and not against the
	memory-store portion.  Note also that conditional operations
	such as atomic_cmpxchg_acquire() are only guaranteed to provide
	ordering when they succeed.

Symmetry being what it is, acquire operations are often paired with the
release operations covered earlier.  For example, consider the following
code, where task0() and task1() execute concurrently:

	void task0(void)
	{
		WRITE_ONCE(x, 1);
		smp_store_release(&y, 1);
	}

	void task1(void)
	{
		r0 = smp_load_acquire(&y);
		r1 = READ_ONCE(x);
	}

If "x" and "y" are both initially zero, then either r0's final value
will be zero or r1's final value will be one, thus providing the required
ordering.
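
Acquire-flavored RMW operations are often used to construct lock-like
mechanisms.  The following sketch (assuming a hypothetical atomic_t
"locked" that is initially zero and a hypothetical do_something()
function) pairs atomic_cmpxchg_acquire() with atomic_set_release():

	if (atomic_cmpxchg_acquire(&locked, 0, 1) == 0) {
		// Success: Acquire ordering prevents the critical
		// section from being reordered before this point.
		do_something();
		atomic_set_release(&locked, 0); // Release pairs with acquire.
	}

Because atomic_cmpxchg_acquire() is conditional, its acquire ordering
is guaranteed only when it succeeds, which is one reason the critical
section is placed under the "if" statement.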


RCU Read-Side Ordering
----------------------

This category includes read-side markers such as rcu_read_lock()
and rcu_read_unlock() as well as pointer-traversal primitives such as
rcu_dereference() and srcu_dereference().

Compared to locking primitives and RMW atomic operations, markers
for RCU read-side critical sections incur very low overhead because
they interact only with the corresponding grace-period primitives.
For example, the rcu_read_lock() and rcu_read_unlock() markers interact
with synchronize_rcu(), synchronize_rcu_expedited(), and call_rcu().
The way this works is that if a given call to synchronize_rcu() cannot
prove that it started before a given call to rcu_read_lock(), then
that synchronize_rcu() must block until the matching rcu_read_unlock()
is reached.  For more information, please see the synchronize_rcu()
docbook header comment and the material in Documentation/RCU.

RCU's pointer-traversal primitives, including rcu_dereference() and
srcu_dereference(), order their load (which must be a pointer) against any
of the CPU's subsequent memory accesses whose address has been calculated
from the value loaded.  There is said to be an *address dependency*
from the value returned by the rcu_dereference() or srcu_dereference()
to that subsequent memory access.

A call to rcu_dereference() for a given RCU-protected pointer is
usually paired with a call to rcu_assign_pointer() for that
same pointer in much the same way that a call to smp_load_acquire() is
paired with a call to smp_store_release().  Calls to rcu_dereference()
and rcu_assign_pointer() are often buried in other APIs, for example,
the RCU list API members defined in include/linux/rculist.h.  For more
information, please see the docbook headers in that file, the most
recent LWN article on the RCU API (https://lwn.net/Articles/777036/),
and of course the material in Documentation/RCU.

If the pointer value is manipulated between the rcu_dereference()
that returned it and a later dereference of that pointer, please read
Documentation/RCU/rcu_dereference.rst.  It can also be quite helpful to
review uses in the Linux kernel.
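
Putting these pieces together, here is a minimal reader/updater sketch.
It reuses the hypothetical "struct foo" and global pointer "gp" from the
release-operation example above, and it assumes that updater() is the
only thread ever modifying "gp":

	void reader(void)
	{
		struct foo *p;

		rcu_read_lock();              // Begin read-side critical section.
		p = rcu_dereference(gp);      // Later accesses via p are ordered
					      // by the address dependency.
		if (p)
			r1 = p->a;
		rcu_read_unlock();            // End read-side critical section.
	}

	void updater(struct foo *newp)
	{
		struct foo *oldp = gp;        // OK: updater() is the sole updater.

		newp->a = 1;                  // Initialize before publishing...
		rcu_assign_pointer(gp, newp); // ...then publish the new structure.
		synchronize_rcu();            // Wait for pre-existing readers.
		kfree(oldp);                  // No reader can still reference oldp.
	}

A reader that obtains the new pointer is guaranteed to see the
initialized value of newp->a, and any reader that might still reference
the old structure is guaranteed to have completed before the kfree().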


Control Dependencies
--------------------

A control dependency extends from a marked load (READ_ONCE() or stronger)
through an "if" condition to a marked store (WRITE_ONCE() or stronger)
that is executed only by one of the legs of that "if" statement.
Control dependencies are so named because they are mediated by
control-flow instructions such as comparisons and conditional branches.

In short, you can use a control dependency to enforce ordering between
a READ_ONCE() and a WRITE_ONCE() when there is an "if" condition
between them.  The canonical example is as follows:

	q = READ_ONCE(a);
	if (q)
		WRITE_ONCE(b, 1);

In this case, all CPUs would see the read from "a" as happening before
the write to "b".

However, control dependencies are easily destroyed by compiler
optimizations, so any use of control dependencies must take into account
all of the compilers used to build the Linux kernel.  Please see the
"control-dependencies.txt" file for more information.


Unordered Accesses
==================

Each of these two categories of unordered accesses has a section below:

a.	Unordered marked operations.

b.	Unmarked C-language accesses.


Unordered Marked Operations
---------------------------

Unordered operations to different variables are just that, unordered.
However, if a group of CPUs apply these operations to a single variable,
all the CPUs will agree on the operation order.  Of course, the ordering
of unordered marked accesses can also be constrained using the mechanisms
described earlier in this document.

These operations come in three categories:

o	Marked writes, such as WRITE_ONCE() and atomic_set().  These
	primitives require the compiler to emit the corresponding store
	instructions in the expected execution order, thus suppressing
	a number of destructive optimizations.  However, they provide no
	hardware ordering guarantees, and in fact many CPUs will happily
	reorder marked writes with each other or with other unordered
	operations, unless these operations are to the same variable.

o	Marked reads, such as READ_ONCE() and atomic_read().  These
	primitives require the compiler to emit the corresponding load
	instructions in the expected execution order, thus suppressing
	a number of destructive optimizations.  However, they provide no
	hardware ordering guarantees, and in fact many CPUs will happily
	reorder marked reads with each other or with other unordered
	operations, unless these operations are to the same variable.

o	Unordered RMW atomic operations.  These are non-value-returning
	RMW atomic operations whose names do not end in _acquire or
	_release, and also value-returning RMW operations whose names
	end in _relaxed.  Examples include atomic_add(), atomic_or(),
	and atomic64_fetch_xor_relaxed().  These operations do carry
	out the specified RMW operation atomically, for example, five
	concurrent atomic_inc() operations applied to a given variable
	will reliably increase the value of that variable by five.
	However, many CPUs will happily reorder these operations with
	each other or with other unordered operations.

	This category of operations can be efficiently ordered using
	smp_mb__before_atomic() and smp_mb__after_atomic(), as was
	discussed in the "RMW Ordering Augmentation Barriers" section.

In short, these operations can be freely reordered unless they are all
operating on a single variable or unless they are constrained by one of
the operations called out earlier in this document.


Unmarked C-Language Accesses
----------------------------

Unmarked C-language accesses are normal variable accesses to normal
variables, that is, to variables that are not "volatile" and are not
C11 atomic variables.  These operations provide no ordering guarantees,
and further do not guarantee "atomic" access.  For example, the compiler
might (and sometimes does) split a plain C-language store into multiple
smaller stores.  A load from that same variable running on some other
CPU while such a store is executing might see a value that is a mashup
of the old value and the new value.

Unmarked C-language accesses are unordered, and are also subject to
any number of compiler optimizations, many of which can break your
concurrent code.  It is possible to use unmarked C-language accesses for
shared variables that are subject to concurrent access, but great care
is required on an ongoing basis.  The compiler-constraining barrier()
primitive can be helpful, as can the various ordering primitives discussed
in this document.  It nevertheless bears repeating that use of unmarked
C-language accesses requires careful attention to not just your code,
but to all the compilers that might be used to build it.  Such compilers
might replace a series of loads with a single load, and might replace
a series of stores with a single store.  Some compilers will even split
a single store into multiple smaller stores.

But there are some ways of using unmarked C-language accesses for shared
variables without such worries:

o	Guard all accesses to a given variable by a particular lock,
	so that there are never concurrent conflicting accesses to
	that variable.  (There are "conflicting accesses" when
	(1) at least one of the concurrent accesses to a variable is an
	unmarked C-language access and (2) at least one of those
	accesses is a write, whether marked or not.)  A sketch of this
	approach appears at the end of this section.

o	As above, but using other synchronization primitives such
	as reader-writer locks or sequence locks.

o	Use locking or other means to ensure that all concurrent accesses
	to a given variable are reads.

o	Restrict use of a given variable to statistics or heuristics
	where the occasional bogus value can be tolerated.

o	Declare the accessed variables as C11 atomics.
	https://lwn.net/Articles/691128/

o	Declare the accessed variables as "volatile".

If you need to live more dangerously, please do take the time to
understand the compilers.  One place to start is these two LWN
articles:

Who's afraid of a big bad optimizing compiler?
	https://lwn.net/Articles/793253
Calibrating your fear of big bad optimizing compilers
	https://lwn.net/Articles/799218

Used properly, unmarked C-language accesses can reduce overhead on
fastpaths.  However, the price is great care and continual attention
to your compiler as new versions come out and as new optimizations
are enabled.
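
Finally, here is the sketch promised above of the first of the
safe-usage options, assuming a hypothetical spinlock "foo_lock" and a
hypothetical counter "foo_count" that is accessed only while holding
that lock:

	spin_lock(&foo_lock);
	foo_count++; // Plain C-language access is safe: the lock excludes
		     // all concurrent conflicting accesses.
	spin_unlock(&foo_lock);

Because every access to "foo_count" is made while holding "foo_lock",
there are never concurrent conflicting accesses, and plain C-language
accesses suffice.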