			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete.  This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask.  Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/.  Nevertheless, even this memory
model should be viewed as the collective opinion of its maintainers rather
than as an infallible oracle.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Address-dependency barriers (historical).
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious address dependency here, as the value loaded into D depends
on the address retrieved from P by CPU 2.  At the end of the sequence, any of
the following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
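
In real drivers, this ordering problem is normally handled through the
kernel's I/O accessors rather than raw pointer dereferences.  As a minimal
sketch of the example above (the base pointer and the two register offsets
are hypothetical), readl() and writel() issued by one CPU to the same device
are expected to reach it in program order:

	void __iomem *regs = ...;	/* from a hypothetical ioremap() */

	writel(5, regs + ADDR_PORT);	/* select internal register 5 */
	x = readl(regs + DATA_PORT);	/* the read cannot pass the write */

See the "Kernel I/O barrier effects" section for the full ordering rules
obeyed by these accessors.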


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	Q = READ_ONCE(P); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.  However, on DEC Alpha, READ_ONCE() also
     emits a memory-barrier instruction, so that a DEC Alpha CPU will
     instead issue the following memory operations:

	Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER

     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
     mischief.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
	STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field (see the sketch
     following this list).

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

	memory location
		either an object of scalar type, or a maximal sequence
		of adjacent bit-fields all having nonzero width

		NOTE 1: Two threads of execution can update and access
		separate memory locations without interfering with
		each other.

		NOTE 2: A bit-field and an adjacent non-bit-field member
		are in separate memory locations. The same applies
		to two bit-fields, if one is declared inside a nested
		structure declaration and the other is not, or if the two
		are separated by a zero-length bit-field declaration,
		or if they are separated by a non-bit-field member
		declaration. It is not safe to concurrently update two
		bit-fields in the same structure if all members declared
		between them are also bit-fields, no matter what the
		sizes of those intervening bit-fields happen to be.
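
As a sketch of how the bitfield anti-guarantee bites (the structure, lock
and field names here are hypothetical):

	struct flags {
		int f1 : 4;	/* protected by lock_1 */
		int f2 : 4;	/* protected by lock_2: BUG! */
	};
	static struct flags shared;
	static DEFINE_SPINLOCK(lock_1);

	spin_lock(&lock_1);
	shared.f1 = 1;		/* compiled as a non-atomic read-modify-write
				 * of the word holding both f1 and f2, so a
				 * concurrent update of f2 under lock_2 can
				 * be lost */
	spin_unlock(&lock_1);

Putting both fields under the same lock, or making each field a properly
sized and aligned scalar, avoids the problem.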


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or
     address-dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Address-dependency barriers (historical).
     [!] This section is marked as HISTORICAL: it covers the long-obsolete
     smp_read_barrier_depends() macro, the semantics of which are now
     implicit in all marked accesses.  For more up-to-date information,
     including how compiler transformations can sometimes break address
     dependencies, see Documentation/RCU/rcu_dereference.rst.

     An address-dependency barrier is a weaker form of read barrier.  In the
     case where two loads are performed such that the second depends on the
     result of the first (eg: the first load retrieves the address to which
     the second load will be directed), an address-dependency barrier would
     be required to make sure that the target of the second load is updated
     after the address obtained by the first load is accessed.

     An address-dependency barrier is a partial ordering on interdependent
     loads only; it is not required to have any effect on stores, independent
     loads or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  An address-dependency barrier issued by
     the CPU under consideration guarantees that for any load preceding it,
     if that load touches one of a sequence of stores from another CPU, then
     by the time the barrier completes, the effects of all the stores prior to
     that touched by the load will be perceptible to any loads issued after
     the address-dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have an _address_ dependency,
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that address-dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.

     [!] Kernel release v5.9 removed kernel APIs for explicit address-
     dependency barriers.  Nowadays, APIs for marking loads from shared
     variables such as READ_ONCE() and rcu_dereference() provide implicit
     address-dependency barriers.

 (3) Read (or load) memory barriers.

     A read barrier is an address-dependency barrier plus a guarantee that all
     the LOAD operations specified before the barrier will appear to happen
     before all the LOAD operations specified after the barrier with respect to
     the other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply address-dependency barriers, and so can
     substitute for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier.  In addition, a RELEASE+ACQUIRE pair is
     -not- guaranteed to act as a full memory barrier.  However, after an
     ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.
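
As a sketch of how these two implicit varieties are used together (the
variables here are hypothetical), smp_store_release() and smp_load_acquire()
form the usual publish/consume pairing:

	/* CPU 1 */
	WRITE_ONCE(data, 42);
	smp_store_release(&ready, 1);	/* RELEASE: orders the store to data
					 * before the store to ready */

	/* CPU 2 */
	if (smp_load_acquire(&ready))	/* ACQUIRE: pairs with the RELEASE */
		r1 = READ_ONCE(data);	/* guaranteed to observe 42 */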

A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions.  For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.

Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/driver-api/pci/pci.rst
	    Documentation/core-api/dma-api-howto.rst
	    Documentation/core-api/dma-api.rst


ADDRESS-DEPENDENCY BARRIERS (HISTORICAL)
----------------------------------------
[!] This section is marked as HISTORICAL: it covers the long-obsolete
smp_read_barrier_depends() macro, the semantics of which are now implicit
in all marked accesses.  For more up-to-date information, including
how compiler transformations can sometimes break address dependencies,
see Documentation/RCU/rcu_dereference.rst.

As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
DEC Alpha, which means that about the only people who need to pay attention
to this section are those working on DEC Alpha architecture-specific code
and those working on READ_ONCE() itself.  For those who need it, and for
those who are interested in the history, here is the story of
address-dependency barriers.

[!] While address dependencies are observed in both load-to-load and
load-to-store relations, address-dependency barriers are not necessary
for load-to-store situations.

The requirement of address-dependency barriers is a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE_OLD(P);
			      D = *Q;

[!] READ_ONCE_OLD() corresponds to READ_ONCE() of pre-4.15 kernels, which
doesn't imply an address-dependency barrier.

There's a clear address dependency here, and it would seem that by the end of
the sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B,
leading to the following situation:

	(Q == &B) and (D == 2)	????

While this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, READ_ONCE() provides an implicit address-dependency barrier
since kernel release v4.15:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      <implicit address-dependency barrier>
			      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


An address-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes until they
are certain (1) that the write will actually happen, (2) of the location of
the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.rst file:  The compiler can and does break
dependencies in a great many highly creative ways.

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C = 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE_OLD(P);
			      WRITE_ONCE(*Q, 5);

Therefore, no address-dependency barrier is required to order the read into
Q with the store into *Q.  In other words, this outcome is prohibited,
even without an implicit address-dependency barrier of modern READ_ONCE():

	(Q == &B) && (B == 4)

Please note that this pattern should be rare.  After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes.  This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.


Note well that the ordering provided by an address dependency is local to
the CPU containing it.  See the section on "Multicopy atomicity" for
more information.


The address-dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

See also the subsection on "Cache Coherency" for a more thorough example.
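
As a sketch of that RCU usage (the structure, the global pointer gp and the
values are hypothetical), rcu_assign_pointer() orders the initialisation
before the publication, and rcu_dereference() supplies the address-dependency
ordering on the reader side:

	struct foo { int a; };
	struct foo __rcu *gp;

	/* Updater */
	struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

	p->a = 1;			/* initialise first... */
	rcu_assign_pointer(gp, p);	/* ...then publish */

	/* Reader */
	rcu_read_lock();
	p = rcu_dereference(gp);	/* implicit address-dependency barrier */
	if (p)
		r1 = p->a;		/* cannot observe the pre-init value */
	rcu_read_unlock();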


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do
not understand them.  The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply an (implicit) address-dependency barrier to make it work correctly.
Consider the following bit of code:

	q = READ_ONCE(a);
	<implicit address-dependency barrier>
	if (q) {
		/* BUG: No address dependency!!! */
		p = READ_ONCE(b);
	}

This will not have the desired effect because there is no actual address
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a case
what's actually required is:

	q = READ_ONCE(a);
	if (q) {
		<read barrier>
		p = READ_ONCE(b);
	}

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	}

Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
are optional!  Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'.  Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = READ_ONCE(a);
	if (q) {
		barrier();
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		barrier();
		WRITE_ONCE(b, 1);
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = READ_ONCE(a);
	barrier();
	WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something();
	} else {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, 1);
		do_something();
	} else {
		smp_store_release(&b, 1);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

	q = READ_ONCE(a);
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 2);
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

	q = READ_ONCE(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

	q = READ_ONCE(a);
	if (q || 1 > 0)
		WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, they do
not necessarily apply to code following the if-statement:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	} else {
		WRITE_ONCE(b, 2);
	}
	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition.  Unfortunately for this line
of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

	ld r1,a
	cmp r1,$0
	cmov,ne r4,$1
	cmov,eq r4,$2
	st r4,b
	st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'.  The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local
to the CPU containing it.  See the section on "Multicopy atomicity"
for more information.


In summary:

  (*) Control dependencies can order prior loads against later stores.
      However, they do -not- guarantee any other sort of ordering:
      Not prior loads against later loads, nor prior stores against
      later anything.  If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
      later loads, smp_mb().

  (*) If both legs of the "if" statement begin with identical stores to
      the same variable, then those stores must be ordered, either by
      preceding both of them with smp_mb() or by using smp_store_release()
      to carry out the stores.  Please note that it is -not- sufficient
      to use barrier() at beginning of each leg of the "if" statement
      because, as shown by the example above, optimizing compilers can
      destroy the control dependency while respecting the letter of the
      barrier() law.

  (*) Control dependencies require at least one run-time conditional
      between the prior load and the subsequent store, and this
      conditional must involve the prior load.  If the compiler is able
      to optimize the conditional away, it will have also optimized
      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
      can help to preserve the needed conditional.

  (*) Control dependencies require that the compiler avoid reordering the
      dependency into nonexistence.  Careful use of READ_ONCE() or
      atomic{,64}_read() can help to preserve your control dependency.
      Please see the COMPILER BARRIER section for more information.

  (*) Control dependencies apply only to the then-clause and else-clause
      of the if-statement containing the control dependency, including
      any functions that these two clauses call.  Control dependencies
      do -not- apply to code following the if-statement containing the
      control dependency.

  (*) Control dependencies pair normally with other types of barriers.

  (*) Control dependencies do -not- provide multicopy atomicity.  If you
      need all the CPUs to see a given store at the same time, use smp_mb().

  (*) Compilers do not understand control dependencies.  It is therefore
      your job to ensure that they do not break your code.
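
Pulling these rules together, a minimal correct use of a load-to-store
control dependency looks something like the following sketch (the variable
names are hypothetical):

	if (READ_ONCE(a)) {		/* READ_ONCE() preserves both the load
					 * and the conditional */
		WRITE_ONCE(b, 1);	/* ordered after the load from 'a' */
		do_something();
	}
	/* accesses here are NOT ordered against the load from 'a' */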


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity.  An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers.  A write barrier pairs
with an address-dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier.  Similarly a
read barrier, control dependency, or an address-dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:

	CPU 1		      CPU 2
	===============	      ===============
	WRITE_ONCE(a, 1);
	<write barrier>
	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
			      <read barrier>
			      y = READ_ONCE(a);

Or:

	CPU 1		      CPU 2
	===============	      ===============================
	a = 1;
	<write barrier>
	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
			      <implicit address-dependency barrier>
			      y = *x;

Or even:

	CPU 1		      CPU 2
	===============	      ===============================
	r1 = READ_ONCE(y);
	<general barrier>
	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
			         <implicit control dependency>
			         WRITE_ONCE(y, 1);
			      }

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the address-dependency barrier, and
vice versa:

	CPU 1                               CPU 2
	===================                 ===================
	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
	<write barrier>            \        <read barrier>
	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);
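
Expressed with the kernel's primitives rather than abstract barriers, the
first pairing above might look like the following sketch (flag and data are
hypothetical shared variables):

	/* CPU 1 */
	WRITE_ONCE(data, 1);
	smp_wmb();			/* write barrier */
	WRITE_ONCE(flag, 1);

	/* CPU 2 */
	if (READ_ONCE(flag)) {
		smp_rmb();		/* read barrier, pairing with the
					 * smp_wmb() on CPU 1 */
		r1 = READ_ONCE(data);	/* guaranteed to read 1 */
	}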


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V


Secondly, address-dependency barriers act as partial orderings on address-
dependent loads.  Consider the following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, an address-dependency barrier were to be placed between the load
of C and the load of *C (ie: B) on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<address-dependency barrier>
				LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	    Makes sure all effects --->   \     aaaaaaaaaaaaaaaaa       |
	    prior to the store of C        \    +-------+       |       |
	    are perceptible to              -.->| B->2  |------>|       |
	    subsequent loads                    +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \      rrrrrrrrrrrrrrrrr       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \      rrrrrrrrrrrrrrrrr       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                        rrrrrrrrrrrrrrrrr       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is they see that they will need to load an
item from memory, and they find a time where they're not using the bus for any
other loads, and so do the load in advance - even though they haven't actually
got to that point in the instruction execution flow yet.  This permits the
actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or an address-dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                        rrrrrrrrrrrrrrrr~       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                        rrrrrrrrrrrrrrrrr       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


MULTICOPY ATOMICITY
-------------------

Multicopy atomicity is a deeply intuitive notion about ordering that is
not always provided by real computer systems, namely that a given store
becomes visible at the same time to all CPUs, or, alternatively, that all
CPUs agree on the order in which all stores become visible.  However,
support of full multicopy atomicity would rule out valuable hardware
optimizations, so a weaker form called ``other multicopy atomicity''
instead guarantees only that a given store becomes visible at the same
time to all -other- CPUs.  The remainder of this document discusses this
weaker form, but for brevity will call it simply ``multicopy atomicity''.

The following example demonstrates multicopy atomicity:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
				<general barrier>	<read barrier>
				STORE Y=r1		LOAD X

Suppose that CPU 2's load from X returns 1, which it then stores to Y,
and CPU 3's load from Y returns 1.  This indicates that CPU 1's store
to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
CPU 3's load from Y.  In addition, the memory barriers guarantee that
CPU 2 executes its load before its store, and CPU 3 loads from Y before
it loads from X.  The question is then "Can CPU 3's load from X return 0?"

Because CPU 3's load from X in some sense comes after CPU 2's load, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation follows from multicopy atomicity: if a load executing
on CPU B follows a load from the same variable executing on CPU A (and
CPU A did not originally store the value which it read), then on
multicopy-atomic systems, CPU B's load must return either the same value
that CPU A's load did or some later value.  However, the Linux kernel
does not require systems to be multicopy atomic.

The use of a general memory barrier in the example above compensates
for any lack of multicopy atomicity.  In the example, if CPU 2's load
from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
from X must indeed also return 1.

However, dependencies, read barriers, and write barriers are not always
able to compensate for non-multicopy atomicity.  For example, suppose
that CPU 2's general barrier is removed from the above example, leaving
only the data dependency shown below:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
				<data dependency>	<read barrier>
				STORE Y=r1		LOAD X (reads 0)

This substitution allows non-multicopy atomicity to run rampant: in
this example, it is perfectly legal for CPU 2's load from X to return 1,
CPU 3's load from Y to return 1, and its load from X to return 0.

The key point is that although CPU 2's data dependency orders its load
and store, it does not guarantee to order CPU 1's store.  Thus, if this
example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
store buffer or a level of cache, CPU 2 might have early access to CPU 1's
writes.  General barriers are therefore required to ensure that all CPUs
agree on the combined order of multiple accesses.

General barriers can compensate not only for non-multicopy atomicity,
but can also generate additional ordering that can ensure that -all-
CPUs will perceive the same order of -all- operations.  In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
which means that only those CPUs on the chain are guaranteed to agree
on the combined order of the accesses.  For example, switching to C code
in deference to the ghost of Herman Hollerith:

	int u, v, x, y, z;

	void cpu0(void)
	{
		r0 = smp_load_acquire(&x);
		WRITE_ONCE(u, 1);
		smp_store_release(&y, 1);
	}

	void cpu1(void)
	{
		r1 = smp_load_acquire(&y);
		r4 = READ_ONCE(v);
		r5 = READ_ONCE(u);
		smp_store_release(&z, 1);
	}

	void cpu2(void)
	{
		r2 = smp_load_acquire(&z);
		smp_store_release(&x, 1);
	}

	void cpu3(void)
	{
		WRITE_ONCE(v, 1);
		smp_mb();
		r3 = READ_ONCE(u);
	}

Because cpu0(), cpu1(), and cpu2() participate in a chain of
smp_store_release()/smp_load_acquire() pairs, the following outcome
is prohibited:

	r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0()
and cpu1(), cpu1() must see cpu0()'s writes, so that the following
outcome is prohibited:

	r1 == 1 && r5 == 0

However, the ordering provided by a release-acquire chain is local
to the CPUs participating in that chain and does not apply to cpu3(),
at least aside from stores.  Therefore, the following outcome is possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0

As an aside, the following outcome is also possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1

Although cpu0(), cpu1(), and cpu2() will see their respective reads and
writes in order, CPUs not involved in the release-acquire chain might
well disagree on the order.  This disagreement stems from the fact that
the weak memory-barrier instructions used to implement smp_load_acquire()
and smp_store_release() are not required to order prior stores against
subsequent loads in all cases.  This means that cpu3() can see cpu0()'s
store to u as happening -after- cpu1()'s load from v, even though
both cpu0() and cpu1() agree that these two operations occurred in the
intended order.

However, please keep in mind that smp_load_acquire() is not magic.
In particular, it simply reads from its argument with ordering.  It does
-not- ensure that any particular value will be read.  Therefore, the
following outcome is possible:

	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0

Note that this outcome can happen even on a mythical sequentially
consistent system where nothing is ever reordered.

To reiterate, if your code requires full ordering of all operations,
use general barriers throughout.
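
As a sketch of that advice (x, y, r1 and r2 are hypothetical variables, both
x and y initially zero), the classic "store buffering" pattern requires a
general barrier on each CPU; no combination of the weaker barriers suffices:

	/* CPU 1 */
	WRITE_ONCE(x, 1);
	smp_mb();		/* orders the store against the later load */
	r1 = READ_ONCE(y);

	/* CPU 2 */
	WRITE_ONCE(y, 1);
	smp_mb();
	r2 = READ_ONCE(x);

	/* afterwards, r1 == 0 && r2 == 0 is prohibited */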


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

  (*) Compiler barrier.

  (*) CPU memory barriers.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write
variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
thought of as weak forms of barrier() that affect only the specific
accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.

The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
optimizations that, while perfectly safe in single-threaded code, can
be fatal in concurrent code.  Here are some examples of these sorts
of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = READ_ONCE(x);
	a[1] = READ_ONCE(x);

     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
     accesses from multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use READ_ONCE() to prevent the compiler from doing this to you:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use READ_ONCE() to prevent the compiler from doing this:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     Into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it
     gets rid of a load and a branch.  The problem is that the compiler
     will carry out its proof assuming that the current CPU is the only
     one updating variable 'a'.  If variable 'a' is shared, then the
     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
     compiler that it doesn't know as much as it thinks it does:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the READ_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = READ_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

	a = 0;
	... Code that does not store to variable a ...
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:

	WRITE_ONCE(a, 0);
	... Code that does not store to variable a ...
	WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}

     There is nothing to prevent the compiler from transforming
     process_level() to the following, in fact, this might well be a
     win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:

	void process_level(void)
	{
		WRITE_ONCE(msg, get_message());
		WRITE_ONCE(flag, true);
	}

	void interrupt_handler(void)
	{
		if (READ_ONCE(flag))
			process_message(READ_ONCE(msg));
	}

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
     interrupt_handler() are needed if this interrupt handler can itself
     be interrupted by something that also accesses 'flag' and 'msg',
     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
     and WRITE_ONCE() are not needed in interrupt_handler() other than
     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels, in fact, if an
     interrupt handler returns with interrupts enabled, you will get a
     WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and
     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
     barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE()
     and WRITE_ONCE() are more selective:  With READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
     cached in any machine registers.  Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

	if (a)
		WRITE_ONCE(b, a);
	else
		WRITE_ONCE(b, 42);

     The compiler can also invent loads.  These can be less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use READ_ONCE() to prevent
     invented loads.

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, prevents "load tearing"
     and "store tearing," in which a single large access is replaced by
     multiple smaller accesses.  For example, given an architecture having
     16-bit store instructions with 7-bit immediate fields, the compiler
     might be tempted to use two 16-bit store-immediate instructions to
     implement the following 32-bit store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of WRITE_ONCE() prevents store tearing in the following example:

	WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
     volatile markings, the compiler would be well within its rights to
     implement these three assignment statements as a pair of 32-bit
     loads followed by a pair of 32-bit stores.  This would result in
     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
     and WRITE_ONCE() again prevent tearing in this example:

	foo2.a = foo1.a;
	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
	foo2.c = foo1.c;

All that aside, it is never necessary to use READ_ONCE() and
WRITE_ONCE() on a variable that has been marked volatile.  For example,
because 'jiffies' is marked volatile, it is never necessary to
say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which has no effect when
its argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.
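
One common place these compiler-level guarantees matter is a polling loop.
As a sketch (the flag here is a hypothetical shared variable), READ_ONCE()
forces the flag to be re-loaded from memory on every pass, which is the same
effect the barrier()-in-a-loop rule above provides more bluntly:

	while (!READ_ONCE(flag))
		cpu_relax();	/* cpu_relax() also implies barrier() */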


CPU MEMORY BARRIERS
-------------------

The Linux kernel has seven basic CPU memory barriers:

	TYPE			MANDATORY	SMP CONDITIONAL
	=======================	===============	===============
	GENERAL			mb()		smp_mb()
	WRITE			wmb()		smp_wmb()
	READ			rmb()		smp_rmb()
	ADDRESS DEPENDENCY			READ_ONCE()


All memory barriers except the address-dependency barriers imply a compiler
barrier.  Address dependencies do not impose any additional compiler ordering.

Aside: In the case of address dependencies, the compiler would be expected
to issue the loads in the correct order (eg. `a[b]` would have to load
the value of b before loading a[b]), however there is no guarantee in
the C specification that the compiler may not speculate the value of b
(eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (tmp !=
1) tmp = a[b]; ).  There is also the problem of a compiler reloading b
after having loaded a[b], thus having a newer copy of b than a[b].  A
consensus has not yet been reached about these problems, however the
READ_ONCE() macro is a good place to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers impose unnecessary overhead on both SMP and UP systems.  They may,
however, be used to control MMIO effects on accesses through relaxed memory I/O
windows.  These barriers are required even on non-SMP systems as they affect
the order in which memory operations appear to a device by prohibiting both the
compiler and the CPU from reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.


 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic RMW functions that do not imply memory
     barriers, but where the code needs a memory barrier.  Examples for atomic
     RMW functions that do not imply a memory barrier are e.g. add,
     subtract, (failed) conditional operations, _relaxed functions,
     but not atomic_read or atomic_set.  A common example where a memory
     barrier may be required is when atomic ops are used for reference
     counting.

     These are also used for atomic RMW bitop functions that do not imply a
     memory barrier (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_{t,bitops}.txt for more information.


 (*) dma_wmb();
 (*) dma_rmb();
 (*) dma_mb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.  See Documentation/core-api/dma-api.rst for more
     information about consistent memory.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* Make descriptor status visible to the device followed by
		 * notify device of new descriptor
		 */
		writel(DESC_NOTIFY, doorbell);
	}

     The dma_rmb() allows us to guarantee that the device has released
     ownership before we read the data from the descriptor, and the dma_wmb()
     allows us to guarantee the data is written to the descriptor before the
     device can see it now has ownership.  The dma_mb() implies both a
     dma_rmb() and a dma_wmb().

     Note that the dma_*() barriers do not provide any ordering with respect
     to accesses to MMIO regions.  See the later "KERNEL I/O BARRIER EFFECTS"
     subsection for more information about I/O accessors and MMIO ordering.

 (*) pmem_wmb();

     This is for use with persistent memory to ensure that stores for which
     modifications are written to persistent storage reached a platform
     durability domain.

     For example, after a non-temporal write to pmem region, we use pmem_wmb()
     to ensure that stores have reached a platform durability domain.  This
     ensures that stores have updated persistent storage before any data
     access or data transfer caused by subsequent instructions is initiated.
     This is in addition to the ordering done by wmb().

     For load from persistent memory, existing read memory barriers are
     sufficient to ensure read ordering.

 (*) io_stop_wc();

     For memory accesses with write-combining attributes (e.g. those returned
     by ioremap_wc()), the CPU may wait for prior accesses to be merged with
     subsequent ones.  io_stop_wc() can be used to prevent the merging of
     write-combining memory accesses before this macro with those after it
     when such wait has performance implications.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


LOCK ACQUISITION FUNCTIONS
--------------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE"
operations for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an
     unblocked signal while asleep waiting for the lock to become available.
     Failed locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory
barrier because it is possible for an access preceding the ACQUIRE to happen
after the ACQUIRE, and an access following the RELEASE to happen before the
RELEASE, and the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier.  Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

	Why does this work?

	One key point is that we are only talking about the CPU doing
	the reordering, not the compiler.  If the compiler (or, for
	that matter, the developer) switched the operations, deadlock
	-could- occur.

	But suppose the CPU reordered the operations.  In this case,
	the unlock precedes the lock in the assembly code.  The CPU
	simply elected to try executing the later lock operation first.
	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
	execute the unlock operation (which preceded the lock operation
	in the assembly code), which will unravel the potential deadlock,
	allowing the lock operation to succeed.

	But what if the lock is a sleeplock?  In this case, the code will
	try to enter the scheduler, where it will eventually encounter
	a memory barrier, which will force the earlier unlock operation
	to complete, again unraveling the deadlock.  There might be
	a sleep-unlock race, but the locking primitive needs to resolve
	such races properly in any case.

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E
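
The same one-way permeability can be seen with the real locking API; as a
sketch (the lock and the variables are hypothetical):

	WRITE_ONCE(a, 1);	/* may appear to move into the critical section */
	spin_lock(&mylock);	/* ACQUIRE */
	WRITE_ONCE(b, 1);	/* cannot move out of the critical section */
	spin_unlock(&mylock);	/* RELEASE */
	WRITE_ONCE(c, 1);	/* may also appear to move into it */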
See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting
for the event and the global data used to indicate the event.  To make sure
that these appear to happen in the right order, the primitives to begin the
process of going to sleep, and the primitives to initiate a wake up imply
certain barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by
set_current_state() after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  smp_store_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();


Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A general memory barrier is executed by wake_up() if it wakes something up.
If it doesn't wake anything up then a memory barrier may or may not be
executed; you must not rely on it.  The barrier occurs before the task state
is accessed, in particular, it sits between the STORE to indicate the event
and the STORE to set TASK_RUNNING:

	CPU 1 (Sleeper)			CPU 2 (Waker)
	===============================	===============================
	set_current_state();		STORE event_indicated
	  smp_store_mb();		wake_up();
	    STORE current->state	  ...
	    <general barrier>		  <general barrier>
	LOAD event_indicated		  if ((LOAD task->state) & TASK_NORMAL)
					    STORE task->state

where "task" is the thread being woken up and it equals CPU 1's "current".

To repeat, a general memory barrier is guaranteed to be executed by wake_up()
if something is actually awakened, but otherwise there is no such guarantee.
To see this, consider the following sequence of events, where X and Y are
both initially zero:

	CPU 1				CPU 2
	===============================	===============================
	X = 1;				Y = 1;
	smp_mb();			wake_up();
	LOAD Y				LOAD X

If a wakeup does occur, one (at least) of the two loads must see 1.  If, on
the other hand, a wakeup does not occur, both loads might see 0.

wake_up_process() always executes a general memory barrier.  The barrier
again occurs before the task state is accessed.  In particular, if the
wake_up() in the previous snippet were replaced by a call of
wake_up_process() then one of the two loads would be guaranteed to see 1.
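In practice, most code does not open-code any of the above; the canned forms
place all of these barriers correctly on both sides.  A minimal sketch,
reusing the hypothetical event_indicated and event_wait_queue from the
snippets above:

	/* sleeper */
	wait_event(event_wait_queue, READ_ONCE(event_indicated));

	/* waker */
	WRITE_ONCE(event_indicated, 1);
	wake_up(&event_wait_queue);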
The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();

In terms of memory ordering, these functions all provide the same guarantees
as a wake_up() (or stronger).

[!] Note that the memory barriers implied by the sleeper and the waker do
_not_ order multiple stores before the wake-up with respect to loads of those
stored values after the sleeper has called set_current_state().  For
instance, if the sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance,
the code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier:
one that does affect memory access ordering on other CPUs, within the context
of conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
	ACQUIRE M			ACQUIRE Q
	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
	RELEASE M			RELEASE Q
	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H,
	RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going
to be a problem as a single-threaded linear piece of code will still appear
to work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.
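With a lock, the required ordering is supplied entirely by the implied
ACQUIRE and RELEASE operations.  A minimal sketch, assuming a hypothetical
shared structure containing its own spinlock:

	spin_lock(&shared->lock);
	shared->a = 1;			/* both stores appear inside the  */
	shared->b = 2;			/* critical section to any other  */
	spin_unlock(&shared->lock);	/* CPU that takes the same lock   */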
Locks, however, are quite expensive, and so it may be preferable to operate
without the use of a lock if at all possible.  In such a case operations that
affect both CPUs may have to be carefully ordered to prevent a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process
is queued on the semaphore, by virtue of it having a piece of its stack
linked to the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have
to:

 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the
     semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means
that if the task pointer is cleared _before_ the next pointer in the list is
read, another CPU might start processing the waiter and might clobber the
waiter's stack before the up*() function has a chance to read the next
pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before
the barrier will appear to happen before all the memory accesses after the
barrier with respect to the other CPUs on the system.  It does _not_
guarantee that all the memory accesses before the barrier will be complete
by the time the barrier instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in
the right order without actually intervening in the CPU.  Since there's only
one CPU, that CPU's dependency ordering logic will take care of everything
else.
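Rendered as C, the fixed sequence might look something like the following.
This is a sketch of the five steps above with the barrier in place, not the
actual rwsem implementation:

	static struct list_head *wake_one_waiter(struct rwsem_waiter *waiter)
	{
		struct list_head *next = waiter->list.next;	/* step (1) */
		struct task_struct *tsk = waiter->task;		/* step (2) */

		smp_mb();		/* the loads above happen first */
		waiter->task = NULL;	/* step (3): the waiter may now
					 * proceed and clobber its stack */
		wake_up_process(tsk);	/* step (4) */
		put_task_struct(tsk);	/* step (5) */
		return next;		/* loaded before the barrier, so
					 * still safe to use */
	}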
ATOMIC OPERATIONS
-----------------

While they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

See Documentation/atomic_t.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're
just a set of memory locations.  To control such a device, the driver usually
has to make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate
accessor routines - such as inb() or writel() - which know how to make such
accesses appropriately sequential.  While this, for the most part, renders
the explicit use of memory barriers unnecessary, if the accessor functions
are used to refer to an I/O memory window with relaxed memory access
properties, then _mandatory_ memory barriers are required to enforce
ordering.

See Documentation/driver-api/device-io.rst for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus
the two parts of the driver may interfere with each other's attempts to
control or access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  While the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has
been handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports
an address register and a data register.  If that driver's core talks to the
card under interrupt-disablement and then the driver's interrupt handler is
invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside
an interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case
is likely, then interrupt-disabling locks should be used to guarantee
ordering.
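Going back to the ethernet card example, one way of providing such an
implicit I/O barrier is to read back from the device before interrupts are
re-enabled, which on most buses forces any posted writes to complete.  A
sketch, in the same pseudo-register style as the example above:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	(void) readw(ADDR);	/* synchronous load: flushes the posted
				 * writes out to the device */
	LOCAL IRQ ENABLE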
==========================
KERNEL I/O BARRIER EFFECTS
==========================

Interfacing with peripherals via I/O accesses is deeply architecture and
device specific.  Therefore, drivers which are inherently non-portable may
rely on specific behaviours of their target systems in order to achieve
synchronization in the most lightweight manner possible.  For drivers
intending to be portable between multiple architectures and bus
implementations, the kernel offers a series of accessor functions that
provide various degrees of ordering guarantees:

 (*) readX(), writeX():

	The readX() and writeX() MMIO accessors take a pointer to the
	peripheral being accessed as an __iomem * parameter.  For pointers
	mapped with the default I/O attributes (e.g. those returned by
	ioremap()), the ordering guarantees are as follows:

	1. All readX() and writeX() accesses to the same peripheral are
	   ordered with respect to each other.  This ensures that MMIO
	   register accesses by the same CPU thread to a particular device
	   will arrive in program order.

	2. A writeX() issued by a CPU thread holding a spinlock is ordered
	   before a writeX() to the same peripheral from another CPU thread
	   issued after a later acquisition of the same spinlock.  This
	   ensures that MMIO register writes to a particular device issued
	   while holding a spinlock will arrive in an order consistent with
	   acquisitions of the lock.

	3. A writeX() by a CPU thread to the peripheral will first wait for
	   the completion of all prior writes to memory either issued by, or
	   propagated to, the same thread.  This ensures that writes by the
	   CPU to an outbound DMA buffer allocated by dma_alloc_coherent()
	   will be visible to a DMA engine when the CPU writes to its MMIO
	   control register to trigger the transfer.

	4. A readX() by a CPU thread from the peripheral will complete before
	   any subsequent reads from memory by the same thread can begin.
	   This ensures that reads by the CPU from an incoming DMA buffer
	   allocated by dma_alloc_coherent() will not see stale data after
	   reading from the DMA engine's MMIO status register to establish
	   that the DMA transfer has completed.

	5. A readX() by a CPU thread from the peripheral will complete before
	   any subsequent delay() loop can begin execution on the same
	   thread.  This ensures that two MMIO register writes by the CPU to
	   a peripheral will arrive at least 1us apart if the first write is
	   immediately read back with readX() and udelay(1) is called prior
	   to the second writeX():

		writel(42, DEVICE_REGISTER_0); // Arrives at the device...
		readl(DEVICE_REGISTER_0);
		udelay(1);
		writel(42, DEVICE_REGISTER_1); // ...at least 1us before this.

	The ordering properties of __iomem pointers obtained with non-default
	attributes (e.g. those returned by ioremap_wc()) are specific to the
	underlying architecture and therefore the guarantees listed above
	cannot generally be relied upon for accesses to these types of
	mappings.

 (*) readX_relaxed(), writeX_relaxed():

	These are similar to readX() and writeX(), but provide weaker memory
	ordering guarantees.  Specifically, they do not guarantee ordering
	with respect to locking, normal memory accesses or delay() loops
	(i.e. bullets 2-5 above) but they are still guaranteed to be ordered
	with respect to other accesses from the same CPU thread to the same
	peripheral when operating on __iomem pointers mapped with the default
	I/O attributes.

 (*) readsX(), writesX():

	The readsX() and writesX() MMIO accessors are designed for accessing
	register-based, memory-mapped FIFOs residing on peripherals that are
	not capable of performing DMA.  Consequently, they provide only the
	ordering guarantees of readX_relaxed() and writeX_relaxed(), as
	documented above.

 (*) inX(), outX():

	The inX() and outX() accessors are intended to access legacy
	port-mapped I/O peripherals, which may require special instructions
	on some architectures (notably x86).  The port number of the
	peripheral being accessed is passed as an argument.

	Since many CPU architectures ultimately access these peripherals via
	an internal virtual memory mapping, the portable ordering guarantees
	provided by inX() and outX() are the same as those provided by
	readX() and writeX() respectively when accessing a mapping with the
	default I/O attributes.

	Device drivers may expect outX() to emit a non-posted write
	transaction that waits for a completion response from the I/O
	peripheral before returning.  This is not guaranteed by all
	architectures and is therefore not part of the portable ordering
	semantics.

 (*) insX(), outsX():

	As above, the insX() and outsX() accessors provide the same ordering
	guarantees as readsX() and writesX() respectively when accessing a
	mapping with the default I/O attributes.

 (*) ioreadX(), iowriteX():

	These will perform appropriately for the type of access they're
	actually doing, be it inX()/outX() or readX()/writeX().

With the exception of the string accessors (insX(), outsX(), readsX() and
writesX()), all of the above assume that the underlying peripheral is
little-endian and will therefore perform byte-swapping operations on
big-endian architectures.
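As an illustration of when the relaxed accessors suffice, consider
programming a hypothetical DMA engine; the register names and offsets below
are invented.  Bullet 1 keeps the register writes ordered amongst themselves
even in their _relaxed forms, so only the final doorbell needs the full
writel(), whose guarantee in bullet 3 orders the earlier write to the
coherent buffer before the device is started:

	buf[0] = cmd;	/* plain write to a dma_alloc_coherent() buffer */
	writel_relaxed(lower_32_bits(buf_dma), regs + REG_ADDR);
	writel_relaxed(len, regs + REG_LEN);
	writel(CTRL_START, regs + REG_CTRL);	/* doorbell: also orders
						 * the write to buf[0] */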
========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it
will maintain the appearance of program causality with respect to itself.
Some CPUs (such as i386 or x86_64) are more constrained than others (such as
powerpc or frv), and so the most relaxed case (namely DEC Alpha) must be
assumed outside of arch-specific code.

This means that it must be considered that the CPU will execute its
instruction stream in any order it feels like - or even in parallel -
provided that if an instruction in the stream depends on an earlier
instruction, then that earlier instruction must be sufficiently complete[*]
before the later instruction may proceed; in other words: provided that the
appearance of causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the
instruction stream in any way it sees fit, again provided the appearance of
causality is maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected
to a certain extent by the caches that lie between CPUs and memory, and by
the memory coherence system that maintains the consistency of state in the
system.

As far as the way a CPU interacts with another part of the system through
the caches goes, the memory system has to include the CPU's caches, and
memory barriers for the most part act at the interface between the CPU and
its cache (memory barriers logically act on the dotted line in the following
diagram):

	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |
	|  CPU   |    | Memory |  :   | CPU    |    |           |
	|  Core  |--->| Access |----->| Cache  |<-->|           |
	|        |    | Queue  |  :   |        |    |           |    +--------+
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |<-->| Memory |
	                          :                 | Coherency |    |        |
	                          :                 | Mechanism |    |        |
	+--------+    +--------+  :   +--------+    |           |    +--------+
	|        |    |        |  :   |        |    |           |
	|  CPU   |    | Memory |  :   | CPU    |    |           |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    +--------+
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |--->| Device |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 |           |    |        |
	                          :                 +-----------+    +--------+
	                          :

Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own
cache, it will still appear as if the full memory access had taken place as
far as the other CPUs are concerned since the cache coherency mechanisms
will migrate the cacheline over to the accessing CPU and propagate the
effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided
the expected program causality appears to be maintained.  Some of the
instructions generate load and store operations which then go into the queue
of memory accesses to be performed.  The core may place these in the queue
in any order it wishes, and continue execution until it is forced to wait
for an instruction to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other
observers in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends
on the properties of the memory window through which devices are accessed
and/or the use of any special device communication instructions the CPU may
have.


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.
In such cases, a device attempting DMA may obtain stale data from RAM
because dirty cache lines may be resident in the caches of various CPUs, and
may not have been written back to RAM yet.  To deal with this, the
appropriate part of the kernel must flush the overlapping bits of cache on
each CPU (and maybe invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device
has installed its own data, or cache lines present in the CPU's cache may
simply obscure the fact that RAM has been updated, until at such time as the
cacheline is discarded from the CPU's cache and reloaded.  To deal with
this, the appropriate part of the kernel must invalidate the overlapping
bits of the cache on each CPU.

See Documentation/core-api/cachetlb.rst for more information on cache
management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part
of a window in the CPU's memory space that has different properties assigned
than the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO
accesses may, in effect, overtake accesses to cached memory that were
emitted earlier.  A memory barrier isn't sufficient in such a case, but
rather the cache must be flushed between the cached memory write and the
MMIO access if the two are in any way dependent.
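Returning to the DMA case, drivers do not normally perform these flushes and
invalidations by hand: the streaming DMA mapping API does so where the
platform requires it.  A minimal sketch, assuming a hypothetical buffer,
length and device:

	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;
	/* ... point the device at "handle" and run the transfer ... */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);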
=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for
example, given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for
each instruction before moving on to the next one, leading to a definite
sequence of operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the
above assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it
     prove to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been
     fetched at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better
     use of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent
     locations, thus cutting down on transaction setup costs (memory and PCI
     devices may both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and while cache-coherency
     mechanisms may alleviate this - once the store has actually hit the
     cache - there's no guarantee that the coherency management will be
     propagated in order to other CPUs.

So what another CPU, say, might actually observe from the above piece of
code is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see
its _own_ accesses appear to be correctly ordered, without the need for a
memory barrier.  For instance with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed
that the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view
of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
are -not- optional in the above example, as there are architectures where a
given CPU might reorder successive loads to the same location.  On such
architectures, READ_ONCE() and WRITE_ONCE() do whatever is necessary to
prevent this; for example, on Itanium the volatile casts used by READ_ONCE()
and WRITE_ONCE() cause GCC to emit the special ld.acq and st.rel
instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence
before the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be assumed
that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be reduced
to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.
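Spelling out what READ_ONCE() and WRITE_ONCE() buy in the two reductions
above (a sketch; A, V, W, Y and Z are the same hypothetical variables):

	WRITE_ONCE(*A, V);	/* may no longer be discarded...	*/
	WRITE_ONCE(*A, W);	/* ...in favour of this store		*/

	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);	/* the load must actually be emitted	*/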
AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to
have two semantically-related cache lines updated at separate times.  This
is where the address-dependency barrier really becomes necessary as this
synchronises both caches with the memory coherence system, making it look
like pointer changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory model, although as of v4.15 the
Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly reduced
its impact on the memory model.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even
if the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running an UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc. macros are
available.  These have the same effect as smp_mb() etc when SMP is enabled,
but generate identical code for SMP and non-SMP systems.  For example,
virtual machine guests should use virt_mb() rather than smp_mb() when
synchronizing against a (possibly SMP) host.

These are equivalent to smp_mb() etc counterparts in all other respects; in
particular, they do not control MMIO effects: to control MMIO effects, use
mandatory barriers.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/core-api/circular-buffers.rst

for details.


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
	Chapter B2: The AArch64 Application Level Memory Model

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

Storage in the PowerPC (Stone and Fitzgerald)

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access