
TOMOYO Linux Cross Reference
Linux/Documentation/atomic_t.txt

Version: linux-6.11.5



On atomic types (atomic_t, atomic64_t and atomic_long_t).

The atomic type provides an interface to the architecture's means of atomic
RMW operations between CPUs (atomic operations on MMIO are not supported and
can lead to fatal traps on some platforms).

API
---

The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
brevity):

Non-RMW ops:

  atomic_read(), atomic_set()
  atomic_read_acquire(), atomic_set_release()


RMW atomic operations:

Arithmetic:

  atomic_{add,sub,inc,dec}()
  atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
  atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()


Bitwise:

  atomic_{and,or,xor,andnot}()
  atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()


Swap:

  atomic_xchg{,_relaxed,_acquire,_release}()
  atomic_cmpxchg{,_relaxed,_acquire,_release}()
  atomic_try_cmpxchg{,_relaxed,_acquire,_release}()


Reference count (but please see refcount_t):

  atomic_add_unless(), atomic_inc_not_zero()
  atomic_sub_and_test(), atomic_dec_and_test()
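
These underpin the classic get/put object-lifetime pattern. A minimal sketch,
where the my_obj type and helpers are hypothetical names (new code should
prefer refcount_t, which additionally saturates and warns on overflow and
underflow):

  #include <linux/atomic.h>
  #include <linux/slab.h>

  struct my_obj {
    atomic_t refs;      /* number of live references */
  };

  /* Take a reference, but only if the object is still alive. */
  static bool my_obj_get(struct my_obj *obj)
  {
    return atomic_inc_not_zero(&obj->refs);
  }

  /* Drop a reference; the last put frees the object. */
  static void my_obj_put(struct my_obj *obj)
  {
    if (atomic_dec_and_test(&obj->refs))
      kfree(obj);
  }

Note that atomic_dec_and_test() has a return value and is therefore fully
ordered (see ORDERING below), which is what makes freeing the object after
the last put safe.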


Misc:

  atomic_inc_and_test(), atomic_add_negative()
  atomic_dec_unless_positive(), atomic_inc_unless_negative()


Barriers:

  smp_mb__{before,after}_atomic()

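A small usage sketch spanning these families; the counter and function names
are made up for illustration:

  static atomic_t nr_events = ATOMIC_INIT(0);

  static void my_event_record(void)
  {
    atomic_inc(&nr_events);            /* plain RMW: no return value */
  }

  static int my_event_peek(void)
  {
    return atomic_read(&nr_events);    /* non-RMW: a regular load */
  }

  static int my_event_drain(void)
  {
    return atomic_xchg(&nr_events, 0); /* swap: reset and return old count */
  }
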
TYPES (signed vs unsigned)
-----

While atomic_t, atomic_long_t and atomic64_t use int, long and s64
respectively (for hysterical raisins), the kernel uses -fno-strict-overflow
(which implies -fwrapv) and defines signed overflow to behave like
2s-complement.

Therefore, an explicitly unsigned variant of the atomic ops is strictly
unnecessary and we can simply cast, there is no UB.

There was a bug in UBSAN prior to GCC-8 that would generate UB warnings for
signed types.

With this we also conform to the C/C++ _Atomic behaviour and things like
P1236R1.

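As a sketch of the 'simply cast' point; the helper names are hypothetical,
only the casts matter:

  /* Read an atomic_t as an unsigned value: well defined, no UB. */
  static inline unsigned int my_atomic_read_unsigned(const atomic_t *v)
  {
    return (unsigned int)atomic_read(v);
  }

  /*
   * Unsigned (wrapping) addition through the signed op: because the
   * kernel builds with -fno-strict-overflow, signed wrap-around is
   * 2s-complement and behaves exactly like the unsigned addition.
   */
  static inline void my_atomic_add_unsigned(unsigned int i, atomic_t *v)
  {
    atomic_add((int)i, v);
  }
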

SEMANTICS
---------

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively. Therefore, if you find yourself only using
the Non-RMW operations of atomic_t, you do not in fact need atomic operations
and are doing it wrong.
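
Concretely, a simplified sketch of those canonical implementations (the
my_ prefix marks these as illustrative; the kernel's real definitions also
carry instrumentation and architecture hooks):

  static inline int my_atomic_read(const atomic_t *v)
  {
    return READ_ONCE(v->counter);
  }

  static inline int my_atomic_read_acquire(const atomic_t *v)
  {
    return smp_load_acquire(&v->counter);
  }

  static inline void my_atomic_set(atomic_t *v, int i)
  {
    WRITE_ONCE(v->counter, i);
  }

  static inline void my_atomic_set_release(atomic_t *v, int i)
  {
    smp_store_release(&v->counter, i);
  }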

A note for the implementation of atomic_set{}() is that it must not break the
atomicity of the RMW ops. That is:

  C Atomic-RMW-ops-are-atomic-WRT-atomic_set

  {
    atomic_t v = ATOMIC_INIT(1);
  }

  P0(atomic_t *v)
  {
    (void)atomic_add_unless(v, 1, 0);
  }

  P1(atomic_t *v)
  {
    atomic_set(v, 0);
  }

  exists
  (v=2)

In this case we would expect the atomic_set() from CPU1 to either happen
before the atomic_add_unless(), in which case that latter one would no-op, or
_after_ in which case we'd overwrite its result. In no case is "2" a valid
outcome.

This is typically true on 'normal' platforms, where a regular competing STORE
will invalidate a LL/SC or fail a CMPXCHG.

The obvious case where this is not so is when we need to implement atomic ops
with a lock:

  CPU0                                          CPU1

  atomic_add_unless(v, 1, 0);
    lock();
    ret = READ_ONCE(v->counter); // == 1
                                                atomic_set(v, 0);
    if (ret != u)                                 WRITE_ONCE(v->counter, 0);
      WRITE_ONCE(v->counter, ret + 1);
    unlock();

the typical solution is to then implement atomic_set{}() with atomic_xchg().
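
A sketch of that solution for a lock-based implementation (illustrative, not
any particular architecture's code):

  /*
   * A plain store would bypass the lock that serializes the RMW ops;
   * routing the store through the (locked) xchg keeps it inside the
   * same critical section and preserves RMW atomicity.
   */
  static inline void my_atomic_set(atomic_t *v, int i)
  {
    (void)atomic_xchg(v, i);
  }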


RMW ops:

These come in various forms:

 - plain operations without return value: atomic_{}()

 - operations which return the modified value: atomic_{}_return()

   these are limited to the arithmetic operations because those are
   reversible. Bitops are irreversible and therefore the modified value
   is of dubious utility.

 - operations which return the original value: atomic_fetch_{}()

 - swap operations: xchg(), cmpxchg() and try_cmpxchg()

 - misc; the special purpose operations that are commonly used and would,
   given the interface, normally be implemented using (try_)cmpxchg loops but
   are time critical and can, (typically) on LL/SC architectures, be more
   efficiently implemented.

All these operations are SMP atomic; that is, the operations (for a single
atomic variable) can be fully ordered and no intermediate state is lost or
visible.
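
The difference between the _return() and fetch_() forms in a sketch (the
values in the comments assume no concurrent updates):

  atomic_t v = ATOMIC_INIT(1);
  int new, old;

  new = atomic_add_return(2, &v);  /* v == 3, new == 3: the modified value */
  old = atomic_fetch_add(2, &v);   /* v == 5, old == 3: the original value */

  atomic_or(0x10, &v);             /* bitops only come in plain and fetch_
                                      forms; the modified value would be of
                                      dubious utility */
  old = atomic_fetch_or(0x20, &v); /* old holds the pre-OR value */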


ORDERING  (go read memory-barriers.txt first)
--------

The rule of thumb:

 - non-RMW operations are unordered;

 - RMW operations that have no return value are unordered;

 - RMW operations that have a return value are fully ordered;

 - RMW operations that are conditional are unordered on FAILURE,
   otherwise the above rules apply.

Except of course when a successful operation has an explicit ordering like:

 {}_relaxed: unordered
 {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
 {}_release: the W of the RMW (or atomic_set)  is a  RELEASE

Where 'unordered' is against other memory locations. Address dependencies are
not defeated.  Conditional operations are still unordered on FAILURE.

Fully ordered primitives are ordered against everything prior and everything
subsequent. Therefore a fully ordered primitive is like having an smp_mb()
before and an smp_mb() after the primitive.
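
As a usage sketch of the explicit orderings, the classic message-passing
pattern (data, flag and the function names are hypothetical):

  int data;
  atomic_t flag = ATOMIC_INIT(0);

  void my_producer(void)
  {
    data = 42;
    atomic_set_release(&flag, 1);   /* the W is a RELEASE */
  }

  void my_consumer(void)
  {
    if (atomic_read_acquire(&flag)) /* the R is an ACQUIRE */
      BUG_ON(data != 42);           /* must observe the store to data */
  }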


The barriers:

  smp_mb__{before,after}_atomic()

only apply to the RMW atomic ops and can be used to augment/upgrade the
ordering inherent to the op. These barriers act almost like a full smp_mb():
smp_mb__before_atomic() orders all earlier accesses against the RMW op
itself and all accesses following it, and smp_mb__after_atomic() orders all
later accesses against the RMW op and all accesses preceding it. However,
accesses between the smp_mb__{before,after}_atomic() and the RMW op are not
ordered, so it is advisable to place the barrier right next to the RMW atomic
op whenever possible.

These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example our TSO architectures
provide fully ordered atomics and these barriers are no-ops.

NOTE: when the atomic RmW ops are fully ordered, they should also imply a
compiler barrier.

Thus:

  atomic_fetch_add();

is equivalent to:

  smp_mb__before_atomic();
  atomic_fetch_add_relaxed();
  smp_mb__after_atomic();

However the atomic_fetch_add() might be implemented more efficiently.

Further, while something like:

  smp_mb__before_atomic();
  atomic_dec(&X);

is a 'typical' RELEASE pattern, the barrier is strictly stronger than
a RELEASE because it orders preceding instructions against both the read
and write parts of the atomic_dec(), and against all following instructions
as well. Similarly, something like:

  atomic_inc(&X);
  smp_mb__after_atomic();

is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

  {
  }

  P0(int *x, atomic_t *y)
  {
    r0 = READ_ONCE(*x);
    smp_rmb();
    r1 = atomic_read(y);
  }

  P1(int *x, atomic_t *y)
  {
    atomic_inc(y);
    smp_mb__after_atomic();
    WRITE_ONCE(*x, 1);
  }

  exists
  (0:r0=1 /\ 0:r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
because it would not order the W part of the RMW against the following
WRITE_ONCE.  Thus:

  P0                    P1

                        t = LL.acq *y (0)
                        t++;
                        *x = 1;
  r0 = *x (1)
  RMB
  r1 = *y (0)
                        SC *y, t;

is allowed.


CMPXCHG vs TRY_CMPXCHG
----------------------

  int atomic_cmpxchg(atomic_t *ptr, int old, int new);
  bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new);

Both provide the same functionality, but try_cmpxchg() can lead to more
compact code. The functions relate like:

  bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new)
  {
    int ret, old = *oldp;
    ret = atomic_cmpxchg(ptr, old, new);
    if (ret != old)
      *oldp = ret;
    return ret == old;
  }

and:

  int atomic_cmpxchg(atomic_t *ptr, int old, int new)
  {
    (void)atomic_try_cmpxchg(ptr, &old, new);
    return old;
  }

Usage:

  old = atomic_read(&v);                        old = atomic_read(&v);
  for (;;) {                                    do {
    new = func(old);                              new = func(old);
    tmp = atomic_cmpxchg(&v, old, new);         } while (!atomic_try_cmpxchg(&v, &old, new));
    if (tmp == old)
      break;
    old = tmp;
  }

NB. try_cmpxchg() also generates better code on some platforms (notably x86)
where the function more closely matches the hardware instruction.

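For example, a conditional operation like atomic_inc_not_zero() naturally
takes the try_cmpxchg() form. A sketch (the kernel's real fallback may differ
in detail):

  static inline bool my_atomic_inc_not_zero(atomic_t *v)
  {
    int old = atomic_read(v);

    do {
      if (!old)       /* hit zero: do not resurrect */
        return false;
      /* on failure, atomic_try_cmpxchg() reloads 'old' for us */
    } while (!atomic_try_cmpxchg(v, &old, old + 1));

    return true;
  }
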

FORWARD PROGRESS
----------------

In general strong forward progress is expected of all unconditional atomic
operations -- those in the Arithmetic and Bitwise classes and xchg(). However
a fair amount of code also requires forward progress from the conditional
atomic operations.

Specifically 'simple' cmpxchg() loops are expected to not starve one another
indefinitely. However, this is not evident on LL/SC architectures, because
while an LL/SC architecture 'can/should/must' provide forward progress
guarantees between competing LL/SC sections, such a guarantee does not
transfer to cmpxchg() implemented using LL/SC. Consider:

  old = atomic_read(&v);
  do {
    new = func(old);
  } while (!atomic_try_cmpxchg(&v, &old, new));

which on LL/SC becomes something like:

  old = atomic_read(&v);
  do {
    new = func(old);
  } while (!({
    volatile asm ("1: LL  %[oldval], %[v]\n"
                  "   CMP %[oldval], %[old]\n"
                  "   BNE 2f\n"
                  "   SC  %[new], %[v]\n"
                  "   BNE 1b\n"
                  "2:\n"
                  : [oldval] "=&r" (oldval), [v] "m" (v)
                  : [old] "r" (old), [new] "r" (new)
                  : "memory");
    success = (oldval == old);
    if (!success)
      old = oldval;
    success; }));

However, even the forward branch from the failed compare can cause the LL/SC
to fail on some architectures, let alone whatever the compiler makes of the C
loop body. As a result there is no guarantee whatsoever the cacheline
containing @v will stay on the local CPU and progress is made.

Even native CAS architectures can fail to provide forward progress for their
primitive (See Sparc64 for an example).

Such implementations are strongly encouraged to add exponential backoff loops
to a failed CAS in order to ensure some progress. Affected architectures are
also strongly encouraged to inspect/audit the atomic fallbacks, refcount_t and
their locking primitives.
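
A sketch of such an exponential backoff around a failing CAS, purely
illustrative: my_cpu_relax_for() and DELAY_MAX are hypothetical, and real
implementations tune both per micro-architecture:

  old = atomic_read(&v);
  for (delay = 1; ; delay <<= 1) {
    new = func(old);
    if (atomic_try_cmpxchg(&v, &old, new))
      break;
    /*
     * Failed CAS: wait an exponentially growing number of cycles so
     * the cacheline can settle on one CPU long enough for its CAS to
     * succeed.
     */
    my_cpu_relax_for(min(delay, DELAY_MAX));
  }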
                                                      
