
TOMOYO Linux Cross Reference
Linux/Documentation/atomic_t.txt

Version: linux-6.11.5

On atomic types (atomic_t, atomic64_t and atomic_long_t).

The atomic type provides an interface to the architecture's means of atomic
RMW operations between CPUs (atomic operations on MMIO are not supported and
can lead to fatal traps on some platforms).

API
---

The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
brevity):

Non-RMW ops:

  atomic_read(), atomic_set()
  atomic_read_acquire(), atomic_set_release()


RMW atomic operations:

Arithmetic:

  atomic_{add,sub,inc,dec}()
  atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
  atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()


Bitwise:

  atomic_{and,or,xor,andnot}()
  atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()


Swap:

  atomic_xchg{,_relaxed,_acquire,_release}()
  atomic_cmpxchg{,_relaxed,_acquire,_release}()
  atomic_try_cmpxchg{,_relaxed,_acquire,_release}()


Reference count (but please see refcount_t):

  atomic_add_unless(), atomic_inc_not_zero()
  atomic_sub_and_test(), atomic_dec_and_test()


Misc:

  atomic_inc_and_test(), atomic_add_negative()
  atomic_dec_unless_positive(), atomic_inc_unless_negative()


Barriers:

  smp_mb__{before,after}_atomic()

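As a quick orientation, here is a minimal sketch (function and variable
names are invented for illustration) of how a few of these ops are commonly
used on a plain event counter:

  static atomic_t nr_events = ATOMIC_INIT(0);

  void event_arrived(void)
  {
    atomic_inc(&nr_events);         /* RMW, no return value */
  }

  int events_count(void)
  {
    return atomic_read(&nr_events); /* non-RMW load */
  }

  bool event_completed(void)
  {
    /* RMW with return value; true when the count hits zero */
    return atomic_dec_and_test(&nr_events);
  }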

TYPES (signed vs unsigned)
-----

While atomic_t, atomic_long_t and atomic64_t use int, long and s64
respectively (for hysterical raisins), the kernel uses -fno-strict-overflow
(which implies -fwrapv) and defines signed overflow to behave like
2s-complement.

Therefore, an explicitly unsigned variant of the atomic ops is strictly
unnecessary and we can simply cast; there is no UB.
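For example (an illustrative sketch, not from the original text), a sequence
number that is conceptually unsigned needs no unsigned ops; casting the
result of the signed op is sufficient and well defined:

  static atomic_t seq = ATOMIC_INIT(0);

  u32 next_seq(void)
  {
    /* the int increment wraps 2s-complement; the cast gives the unsigned view */
    return (u32)atomic_inc_return(&seq);
  }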

There was a bug in UBSAN prior to GCC-8 that would generate UB warnings for
signed types.

With this we also conform to the C/C++ _Atomic behaviour and things like
P1236R1.


SEMANTICS
---------

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively. Therefore, if you find yourself only using
the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
and are doing it wrong.
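Concretely, a sketch of that canonical mapping (the in-tree implementations
are equivalent to this modulo instrumentation and arch indirection):

  static __always_inline int atomic_read(const atomic_t *v)
  {
    return READ_ONCE(v->counter);
  }

  static __always_inline void atomic_set(atomic_t *v, int i)
  {
    WRITE_ONCE(v->counter, i);
  }

  static __always_inline void atomic_set_release(atomic_t *v, int i)
  {
    smp_store_release(&v->counter, i);
  }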

A note for the implementation of atomic_set{}() is that it must not break the
atomicity of the RMW ops. That is:

  C Atomic-RMW-ops-are-atomic-WRT-atomic_set

  {
    atomic_t v = ATOMIC_INIT(1);
  }

  P0(atomic_t *v)
  {
    (void)atomic_add_unless(v, 1, 0);
  }

  P1(atomic_t *v)
  {
    atomic_set(v, 0);
  }

  exists
  (v=2)

In this case we would expect the atomic_set() from CPU1 to either happen
before the atomic_add_unless(), in which case that latter one would no-op, or
_after_, in which case we'd overwrite its result. In no case is "2" a valid
outcome.

This is typically true on 'normal' platforms, where a regular competing STORE
will invalidate an LL/SC or fail a CMPXCHG.

The obvious case where this is not so is when we need to implement atomic ops
with a lock:

  CPU0                                          CPU1

  atomic_add_unless(v, 1, 0);
    lock();
    ret = READ_ONCE(v->counter); // == 1
                                                atomic_set(v, 0);
    if (ret != u)                                 WRITE_ONCE(v->counter, 0);
      WRITE_ONCE(v->counter, ret + 1);
    unlock();

the typical solution is to then implement atomic_set{}() with atomic_xchg().
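That is, a sketch of this solution:

  static inline void atomic_set(atomic_t *v, int i)
  {
    /*
     * atomic_xchg() takes the same lock as the other RMW ops;
     * the old value is simply discarded.
     */
    (void)atomic_xchg(v, i);
  }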


RMW ops:

These come in various forms:

 - plain operations without return value: atomic_{}()

 - operations which return the modified value: atomic_{}_return()

   these are limited to the arithmetic operations because those are
   reversible. Bitops are irreversible and therefore the modified value
   is of dubious utility.

 - operations which return the original value: atomic_fetch_{}()

 - swap operations: xchg(), cmpxchg() and try_cmpxchg()

 - misc; the special purpose operations that are commonly used and would,
   given the interface, normally be implemented using (try_)cmpxchg loops but
   are time critical and can, (typically) on LL/SC architectures, be more
   efficiently implemented.

All these operations are SMP atomic; that is, the operations (for a single
atomic variable) can be fully ordered and no intermediate state is lost or
visible.
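A short sketch contrasting what the different forms return (the values in
the comments assume no concurrent updates):

  atomic_t v = ATOMIC_INIT(4);
  int old, new;

  atomic_add(2, &v);              /* no return value; v == 6 */
  new = atomic_add_return(2, &v); /* modified value: new == 8 */
  old = atomic_fetch_add(2, &v);  /* original value: old == 8, v == 10 */
  old = atomic_xchg(&v, 0);       /* swap: old == 10, v == 0 */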


ORDERING  (go read memory-barriers.txt first)
--------

The rule of thumb:

 - non-RMW operations are unordered;

 - RMW operations that have no return value are unordered;

 - RMW operations that have a return value are fully ordered;

 - RMW operations that are conditional are unordered on FAILURE,
   otherwise the above rules apply.

Except of course when a successful operation has an explicit ordering like:

 {}_relaxed: unordered
 {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
 {}_release: the W of the RMW (or atomic_set)  is a  RELEASE

Where 'unordered' is against other memory locations. Address dependencies are
not defeated.  Conditional operations are still unordered on FAILURE.

Fully ordered primitives are ordered against everything prior and everything
subsequent. Therefore a fully ordered primitive is like having an smp_mb()
before and an smp_mb() after the primitive.
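For example, the _acquire/_release variants are what make a simple publish
pattern work; a sketch (payload, published and use() are invented for
illustration):

  int payload;
  atomic_t published = ATOMIC_INIT(0);

  /* producer */
  payload = 42;
  atomic_set_release(&published, 1);   /* the W is a RELEASE: the payload
                                          store cannot move past it */

  /* consumer */
  if (atomic_read_acquire(&published)) /* the R is an ACQUIRE: the payload
                                          load cannot move before it */
    use(payload);                      /* observes 42 */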


The barriers:

  smp_mb__{before,after}_atomic()

only apply to the RMW atomic ops and can be used to augment/upgrade the
ordering inherent to the op. These barriers act almost like a full smp_mb():
smp_mb__before_atomic() orders all earlier accesses against the RMW op
itself and all accesses following it, and smp_mb__after_atomic() orders all
later accesses against the RMW op and all accesses preceding it. However,
accesses between the smp_mb__{before,after}_atomic() and the RMW op are not
ordered, so it is advisable to place the barrier right next to the RMW atomic
op whenever possible.
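For example (a sketch; a, b and v are invented), note what such a barrier
does and does not order:

  WRITE_ONCE(a, 1);
  smp_mb__before_atomic();
  WRITE_ONCE(b, 1);   /* between the barrier and the RMW op: NOT ordered
                         by the barrier */
  atomic_inc(&v);     /* ordered against the store to a and against all
                         later accesses */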

These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example, our TSO architectures
provide fully ordered atomics and these barriers are no-ops.

NOTE: when the atomic RmW ops are fully ordered, they should also imply a
compiler barrier.

Thus:

  atomic_fetch_add();

is equivalent to:

  smp_mb__before_atomic();
  atomic_fetch_add_relaxed();
  smp_mb__after_atomic();

However, the atomic_fetch_add() might be implemented more efficiently.

Further, while something like:

  smp_mb__before_atomic();
  atomic_dec(&X);

is a 'typical' RELEASE pattern, the barrier is strictly stronger than
a RELEASE because it orders preceding instructions against both the read
and write parts of the atomic_dec(), and against all following instructions
as well. Similarly, something like:

  atomic_inc(&X);
  smp_mb__after_atomic();

is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

  {
  }

  P0(int *x, atomic_t *y)
  {
    r0 = READ_ONCE(*x);
    smp_rmb();
    r1 = atomic_read(y);
  }

  P1(int *x, atomic_t *y)
  {
    atomic_inc(y);
    smp_mb__after_atomic();
    WRITE_ONCE(*x, 1);
  }

  exists
  (0:r0=1 /\ 0:r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
because it would not order the W part of the RMW against the following
WRITE_ONCE.  Thus:

  P0                    P1

                        t = LL.acq *y (0)
                        t++;
                        *x = 1;
  r0 = *x (1)
  RMB
  r1 = *y (0)
                        SC *y, t;

is allowed.


CMPXCHG vs TRY_CMPXCHG
----------------------

  int atomic_cmpxchg(atomic_t *ptr, int old, int new);
  bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new);

Both provide the same functionality, but try_cmpxchg() can lead to more
compact code. The functions relate like:

  bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new)
  {
    int ret, old = *oldp;
    ret = atomic_cmpxchg(ptr, old, new);
    if (ret != old)
      *oldp = ret;
    return ret == old;
  }

and:

  int atomic_cmpxchg(atomic_t *ptr, int old, int new)
  {
    (void)atomic_try_cmpxchg(ptr, &old, new);
    return old;
  }

Usage:

  old = atomic_read(&v);
  for (;;) {
    new = func(old);
    tmp = atomic_cmpxchg(&v, old, new);
    if (tmp == old)
      break;
    old = tmp;
  }
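For comparison, the same update written with atomic_try_cmpxchg() -- which,
per the relation above, updates 'old' on failure -- is more compact:

  old = atomic_read(&v);
  do {
    new = func(old);
  } while (!atomic_try_cmpxchg(&v, &old, new));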

NB. try_cmpxchg() also generates better code on some platforms (notably x86)
where the function more closely matches the hardware instruction.


FORWARD PROGRESS
----------------

In general strong forward progress is expected of all unconditional atomic
operations -- those in the Arithmetic and Bitwise classes and xchg(). However,
a fair amount of code also requires forward progress from the conditional
atomic operations.

Specifically 'simple' cmpxchg() loops are expected to not starve one another
indefinitely. However, this is not evident on LL/SC architectures, because
while an LL/SC architecture 'can/should/must' provide forward progress
guarantees between competing LL/SC sections, such a guarantee does not
transfer to cmpxchg() implemented using LL/SC. Consider:

  old = atomic_read(&v);
  do {
    new = func(old);
  } while (!atomic_try_cmpxchg(&v, &old, new));

which on LL/SC becomes something like:

  old = atomic_read(&v);
  do {
    new = func(old);
  } while (!({
    asm volatile ("1: LL  %[oldval], %[v]\n"
                  "   CMP %[oldval], %[old]\n"
                  "   BNE 2f\n"
                  "   SC  %[new], %[v]\n"
                  "   BNE 1b\n"
                  "2:\n"
                  : [oldval] "=&r" (oldval), [v] "m" (v)
                  : [old] "r" (old), [new] "r" (new)
                  : "memory");
    success = (oldval == old);
    if (!success)
      old = oldval;
    success; }));

However, even the forward branch from the failed compare can cause the LL/SC
to fail on some architectures, let alone whatever the compiler makes of the C
loop body. As a result there is no guarantee whatsoever that the cacheline
containing @v will stay on the local CPU and progress is made.

Even native CAS architectures can fail to provide forward progress for their
primitive (see Sparc64 for an example).

Such implementations are strongly encouraged to add exponential backoff loops
to a failed CAS in order to ensure some progress. Such architectures are
also strongly encouraged to inspect/audit the atomic fallbacks, refcount_t and
their locking primitives.
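The shape of such a backoff, sketched at the C level for readability (in
practice this lives inside the architecture's CAS/cmpxchg primitive; cas(),
read(), func() and MAX_BACKOFF are invented stand-ins, not kernel API):

  int delay = 1;

  while (!cas(&v, old, new)) {    /* stand-in for the arch CAS primitive */
    for (int i = 0; i < delay; i++)
      cpu_relax();                /* give the cacheline owner time to progress */
    if (delay < MAX_BACKOFF)
      delay <<= 1;                /* exponential backoff */
    old = read(&v);               /* re-read and recompute before retrying */
    new = func(old);
  }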
                                                      
