Linux/Documentation/atomic_t.txt

On atomic types (atomic_t, atomic64_t and atomic_long_t).

The atomic type provides an interface to the architecture's means of atomic
RMW operations between CPUs (atomic operations on MMIO are not supported and
can lead to fatal traps on some platforms).

API
---

The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
brevity):

Non-RMW ops:

  atomic_read(), atomic_set()
  atomic_read_acquire(), atomic_set_release()


RMW atomic operations:

Arithmetic:

  atomic_{add,sub,inc,dec}()
  atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
  atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()


Bitwise:

  atomic_{and,or,xor,andnot}()
  atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()


Swap:

  atomic_xchg{,_relaxed,_acquire,_release}()
  atomic_cmpxchg{,_relaxed,_acquire,_release}()
  atomic_try_cmpxchg{,_relaxed,_acquire,_release}()


Reference count (but please see refcount_t):

  atomic_add_unless(), atomic_inc_not_zero()
  atomic_sub_and_test(), atomic_dec_and_test()


Misc:

  atomic_inc_and_test(), atomic_add_negative()
  atomic_dec_unless_positive(), atomic_inc_unless_negative()


Barriers:

  smp_mb__{before,after}_atomic()

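As a minimal sketch of how these classes combine (the helper names here are
illustrative, assuming a kernel context with <linux/atomic.h>):

  static atomic_t nr_events = ATOMIC_INIT(0);

  static void event_record(void)
  {
    atomic_inc(&nr_events);            /* RMW without return value */
  }

  static int event_count(void)
  {
    return atomic_read(&nr_events);    /* non-RMW read */
  }

  static bool event_try_claim(void)
  {
    /* conditional RMW; fails once the counter has reached zero */
    return atomic_add_unless(&nr_events, -1, 0);
  }
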
TYPES (signed vs unsigned)
-----

While atomic_t, atomic_long_t and atomic64_t use int, long and s64
respectively (for hysterical raisins), the kernel uses -fno-strict-overflow
(which implies -fwrapv) and defines signed overflow to behave like
2s-complement.

Therefore, an explicitly unsigned variant of the atomic ops is strictly
unnecessary and we can simply cast; there is no UB.

There was a bug in UBSAN prior to GCC-8 that would generate UB warnings for
signed types.

With this we also conform to the C/C++ _Atomic behaviour and things like
P1236R1.

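A sketch of that casting argument (the helper and variable are illustrative,
not kernel APIs):

  static atomic_t bytes = ATOMIC_INIT(0);

  static unsigned int bytes_added(unsigned int n)
  {
    /* signed overflow is defined to wrap (2s-complement), so the
     * casts to and from unsigned lose nothing */
    return (unsigned int)atomic_add_return((int)n, &bytes);
  }
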
SEMANTICS
---------

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively. Therefore, if you find yourself only using
the non-RMW operations of atomic_t, you do not in fact need atomic_t at all
and are doing it wrong.

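A sketch of those canonical definitions (the in-tree fallbacks under
include/linux/atomic/ are equivalent in spirit):

  #define atomic_read(v)            READ_ONCE((v)->counter)
  #define atomic_set(v, i)          WRITE_ONCE((v)->counter, (i))

  #define atomic_read_acquire(v)    smp_load_acquire(&(v)->counter)
  #define atomic_set_release(v, i)  smp_store_release(&(v)->counter, (i))
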
A note for the implementation of atomic_set{}() is that it must not break the
atomicity of the RMW ops. That is:

  C Atomic-RMW-ops-are-atomic-WRT-atomic_set

  {
    atomic_t v = ATOMIC_INIT(1);
  }

  P0(atomic_t *v)
  {
    (void)atomic_add_unless(v, 1, 0);
  }

  P1(atomic_t *v)
  {
    atomic_set(v, 0);
  }

  exists
  (v=2)

In this case we would expect the atomic_set() from CPU1 to either happen
before the atomic_add_unless(), in which case that latter one would no-op, or
_after_ in which case we'd overwrite its result. In no case is "2" a valid
outcome.

This is typically true on 'normal' platforms, where a regular competing STORE
will invalidate a LL/SC or fail a CMPXCHG.

The obvious case where this is not so is when we need to implement atomic ops
with a lock:

  CPU0                                          CPU1

  atomic_add_unless(v, 1, 0);
    lock();
    ret = READ_ONCE(v->counter); // == 1
                                                atomic_set(v, 0);
    if (ret != u)                                 WRITE_ONCE(v->counter, 0);
      WRITE_ONCE(v->counter, ret + 1);
    unlock();

The typical solution is then to implement atomic_set{}() with atomic_xchg().

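A sketch of that solution (illustrative; not any particular architecture's
implementation):

  static inline void atomic_set(atomic_t *v, int i)
  {
    /* atomic_xchg() takes the same lock as the other RMW ops, so
     * this store cannot land in the middle of a locked RMW sequence */
    (void)atomic_xchg(v, i);
  }
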
RMW ops:

These come in various forms:

 - plain operations without return value: atomic_{}()

 - operations which return the modified value: atomic_{}_return()

   these are limited to the arithmetic operations because those are
   reversible. Bitops are irreversible and therefore the modified value
   is of dubious utility.

 - operations which return the original value: atomic_fetch_{}()

 - swap operations: xchg(), cmpxchg() and try_cmpxchg()

 - misc; the special purpose operations that are commonly used and would,
   given the interface, normally be implemented using (try_)cmpxchg loops but
   are time critical and can, (typically) on LL/SC architectures, be more
   efficiently implemented.

All these operations are SMP atomic; that is, the operations (for a single
atomic variable) can be fully ordered and no intermediate state is lost or
visible.

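To illustrate the _return vs fetch_ distinction (values assume no concurrent
updates):

  atomic_t v = ATOMIC_INIT(4);

  atomic_add_return(2, &v);  /* returns 6, the modified value */
  atomic_fetch_add(2, &v);   /* returns 6, the original value; v is now 8 */
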
ORDERING  (go read memory-barriers.txt first)
--------

The rule of thumb:

 - non-RMW operations are unordered;

 - RMW operations that have no return value are unordered;

 - RMW operations that have a return value are fully ordered;

 - RMW operations that are conditional are unordered on FAILURE,
   otherwise the above rules apply.

Except of course when a successful operation has an explicit ordering like:

 {}_relaxed: unordered
 {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
 {}_release: the W of the RMW (or atomic_set)  is a  RELEASE

Where 'unordered' is against other memory locations. Address dependencies are
not defeated.  Conditional operations are still unordered.

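For example, the release/acquire variants of the non-RMW ops suffice to
publish data without full barriers (a sketch; 'data' and 'flag' are
illustrative):

  CPU0                                          CPU1

  WRITE_ONCE(data, 1);
  atomic_set_release(&flag, 1);
                                                if (atomic_read_acquire(&flag))
                                                  r = READ_ONCE(data); /* r == 1 */
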
Fully ordered primitives are ordered against everything prior and everything
subsequent. Therefore a fully ordered primitive is like having an smp_mb()
before and an smp_mb() after the primitive.

The barriers:

  smp_mb__{before,after}_atomic()

only apply to the RMW atomic ops and can be used to augment/upgrade the
ordering inherent to the op. These barriers act almost like a full smp_mb():
smp_mb__before_atomic() orders all earlier accesses against the RMW op
itself and all accesses following it, and smp_mb__after_atomic() orders all
later accesses against the RMW op and all accesses preceding it. However,
accesses between the smp_mb__{before,after}_atomic() and the RMW op are not
ordered, so it is advisable to place the barrier right next to the RMW atomic
op whenever possible.

These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example, our TSO architectures
provide fully ordered atomics and these barriers are no-ops.

NOTE: when the atomic RMW ops are fully ordered, they should also imply a
compiler barrier.

Thus:

  atomic_fetch_add();

is equivalent to:

  smp_mb__before_atomic();
  atomic_fetch_add_relaxed();
  smp_mb__after_atomic();

However the atomic_fetch_add() might be implemented more efficiently.

Further, while something like:

  smp_mb__before_atomic();
  atomic_dec(&X);

is a 'typical' RELEASE pattern, the barrier is strictly stronger than
a RELEASE because it orders preceding instructions against both the read
and write parts of the atomic_dec(), and against all following instructions
as well. Similarly, something like:

  atomic_inc(&X);
  smp_mb__after_atomic();

is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

  {
  }

  P0(int *x, atomic_t *y)
  {
    r0 = READ_ONCE(*x);
    smp_rmb();
    r1 = atomic_read(y);
  }

  P1(int *x, atomic_t *y)
  {
    atomic_inc(y);
    smp_mb__after_atomic();
    WRITE_ONCE(*x, 1);
  }

  exists
  (0:r0=1 /\ 0:r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
because it would not order the W part of the RMW against the following
WRITE_ONCE.  Thus:

  P0                    P1

                        t = LL.acq *y (0)
                        t++;
                        *x = 1;
  r0 = *x (1)
  RMB
  r1 = *y (0)
                        SC *y, t;

is allowed.

CMPXCHG vs TRY_CMPXCHG
----------------------

  int atomic_cmpxchg(atomic_t *ptr, int old, int new);
  bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new);

Both provide the same functionality, but try_cmpxchg() can lead to more
compact code. The functions relate like:

  bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new)
  {
    int ret, old = *oldp;
    ret = atomic_cmpxchg(ptr, old, new);
    if (ret != old)
      *oldp = ret;
    return ret == old;
  }

and:

  int atomic_cmpxchg(atomic_t *ptr, int old, int new)
  {
    (void)atomic_try_cmpxchg(ptr, &old, new);
    return old;
  }

Usage:

  old = atomic_read(&v);                        old = atomic_read(&v);
  for (;;) {                                    do {
    new = func(old);                              new = func(old);
    tmp = atomic_cmpxchg(&v, old, new);         } while (!atomic_try_cmpxchg(&v, &old, new));
    if (tmp == old)
      break;
    old = tmp;
  }

NB. try_cmpxchg() also generates better code on some platforms (notably x86)
where the function more closely matches the hardware instruction.

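A concrete instance of the try_cmpxchg() loop (a hypothetical 'atomic max'
helper, not an in-tree API):

  static void atomic_max(atomic_t *v, int n)
  {
    int old = atomic_read(v);

    do {
      if (old >= n)
        return;       /* already large enough; nothing to do */
      /* on failure, atomic_try_cmpxchg() updates 'old' for us */
    } while (!atomic_try_cmpxchg(v, &old, n));
  }
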
FORWARD PROGRESS
----------------

In general strong forward progress is expected of all unconditional atomic
operations -- those in the Arithmetic and Bitwise classes and xchg(). However
a fair amount of code also requires forward progress from the conditional
atomic operations.

Specifically 'simple' cmpxchg() loops are expected to not starve one another
indefinitely. However, this is not evident on LL/SC architectures, because
while an LL/SC architecture 'can/should/must' provide forward progress
guarantees between competing LL/SC sections, such a guarantee does not
transfer to cmpxchg() implemented using LL/SC. Consider:

  old = atomic_read(&v);
  do {
    new = func(old);
  } while (!atomic_try_cmpxchg(&v, &old, new));

which on LL/SC becomes something like:

  old = atomic_read(&v);
  do {
    new = func(old);
  } while (!({
    asm volatile ("1: LL  %[oldval], %[v]\n"
                  "   CMP %[oldval], %[old]\n"
                  "   BNE 2f\n"
                  "   SC  %[new], %[v]\n"
                  "   BNE 1b\n"
                  "2:\n"
                  : [oldval] "=&r" (oldval), [v] "m" (v)
                  : [old] "r" (old), [new] "r" (new)
                  : "memory");
    success = (oldval == old);
    if (!success)
      old = oldval;
    success; }));

However, even the forward branch from the failed compare can cause the LL/SC
to fail on some architectures, let alone whatever the compiler makes of the C
loop body. As a result there is no guarantee whatsoever that the cacheline
containing @v will stay on the local CPU and progress is made.

Even native CAS architectures can fail to provide forward progress for their
primitive (see Sparc64 for an example).

Such implementations are strongly encouraged to add exponential backoff loops
to a failed CAS in order to ensure some progress. Affected architectures are
also strongly encouraged to inspect/audit the atomic fallbacks, refcount_t and
their locking primitives.
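
A sketch of such a backoff loop (loosely modelled on the Sparc64 approach;
BACKOFF_LIMIT and the spin counts are illustrative):

  unsigned int backoff = 1;
  int old = atomic_read(&v);

  while (!atomic_try_cmpxchg(&v, &old, func(old))) {
    for (unsigned int i = 0; i < backoff; i++)
      cpu_relax();              /* spin away from the contended line */
    if (backoff < BACKOFF_LIMIT)
      backoff <<= 1;            /* exponential backoff */
  }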
                                                      
