  1 ======================================================
  2 A Tour Through TREE_RCU's Grace-Period Memory Ordering
  3 ======================================================
  4 
  5 August 8, 2017
  6 
  7 This article was contributed by Paul E. McKenney
  8 
  9 Introduction
 10 ============
 11 
 12 This document gives a rough visual overview of how Tree RCU's
 13 grace-period memory ordering guarantee is provided.
 14 
 15 What Is Tree RCU's Grace Period Memory Ordering Guarantee?
 16 ==========================================================
 17 
 18 RCU grace periods provide extremely strong memory-ordering guarantees
 19 for non-idle non-offline code.
 20 Any code that happens after the end of a given RCU grace period is guaranteed
 21 to see the effects of all accesses prior to the beginning of that grace
 22 period that are within RCU read-side critical sections.
 23 Similarly, any code that happens before the beginning of a given RCU grace
 24 period is guaranteed not to see the effects of any accesses following the end
 25 of that grace period that are within RCU read-side critical sections.
 26 
 27 Note well that RCU-sched read-side critical sections include any region
 28 of code for which preemption is disabled.
 29 Given that each individual machine instruction can be thought of as
 30 an extremely small region of preemption-disabled code, one can think of
 31 ``synchronize_rcu()`` as ``smp_mb()`` on steroids.
 32 
 33 RCU updaters use this guarantee by splitting their updates into
 34 two phases, one of which is executed before the grace period and
 35 the other of which is executed after the grace period.
 36 In the most common use case, phase one removes an element from
 37 a linked RCU-protected data structure, and phase two frees that element.
 38 For this to work, any readers that have witnessed state prior to the
 39 phase-one update (in the common case, removal) must not witness state
 40 following the phase-two update (in the common case, freeing).
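
For example, a minimal sketch of this two-phase pattern, using a
hypothetical ``struct foo`` protected by a hypothetical ``foo_lock``,
might look as follows::

    static DEFINE_SPINLOCK(foo_lock);  /* hypothetical update-side lock */

    struct foo {
      struct list_head list;
      int data;
    };

    /* Phase one: unlink the element so that new readers cannot find it. */
    void remove_and_free_foo(struct foo *fp)
    {
      spin_lock(&foo_lock);
      list_del_rcu(&fp->list);
      spin_unlock(&foo_lock);

      /* Wait for all pre-existing readers to complete. */
      synchronize_rcu();

      /* Phase two: no reader can still hold a reference, so free it. */
      kfree(fp);
    }

Readers traversing the list under ``rcu_read_lock()`` either see the
element before the ``list_del_rcu()`` or miss it entirely, and
``synchronize_rcu()`` guarantees that any reader still referencing the
element has completed before the ``kfree()``.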
 41 
 42 The RCU implementation provides this guarantee using a network
 43 of lock-based critical sections, memory barriers, and per-CPU
 44 processing, as is described in the following sections.
 45 
 46 Tree RCU Grace Period Memory Ordering Building Blocks
 47 =====================================================
 48 
 49 The workhorse for RCU's grace-period memory ordering is the
 50 critical section for the ``rcu_node`` structure's
 51 ``->lock``. These critical sections use helper functions for lock
 52 acquisition, including ``raw_spin_lock_rcu_node()``,
 53 ``raw_spin_lock_irq_rcu_node()``, and ``raw_spin_lock_irqsave_rcu_node()``.
 54 Their lock-release counterparts are ``raw_spin_unlock_rcu_node()``,
 55 ``raw_spin_unlock_irq_rcu_node()``, and
 56 ``raw_spin_unlock_irqrestore_rcu_node()``, respectively.
 57 For completeness, a ``raw_spin_trylock_rcu_node()`` is also provided.
 58 The key point is that the lock-acquisition functions, including
 59 ``raw_spin_trylock_rcu_node()``, all invoke ``smp_mb__after_unlock_lock()``
 60 immediately after successful acquisition of the lock.
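
For reference, the acquisition helper is roughly the following macro;
this is a simplified sketch rather than the exact kernel definition::

    #define raw_spin_lock_rcu_node(p)                        \
    do {                                                     \
            raw_spin_lock(&ACCESS_PRIVATE(p, lock));         \
            smp_mb__after_unlock_lock();                     \
    } while (0)

On most architectures ``smp_mb__after_unlock_lock()`` is a no-op because
an unlock followed by a lock already provides full ordering there; on
architectures such as PowerPC it expands to a full memory barrier.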
 61 
 62 Therefore, for any given ``rcu_node`` structure, any access
 63 happening before one of the above lock-release functions will be seen
 64 by all CPUs as happening before any access happening after a later
 65 one of the above lock-acquisition functions.
 66 Furthermore, any access happening before one of the
 67 above lock-release functions on any given CPU will be seen by all
 68 CPUs as happening before any access happening after a later one
 69 of the above lock-acquisition functions executing on that same CPU,
 70 even if the lock-release and lock-acquisition functions are operating
 71 on different ``rcu_node`` structures.
 72 Tree RCU uses these two ordering guarantees to form an ordering
 73 network among all CPUs that were in any way involved in the grace
 74 period, including any CPUs that came online or went offline during
 75 the grace period in question.
 76 
 77 The following litmus test exhibits the ordering effects of these
 78 lock-acquisition and lock-release functions::
 79 
 80     1 int x, y, z;
 81     2
 82     3 void task0(void)
 83     4 {
 84     5   raw_spin_lock_rcu_node(rnp);
 85     6   WRITE_ONCE(x, 1);
 86     7   r1 = READ_ONCE(y);
 87     8   raw_spin_unlock_rcu_node(rnp);
 88     9 }
 89    10
 90    11 void task1(void)
 91    12 {
 92    13   raw_spin_lock_rcu_node(rnp);
 93    14   WRITE_ONCE(y, 1);
 94    15   r2 = READ_ONCE(z);
 95    16   raw_spin_unlock_rcu_node(rnp);
 96    17 }
 97    18
 98    19 void task2(void)
 99    20 {
100    21   WRITE_ONCE(z, 1);
101    22   smp_mb();
102    23   r3 = READ_ONCE(x);
103    24 }
104    25
105    26 WARN_ON(r1 == 0 && r2 == 0 && r3 == 0);
106 
107 The ``WARN_ON()`` is evaluated at "the end of time",
108 after all changes have propagated throughout the system.
109 Without the ``smp_mb__after_unlock_lock()`` provided by the
110 acquisition functions, this ``WARN_ON()`` could trigger, for example
111 on PowerPC.
112 The ``smp_mb__after_unlock_lock()`` invocations prevent this
113 ``WARN_ON()`` from triggering.
114 
115 +-----------------------------------------------------------------------+
116 | **Quick Quiz**:                                                       |
117 +-----------------------------------------------------------------------+
118 | But the chain of rcu_node-structure lock acquisitions guarantees      |
119 | that new readers will see all of the updater's pre-grace-period       |
120 | accesses and also guarantees that the updater's post-grace-period     |
121 | accesses will see all of the old reader's accesses.  So why do we     |
122 | need all of those calls to smp_mb__after_unlock_lock()?               |
123 +-----------------------------------------------------------------------+
124 | **Answer**:                                                           |
125 +-----------------------------------------------------------------------+
126 | Because we must provide ordering for RCU's polling grace-period       |
127 | primitives, for example, get_state_synchronize_rcu() and              |
128 | poll_state_synchronize_rcu().  Consider this code::                   |
129 |                                                                       |
130 |  CPU 0                                     CPU 1                      |
131 |  ----                                      ----                       |
132 |  WRITE_ONCE(X, 1)                          WRITE_ONCE(Y, 1)           |
133 |  g = get_state_synchronize_rcu()           smp_mb()                   |
134 |  while (!poll_state_synchronize_rcu(g))    r1 = READ_ONCE(X)          |
135 |          continue;                                                    |
136 |  r0 = READ_ONCE(Y)                                                    |
137 |                                                                       |
138 | RCU guarantees that the outcome r0 == 0 && r1 == 0 will not           |
139 | happen, even if CPU 1 is in an RCU extended quiescent state           |
140 | (idle or offline) and thus won't interact directly with the RCU       |
141 | core processing at all.                                               |
142 +-----------------------------------------------------------------------+
143 
144 This approach must be extended to include idle CPUs, which need
145 RCU's grace-period memory ordering guarantee to extend to any
146 RCU read-side critical sections preceding and following the current
147 idle sojourn.
148 This case is handled by calls to the strongly ordered
149 ``atomic_add_return()`` read-modify-write atomic operation that
150 is invoked within ``ct_kernel_exit_state()`` at idle-entry
151 time and within ``ct_kernel_enter_state()`` at idle-exit time.
152 The grace-period kthread first invokes ``ct_rcu_watching_cpu_acquire()``
153 (preceded by a full memory barrier) and then ``rcu_watching_snap_stopped_since()``
154 (both of which rely on acquire semantics) to detect idle CPUs.
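
The idle-entry side of this ordering can be sketched as follows; the
per-CPU variable and function names below are illustrative stand-ins
rather than the exact context-tracking identifiers::

    /* Hypothetical per-CPU counter standing in for the context-tracking state. */
    static DEFINE_PER_CPU(atomic_t, rcu_watching_ctr);

    /* Called at both idle entry and idle exit. */
    static void rcu_watching_change(int offset)
    {
      /*
       * atomic_add_return() is fully ordered, so all accesses preceding
       * this call are seen by other CPUs before the counter changes, and
       * all accesses following it are seen after the counter changes.
       */
      atomic_add_return(offset, this_cpu_ptr(&rcu_watching_ctr));
    }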
155 
156 +-----------------------------------------------------------------------+
157 | **Quick Quiz**:                                                       |
158 +-----------------------------------------------------------------------+
159 | But what about CPUs that remain offline for the entire grace period?  |
160 +-----------------------------------------------------------------------+
161 | **Answer**:                                                           |
162 +-----------------------------------------------------------------------+
163 | Such CPUs will be offline at the beginning of the grace period, so    |
164 | the grace period won't expect quiescent states from them. Races       |
165 | between grace-period start and CPU-hotplug operations are mediated    |
166 | by the CPU's leaf ``rcu_node`` structure's ``->lock`` as described    |
167 | above.                                                                |
168 +-----------------------------------------------------------------------+
169 
170 The approach must be extended to handle one final case, that of waking a
171 task blocked in ``synchronize_rcu()``. This task might be affined to
172 a CPU that is not yet aware that the grace period has ended, and thus
173 might not yet be subject to the grace period's memory ordering.
174 Therefore, there is an ``smp_mb()`` after the return from
175 ``wait_for_completion()`` in the ``synchronize_rcu()`` code path.
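
A simplified sketch of that code path is shown below; ``struct
rcu_synchronize`` and ``wakeme_after_rcu()`` are the existing helpers
that complete the completion from within an RCU callback, and most
other details are omitted::

    /* Sketch only: not the exact synchronize_rcu() implementation. */
    void synchronize_rcu_sketch(void)
    {
      struct rcu_synchronize rs;

      init_completion(&rs.completion);
      call_rcu(&rs.head, wakeme_after_rcu);  /* completes rs.completion */
      wait_for_completion(&rs.completion);
      smp_mb();  /* Extend the grace period's ordering to the awakened task. */
    }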
176 
177 +-----------------------------------------------------------------------+
178 | **Quick Quiz**:                                                       |
179 +-----------------------------------------------------------------------+
180 | What? Where??? I don't see any ``smp_mb()`` after the return from     |
181 | ``wait_for_completion()``!!!                                          |
182 +-----------------------------------------------------------------------+
183 | **Answer**:                                                           |
184 +-----------------------------------------------------------------------+
185 | That would be because I spotted the need for that ``smp_mb()`` during |
186 | the creation of this documentation, and it is therefore unlikely to   |
187 | hit mainline before v4.14. Kudos to Lance Roy, Will Deacon, Peter     |
188 | Zijlstra, and Jonathan Cameron for asking questions that sensitized   |
189 | me to the rather elaborate sequence of events that demonstrate the    |
190 | need for this memory barrier.                                         |
191 +-----------------------------------------------------------------------+
192 
193 Tree RCU's grace-period memory-ordering guarantees rely most heavily on
194 the ``rcu_node`` structure's ``->lock`` field, so much so that it is
195 necessary to abbreviate this pattern in the diagrams in the next
196 section. For example, consider the ``rcu_prepare_for_idle()`` function
197 shown below, which is one of several functions that enforce ordering of
198 newly arrived RCU callbacks against future grace periods:
199 
200 ::
201 
202     1 static void rcu_prepare_for_idle(void)
203     2 {
204     3   bool needwake;
205     4   struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
206     5   struct rcu_node *rnp;
207     6   int tne;
208     7
209     8   lockdep_assert_irqs_disabled();
210     9   if (rcu_rdp_is_offloaded(rdp))
211    10     return;
212    11
213    12   /* Handle nohz enablement switches conservatively. */
214    13   tne = READ_ONCE(tick_nohz_active);
215    14   if (tne != rdp->tick_nohz_enabled_snap) {
216    15     if (!rcu_segcblist_empty(&rdp->cblist))
217    16       invoke_rcu_core(); /* force nohz to see update. */
218    17     rdp->tick_nohz_enabled_snap = tne;
219    18     return;
220    19   }
221    20   if (!tne)
222    21     return;
223    22
224    23   /*
225    24    * If we have not yet accelerated this jiffy, accelerate all
226    25    * callbacks on this CPU.
227    26    */
228    27   if (rdp->last_accelerate == jiffies)
229    28     return;
230    29   rdp->last_accelerate = jiffies;
231    30   if (rcu_segcblist_pend_cbs(&rdp->cblist)) {
232    31     rnp = rdp->mynode;
233    32     raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */
234    33     needwake = rcu_accelerate_cbs(rnp, rdp);
235    34     raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. */
236    35     if (needwake)
237    36       rcu_gp_kthread_wake();
238    37   }
239    38 }
240 
241 But the only part of ``rcu_prepare_for_idle()`` that really matters for
242 this discussion is lines 32–34. We will therefore abbreviate this
243 function as follows:
244 
245 .. kernel-figure:: rcu_node-lock.svg
246 
247 The box represents the ``rcu_node`` structure's ``->lock`` critical
248 section, with the double line on top representing the additional
249 ``smp_mb__after_unlock_lock()``.
250 
251 Tree RCU Grace Period Memory Ordering Components
252 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
253 
254 Tree RCU's grace-period memory-ordering guarantee is provided by a
255 number of RCU components:
256 
257 #. `Callback Registry`_
258 #. `Grace-Period Initialization`_
259 #. `Self-Reported Quiescent States`_
260 #. `Dynamic Tick Interface`_
261 #. `CPU-Hotplug Interface`_
262 #. `Forcing Quiescent States`_
263 #. `Grace-Period Cleanup`_
264 #. `Callback Invocation`_
265 
266 Each of the following sections looks at the corresponding component in
267 detail.
268 
269 Callback Registry
270 ^^^^^^^^^^^^^^^^^
271 
272 If RCU's grace-period guarantee is to mean anything at all, any access
273 that happens before a given invocation of ``call_rcu()`` must also
274 happen before the corresponding grace period. The implementation of this
275 portion of RCU's grace period guarantee is shown in the following
276 figure:
277 
278 .. kernel-figure:: TreeRCU-callback-registry.svg
279 
280 Because ``call_rcu()`` normally acts only on CPU-local state, it
281 provides no ordering guarantees, either for itself or for phase one of
282 the update (which again will usually be removal of an element from an
283 RCU-protected data structure). It simply enqueues the ``rcu_head``
284 structure on a per-CPU list, which cannot become associated with a grace
285 period until a later call to ``rcu_accelerate_cbs()``, as shown in the
286 diagram above.
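
A typical use might look like the following sketch, in which the
hypothetical ``struct foo`` embeds the ``rcu_head`` structure that
``call_rcu()`` enqueues::

    struct foo {
      struct list_head list;
      struct rcu_head rh;
      int data;
    };

    /* Phase two, invoked only after a full grace period has elapsed. */
    static void free_foo_cb(struct rcu_head *rhp)
    {
      struct foo *fp = container_of(rhp, struct foo, rh);

      kfree(fp);
    }

    /* Phase one: unlink the element, then hand it off to RCU. */
    static void remove_foo(struct foo *fp)
    {
      list_del_rcu(&fp->list);  /* Caller holds the update-side lock. */
      call_rcu(&fp->rh, free_foo_cb);
    }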
287 
288 One set of code paths shown on the left invokes ``rcu_accelerate_cbs()``
289 via ``note_gp_changes()``, either directly from ``call_rcu()`` (if the
290 current CPU is inundated with queued ``rcu_head`` structures) or more
291 likely from an ``RCU_SOFTIRQ`` handler. Another code path in the middle
292 is taken only in kernels built with ``CONFIG_RCU_FAST_NO_HZ=y``, which
293 invokes ``rcu_accelerate_cbs()`` via ``rcu_prepare_for_idle()``. The
294 final code path on the right is taken only in kernels built with
295 ``CONFIG_HOTPLUG_CPU=y``, which invokes ``rcu_accelerate_cbs()`` via
296 ``rcu_advance_cbs()``, ``rcu_migrate_callbacks()``,
297 ``rcutree_migrate_callbacks()``, and ``takedown_cpu()``, which in turn
298 is invoked on a surviving CPU after the outgoing CPU has been completely
299 offlined.
300 
301 There are a few other code paths within grace-period processing that
302 opportunistically invoke ``rcu_accelerate_cbs()``. However, either way,
303 all of the CPU's recently queued ``rcu_head`` structures are associated
304 with a future grace-period number under the protection of the CPU's leaf
305 ``rcu_node`` structure's ``->lock``. In all cases, there is full
306 ordering against any prior critical section for that same ``rcu_node``
307 structure's ``->lock``, and also full ordering against any of the
308 current task's or CPU's prior critical sections for any ``rcu_node``
309 structure's ``->lock``.
310 
311 The next section will show how this ordering ensures that any accesses
312 prior to the ``call_rcu()`` (particularly including phase one of the
313 update) happen before the start of the corresponding grace period.
314 
315 +-----------------------------------------------------------------------+
316 | **Quick Quiz**:                                                       |
317 +-----------------------------------------------------------------------+
318 | But what about ``synchronize_rcu()``?                                 |
319 +-----------------------------------------------------------------------+
320 | **Answer**:                                                           |
321 +-----------------------------------------------------------------------+
322 | The ``synchronize_rcu()`` passes ``call_rcu()`` to ``wait_rcu_gp()``, |
323 | which invokes it. So either way, it eventually comes down to          |
324 | ``call_rcu()``.                                                       |
325 +-----------------------------------------------------------------------+
326 
327 Grace-Period Initialization
328 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
329 
330 Grace-period initialization is carried out by the grace-period kernel
331 thread, which makes several passes over the ``rcu_node`` tree within the
332 ``rcu_gp_init()`` function. This means that showing the full flow of
333 ordering through the grace-period computation will require duplicating
334 this tree. If you find this confusing, please note that the state of the
335 ``rcu_node`` tree changes over time, just like Heraclitus's river. However,
336 to keep the ``rcu_node`` river tractable, the grace-period kernel
337 thread's traversals are presented in multiple parts, starting in this
338 section with the various phases of grace-period initialization.
339 
340 The first ordering-related grace-period initialization action is to
341 advance the ``rcu_state`` structure's ``->gp_seq`` grace-period-number
342 counter, as shown below:
343 
344 .. kernel-figure:: TreeRCU-gp-init-1.svg
345 
346 The actual increment is carried out using ``smp_store_release()``, which
347 helps reject false-positive RCU CPU stall detection. Note that only the
348 root ``rcu_node`` structure is touched.
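
A minimal sketch of this step, with the helper names and the
grace-period sequence-number encoding simplified, might look as
follows::

    /* Record timestamps used by stall detection, then start the new GP. */
    record_gp_stall_check_time();
    smp_store_release(&rcu_state.gp_seq, rcu_state.gp_seq + 1);
    /* The real counter also encodes grace-period state in its low-order bits. */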
349 
350 The first pass through the ``rcu_node`` tree updates bitmasks based on
351 CPUs having come online or gone offline since the start of the previous
352 grace period. In the common case where the number of online CPUs for
353 this ``rcu_node`` structure has not transitioned to or from zero, this
354 pass will scan only the leaf ``rcu_node`` structures. However, if the
355 number of online CPUs for a given leaf ``rcu_node`` structure has
356 transitioned from zero, ``rcu_init_new_rnp()`` will be invoked for the
357 first incoming CPU. Similarly, if the number of online CPUs for a given
358 leaf ``rcu_node`` structure has transitioned to zero,
359 ``rcu_cleanup_dead_rnp()`` will be invoked for the last outgoing CPU.
360 The diagram below shows the path of ordering if the leftmost
361 ``rcu_node`` structure onlines its first CPU and if the next
362 ``rcu_node`` structure has no online CPUs (or, alternatively if the
363 leftmost ``rcu_node`` structure offlines its last CPU and if the next
364 ``rcu_node`` structure has no online CPUs).
365 
366 .. kernel-figure:: TreeRCU-gp-init-2.svg
367 
368 The final ``rcu_gp_init()`` pass through the ``rcu_node`` tree traverses
369 breadth-first, setting each ``rcu_node`` structure's ``->gp_seq`` field
370 to the newly advanced value from the ``rcu_state`` structure, as shown
371 in the following diagram.
372 
373 .. kernel-figure:: TreeRCU-gp-init-3.svg
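
The shape of this pass can be sketched as follows; this is simplified
from ``rcu_gp_init()``, omitting the handling of preempted readers and
the ``__note_gp_changes()`` call for the grace-period kthread's own
CPU::

    /* Propagate the new grace-period number to every rcu_node structure. */
    rcu_for_each_node_breadth_first(rnp) {
      raw_spin_lock_irq_rcu_node(rnp);
      rnp->qsmask = rnp->qsmaskinit;  /* CPUs this grace period must wait on. */
      WRITE_ONCE(rnp->gp_seq, rcu_state.gp_seq);
      raw_spin_unlock_irq_rcu_node(rnp);
    }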
374 
375 This change will also cause each CPU's next call to
376 ``__note_gp_changes()`` to notice that a new grace period has started,
377 as described in the next section. But because the grace-period kthread
378 started the grace period at the root (with the advancing of the
379 ``rcu_state`` structure's ``->gp_seq`` field) before setting each leaf
380 ``rcu_node`` structure's ``->gp_seq`` field, each CPU's observation of
381 the start of the grace period will happen after the actual start of the
382 grace period.
383 
384 +-----------------------------------------------------------------------+
385 | **Quick Quiz**:                                                       |
386 +-----------------------------------------------------------------------+
387 | But what about the CPU that started the grace period? Why wouldn't it |
388 | see the start of the grace period right when it started that grace    |
389 | period?                                                               |
390 +-----------------------------------------------------------------------+
391 | **Answer**:                                                           |
392 +-----------------------------------------------------------------------+
393 | In some deep philosophical and overly anthropomorphic sense, yes, the |
394 | CPU starting the grace period is immediately aware of having done so. |
395 | However, if we instead assume that RCU is not self-aware, then even   |
396 | the CPU starting the grace period does not really become aware of the |
397 | start of this grace period until its first call to                    |
398 | ``__note_gp_changes()``. On the other hand, this CPU potentially gets |
399 | early notification because it invokes ``__note_gp_changes()`` during  |
400 | its last ``rcu_gp_init()`` pass through its leaf ``rcu_node``         |
401 | structure.                                                            |
402 +-----------------------------------------------------------------------+
403 
404 Self-Reported Quiescent States
405 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
406 
407 When all entities that might block the grace period have reported
408 quiescent states (or as described in a later section, had quiescent
409 states reported on their behalf), the grace period can end. Online
410 non-idle CPUs report their own quiescent states, as shown in the
411 following diagram:
412 
413 .. kernel-figure:: TreeRCU-qs.svg
414 
415 This is for the last CPU to report a quiescent state, which signals the
416 end of the grace period. Earlier quiescent states would push up the
417 ``rcu_node`` tree only until they encountered an ``rcu_node`` structure
418 that is waiting for additional quiescent states. However, ordering is
419 nevertheless preserved because some later quiescent state will acquire
420 that ``rcu_node`` structure's ``->lock``.
421 
422 Any number of events can lead up to a CPU invoking ``note_gp_changes()``
423 (or alternatively, directly invoking ``__note_gp_changes()``), at which
424 point that CPU will notice the start of a new grace period while holding
425 its leaf ``rcu_node`` lock. Therefore, all execution shown in this
426 diagram happens after the start of the grace period. In addition, this
427 CPU will consider any RCU read-side critical section that started before
428 the invocation of ``__note_gp_changes()`` to have started before the
429 grace period, and thus a critical section that the grace period must
430 wait on.
431 
432 +-----------------------------------------------------------------------+
433 | **Quick Quiz**:                                                       |
434 +-----------------------------------------------------------------------+
435 | But an RCU read-side critical section might have started after the    |
436 | beginning of the grace period (the advancing of ``->gp_seq`` from     |
437 | earlier), so why should the grace period wait on such a critical      |
438 | section?                                                              |
439 +-----------------------------------------------------------------------+
440 | **Answer**:                                                           |
441 +-----------------------------------------------------------------------+
442 | It is indeed not necessary for the grace period to wait on such a     |
443 | critical section. However, it is permissible to wait on it. And it is |
444 | furthermore important to wait on it, as this lazy approach is far     |
445 | more scalable than a “big bang” all-at-once grace-period start could  |
446 | possibly be.                                                          |
447 +-----------------------------------------------------------------------+
448 
449 If the CPU does a context switch, a quiescent state will be noted by
450 ``rcu_note_context_switch()`` on the left. On the other hand, if the CPU
451 takes a scheduler-clock interrupt while executing in usermode, a
452 quiescent state will be noted by ``rcu_sched_clock_irq()`` on the right.
453 Either way, the passage through a quiescent state will be noted in a
454 per-CPU variable.
455 
456 The next time an ``RCU_SOFTIRQ`` handler executes on this CPU (for
457 example, after the next scheduler-clock interrupt), ``rcu_core()`` will
458 invoke ``rcu_check_quiescent_state()``, which will notice the recorded
459 quiescent state, and invoke ``rcu_report_qs_rdp()``. If
460 ``rcu_report_qs_rdp()`` verifies that the quiescent state really does
461 apply to the current grace period, it invokes ``rcu_report_rnp()`` which
462 traverses up the ``rcu_node`` tree as shown at the bottom of the
463 diagram, clearing bits from each ``rcu_node`` structure's ``->qsmask``
464 field, and propagating up the tree when the result is zero.
465 
466 Note that traversal passes upwards out of a given ``rcu_node`` structure
467 only if the current CPU is reporting the last quiescent state for the
468 subtree headed by that ``rcu_node`` structure. A key point is that if a
469 CPU's traversal stops at a given ``rcu_node`` structure, then there will
470 be a later traversal by another CPU (or perhaps the same one) that
471 proceeds upwards from that point, and the ``rcu_node`` ``->lock``
472 guarantees that the first CPU's quiescent state happens before the
473 remainder of the second CPU's traversal. Applying this line of thought
474 repeatedly shows that all CPUs' quiescent states happen before the last
475 CPU traverses through the root ``rcu_node`` structure, the “last CPU”
476 being the one that clears the last bit in the root ``rcu_node``
477 structure's ``->qsmask`` field.
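
The upward propagation can be sketched as follows; this is simplified
from ``rcu_report_qs_rnp()``, omitting expedited grace periods,
blocked-task lists, and grace-period-sequence checks, with ``mask``
being the bit corresponding to the reporting CPU or subtree::

    static void report_qs_sketch(struct rcu_node *rnp, unsigned long mask)
    {
      unsigned long flags;

      for (;;) {
        raw_spin_lock_irqsave_rcu_node(rnp, flags);
        rnp->qsmask &= ~mask;            /* Record our quiescent state. */
        if (rnp->qsmask || !rnp->parent) {
          /* Others still pending, or root reached: stop here. */
          raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
          return;
        }
        mask = rnp->grpmask;             /* Our bit in the parent. */
        raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
        rnp = rnp->parent;
      }
    }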
478 
479 Dynamic Tick Interface
480 ^^^^^^^^^^^^^^^^^^^^^^
481 
482 Due to energy-efficiency considerations, RCU is forbidden from
483 disturbing idle CPUs. CPUs are therefore required to notify RCU when
484 entering or leaving idle state, which they do via fully ordered
485 value-returning atomic operations on a per-CPU variable. The ordering
486 effects are as shown below:
487 
488 .. kernel-figure:: TreeRCU-dyntick.svg
489 
490 The RCU grace-period kernel thread samples the per-CPU idleness variable
491 while holding the corresponding CPU's leaf ``rcu_node`` structure's
492 ``->lock``. This means that any RCU read-side critical sections that
493 precede the idle period (the oval near the top of the diagram above)
494 will happen before the end of the current grace period. Similarly, the
495 beginning of the current grace period will happen before any RCU
496 read-side critical sections that follow the idle period (the oval near
497 the bottom of the diagram above).
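
The grace-period kthread's side can be sketched as follows; this is
purely illustrative, reusing the hypothetical ``rcu_watching_ctr``
counter from the earlier idle-entry sketch, with the predicate and
bookkeeping helpers also hypothetical::

    /* Sample one CPU's idleness while holding its leaf rcu_node ->lock. */
    raw_spin_lock_irq_rcu_node(rnp);
    snap = atomic_read_acquire(per_cpu_ptr(&rcu_watching_ctr, cpu));
    if (snap_indicates_idle(snap))          /* hypothetical predicate */
      idle_cpu_mask |= leaf_bit_for(cpu);   /* hypothetical bookkeeping */
    raw_spin_unlock_irq_rcu_node(rnp);

The acquire load pairs with the fully ordered updates performed at idle
entry and idle exit.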
498 
499 Plumbing this into the full grace-period execution is described
500 `below <Forcing Quiescent States_>`__.
501 
502 CPU-Hotplug Interface
503 ^^^^^^^^^^^^^^^^^^^^^
504 
505 RCU is also forbidden from disturbing offline CPUs, which might well be
506 powered off and removed from the system completely. CPUs are therefore
507 required to notify RCU of their comings and goings as part of the
508 corresponding CPU hotplug operations. The ordering effects are shown
509 below:
510 
511 .. kernel-figure:: TreeRCU-hotplug.svg
512 
513 Because CPU hotplug operations are much less frequent than idle
514 transitions, they are heavier weight, and thus acquire the CPU's leaf
515 ``rcu_node`` structure's ``->lock`` and update this structure's
516 ``->qsmaskinitnext``. The RCU grace-period kernel thread samples this
517 mask to detect CPUs having gone offline since the beginning of this
518 grace period.
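
The CPU-online side of this handshake can be sketched as follows; this
is a simplification of the actual hotplug notification code, with
``rdp->grpmask`` being the incoming CPU's bit within its leaf
``rcu_node`` structure::

    /* Sketch: mark the incoming CPU in its leaf rcu_node structure. */
    static void mark_cpu_online_sketch(struct rcu_data *rdp)
    {
      struct rcu_node *rnp = rdp->mynode;
      unsigned long flags;

      raw_spin_lock_irqsave_rcu_node(rnp, flags);
      WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | rdp->grpmask);
      raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
    }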
519 
520 Plumbing this into the full grace-period execution is described
521 `below <Forcing Quiescent States_>`__.
522 
523 Forcing Quiescent States
524 ^^^^^^^^^^^^^^^^^^^^^^^^
525 
526 As noted above, idle and offline CPUs cannot report their own quiescent
527 states, and therefore the grace-period kernel thread must do the
528 reporting on their behalf. This process, known as “forcing quiescent
529 states”, is repeated every few jiffies, and its ordering effects are
530 shown below:
531 
532 .. kernel-figure:: TreeRCU-gp-fqs.svg
533 
534 Each pass of quiescent state forcing is guaranteed to traverse the leaf
535 ``rcu_node`` structures, and if there are no new quiescent states due to
536 recently idled and/or offlined CPUs, then only the leaves are traversed.
537 However, if there is a newly offlined CPU as illustrated on the left or
538 a newly idled CPU as illustrated on the right, the corresponding
539 quiescent state will be driven up towards the root. As with
540 self-reported quiescent states, the upwards driving stops once it
541 reaches an ``rcu_node`` structure that has quiescent states outstanding
542 from other CPUs.
543 
544 +-----------------------------------------------------------------------+
545 | **Quick Quiz**:                                                       |
546 +-----------------------------------------------------------------------+
547 | The leftmost drive to root stopped before it reached the root         |
548 | ``rcu_node`` structure, which means that there are still CPUs         |
549 | subordinate to that structure on which the current grace period is    |
550 | waiting. Given that, how is it possible that the rightmost drive to   |
551 | root ended the grace period?                                          |
552 +-----------------------------------------------------------------------+
553 | **Answer**:                                                           |
554 +-----------------------------------------------------------------------+
555 | Good analysis! It is in fact impossible in the absence of bugs in     |
556 | RCU. But this diagram is complex enough as it is, so simplicity       |
557 | overrode accuracy. You can think of it as poetic license, or you can  |
558 | think of it as misdirection that is resolved in the                   |
559 | `stitched-together diagram <Putting It All Together_>`__.             |
560 +-----------------------------------------------------------------------+
561 
562 Grace-Period Cleanup
563 ^^^^^^^^^^^^^^^^^^^^
564 
565 Grace-period cleanup first scans the ``rcu_node`` tree breadth-first
566 advancing all the ``->gp_seq`` fields, then it advances the
567 ``rcu_state`` structure's ``->gp_seq`` field. The ordering effects are
568 shown below:
569 
570 .. kernel-figure:: TreeRCU-gp-cleanup.svg
571 
572 As indicated by the oval at the bottom of the diagram, once grace-period
573 cleanup is complete, the next grace period can begin.
574 
575 +-----------------------------------------------------------------------+
576 | **Quick Quiz**:                                                       |
577 +-----------------------------------------------------------------------+
578 | But when precisely does the grace period end?                         |
579 +-----------------------------------------------------------------------+
580 | **Answer**:                                                           |
581 +-----------------------------------------------------------------------+
582 | There is no useful single point at which the grace period can be said |
583 | to end. The earliest reasonable candidate is as soon as the last CPU  |
584 | has reported its quiescent state, but it may be some milliseconds     |
585 | before RCU becomes aware of this. The latest reasonable candidate is  |
586 | once the ``rcu_state`` structure's ``->gp_seq`` field has been        |
587 | updated, but it is quite possible that some CPUs have already         |
588 | completed phase two of their updates by that time. In short, if you   |
589 | are going to work with RCU, you need to learn to embrace uncertainty. |
590 +-----------------------------------------------------------------------+
591 
592 Callback Invocation
593 ^^^^^^^^^^^^^^^^^^^
594 
595 Once a given CPU's leaf ``rcu_node`` structure's ``->gp_seq`` field has
596 been updated, that CPU can begin invoking its RCU callbacks that were
597 waiting for this grace period to end. These callbacks are identified by
598 ``rcu_advance_cbs()``, which is usually invoked by
599 ``__note_gp_changes()``. As shown in the diagram below, this invocation
600 can be triggered by the scheduling-clock interrupt
601 (``rcu_sched_clock_irq()`` on the left) or by idle entry
602 (``rcu_cleanup_after_idle()`` on the right, but only for kernels built
603 with ``CONFIG_RCU_FAST_NO_HZ=y``). Either way, ``RCU_SOFTIRQ`` is
604 raised, which results in ``rcu_do_batch()`` invoking the callbacks,
605 which in turn allows those callbacks to carry out (either directly or
606 indirectly via wakeup) the needed phase-two processing for each update.
607 
608 .. kernel-figure:: TreeRCU-callback-invocation.svg
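
The invocation step itself can be sketched as follows; this is greatly
simplified from ``rcu_do_batch()``, omitting callback-list locking,
offloading, and batch limiting, and ``ready_cbs`` is a hypothetical
list of callbacks whose grace period has ended::

    struct rcu_head *rhp, *next;

    for (rhp = ready_cbs; rhp; rhp = next) {
      next = rhp->next;
      rhp->func(rhp);  /* Phase two: often frees the enclosing structure. */
    }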
609 
610 Please note that callback invocation can also be prompted by any number
611 of corner-case code paths, for example, when a CPU notes that it has
612 excessive numbers of callbacks queued. In all cases, the CPU acquires
613 its leaf ``rcu_node`` structure's ``->lock`` before invoking callbacks,
614 which preserves the required ordering against the newly completed grace
615 period.
616 
617 However, if the callback function communicates to other CPUs, for
618 example, doing a wakeup, then it is that function's responsibility to
619 maintain ordering. For example, if the callback function wakes up a task
620 that runs on some other CPU, proper ordering must be in place in both the
621 callback function and the task being awakened. To see why this is
622 important, consider the top half of the `grace-period
623 cleanup`_ diagram. The callback might be
624 running on a CPU corresponding to the leftmost leaf ``rcu_node``
625 structure, and awaken a task that is to run on a CPU corresponding to
626 the rightmost leaf ``rcu_node`` structure, and the grace-period kernel
627 thread might not yet have reached the rightmost leaf. In this case, the
628 grace period's memory ordering might not yet have reached that CPU, so
629 again the callback function and the awakened task must supply proper
630 ordering.
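
One way for the callback and the awakened task to supply that ordering
is the release/acquire flag pattern sketched below; the flag, wait
queue, and function names are hypothetical::

    /* Hypothetical flag and wait queue shared by callback and waiter. */
    static int gp_done_flag;
    static DECLARE_WAIT_QUEUE_HEAD(gp_done_wq);

    /* In the RCU callback, which runs after the grace period has ended. */
    static void gp_done_cb(struct rcu_head *rhp)
    {
      smp_store_release(&gp_done_flag, 1);  /* Order the callback's prior accesses. */
      wake_up(&gp_done_wq);
    }

    /* In the task being awakened, possibly running on some other CPU. */
    static void wait_for_gp_done(void)
    {
      wait_event(gp_done_wq, smp_load_acquire(&gp_done_flag));
      /* Code here is ordered after everything before the release store above. */
    }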
631 
632 Putting It All Together
633 ~~~~~~~~~~~~~~~~~~~~~~~
634 
635 A stitched-together diagram is here:
636 
637 .. kernel-figure:: TreeRCU-gp.svg
638 
639 Legal Statement
640 ~~~~~~~~~~~~~~~
641 
642 This work represents the view of the author and does not necessarily
643 represent the view of IBM.
644 
645 Linux is a registered trademark of Linus Torvalds.
646 
647 Other company, product, and service names may be trademarks or service
648 marks of others.
