=========================
CPU hotplug in the Kernel
=========================

:Date: September, 2021
:Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
         Rusty Russell <rusty@rustcorp.com.au>,
         Srivatsa Vaddagiri <vatsa@in.ibm.com>,
         Ashok Raj <ashok.raj@intel.com>,
         Joel Schopp <jschopp@austin.ibm.com>,
         Thomas Gleixner <tglx@linutronix.de>

Introduction
============

Modern advances in system architectures have introduced advanced error
reporting and correction capabilities in processors. There are a couple of
OEMs that support NUMA hardware which is hot pluggable as well, where
physical node insertion and removal require support for CPU hotplug.

Such advances require CPUs available to a kernel to be removed either for
provisioning reasons, or for RAS purposes to keep an offending CPU off the
system execution path. Hence the need for CPU hotplug support in the
Linux kernel.

A more novel use of CPU-hotplug support is its use today in suspend/resume
support for SMP. Dual-core and HT support means that even laptops run SMP
kernels, which previously did not support these methods.


Command Line Switches
=====================
``maxcpus=n``
  Restrict boot time CPUs to *n*. If you have four CPUs, for example, using
  ``maxcpus=2`` will boot only two. You can bring the other CPUs online
  later.

``nr_cpus=n``
  Restrict the total number of CPUs the kernel will support. If the number
  supplied here is lower than the number of physically available CPUs, then
  those CPUs cannot be brought online later.

``possible_cpus=n``
  This option sets ``possible_cpus`` bits in ``cpu_possible_mask``.

  This option is limited to the X86 and S390 architectures.

``cpu0_hotplug``
  Allow the shutdown of CPU0.

  This option is limited to the X86 architecture.

CPU maps
========

``cpu_possible_mask``
  Bitmap of possible CPUs that can ever be available in the
  system. This is used to allocate some boot time memory for per_cpu variables
  that aren't designed to grow/shrink as CPUs are made available or removed.
  Once set during the boot time discovery phase, the map is static, i.e. no
  bits are added or removed anytime. Trimming it accurately for your system
  needs upfront can save some boot time memory.

``cpu_online_mask``
  Bitmap of all CPUs currently online. It is set in ``__cpu_up()``
  after a CPU is available for kernel scheduling and ready to receive
  interrupts from devices. It is cleared when a CPU is brought down using
  ``__cpu_disable()``, before which all OS services including interrupts are
  migrated to another target CPU.

``cpu_present_mask``
  Bitmap of CPUs currently present in the system. Not all
  of them may be online. When physical hotplug is processed by the relevant
  subsystem (e.g. ACPI), this map can change and a bit is either added to or
  removed from it, depending on whether the event is a hot-add or a
  hot-remove. There are currently no locking rules. Typical usage is to init
  topology during boot, at which time hotplug is disabled.

You really don't need to manipulate any of the system CPU maps. They should
be read-only for most use. When setting up per-cpu resources almost always use
``cpu_possible_mask`` or ``for_each_possible_cpu()`` to iterate. The macro
``for_each_cpu()`` can be used to iterate over a custom CPU mask.

Never use anything other than ``cpumask_t`` to represent a bitmap of CPUs.
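
A minimal sketch of per-cpu setup following these rules; the
``subsys_stats`` structure and names are illustrative only and not part of
any existing subsystem::

   #include <linux/percpu.h>
   #include <linux/cpumask.h>
   #include <linux/init.h>
   #include <linux/errno.h>

   /* Hypothetical per-CPU statistics, sized by cpu_possible_mask. */
   struct subsys_stats {
           unsigned long events;
   };

   static struct subsys_stats __percpu *subsys_stats;

   static int __init subsys_stats_init(void)
   {
           int cpu;

           /* alloc_percpu() reserves one instance per possible CPU. */
           subsys_stats = alloc_percpu(struct subsys_stats);
           if (!subsys_stats)
                   return -ENOMEM;

           /* Iterate with for_each_possible_cpu(), not a hand-rolled loop. */
           for_each_possible_cpu(cpu)
                   per_cpu_ptr(subsys_stats, cpu)->events = 0;

           return 0;
   }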


Using CPU hotplug
=================

The kernel option *CONFIG_HOTPLUG_CPU* needs to be enabled. It is currently
available on multiple architectures including ARM, MIPS, PowerPC and X86. The
configuration is done via the sysfs interface::

 $ ls -lh /sys/devices/system/cpu
 total 0
 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu0
 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu1
 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu2
 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu3
 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu4
 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu5
 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu6
 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu7
 drwxr-xr-x  2 root root    0 Dec 21 16:33 hotplug
 -r--r--r--  1 root root 4.0K Dec 21 16:33 offline
 -r--r--r--  1 root root 4.0K Dec 21 16:33 online
 -r--r--r--  1 root root 4.0K Dec 21 16:33 possible
 -r--r--r--  1 root root 4.0K Dec 21 16:33 present

The files *offline*, *online*, *possible*, *present* represent the CPU masks.
Each CPU folder contains an *online* file which controls the logical on (1) and
off (0) state. To logically shut down CPU4::

 $ echo 0 > /sys/devices/system/cpu/cpu4/online
  smpboot: CPU 4 is now offline

Once the CPU is shut down, it will be removed from */proc/interrupts* and
*/proc/cpuinfo* and should no longer be visible to the *top* command. To
bring CPU4 back online::

 $ echo 1 > /sys/devices/system/cpu/cpu4/online
 smpboot: Booting Node 0 Processor 4 APIC 0x1

The CPU is usable again. This should work on all CPUs, but CPU0 is often
special and excluded from CPU hotplug.

The CPU hotplug coordination
============================

The offline case
----------------

Once a CPU has been logically shut down, the teardown callbacks of registered
hotplug states will be invoked, starting with ``CPUHP_ONLINE`` and terminating
at state ``CPUHP_OFFLINE``. This includes:

* If tasks are frozen due to a suspend operation then *cpuhp_tasks_frozen*
  will be set to true.
* All processes are migrated away from this outgoing CPU to new CPUs.
  The new CPU is chosen from each process' current cpuset, which may be
  a subset of all online CPUs.
* All interrupts targeted to this CPU are migrated to a new CPU.
* Timers are also migrated to a new CPU.
* Once all services are migrated, the kernel calls an arch specific routine
  ``__cpu_disable()`` to perform arch specific cleanup.


The CPU hotplug API
===================

CPU hotplug state machine
-------------------------

CPU hotplug uses a trivial state machine with a linear state space from
CPUHP_OFFLINE to CPUHP_ONLINE. Each state has a startup and a teardown
callback.

When a CPU is onlined, the startup callbacks are invoked sequentially until
the state CPUHP_ONLINE is reached. They can also be invoked when the
callbacks of a state are set up or an instance is added to a multi-instance
state.

When a CPU is offlined the teardown callbacks are invoked in the reverse
order sequentially until the state CPUHP_OFFLINE is reached. They can also
be invoked when the callbacks of a state are removed or an instance is
removed from a multi-instance state.

If a usage site requires only a callback in one direction of the hotplug
operations (CPU online or CPU offline) then the other not-required callback
can be set to NULL when the state is set up.

The state space is divided into three sections:

* The PREPARE section

  The PREPARE section covers the state space from CPUHP_OFFLINE to
  CPUHP_BRINGUP_CPU.

  The startup callbacks in this section are invoked before the CPU is
  started during a CPU online operation. The teardown callbacks are invoked
  after the CPU has become dysfunctional during a CPU offline operation.

  The callbacks are invoked on a control CPU as they obviously can't run on
  the hotplugged CPU which is either not yet started or has become
  dysfunctional already.

  The startup callbacks are used to set up resources which are required to
  bring a CPU successfully online. The teardown callbacks are used to free
  resources or to move pending work to an online CPU after the hotplugged
  CPU became dysfunctional.

  The startup callbacks are allowed to fail. If a callback fails, the CPU
  online operation is aborted and the CPU is brought down to the previous
  state (usually CPUHP_OFFLINE) again.

  The teardown callbacks in this section are not allowed to fail.

* The STARTING section

  The STARTING section covers the state space between CPUHP_BRINGUP_CPU + 1
  and CPUHP_AP_ONLINE.

  The startup callbacks in this section are invoked on the hotplugged CPU
  with interrupts disabled during a CPU online operation in the early CPU
  setup code. The teardown callbacks are invoked with interrupts disabled
  on the hotplugged CPU during a CPU offline operation shortly before the
  CPU is completely shut down.

  The callbacks in this section are not allowed to fail.

  The callbacks are used for low level hardware initialization/shutdown and
  for core subsystems.

* The ONLINE section

  The ONLINE section covers the state space between CPUHP_AP_ONLINE + 1 and
  CPUHP_ONLINE.

  The startup callbacks in this section are invoked on the hotplugged CPU
  during a CPU online operation. The teardown callbacks are invoked on the
  hotplugged CPU during a CPU offline operation.

  The callbacks are invoked in the context of the per CPU hotplug thread,
  which is pinned on the hotplugged CPU. The callbacks are invoked with
  interrupts and preemption enabled.

  The callbacks are allowed to fail. When a callback fails the hotplug
  operation is aborted and the CPU is brought back to the previous state.

CPU online/offline operations
-----------------------------

A successful online operation looks like this::

  [CPUHP_OFFLINE]
  [CPUHP_OFFLINE + 1]->startup()       -> success
  [CPUHP_OFFLINE + 2]->startup()       -> success
  [CPUHP_OFFLINE + 3]                  -> skipped because startup == NULL
  ...
  [CPUHP_BRINGUP_CPU]->startup()       -> success
  === End of PREPARE section
  [CPUHP_BRINGUP_CPU + 1]->startup()   -> success
  ...
  [CPUHP_AP_ONLINE]->startup()         -> success
  === End of STARTING section
  [CPUHP_AP_ONLINE + 1]->startup()     -> success
  ...
  [CPUHP_ONLINE - 1]->startup()        -> success
  [CPUHP_ONLINE]

A successful offline operation looks like this::

  [CPUHP_ONLINE]
  [CPUHP_ONLINE - 1]->teardown()       -> success
  ...
  [CPUHP_AP_ONLINE + 1]->teardown()    -> success
  === Start of STARTING section
  [CPUHP_AP_ONLINE]->teardown()        -> success
  ...
  [CPUHP_BRINGUP_CPU + 1]->teardown()
  ...
  === Start of PREPARE section
  [CPUHP_BRINGUP_CPU]->teardown()
  [CPUHP_OFFLINE + 3]->teardown()
  [CPUHP_OFFLINE + 2]                  -> skipped because teardown == NULL
  [CPUHP_OFFLINE + 1]->teardown()
  [CPUHP_OFFLINE]

A failed online operation looks like this::

  [CPUHP_OFFLINE]
  [CPUHP_OFFLINE + 1]->startup()       -> success
  [CPUHP_OFFLINE + 2]->startup()       -> success
  [CPUHP_OFFLINE + 3]                  -> skipped because startup == NULL
  ...
  [CPUHP_BRINGUP_CPU]->startup()       -> success
  === End of PREPARE section
  [CPUHP_BRINGUP_CPU + 1]->startup()   -> success
  ...
  [CPUHP_AP_ONLINE]->startup()         -> success
  === End of STARTING section
  [CPUHP_AP_ONLINE + 1]->startup()     -> success
  ---
  [CPUHP_AP_ONLINE + N]->startup()     -> fail
  [CPUHP_AP_ONLINE + (N - 1)]->teardown()
  ...
  [CPUHP_AP_ONLINE + 1]->teardown()
  === Start of STARTING section
  [CPUHP_AP_ONLINE]->teardown()
  ...
  [CPUHP_BRINGUP_CPU + 1]->teardown()
  ...
  === Start of PREPARE section
  [CPUHP_BRINGUP_CPU]->teardown()
  [CPUHP_OFFLINE + 3]->teardown()
  [CPUHP_OFFLINE + 2]                  -> skipped because teardown == NULL
  [CPUHP_OFFLINE + 1]->teardown()
  [CPUHP_OFFLINE]

A failed offline operation looks like this::

  [CPUHP_ONLINE]
  [CPUHP_ONLINE - 1]->teardown()       -> success
  ...
  [CPUHP_ONLINE - N]->teardown()       -> fail
  [CPUHP_ONLINE - (N - 1)]->startup()
  ...
  [CPUHP_ONLINE - 1]->startup()
  [CPUHP_ONLINE]

Recursive failures cannot be handled sensibly. Look at the following
example of a recursive fail due to a failed offline operation::

  [CPUHP_ONLINE]
  [CPUHP_ONLINE - 1]->teardown()       -> success
  ...
  [CPUHP_ONLINE - N]->teardown()       -> fail
  [CPUHP_ONLINE - (N - 1)]->startup()  -> success
  [CPUHP_ONLINE - (N - 2)]->startup()  -> fail

The CPU hotplug state machine stops right here and does not try to go back
down again because that would likely result in an endless loop::

  [CPUHP_ONLINE - (N - 1)]->teardown() -> success
  [CPUHP_ONLINE - N]->teardown()       -> fail
  [CPUHP_ONLINE - (N - 1)]->startup()  -> success
  [CPUHP_ONLINE - (N - 2)]->startup()  -> fail
  [CPUHP_ONLINE - (N - 1)]->teardown() -> success
  [CPUHP_ONLINE - N]->teardown()       -> fail

Lather, rinse and repeat. In this case the CPU is left in state::

  [CPUHP_ONLINE - (N - 1)]

which at least lets the system make progress and gives the user a chance to
debug or even resolve the situation.

Allocating a state
------------------

There are two ways to allocate a CPU hotplug state:

* Static allocation

  Static allocation has to be used when the subsystem or driver has
  ordering requirements versus other CPU hotplug states. E.g. the PERF core
  startup callback has to be invoked before the PERF driver startup
  callbacks during a CPU online operation. During a CPU offline operation
  the driver teardown callbacks have to be invoked before the core teardown
  callback. The statically allocated states are described by constants in
  the cpuhp_state enum which can be found in include/linux/cpuhotplug.h.

  Insert the state into the enum at the proper place so the ordering
  requirements are fulfilled. The state constant has to be used for state
  setup and removal. See the sketch after this list.

  Static allocation is also required when the state callbacks are not set
  up at runtime and are part of the initializer of the CPU hotplug state
  array in kernel/cpu.c.

* Dynamic allocation

  When there are no ordering requirements for the state callbacks then
  dynamic allocation is the preferred method. The state number is allocated
  by the setup function and returned to the caller on success.

  Only the PREPARE and ONLINE sections provide a dynamic allocation
  range. The STARTING section does not as most of the callbacks in that
  section have explicit ordering requirements.
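
For illustration, a statically allocated PREPARE-section state for a
hypothetical subsystem could be inserted roughly like this.
``CPUHP_SUBSYS_PREPARE`` is a made-up name; the surrounding constants are
existing entries in include/linux/cpuhotplug.h::

   enum cpuhp_state {
           CPUHP_INVALID = -1,
           CPUHP_OFFLINE = 0,
           ...
           /*
            * Hypothetical new static PREPARE-section state. Placing it
            * before the dynamic PREPARE range fixes its ordering
            * relative to the dynamically allocated states.
            */
           CPUHP_SUBSYS_PREPARE,
           CPUHP_BP_PREPARE_DYN,
           ...
           CPUHP_BRINGUP_CPU,
           ...
           CPUHP_ONLINE,
   };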

Setup of a CPU hotplug state
----------------------------

The core code provides the following functions to set up a state:

* cpuhp_setup_state(state, name, startup, teardown)
* cpuhp_setup_state_nocalls(state, name, startup, teardown)
* cpuhp_setup_state_cpuslocked(state, name, startup, teardown)
* cpuhp_setup_state_nocalls_cpuslocked(state, name, startup, teardown)

For cases where a driver or a subsystem has multiple instances and the same
CPU hotplug state callbacks need to be invoked for each instance, the CPU
hotplug core provides multi-instance support. The advantage over driver
specific instance lists is that the instance related functions are fully
serialized against CPU hotplug operations and provide the automatic
invocations of the state callbacks on add and removal. To set up such a
multi-instance state the following function is available:

* cpuhp_setup_state_multi(state, name, startup, teardown)

The @state argument is either a statically allocated state or one of the
constants for dynamically allocated states - CPUHP_BP_PREPARE_DYN,
CPUHP_AP_ONLINE_DYN - depending on the state section (PREPARE, ONLINE) for
which a dynamic state should be allocated.

The @name argument is used for sysfs output and for instrumentation. The
naming convention is "subsys:mode" or "subsys/driver:mode",
e.g. "perf:mode" or "perf/x86:mode". The common mode names are:

======== =======================================================
prepare  For states in the PREPARE section

dead     For states in the PREPARE section which do not provide
         a startup callback

starting For states in the STARTING section

dying    For states in the STARTING section which do not provide
         a startup callback

online   For states in the ONLINE section

offline  For states in the ONLINE section which do not provide
         a startup callback
======== =======================================================

As the @name argument is only used for sysfs and instrumentation, other mode
descriptors can be used as well if they describe the nature of the state
better than the common ones.

Examples for @name arguments: "perf/online", "perf/x86:prepare",
"RCU/tree:dying", "sched/waitempty"

The @startup argument is a function pointer to the callback which should be
invoked during a CPU online operation. If the usage site does not require a
startup callback, set the pointer to NULL.

The @teardown argument is a function pointer to the callback which should
be invoked during a CPU offline operation. If the usage site does not
require a teardown callback, set the pointer to NULL.
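
The callbacks are plain C functions which take the CPU number, plus the
instance's hlist_node for multi-instance states, and return 0 on success or
a negative error code. The subsys_* names below are placeholders::

   /* @startup / @teardown for cpuhp_setup_state*() */
   int subsys_cpu_online(unsigned int cpu);
   int subsys_cpu_offline(unsigned int cpu);

   /* @startup / @teardown for cpuhp_setup_state_multi() */
   int subsys_cpu_online_multi(unsigned int cpu, struct hlist_node *node);
   int subsys_cpu_offline_multi(unsigned int cpu, struct hlist_node *node);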

The functions differ in the way the installed callbacks are treated:

  * cpuhp_setup_state_nocalls(), cpuhp_setup_state_nocalls_cpuslocked()
    and cpuhp_setup_state_multi() only install the callbacks.

  * cpuhp_setup_state() and cpuhp_setup_state_cpuslocked() install the
    callbacks and invoke the @startup callback (if not NULL) for all online
    CPUs which currently have a state greater than the newly installed
    state. Depending on the state section the callback is either invoked on
    the current CPU (PREPARE section) or on each online CPU (ONLINE
    section) in the context of the CPU's hotplug thread.

    If a callback fails for CPU N then the teardown callback for CPUs
    0 .. N-1 is invoked to roll back the operation. The state setup fails,
    the callbacks for the state are not installed and in case of dynamic
    allocation the allocated state is freed.

The state setup and the callback invocations are serialized against CPU
hotplug operations. If the setup function has to be called from a CPU
hotplug read locked region, then the _cpuslocked() variants have to be
used. These functions cannot be used from within CPU hotplug callbacks.
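
A minimal sketch of the _cpuslocked() usage, assuming the caller already
holds the CPU hotplug read lock for other reasons; the subsys_* names are
placeholders::

   #include <linux/cpu.h>
   #include <linux/cpuhotplug.h>

   /* Placeholder callbacks. */
   static int subsys_cpu_online(unsigned int cpu)  { return 0; }
   static int subsys_cpu_offline(unsigned int cpu) { return 0; }

   static int subsys_init(void)
   {
           int ret;

           cpus_read_lock();
           /* ... other setup which must hold the hotplug read lock ... */
           ret = cpuhp_setup_state_cpuslocked(CPUHP_AP_ONLINE_DYN,
                                              "subsys:online",
                                              subsys_cpu_online,
                                              subsys_cpu_offline);
           cpus_read_unlock();

           /*
            * On success the dynamically allocated state number (> 0) is
            * returned; a real caller would save it for later removal.
            */
           return ret < 0 ? ret : 0;
   }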

The function return values:
  ======== ===================================================================
  0        Statically allocated state was successfully set up

  >0       Dynamically allocated state was successfully set up.

           The returned number is the state number which was allocated. If
           the state callbacks have to be removed later, e.g. module
           removal, then this number has to be saved by the caller and used
           as @state argument for the state remove function. For
           multi-instance states the dynamically allocated state number is
           also required as @state argument for the instance add/remove
           operations.

  <0       Operation failed
  ======== ===================================================================

Removal of a CPU hotplug state
------------------------------

To remove a previously set up state, the following functions are provided:

* cpuhp_remove_state(state)
* cpuhp_remove_state_nocalls(state)
* cpuhp_remove_state_nocalls_cpuslocked(state)
* cpuhp_remove_multi_state(state)

The @state argument is either a statically allocated state or the state
number which was allocated in the dynamic range by cpuhp_setup_state*(). If
the state is in the dynamic range, then the state number is freed and
available for dynamic allocation again.

The functions differ in the way the installed callbacks are treated:

  * cpuhp_remove_state_nocalls(), cpuhp_remove_state_nocalls_cpuslocked()
    and cpuhp_remove_multi_state() only remove the callbacks.

  * cpuhp_remove_state() removes the callbacks and invokes the teardown
    callback (if not NULL) for all online CPUs which currently have a state
    greater than the removed state. Depending on the state section the
    callback is either invoked on the current CPU (PREPARE section) or on
    each online CPU (ONLINE section) in the context of the CPU's hotplug
    thread.

    In order to complete the removal, the teardown callback should not fail.

The state removal and the callback invocations are serialized against CPU
hotplug operations. If the remove function has to be called from a CPU
hotplug read locked region, then the _cpuslocked() variants have to be
used. These functions cannot be used from within CPU hotplug callbacks.

If a multi-instance state is removed then the caller has to remove all
instances first.

Multi-Instance state instance management
----------------------------------------

Once the multi-instance state is set up, instances can be added to the
state:

  * cpuhp_state_add_instance(state, node)
  * cpuhp_state_add_instance_nocalls(state, node)

The @state argument is either a statically allocated state or the state
number which was allocated in the dynamic range by cpuhp_setup_state_multi().

The @node argument is a pointer to an hlist_node which is embedded in the
instance's data structure. The pointer is handed to the multi-instance
state callbacks and can be used by the callback to retrieve the instance
via container_of().
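
A minimal sketch of this embedding, with a hypothetical
``struct subsys_instance`` and callback name that are not part of any
existing API::

   #include <linux/cpuhotplug.h>
   #include <linux/list.h>
   #include <linux/kernel.h>

   /* Hypothetical per-device instance data. */
   struct subsys_instance {
           int irq;
           struct hlist_node node;      /* handed to the cpuhp core */
   };

   static int subsys_cpu_online(unsigned int cpu, struct hlist_node *node)
   {
           struct subsys_instance *inst =
                   container_of(node, struct subsys_instance, node);

           /* Per-instance work for the CPU coming online, e.g. using inst->irq. */
           return 0;
   }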

The functions differ in the way the installed callbacks are treated:

  * cpuhp_state_add_instance_nocalls() only adds the instance to the
    multi-instance state's node list.

  * cpuhp_state_add_instance() adds the instance and invokes the startup
    callback (if not NULL) associated with @state for all online CPUs which
    currently have a state greater than @state. The callback is only
    invoked for the instance which is being added. Depending on the state
    section the callback is either invoked on the current CPU (PREPARE
    section) or on each online CPU (ONLINE section) in the context of the
    CPU's hotplug thread.

    If a callback fails for CPU N then the teardown callback for CPUs
    0 .. N-1 is invoked to roll back the operation, the function fails and
    the instance is not added to the node list of the multi-instance state.

To remove an instance from the state's node list these functions are
available:

  * cpuhp_state_remove_instance(state, node)
  * cpuhp_state_remove_instance_nocalls(state, node)

The arguments are the same as for the cpuhp_state_add_instance*()
variants above.

The functions differ in the way the installed callbacks are treated:

  * cpuhp_state_remove_instance_nocalls() only removes the instance from the
    state's node list.

  * cpuhp_state_remove_instance() removes the instance and invokes the
    teardown callback (if not NULL) associated with @state for all online
    CPUs which currently have a state greater than @state. The callback is
    only invoked for the instance which is being removed. Depending on the
    state section the callback is either invoked on the current CPU (PREPARE
    section) or on each online CPU (ONLINE section) in the context of the
    CPU's hotplug thread.

    In order to complete the removal, the teardown callback should not fail.

The node list add/remove operations and the callback invocations are
serialized against CPU hotplug operations. These functions cannot be used
from within CPU hotplug callbacks and CPU hotplug read locked regions.

Examples
--------

Set up and tear down a statically allocated state in the STARTING section for
notifications on online and offline operations::

   ret = cpuhp_setup_state(CPUHP_SUBSYS_STARTING, "subsys:starting", subsys_cpu_starting, subsys_cpu_dying);
   if (ret < 0)
        return ret;
   ....
   cpuhp_remove_state(CPUHP_SUBSYS_STARTING);

Set up and tear down a dynamically allocated state in the ONLINE section
for notifications on offline operations::

   state = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "subsys:offline", NULL, subsys_cpu_offline);
   if (state < 0)
       return state;
   ....
   cpuhp_remove_state(state);

Set up and tear down a dynamically allocated state in the ONLINE section
for notifications on online operations without invoking the callbacks::

   state = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "subsys:online", subsys_cpu_online, NULL);
   if (state < 0)
       return state;
   ....
   cpuhp_remove_state_nocalls(state);

Set up, use and tear down a dynamically allocated multi-instance state in the
ONLINE section for notifications on online and offline operations::

   state = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "subsys:online", subsys_cpu_online, subsys_cpu_offline);
   if (state < 0)
       return state;
   ....
   ret = cpuhp_state_add_instance(state, &inst1->node);
   if (ret)
        return ret;
   ....
   ret = cpuhp_state_add_instance(state, &inst2->node);
   if (ret)
        return ret;
   ....
   cpuhp_state_remove_instance(state, &inst1->node);
   ....
   cpuhp_state_remove_instance(state, &inst2->node);
   ....
   cpuhp_remove_multi_state(state);


Testing of hotplug states
=========================

One way to verify whether a custom state is working as expected or not is to
shut down a CPU and then put it online again. It is also possible to put the
CPU into a certain state (for instance *CPUHP_AP_ONLINE*) and then go back to
*CPUHP_ONLINE*. This would simulate an error one state after *CPUHP_AP_ONLINE*
which would lead to rollback to the online state.

All registered states are enumerated in ``/sys/devices/system/cpu/hotplug/states``::

 $ tail /sys/devices/system/cpu/hotplug/states
 138: mm/vmscan:online
 139: mm/vmstat:online
 140: lib/percpu_cnt:online
 141: acpi/cpu-drv:online
 142: base/cacheinfo:online
 143: virtio/net:online
 144: x86/mce:online
 145: printk:online
 168: sched:active
 169: online

To roll back CPU4 to ``lib/percpu_cnt:online`` and then bring it back online,
just issue::

  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
  169
  $ echo 140 > /sys/devices/system/cpu/cpu4/hotplug/target
  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
  140

It is important to note that the teardown callbacks of all states above 140
have been invoked, while state 140 itself remains set up. And now get back
online::

  $ echo 169 > /sys/devices/system/cpu/cpu4/hotplug/target
  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
  169

With trace events enabled, the individual steps are visible, too::

  #  TASK-PID   CPU#    TIMESTAMP  FUNCTION
  #     | |       |        |         |
      bash-394  [001]  22.976: cpuhp_enter: cpu: 0004 target: 140 step: 169 (cpuhp_kick_ap_work)
   cpuhp/4-31   [004]  22.977: cpuhp_enter: cpu: 0004 target: 140 step: 168 (sched_cpu_deactivate)
   cpuhp/4-31   [004]  22.990: cpuhp_exit:  cpu: 0004  state: 168 step: 168 ret: 0
   cpuhp/4-31   [004]  22.991: cpuhp_enter: cpu: 0004 target: 140 step: 144 (mce_cpu_pre_down)
   cpuhp/4-31   [004]  22.992: cpuhp_exit:  cpu: 0004  state: 144 step: 144 ret: 0
   cpuhp/4-31   [004]  22.993: cpuhp_multi_enter: cpu: 0004 target: 140 step: 143 (virtnet_cpu_down_prep)
   cpuhp/4-31   [004]  22.994: cpuhp_exit:  cpu: 0004  state: 143 step: 143 ret: 0
   cpuhp/4-31   [004]  22.995: cpuhp_enter: cpu: 0004 target: 140 step: 142 (cacheinfo_cpu_pre_down)
   cpuhp/4-31   [004]  22.996: cpuhp_exit:  cpu: 0004  state: 142 step: 142 ret: 0
      bash-394  [001]  22.997: cpuhp_exit:  cpu: 0004  state: 140 step: 169 ret: 0
      bash-394  [005]  95.540: cpuhp_enter: cpu: 0004 target: 169 step: 140 (cpuhp_kick_ap_work)
   cpuhp/4-31   [004]  95.541: cpuhp_enter: cpu: 0004 target: 169 step: 141 (acpi_soft_cpu_online)
   cpuhp/4-31   [004]  95.542: cpuhp_exit:  cpu: 0004  state: 141 step: 141 ret: 0
   cpuhp/4-31   [004]  95.543: cpuhp_enter: cpu: 0004 target: 169 step: 142 (cacheinfo_cpu_online)
   cpuhp/4-31   [004]  95.544: cpuhp_exit:  cpu: 0004  state: 142 step: 142 ret: 0
   cpuhp/4-31   [004]  95.545: cpuhp_multi_enter: cpu: 0004 target: 169 step: 143 (virtnet_cpu_online)
   cpuhp/4-31   [004]  95.546: cpuhp_exit:  cpu: 0004  state: 143 step: 143 ret: 0
   cpuhp/4-31   [004]  95.547: cpuhp_enter: cpu: 0004 target: 169 step: 144 (mce_cpu_online)
   cpuhp/4-31   [004]  95.548: cpuhp_exit:  cpu: 0004  state: 144 step: 144 ret: 0
   cpuhp/4-31   [004]  95.549: cpuhp_enter: cpu: 0004 target: 169 step: 145 (console_cpu_notify)
   cpuhp/4-31   [004]  95.550: cpuhp_exit:  cpu: 0004  state: 145 step: 145 ret: 0
   cpuhp/4-31   [004]  95.551: cpuhp_enter: cpu: 0004 target: 169 step: 168 (sched_cpu_activate)
   cpuhp/4-31   [004]  95.552: cpuhp_exit:  cpu: 0004  state: 168 step: 168 ret: 0
      bash-394  [005]  95.553: cpuhp_exit:  cpu: 0004  state: 169 step: 140 ret: 0

As can be seen, CPU4 went down until timestamp 22.996 and then back up until
95.552. All invoked callbacks including their return codes are visible in the
trace.

Architecture's requirements
===========================

The following functions and configurations are required:

``CONFIG_HOTPLUG_CPU``
  This entry needs to be enabled in Kconfig.

``__cpu_up()``
  Arch interface to bring up a CPU.

``__cpu_disable()``
  Arch interface to shut down a CPU; no more interrupts can be handled by the
  kernel after the routine returns. This includes the shutdown of the timer.

``__cpu_die()``
  This is supposed to ensure the death of the CPU. Look at example code in
  other architectures that implement CPU hotplug. The processor is taken
  down from the ``idle()`` loop for that specific architecture.
  ``__cpu_die()`` typically waits for some per_cpu state to be set, to
  positively ensure that the processor dead routine has been called.
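
  A rough, purely illustrative sketch of such a handshake is shown below.
  The per-CPU ``cpu_dead_flag`` and the way it is set from the dying CPU's
  idle/dead path are made up for this example; real architectures use their
  own mechanisms (completions, firmware calls, etc.)::

   #include <linux/percpu.h>
   #include <linux/delay.h>
   #include <linux/printk.h>

   /*
    * Hypothetical per-CPU flag, set by the dying CPU at the end of its
    * arch specific idle/dead path.
    */
   static DEFINE_PER_CPU(int, cpu_dead_flag);

   void __cpu_die(unsigned int cpu)
   {
           int i;

           /* Wait for the dying CPU to report that its dead routine ran. */
           for (i = 0; i < 100; i++) {
                   if (READ_ONCE(per_cpu(cpu_dead_flag, cpu)))
                           return;
                   msleep(10);
           }
           pr_err("CPU%u did not die\n", cpu);
   }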

User Space Notification
=======================

After a CPU has been successfully onlined or offlined, udev events are sent.
A udev rule like::

  SUBSYSTEM=="cpu", DRIVERS=="processor", DEVPATH=="/devices/system/cpu/*", RUN+="the_hotplug_receiver.sh"

will receive all events. A script like::

  #!/bin/sh

  if [ "${ACTION}" = "offline" ]
  then
      echo "CPU ${DEVPATH##*/} offline"

  elif [ "${ACTION}" = "online" ]
  then
      echo "CPU ${DEVPATH##*/} online"

  fi

can process the event further.

When changes to the CPUs in the system occur, the sysfs file
/sys/devices/system/cpu/crash_hotplug contains '1' if the kernel
updates the kdump capture kernel list of CPUs itself (via elfcorehdr and
other relevant kexec segments), or '0' if userspace must update the kdump
capture kernel list of CPUs.

The availability depends on the CONFIG_HOTPLUG_CPU kernel configuration
option.

To skip userspace processing of CPU hot un/plug events for kdump
(i.e. the unload-then-reload to obtain a current list of CPUs), this sysfs
file can be used in a udev rule as follows::

 SUBSYSTEM=="cpu", ATTRS{crash_hotplug}=="1", GOTO="kdump_reload_end"

For a CPU hot un/plug event, if the architecture supports kernel updates
of the elfcorehdr (which contains the list of CPUs) and other relevant
kexec segments, then the rule skips the unload-then-reload of the kdump
capture kernel.

Kernel Inline Documentation Reference
======================================

.. kernel-doc:: include/linux/cpuhotplug.h
