
TOMOYO Linux Cross Reference
Linux/Documentation/RCU/NMI-RCU.rst


Diff markup

Differences between /Documentation/RCU/NMI-RCU.rst (Version linux-6.12-rc7) and /Documentation/RCU/NMI-RCU.rst (Version linux-5.9.16)


.. _NMI_rcu_doc:

Using RCU to Protect Dynamic NMI Handlers
=========================================


Although RCU is usually used to protect read-mostly data structures,
it is possible to use RCU to provide dynamic non-maskable interrupt
handlers, as well as dynamic irq handlers.  This document describes
how to do this, drawing loosely from Zwane Mwaikambo's NMI-timer
work in an old version of "arch/x86/kernel/traps.c".

[In linux-5.9.16 the last sentence instead ends: work in
"arch/x86/oprofile/nmi_timer_int.c" and in "arch/x86/kernel/traps.c".]

The relevant pieces of code are listed below, each followed by a
brief explanation::

        static int dummy_nmi_callback(struct pt_regs *regs, int cpu)
        {
                return 0;
        }

The dummy_nmi_callback() function is a "dummy" NMI handler that does
nothing, but returns zero, thus saying that it did nothing, allowing
the NMI handler to take the default machine-specific action::

        static nmi_callback_t nmi_callback = dummy_nmi_callback;

This nmi_callback variable is a global function pointer to the current
NMI handler::

        void do_nmi(struct pt_regs * regs, long error_code)
        {
                int cpu;

                nmi_enter();

                cpu = smp_processor_id();
                ++nmi_count(cpu);

                if (!rcu_dereference_sched(nmi_callback)(regs, cpu))
                        default_do_nmi(regs);

                nmi_exit();
        }

The do_nmi() function processes each NMI.  It first disables preemption
in the same way that a hardware irq would, then increments the per-CPU
count of NMIs.  It then invokes the NMI handler stored in the nmi_callback
function pointer.  If this handler returns zero, do_nmi() invokes the
default_do_nmi() function to handle a machine-specific NMI.  Finally,
preemption is restored.

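For example, a registered handler might look like the following sketch
(my_nmi_callback() and handled_count are made-up names, not taken from
the kernel tree).  Returning nonzero tells do_nmi() that the NMI was
handled, so default_do_nmi() is skipped::

        /* Hypothetical handler: count the NMIs this handler claims. */
        static atomic_t handled_count = ATOMIC_INIT(0);

        static int my_nmi_callback(struct pt_regs *regs, int cpu)
        {
                atomic_inc(&handled_count);
                return 1;   /* nonzero: handled, skip default_do_nmi() */
        }
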
In theory, rcu_dereference_sched() is not needed, since this code runs
only on i386, which in theory does not need rcu_dereference_sched()
anyway.  However, in practice it is a good documentation aid, particularly
for anyone attempting to do something similar on Alpha or on systems
with aggressive optimizing compilers.

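As an editorial aside (this fragment is not from the kernel tree),
compare a plain call through the pointer with the
rcu_dereference_sched() form used in do_nmi() above; both compile, but
only the latter tells the reader, and the compiler, how the pointer is
protected::

        /* Plain call through the pointer: nothing tells the reader
         * (or the compiler) that nmi_callback is RCU-protected. */
        if (!nmi_callback(regs, cpu))
                default_do_nmi(regs);

        /* The form used above: documents the RCU-sched protection and
         * keeps Alpha and aggressive optimizers from causing trouble. */
        if (!rcu_dereference_sched(nmi_callback)(regs, cpu))
                default_do_nmi(regs);
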
Quick Quiz:
                Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only?

:ref:`Answer to Quick Quiz <answer_quick_quiz_NMI>`

Back to the discussion of NMI and RCU::

        void set_nmi_callback(nmi_callback_t callback)
        {
                rcu_assign_pointer(nmi_callback, callback);
        }

The set_nmi_callback() function registers an NMI handler.  Note that any
data that is to be used by the callback must be initialized -before-
the call to set_nmi_callback(); a registration sketch illustrating this
ordering appears after the unset_nmi_callback() listing below.  On
architectures that do not order writes, the rcu_assign_pointer() ensures
that the NMI handler sees the initialized values::

        void unset_nmi_callback(void)
        {
                rcu_assign_pointer(nmi_callback, dummy_nmi_callback);
        }

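A registration following the ordering rule above might look like this
sketch (struct my_nmi_state and my_nmi_register() are hypothetical;
my_nmi_callback() is the hypothetical handler sketched earlier, and
my_nmi_data is the pointer that the teardown example below passes to
kfree())::

        /* Sketch only: struct my_nmi_state and my_nmi_register()
         * are hypothetical names. */
        struct my_nmi_state {
                int threshold;
        };
        static struct my_nmi_state *my_nmi_data;

        int my_nmi_register(void)
        {
                struct my_nmi_state *p = kmalloc(sizeof(*p), GFP_KERNEL);

                if (!p)
                        return -ENOMEM;
                p->threshold = 10;                  /* initialize everything first ... */
                my_nmi_data = p;
                set_nmi_callback(my_nmi_callback);  /* ... then publish the handler;
                                                     * the rcu_assign_pointer() inside
                                                     * orders the stores above before
                                                     * the pointer becomes visible. */
                return 0;
        }
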
The unset_nmi_callback() function unregisters an NMI handler, restoring
the original dummy_nmi_callback().  However, there may well be an NMI
handler currently executing on some other CPU.  We therefore cannot free
up any data structures used by the old NMI handler until execution
of it completes on all other CPUs.

One way to accomplish this is via synchronize_rcu(), perhaps as
follows::

        unset_nmi_callback();
        synchronize_rcu();
        kfree(my_nmi_data);

This works because (as of v4.20) synchronize_rcu() blocks until all
CPUs complete any preemption-disabled segments of code that they were
executing.  Since NMI handlers disable preemption, synchronize_rcu()
is guaranteed not to return until all ongoing NMI handlers exit.  It
is therefore safe to free up the handler's data as soon as
synchronize_rcu() returns.

Important note: for this to work, the architecture in question must
invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.

.. _answer_quick_quiz_NMI:

Answer to Quick Quiz:
        Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only?

        The caller to set_nmi_callback() might well have
        initialized some data that is to be used by the new NMI
        handler.  In this case, the rcu_dereference_sched() would
        be needed, because otherwise a CPU that received an NMI
        just after the new handler was set might see the pointer
        to the new NMI handler, but the old, pre-initialization
        version of the handler's data.

        This same sad story can happen on other CPUs when using
        a compiler with aggressive pointer-value speculation
        optimizations.  (But please don't!)

[The "(But please don't!)" aside is present only in linux-6.12-rc7;
linux-5.9.16 ends the sentence at "optimizations."]

        More important, the rcu_dereference_sched() makes it
        clear to someone reading the code that the pointer is
        being protected by RCU-sched.
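
To make the pairing concrete, here is a generic editorial sketch (struct
my_state, my_state_ptr, p, q, and do_something() are hypothetical) of the
publish/subscribe ordering the answer relies on: the writer initializes
the data before publishing the pointer, and rcu_dereference_sched() on
the reader side supplies the ordering that Alpha needs in order to also
observe the initialized values::

        /* Hypothetical names; not from the kernel tree. */
        struct my_state {
                int threshold;
        };
        static struct my_state __rcu *my_state_ptr;

        /* Writer (p points to the handler's data): */
        p->threshold = 10;                        /* initialize the data ...      */
        rcu_assign_pointer(my_state_ptr, p);      /* ... then publish the pointer */

        /* Reader (preemption disabled, e.g. between nmi_enter()/nmi_exit()): */
        q = rcu_dereference_sched(my_state_ptr);  /* ordered load, even on Alpha  */
        if (q)
                do_something(q->threshold);       /* sees 10, never the stale,
                                                   * pre-initialization contents */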
                                                      
