
TOMOYO Linux Cross Reference
Linux/Documentation/virt/kvm/vcpu-requests.rst


Diff markup

Differences between /Documentation/virt/kvm/vcpu-requests.rst (Version linux-6.12-rc7) and /Documentation/virt/kvm/vcpu-requests.rst (Version linux-5.3.18)


.. SPDX-License-Identifier: GPL-2.0

=================
KVM VCPU Requests
=================

Overview
========

KVM supports an internal API enabling threads to request a VCPU thread to
perform some activity.  For example, a thread may request a VCPU to flush
its TLB with a VCPU request.  The API consists of the following functions::

  /* Check if any requests are pending for VCPU @vcpu. */
  bool kvm_request_pending(struct kvm_vcpu *vcpu);

  /* Check if VCPU @vcpu has request @req pending. */
  bool kvm_test_request(int req, struct kvm_vcpu *vcpu);

  /* Clear request @req for VCPU @vcpu. */
  void kvm_clear_request(int req, struct kvm_vcpu *vcpu);

  /*
   * Check if VCPU @vcpu has request @req pending. When the request is
   * pending it will be cleared and a memory barrier, which pairs with
   * another in kvm_make_request(), will be issued.
   */
  bool kvm_check_request(int req, struct kvm_vcpu *vcpu);

  /*
   * Make request @req of VCPU @vcpu. Issues a memory barrier, which pairs
   * with another in kvm_check_request(), prior to setting the request.
   */
  void kvm_make_request(int req, struct kvm_vcpu *vcpu);

  /* Make request @req of all VCPUs of the VM with struct kvm @kvm. */
  bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req);

Typically a requester wants the VCPU to perform the activity as soon
as possible after making the request.  This means most requests
(kvm_make_request() calls) are followed by a call to kvm_vcpu_kick(),
and kvm_make_all_cpus_request() has the kicking of all VCPUs built
into it.
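
As an illustration of how these pieces fit together, the sketch below shows
a requester pairing kvm_make_request() with kvm_vcpu_kick(), and the VCPU
run loop consuming the request with kvm_check_request().  The request name
KVM_REQ_EXAMPLE and the example_*() functions are hypothetical placeholders,
not part of the kernel::

  /* Requester side: ask @vcpu to do some work, then kick it. */
  static void example_request_work(struct kvm_vcpu *vcpu)
  {
          kvm_make_request(KVM_REQ_EXAMPLE, vcpu);  /* hypothetical request */
          kvm_vcpu_kick(vcpu);                      /* bring the VCPU out of guest mode */
  }

  /* VCPU side: called before (re)entering guest mode. */
  static void example_handle_requests(struct kvm_vcpu *vcpu)
  {
          if (!kvm_request_pending(vcpu))
                  return;

          /* Clears the bit and provides the read barrier when pending. */
          if (kvm_check_request(KVM_REQ_EXAMPLE, vcpu))
                  example_do_work(vcpu);            /* hypothetical handler */
  }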

VCPU Kicks
----------

The goal of a VCPU kick is to bring a VCPU thread out of guest mode in
order to perform some KVM maintenance.  To do so, an IPI is sent, forcing
a guest mode exit.  However, a VCPU thread may not be in guest mode at the
time of the kick.  Therefore, depending on the mode and state of the VCPU
thread, there are two other actions a kick may take.  All three actions
are listed below:

1) Send an IPI.  This forces a guest mode exit.
2) Waking a sleeping VCPU.  Sleeping VCPUs are VCPU threads outside guest
   mode that wait on waitqueues.  Waking them removes the threads from
   the waitqueues, allowing the threads to run again.  This behavior
   may be suppressed, see KVM_REQUEST_NO_WAKEUP below.
3) Nothing.  When the VCPU is not in guest mode and the VCPU thread is not
   sleeping, then there is nothing to do.
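
In rough pseudo-C, the choice between the three actions has the following
shape.  This is only a sketch of the decision, not the actual
kvm_vcpu_kick() implementation, and example_vcpu_kick() is a made-up name::

  static void example_vcpu_kick(struct kvm_vcpu *vcpu)
  {
          if (READ_ONCE(vcpu->mode) == IN_GUEST_MODE) {
                  /* 1) Send an IPI to force a guest mode exit. */
                  smp_send_reschedule(vcpu->cpu);
          } else if (kvm_vcpu_wake_up(vcpu)) {
                  /* 2) The VCPU was sleeping on a waitqueue and was woken. */
          } else {
                  /* 3) Outside guest mode and not sleeping: nothing to do. */
          }
  }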

VCPU Mode
---------

VCPUs have a mode state, ``vcpu->mode``, that is used to track whether the
guest is running in guest mode or not, as well as some specific
outside guest mode states.  The architecture may use ``vcpu->mode`` to
ensure VCPU requests are seen by VCPUs (see "Ensuring Requests Are Seen"),
as well as to avoid sending unnecessary IPIs (see "IPI Reduction"), and
even to ensure IPI acknowledgements are waited upon (see "Waiting for
Acknowledgements").  The following modes are defined:

OUTSIDE_GUEST_MODE

  The VCPU thread is outside guest mode.

IN_GUEST_MODE

  The VCPU thread is in guest mode.

EXITING_GUEST_MODE

  The VCPU thread is transitioning from IN_GUEST_MODE to
  OUTSIDE_GUEST_MODE.

READING_SHADOW_PAGE_TABLES

  The VCPU thread is outside guest mode, but it wants the sender of
  certain VCPU requests, namely KVM_REQ_TLB_FLUSH, to wait until the VCPU
  thread is done reading the page tables.
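
For orientation, these four modes are values of a plain enum that is stored
in ``vcpu->mode``; a minimal sketch, assuming the definition in
include/linux/kvm_host.h still has this shape::

  enum {
          OUTSIDE_GUEST_MODE,
          IN_GUEST_MODE,
          EXITING_GUEST_MODE,
          READING_SHADOW_PAGE_TABLES,
  };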

VCPU Request Internals
======================

VCPU requests are simply bit indices of the ``vcpu->requests`` bitmap.
This means general bitops, like those documented in [atomic-ops]_, could
also be used, e.g. ::

  clear_bit(KVM_REQ_UNBLOCK & KVM_REQUEST_MASK, &vcpu->requests);

However, VCPU request users should refrain from doing so, as it would
break the abstraction.  The first 8 bits are reserved for architecture
independent requests; all additional bits are available for architecture
dependent requests.
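
Staying within the abstraction, the same operation is normally expressed
through the request API shown in the Overview, which takes care of the
KVM_REQUEST_MASK masking internally::

  kvm_clear_request(KVM_REQ_UNBLOCK, vcpu);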

Architecture Independent Requests
---------------------------------

KVM_REQ_TLB_FLUSH

  KVM's common MMU notifier may need to flush all of a guest's TLB
  entries, calling kvm_flush_remote_tlbs() to do so.  Architectures that
  choose to use the common kvm_flush_remote_tlbs() implementation will
  need to handle this VCPU request.

KVM_REQ_VM_DEAD

  This request informs all VCPUs that the VM is dead and unusable, e.g. due to
  fatal error or because the VM's state has been intentionally destroyed.

KVM_REQ_UNBLOCK

  This request informs the vCPU to exit kvm_vcpu_block.  It is used for
  example from timer handlers that run on the host on behalf of a vCPU,
  or in order to update the interrupt routing and ensure that assigned
  devices will wake up the vCPU.

KVM_REQ_OUTSIDE_GUEST_MODE

  This "request" ensures the target vCPU has exited guest mode prior to the
  sender of the request continuing on.  No action needs be taken by the target,
  and so no request is actually logged for the target.  This request is similar
  to a "kick", but unlike a kick it guarantees the vCPU has actually exited
  guest mode.  A kick only guarantees the vCPU will exit at some point in the
  future, e.g. a previous kick may have started the process, but there's no
  guarantee the to-be-kicked vCPU has fully exited guest mode.

The linux-5.3.18 side of this hunk documents three requests that do not
appear in the linux-6.12-rc7 text:

KVM_REQ_MMU_RELOAD

  When shadow page tables are used and memory slots are removed it's
  necessary to inform each VCPU to completely refresh the tables.  This
  request is used for that.

KVM_REQ_PENDING_TIMER

  This request may be made from a timer handler run on the host on behalf
  of a VCPU.  It informs the VCPU thread to inject a timer interrupt.

KVM_REQ_UNHALT

  This request may be made from the KVM common function kvm_vcpu_block(),
  which is used to emulate an instruction that causes a CPU to halt until
  one of an architectural specific set of events and/or interrupts is
  received (determined by checking kvm_arch_vcpu_runnable()).  When that
  event or interrupt arrives kvm_vcpu_block() makes the request.  This is
  in contrast to when kvm_vcpu_block() returns due to any other reason,
  such as a pending signal, which does not indicate the VCPU's halt
  emulation should stop, and therefore does not make the request.

KVM_REQUEST_MASK
----------------

VCPU requests should be masked by KVM_REQUEST_MASK before using them with
bitops.  This is because only the lower 8 bits are used to represent the
request's number.  The upper bits are used as flags.  Currently only two
flags are defined.

VCPU Request Flags
------------------

KVM_REQUEST_NO_WAKEUP

  This flag is applied to requests that only need immediate attention
  from VCPUs running in guest mode.  That is, sleeping VCPUs do not need
  to be awakened for these requests.  Sleeping VCPUs will handle the
  requests when they are awakened later for some other reason.

KVM_REQUEST_WAIT

  When requests with this flag are made with kvm_make_all_cpus_request(),
  then the caller will wait for each VCPU to acknowledge its IPI before
  proceeding.  This flag only applies to VCPUs that would receive IPIs.
  If, for example, the VCPU is sleeping, so no IPI is necessary, then
  the requesting thread does not wait.  This means that this flag may be
  safely combined with KVM_REQUEST_NO_WAKEUP.  See "Waiting for
  Acknowledgements" for more information about requests with
  KVM_REQUEST_WAIT.
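
To make the layout concrete, the sketch below defines a hypothetical flagged
request (both the name KVM_REQ_EXAMPLE_FLAGGED and the bit number 9 are made
up for illustration) and masks the flags off before using the value with a
bitop, as required above::

  /*
   * Hypothetical request: bit index 9 in vcpu->requests, the kicker waits
   * for IPI acknowledgement, and sleeping VCPUs are not woken up for it.
   */
  #define KVM_REQ_EXAMPLE_FLAGGED \
          (9 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)

  /* Only the lower 8 bits name the bit; the flag bits must be masked off. */
  clear_bit(KVM_REQ_EXAMPLE_FLAGGED & KVM_REQUEST_MASK, &vcpu->requests);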

VCPU Requests with Associated State
===================================

Requesters that want the receiving VCPU to handle new state need to ensure
the newly written state is observable to the receiving VCPU thread's CPU
by the time it observes the request.  This means a write memory barrier
must be inserted after writing the new state and before setting the VCPU
request bit.  Additionally, on the receiving VCPU thread's side, a
corresponding read barrier must be inserted after reading the request bit
and before proceeding to read the new state associated with it.  See
scenario 3, Message and Flag, of [lwn-mb]_ and the kernel documentation
[memory-barriers]_.

The pair of functions, kvm_check_request() and kvm_make_request(), provide
the memory barriers, allowing this requirement to be handled internally by
the API.
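
A sketch of the resulting pattern, using a hypothetical
``vcpu->arch.new_state`` field, the hypothetical KVM_REQ_EXAMPLE request and
made-up example_*() helpers; the barriers are the ones kvm_make_request()
and kvm_check_request() already issue, so no explicit smp_wmb()/smp_rmb()
appears::

  /* Requester: publish the state, then set the request bit. */
  static void example_publish(struct kvm_vcpu *vcpu, u64 value)
  {
          vcpu->arch.new_state = value;             /* hypothetical field */
          kvm_make_request(KVM_REQ_EXAMPLE, vcpu);  /* barrier, then set bit */
          kvm_vcpu_kick(vcpu);
  }

  /* Receiver: observe the request bit, then read the state. */
  static void example_consume(struct kvm_vcpu *vcpu)
  {
          if (kvm_check_request(KVM_REQ_EXAMPLE, vcpu))  /* barrier after read */
                  example_use(vcpu->arch.new_state);     /* hypothetical */
  }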

Ensuring Requests Are Seen
==========================

When making requests to VCPUs, we want to avoid the receiving VCPU
executing in guest mode for an arbitrarily long time without handling the
request.  We can be sure this won't happen as long as we ensure the VCPU
thread checks kvm_request_pending() before entering guest mode and that a
kick will send an IPI to force an exit from guest mode when necessary.
Extra care must be taken to cover the period after the VCPU thread's last
kvm_request_pending() check and before it has entered guest mode, as kick
IPIs will only trigger guest mode exits for VCPU threads that are in guest
mode or at least have already disabled interrupts in order to prepare to
enter guest mode.  This means that an optimized implementation (see "IPI
Reduction") must be certain when it's safe to not send the IPI.  One
solution, which all architectures except s390 apply, is to:

- set ``vcpu->mode`` to IN_GUEST_MODE between disabling the interrupts and
  the last kvm_request_pending() check;
- enable interrupts atomically when entering the guest.

This solution also requires memory barriers to be placed carefully in both
the requesting thread and the receiving VCPU.  With the memory barriers we
can exclude the possibility of a VCPU thread observing
!kvm_request_pending() on its last check and then not receiving an IPI for
the next request made of it, even if the request is made immediately after
the check.  This is done by way of the Dekker memory barrier pattern
(scenario 10 of [lwn-mb]_).  As the Dekker pattern requires two variables,
this solution pairs ``vcpu->mode`` with ``vcpu->requests``.  Substituting
them into the pattern gives::

  CPU1                                    CPU2
  =================                       =================
  local_irq_disable();
  WRITE_ONCE(vcpu->mode, IN_GUEST_MODE);  kvm_make_request(REQ, vcpu);
  smp_mb();                               smp_mb();
  if (kvm_request_pending(vcpu)) {        if (READ_ONCE(vcpu->mode) ==
                                              IN_GUEST_MODE) {
      ...abort guest entry...                 ...send IPI...
  }                                       }

As stated above, the IPI is only useful for VCPU threads in guest mode or
that have already disabled interrupts.  This is why this specific case of
the Dekker pattern has been extended to disable interrupts before setting
``vcpu->mode`` to IN_GUEST_MODE.  WRITE_ONCE() and READ_ONCE() are used to
pedantically implement the memory barrier pattern, guaranteeing the
compiler doesn't interfere with ``vcpu->mode``'s carefully planned
accesses.
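
Put together as a single sequence on the VCPU entry side, the ordering looks
roughly like the sketch below.  This is a simplified sketch of the idea
only; example_enter_guest() is a made-up name and real entry paths do
considerably more::

  static void example_enter_guest(struct kvm_vcpu *vcpu)
  {
          local_irq_disable();

          /* From this point on, a kick is guaranteed to send an IPI. */
          WRITE_ONCE(vcpu->mode, IN_GUEST_MODE);

          /* Pairs with the barrier in kvm_make_request(). */
          smp_mb();

          if (kvm_request_pending(vcpu)) {
                  /* Abort the entry and handle the requests instead. */
                  WRITE_ONCE(vcpu->mode, OUTSIDE_GUEST_MODE);
                  local_irq_enable();
                  return;
          }

          /* ...enter the guest, enabling interrupts atomically... */
  }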

IPI Reduction
-------------

As only one IPI is needed to get a VCPU to check for any/all requests,
they may be coalesced.  This is easily done by having the first IPI
sending kick also change the VCPU mode to something !IN_GUEST_MODE.  The
transitional state, EXITING_GUEST_MODE, is used for this purpose.
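
One way to express that first-kicker-only behavior is an atomic
compare-and-exchange on ``vcpu->mode``; a minimal sketch of the idea, not a
quote of the actual kick path::

  /* Only the kicker that wins the IN_GUEST_MODE transition sends the IPI. */
  if (cmpxchg(&vcpu->mode, IN_GUEST_MODE, EXITING_GUEST_MODE) == IN_GUEST_MODE)
          smp_send_reschedule(vcpu->cpu);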

Waiting for Acknowledgements
----------------------------

Some requests, those with the KVM_REQUEST_WAIT flag set, require IPIs to
be sent, and the acknowledgements to be waited upon, even when the target
VCPU threads are in modes other than IN_GUEST_MODE.  For example, one case
is when a target VCPU thread is in READING_SHADOW_PAGE_TABLES mode, which
is set after disabling interrupts.  To support these cases, the
KVM_REQUEST_WAIT flag changes the condition for sending an IPI from
checking that the VCPU is IN_GUEST_MODE to checking that it is not
OUTSIDE_GUEST_MODE.
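
In other words, the per-VCPU IPI decision can be thought of as the
following condition, a sketch of the logic described above rather than a
quote of the kvm_make_all_cpus_request() implementation::

  /* @req may carry the KVM_REQUEST_WAIT flag; @mode is vcpu->mode. */
  static bool example_should_send_ipi(unsigned int req, int mode)
  {
          if (req & KVM_REQUEST_WAIT)
                  return mode != OUTSIDE_GUEST_MODE;

          return mode == IN_GUEST_MODE;
  }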

Request-less VCPU Kicks
-----------------------

As the determination of whether or not to send an IPI depends on the
two-variable Dekker memory barrier pattern, it's clear that
request-less VCPU kicks are almost never correct.  Without the assurance
that a non-IPI generating kick will still result in an action by the
receiving VCPU, as the final kvm_request_pending() check does for
request-accompanying kicks, the kick may not do anything useful at
all.  If, for instance, a request-less kick was made to a VCPU that was
just about to set its mode to IN_GUEST_MODE, meaning no IPI is sent, then
the VCPU thread may continue its entry without actually having done
whatever it was the kick was meant to initiate.

One exception is x86's posted interrupt mechanism.  In this case, however,
even the request-less VCPU kick is coupled with the same
local_irq_disable() + smp_mb() pattern described above; the ON bit
(Outstanding Notification) in the posted interrupt descriptor takes the
role of ``vcpu->requests``.  When sending a posted interrupt, PIR.ON is
set before reading ``vcpu->mode``; dually, in the VCPU thread,
vmx_sync_pir_to_irr() reads PIR after setting ``vcpu->mode`` to
IN_GUEST_MODE.

Additional Considerations
=========================

Sleeping VCPUs
--------------

VCPU threads may need to consider requests before and/or after calling
functions that may put them to sleep, e.g. kvm_vcpu_block().  Whether they
do or not, and, if they do, which requests need consideration, is
architecture dependent.  kvm_vcpu_block() calls kvm_arch_vcpu_runnable()
to check if it should awaken.  One reason to do so is to provide
architectures a function where requests may be checked if necessary.
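
For example, an architecture that wants a sleeping VCPU to wake up when a
particular request arrives could check for it in its
kvm_arch_vcpu_runnable() implementation; a sketch only, with KVM_REQ_EXAMPLE
again standing in for a real architecture request and
example_arch_has_pending_events() purely hypothetical::

  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
  {
          /* Wake up when the hypothetical request is pending... */
          if (kvm_test_request(KVM_REQ_EXAMPLE, vcpu))
                  return 1;

          /* ...or for whatever other reasons the architecture defines. */
          return example_arch_has_pending_events(vcpu);
  }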

The linux-5.3.18 text has one additional section here that is not present
in linux-6.12-rc7:

Clearing Requests
-----------------

Generally it only makes sense for the receiving VCPU thread to clear a
request.  However, in some circumstances, such as when the requesting
thread and the receiving VCPU thread are executed serially, such as when
they are the same thread, or when they are using some form of concurrency
control to temporarily execute synchronously, then it's possible to know
that the request may be cleared immediately, rather than waiting for the
receiving VCPU thread to handle the request in VCPU RUN.  The only current
examples of this are kvm_vcpu_block() calls made by VCPUs to block
themselves.  A possible side-effect of that call is to make the
KVM_REQ_UNHALT request, which may then be cleared immediately when the
VCPU returns from the call.

References
==========

.. [atomic-ops] Documentation/atomic_bitops.txt
   (linux-5.3.18: Documentation/core-api/atomic_ops.rst)
.. [memory-barriers] Documentation/memory-barriers.txt
.. [lwn-mb] https://lwn.net/Articles/573436/
