TOMOYO Linux Cross Reference
Linux/Documentation/virt/hyperv/overview.rst


.. SPDX-License-Identifier: GPL-2.0

Overview
========
The Linux kernel contains a variety of code for running as a fully
enlightened guest on Microsoft's Hyper-V hypervisor.  Hyper-V
consists primarily of a bare-metal hypervisor plus a virtual machine
management service running in the parent partition (roughly
equivalent to KVM and QEMU, for example).  Guest VMs run in child
partitions.  In this documentation, references to Hyper-V usually
encompass both the hypervisor and the VMM service without making a
distinction about which functionality is provided by which
component.

Hyper-V runs on x86/x64 and arm64 architectures, and Linux guests
are supported on both.  The functionality and behavior of Hyper-V is
generally the same on both architectures unless noted otherwise.

Linux Guest Communication with Hyper-V
--------------------------------------
Linux guests communicate with Hyper-V in four different ways:

* Implicit traps: As defined by the x86/x64 or arm64 architecture,
  some guest actions trap to Hyper-V.  Hyper-V emulates the action and
  returns control to the guest.  This behavior is generally invisible
  to the Linux kernel.
* Explicit hypercalls: Linux makes an explicit function call to
  Hyper-V, passing parameters.  Hyper-V performs the requested action
  and returns control to the caller.  Parameters are passed in
  processor registers or in memory shared between the Linux guest and
  Hyper-V.  On x86/x64, hypercalls use a Hyper-V specific calling
  sequence.  On arm64, hypercalls use the ARM standard SMCCC calling
  sequence.

* Synthetic register access: Hyper-V implements a variety of
  synthetic registers.  On x86/x64 these registers appear as MSRs in
  the guest, and the Linux kernel can read or write these MSRs using
  the normal mechanisms defined by the x86/x64 architecture.  On
  arm64, these synthetic registers must be accessed using explicit
  hypercalls.

* VMBus: VMBus is a higher-level software construct that is built on
  the other 3 mechanisms.  It is a message passing interface between
  the Hyper-V host and the Linux guest.  It uses memory that is shared
  between Hyper-V and the guest, along with various signaling
  mechanisms.
 48 
The first three communication mechanisms are documented in the
`Hyper-V Top Level Functional Spec (TLFS)`_.  The TLFS describes
general Hyper-V functionality and provides details on the hypercalls
and synthetic registers.  The TLFS is currently written for the
x86/x64 architecture only.

.. _Hyper-V Top Level Functional Spec (TLFS): https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/tlfs/tlfs

VMBus is not documented.  This documentation provides a high-level
overview of VMBus and how it works, but the details can be discerned
only from the code.
 60 
Sharing Memory
--------------
Many aspects of communication between Hyper-V and Linux are based
on sharing memory.  Such sharing is generally accomplished as
follows:

* Linux allocates memory from its physical address space using
  standard Linux mechanisms.

* Linux tells Hyper-V the guest physical address (GPA) of the
  allocated memory.  Many shared areas are kept to 1 page so that a
  single GPA is sufficient.  Larger shared areas require a list of
  GPAs, which usually do not need to be contiguous in the guest
  physical address space.  How Hyper-V is told about the GPA or list
  of GPAs varies.  In some cases, a single GPA is written to a
  synthetic register.  In other cases, a GPA or list of GPAs is sent
  in a VMBus message.

* Hyper-V translates the GPAs into "real" physical memory addresses,
  and creates a virtual mapping that it can use to access the memory.

* Linux can later revoke sharing it has previously established by
  telling Hyper-V to set the shared GPA to zero.
 84 
Hyper-V operates with a page size of 4 Kbytes.  GPAs communicated to
Hyper-V may be in the form of page numbers, and always describe a
range of 4 Kbytes.  Since the Linux guest page size on x86/x64 is
also 4 Kbytes, the mapping from guest page to Hyper-V page is 1-to-1.
On arm64, Hyper-V supports guests with 4/16/64 Kbyte pages as
defined by the arm64 architecture.  If Linux is using 16 or 64
Kbyte pages, Linux code must be careful to communicate with Hyper-V
only in terms of 4 Kbyte pages.  HV_HYP_PAGE_SIZE and related macros
are used in code that communicates with Hyper-V so that it works
correctly in all configurations.
 95 
As described in the TLFS, a few memory pages shared between Hyper-V
and the Linux guest are "overlay" pages.  With overlay pages, Linux
uses the usual approach of allocating guest memory and telling
Hyper-V the GPA of the allocated memory.  But Hyper-V then replaces
that physical memory page with a page it has allocated, and the
original physical memory page is no longer accessible in the guest
VM.  Linux may access the memory normally as if it were the memory
that it originally allocated.  The "overlay" behavior is visible
only because the contents of the page (as seen by Linux) change at
the time that Linux originally establishes the sharing and the
overlay page is inserted.  Similarly, the contents change if Linux
revokes the sharing, in which case Hyper-V removes the overlay page,
and the guest page originally allocated by Linux becomes visible
again.
110 
Before Linux does a kexec to a kdump kernel or any other kernel,
memory shared with Hyper-V should be revoked.  Hyper-V could modify
a shared page or remove an overlay page after the new kernel is
using the page for a different purpose, corrupting the new kernel.
Hyper-V does not provide a single "set everything" operation to
guest VMs, so Linux code must individually revoke all sharing before
doing kexec.  See hv_kexec_handler() and hv_crash_handler().  But
the crash/panic path still has holes in cleanup because some shared
pages are set using per-CPU synthetic registers and there's no
mechanism to revoke the shared pages for CPUs other than the CPU
running the panic path.
122 
CPU Management
--------------
Hyper-V does not have the ability to hot-add or hot-remove a CPU
from a running VM.  However, Windows Server 2019 Hyper-V and
earlier versions may provide guests with ACPI tables that indicate
more CPUs than are actually present in the VM.  As is normal, Linux
treats these additional CPUs as potential hot-add CPUs, and reports
them as such even though Hyper-V will never actually hot-add them.
Starting in Windows Server 2022 Hyper-V, the ACPI tables reflect
only the CPUs actually present in the VM, so Linux does not report
any hot-add CPUs.

A Linux guest CPU may be taken offline using the normal Linux
mechanisms, provided no VMBus channel interrupts are assigned to
the CPU.  See the section on VMBus Interrupts for more details
on how VMBus channel interrupts can be re-assigned to permit
taking a CPU offline.
140 
32-bit and 64-bit
-----------------
On x86/x64, Hyper-V supports 32-bit and 64-bit guests, and Linux
will build and run in either version.  While the 32-bit version is
expected to work, it is used rarely and may suffer from undetected
regressions.

On arm64, Hyper-V supports only 64-bit guests.
149 
Endian-ness
-----------
All communication between Hyper-V and guest VMs uses Little-Endian
format on both x86/x64 and arm64.  Big-endian format on arm64 is not
supported by Hyper-V, and Linux code does not use endian-ness macros
when accessing data shared with Hyper-V.
156 
Versioning
----------
Current Linux kernels operate correctly with older versions of
Hyper-V back to Windows Server 2012 Hyper-V.  Support for running
on the original Hyper-V release in Windows Server 2008/2008 R2
has been removed.

A Linux guest on Hyper-V outputs in dmesg the version of Hyper-V
it is running on.  This version is in the form of a Windows build
number and is for display purposes only.  Linux code does not
test this version number at runtime to determine available features
and functionality.  Hyper-V indicates feature/function availability
via flags in synthetic MSRs that Hyper-V provides to the guest,
and the guest code tests these flags.
171 
VMBus has its own protocol version that is negotiated during the
initial VMBus connection from the guest to Hyper-V.  This version
number is also output to dmesg during boot.  This version number
is checked in a few places in the code to determine if specific
functionality is present.

Furthermore, each synthetic device on VMBus also has a protocol
version that is separate from the VMBus protocol version.  Device
drivers for these synthetic devices typically negotiate the device
protocol version, and may test that protocol version to determine
if specific device functionality is present.
183 
Code Packaging
--------------
Hyper-V related code appears in the Linux kernel code tree in three
main areas:

1. drivers/hv

2. arch/x86/hyperv and arch/arm64/hyperv

3. individual device driver areas such as drivers/scsi, drivers/net,
   drivers/clocksource, etc.

A few miscellaneous files appear elsewhere.  See the full list under
"Hyper-V/Azure CORE AND DRIVERS" and "DRM DRIVER FOR HYPERV
SYNTHETIC VIDEO DEVICE" in the MAINTAINERS file.

The code in #1 and #2 is built only when CONFIG_HYPERV is set.
Similarly, the code for most Hyper-V related drivers is built only
when CONFIG_HYPERV is set.

Most Hyper-V related code in #1 and #3 can be built as a module.
The architecture specific code in #2 must be built-in.  Also,
drivers/hv/hv_common.c is low-level code that is common across
architectures and must be built-in.
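
A typical guest configuration might therefore look like the fragment
below.  This is an illustrative subset only; the exact set of
CONFIG_HYPERV* symbols available varies by kernel version and
architecture:

```
CONFIG_HYPERV=m
CONFIG_HYPERV_UTILS=m
CONFIG_HYPERV_BALLOON=m
CONFIG_HYPERV_STORAGE=m
CONFIG_HYPERV_NET=m
CONFIG_HYPERV_KEYBOARD=y
CONFIG_PCI_HYPERV=m
```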
