.. SPDX-License-Identifier: GPL-2.0

==============
Nitro Enclaves
==============

Overview
========

Nitro Enclaves (NE) is a new Amazon Elastic Compute Cloud (EC2) capability
that allows customers to carve out isolated compute environments within EC2
instances [1].

For example, an application that processes sensitive data and runs in a VM
can be separated from other applications running in the same VM. This
application then runs in a VM separate from the primary VM, namely an enclave.
It runs alongside the VM that spawned it. This setup matches the needs of
low latency applications.

The currently supported architectures for the NE kernel driver, available in
the upstream Linux kernel, are x86 and ARM64.

The resources that are allocated for the enclave, such as memory and CPUs, are
carved out of the primary VM. Each enclave is mapped to a process running in
the primary VM that communicates with the NE kernel driver via an ioctl
interface.

In this sense, there are two components:

1. An enclave abstraction process - a user space process running in the primary
VM guest that uses the provided ioctl interface of the NE driver to spawn an
enclave VM (that's 2 below).

There is an NE emulated PCI device exposed to the primary VM. The driver for
this new PCI device is included in the NE driver.

The ioctl logic is mapped to PCI device commands e.g. the NE_START_ENCLAVE
ioctl maps to an enclave start PCI command. The PCI device commands are then
translated into actions taken on the hypervisor side; that's the Nitro
hypervisor running on the host where the primary VM is running. The Nitro
hypervisor is based on core KVM technology. A minimal sketch of this ioctl
flow is shown after the description of the two components below.

2. The enclave itself - a VM running on the same host as the primary VM that
spawned it. Memory and CPUs are carved out of the primary VM and are dedicated
for the enclave VM. An enclave does not have persistent storage attached.
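
The sketch below illustrates this ioctl flow from the enclave abstraction
process. It assumes the NE_CREATE_VM ioctl and the /dev/nitro_enclaves misc
device as exposed through the uapi header <linux/nitro_enclaves.h>; check that
header and the NE sample code in the kernel tree for the authoritative usage::

  /*
   * Minimal sketch: ask the NE driver to create an enclave VM. The driver
   * turns this into the corresponding PCI device command sent to the
   * Nitro hypervisor.
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  #include <linux/nitro_enclaves.h>
  #include <linux/types.h>

  int main(void)
  {
          __u64 slot_uid = 0;
          int enclave_fd = -1;
          int ne_dev_fd = open("/dev/nitro_enclaves", O_RDWR | O_CLOEXEC);

          if (ne_dev_fd < 0)
                  return 1;

          /* Returns an enclave fd and fills in the enclave slot unique id. */
          enclave_fd = ioctl(ne_dev_fd, NE_CREATE_VM, &slot_uid);
          if (enclave_fd < 0)
                  return 1;

          printf("Enclave slot uid %llu\n", (unsigned long long)slot_uid);

          /*
           * Memory regions and vCPUs are added via further ioctls on
           * enclave_fd, then NE_START_ENCLAVE (mapped to an enclave start
           * PCI command) starts the enclave VM.
           */

          close(enclave_fd);
          close(ne_dev_fd);

          return 0;
  }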

The memory regions carved out of the primary VM and given to an enclave need
to be 2 MiB / 1 GiB aligned, physically contiguous memory regions (or a
multiple of this size e.g. 8 MiB). The memory can be allocated e.g. by using
hugetlbfs from user space [2][3][7]. The memory size for an enclave needs to
be at least 64 MiB. The enclave memory and CPUs need to be from the same NUMA
node.
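
Below is a minimal sketch of how one such region could be provided from user
space, assuming hugetlb-backed anonymous memory obtained via mmap() and the
NE_SET_USER_MEMORY_REGION ioctl with struct ne_user_memory_region from
<linux/nitro_enclaves.h>; verify the flags and struct layout against the uapi
header::

  /*
   * Sketch: back one enclave memory region with a 2 MiB huge page and hand
   * it to the NE driver. enclave_fd comes from NE_CREATE_VM. An enclave
   * needs at least 64 MiB in total, so a real setup adds multiple regions.
   */
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>

  #include <linux/nitro_enclaves.h>
  #include <linux/types.h>

  #define NE_REGION_SIZE (2UL * 1024 * 1024) /* 2 MiB, the minimum alignment */

  static int ne_add_mem_region(int enclave_fd)
  {
          struct ne_user_memory_region mem_region = {0}; /* flags stay 0 */
          void *addr = mmap(NULL, NE_REGION_SIZE, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

          if (addr == MAP_FAILED)
                  return -1;

          mem_region.memory_size = NE_REGION_SIZE;
          mem_region.userspace_addr = (__u64)(uintptr_t)addr;

          /* The driver checks alignment, size and NUMA node constraints. */
          return ioctl(enclave_fd, NE_SET_USER_MEMORY_REGION, &mem_region);
  }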

An enclave runs on dedicated cores. CPU 0 and its CPU siblings need to remain
available for the primary VM. A CPU pool has to be set for NE purposes by a
user with admin capability. See the cpu list section from the kernel
documentation [4] for how a CPU pool format looks.
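
For illustration only, the sketch below writes a CPU list to the NE CPU pool
module parameter and then asks the driver to add a vCPU chosen from that pool.
The sysfs path, the NE_ADD_VCPU ioctl and the "0 means auto-choose from the
pool" convention are assumptions to confirm against the running kernel and
<linux/nitro_enclaves.h>::

  /*
   * Sketch: reserve CPUs 2-3 for enclave use (requires admin capability),
   * then let the driver pick a vCPU from the pool for this enclave.
   */
  #include <fcntl.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  #include <linux/nitro_enclaves.h>
  #include <linux/types.h>

  static int ne_setup_cpus(int enclave_fd)
  {
          const char cpu_pool[] = "2-3"; /* cpu list format, see [4] */
          __u32 vcpu_id = 0; /* 0: let the driver choose from the pool */
          int fd = open("/sys/module/nitro_enclaves/parameters/ne_cpus",
                        O_WRONLY);

          if (fd < 0)
                  return -1;

          /* CPU 0 and its siblings stay with the primary VM. */
          if (write(fd, cpu_pool, strlen(cpu_pool)) < 0) {
                  close(fd);
                  return -1;
          }
          close(fd);

          return ioctl(enclave_fd, NE_ADD_VCPU, &vcpu_id);
  }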

An enclave communicates with the primary VM via a local communication channel,
using virtio-vsock [5]. The primary VM has a virtio-pci vsock emulated device,
while the enclave VM has a virtio-mmio vsock emulated device. The vsock device
uses eventfd for signaling. The enclave VM sees the usual interfaces - local
APIC and IOAPIC - to get interrupts from the virtio-vsock device. The
virtio-mmio device is placed in memory below the typical 4 GiB.

The application that runs in the enclave needs to be packaged in an enclave
image together with the OS (e.g. kernel, ramdisk, init) that will run in the
enclave VM. The enclave VM has its own kernel and follows the standard Linux
boot protocol [6][8].

The kernel bzImage, the kernel command line and the ramdisk(s) are part of the
Enclave Image Format (EIF); plus an EIF header including metadata such as magic
number, EIF version, image size and CRC.
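
Purely as an illustration of the metadata listed above (this is not the actual
EIF on-disk layout; the authoritative definition lives in the Nitro Enclaves
tooling), such a header could be modeled along these lines::

  /*
   * Hypothetical sketch of EIF-header-like metadata; field names, types and
   * sizes are made up and do NOT match the real EIF format.
   */
  #include <linux/types.h>

  struct eif_header_sketch {
          __u8  magic[4];   /* magic number identifying an EIF image */
          __u16 version;    /* EIF format version */
          __u64 image_size; /* total size of the enclave image */
          __u32 crc;        /* integrity checksum over the image */
  };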

Hash values are computed for the entire enclave image (EIF), the kernel and
ramdisk(s). That's used, for example, to check that the enclave image that is
loaded in the enclave VM is the one that was intended to be run.

These crypto measurements are included in a signed attestation document
generated by the Nitro Hypervisor and further used to prove the identity of
the enclave; KMS is an example of a service that NE is integrated with and
that checks the attestation doc.

The enclave image (EIF) is loaded in the enclave memory at offset 8 MiB. The
init process in the enclave connects to the vsock CID of the primary VM and a
predefined port - 9000 - to send a heartbeat value - 0xb7. This mechanism is
used to check in the primary VM that the enclave has booted. The CID of the
primary VM is 3.
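
Below is a sketch of the primary VM side of this check, assuming the standard
AF_VSOCK socket API [5]; the port number 9000 and the 0xb7 value come from the
paragraph above, while the helper name is made up::

  /*
   * Sketch: the primary VM listens on vsock port 9000 and waits for the
   * 0xb7 heartbeat byte sent by the enclave init process after boot.
   */
  #include <sys/socket.h>
  #include <unistd.h>

  #include <linux/vm_sockets.h>

  static int ne_wait_for_heartbeat(void)
  {
          struct sockaddr_vm addr = {
                  .svm_family = AF_VSOCK,
                  .svm_port = 9000,
                  .svm_cid = VMADDR_CID_ANY,
          };
          unsigned char heartbeat = 0;
          int ret = -1;
          int client_fd = -1;
          int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

          if (fd < 0)
                  return -1;

          if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
              listen(fd, 1) < 0)
                  goto out;

          client_fd = accept(fd, NULL, NULL);
          if (client_fd < 0)
                  goto out;

          /* The enclave has booted if the expected value is received. */
          if (read(client_fd, &heartbeat, sizeof(heartbeat)) == 1 &&
              heartbeat == 0xb7)
                  ret = 0;

          close(client_fd);
  out:
          close(fd);
          return ret;
  }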

If the enclave VM crashes or gracefully exits, an interrupt event is received
by the NE driver. This event is sent further to the user space enclave process
running in the primary VM via a poll notification mechanism. Then the user
space enclave process can exit.
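
Below is a sketch of how the user space enclave process might wait for that
notification, assuming the enclave fd obtained from the NE driver reports this
event via poll(); the exact revents bits (e.g. POLLHUP) should be confirmed
against the NE driver and its sample code::

  /*
   * Sketch: block until the NE driver signals that the enclave VM has
   * exited or crashed, then the enclave process can clean up and exit.
   */
  #include <poll.h>

  static int ne_wait_for_exit(int enclave_fd)
  {
          struct pollfd fds = {
                  .fd = enclave_fd,
                  .events = 0, /* POLLHUP is reported even if not requested */
          };

          /* Negative timeout: wait until the enclave exit event arrives. */
          if (poll(&fds, 1, -1) < 0)
                  return -1;

          /* Assumption: the driver reports the enclave exit as POLLHUP. */
          return (fds.revents & POLLHUP) ? 0 : -1;
  }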

[1] https://aws.amazon.com/ec2/nitro/nitro-enclaves/
[2] https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html
[3] https://lwn.net/Articles/807108/
[4] https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
[5] https://man7.org/linux/man-pages/man7/vsock.7.html
[6] https://www.kernel.org/doc/html/latest/x86/boot.html
[7] https://www.kernel.org/doc/html/latest/arm64/hugetlbpage.html
[8] https://www.kernel.org/doc/html/latest/arm64/booting.html
