.. _userfaultfd:

===========
Userfaultfd
===========

Objective
=========

Userfaults allow the implementation of on-demand paging from userland
and more generally they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example userfaults allow a proper and more optimal implementation
of the ``PROT_NONE+SIGSEGV`` trick.

Design
======

Userspace creates a new userfaultfd, initializes it, and registers one or
more regions of virtual memory with it. Then, any page faults which occur
within the region(s) result in a message being delivered to the
userfaultfd, notifying userspace of the fault.

The ``userfaultfd`` (aside from registering and unregistering virtual
memory ranges) provides two primary functionalities:

1) ``read/POLLIN`` protocol to notify a userland thread of the faults
   happening

2) various ``UFFDIO_*`` ioctls that can manage the virtual memory regions
   registered in the ``userfaultfd`` that allow userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background

The real advantage of userfaults compared to regular virtual memory
management of mremap/mprotect is that the userfaults in all their
operations never involve heavyweight structures like vmas (in fact the
``userfaultfd`` runtime load never takes the mmap_lock for writing).

Vmas are not suitable for page- (or hugepage) granular fault tracking
when dealing with virtual address spaces that could span
Terabytes. Too many vmas would be needed for that.

The ``userfaultfd``, once created, can also be passed using unix domain
sockets to a manager process, so the same manager process could handle
the userfaults of a multitude of different processes without them being
aware of what is going on (well of course unless they later try to use
the ``userfaultfd`` themselves on the same region the manager is already
tracking, which is a corner case that would currently return ``-EBUSY``).
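
As a minimal sketch of that fd-passing step (the helper name and the
already-connected ``AF_UNIX`` descriptor ``sock`` are illustrative
assumptions, not part of the userfaultfd API), the descriptor can be handed
to a manager with the standard ``SCM_RIGHTS`` mechanism::

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Hand the userfaultfd 'uffd' to a manager process over the
     * connected AF_UNIX socket 'sock'.  Returns 0 on success. */
    static int send_uffd_to_manager(int sock, int uffd)
    {
        char token = 'U';
        struct iovec iov = { .iov_base = &token, .iov_len = 1 };
        union {
            struct cmsghdr align;
            char buf[CMSG_SPACE(sizeof(int))];
        } u;
        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = u.buf,
            .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &uffd, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }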

API
===

Creating a userfaultfd
----------------------

There are two ways to create a new userfaultfd, each of which provides ways
to restrict access to this functionality (since historically userfaultfds
which handle kernel page faults have been a useful tool for exploiting the
kernel).

The first way, supported since userfaultfd was introduced, is the
userfaultfd(2) syscall. Access to this is controlled in several ways:

- Any user can always create a userfaultfd which traps userspace page faults
  only. Such a userfaultfd can be created using the userfaultfd(2) syscall
  with the flag UFFD_USER_MODE_ONLY.

- In order to also trap kernel page faults for the address space, either the
  process needs the CAP_SYS_PTRACE capability, or the system must have
  vm.unprivileged_userfaultfd set to 1. By default, vm.unprivileged_userfaultfd
  is set to 0.

The second way, added to the kernel more recently, is by opening
/dev/userfaultfd and issuing a USERFAULTFD_IOC_NEW ioctl to it. This method
yields equivalent userfaultfds to the userfaultfd(2) syscall.

Unlike userfaultfd(2), access to /dev/userfaultfd is controlled via normal
filesystem permissions (user/group/mode), which gives fine grained access to
userfaultfd specifically, without also granting other unrelated privileges at
the same time (as e.g. granting CAP_SYS_PTRACE would do). Users who have
access to /dev/userfaultfd can always create userfaultfds that trap kernel
page faults; vm.unprivileged_userfaultfd is not considered.
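
As an illustration only (which creation path to prefer is a policy decision,
and the helper name is hypothetical), a userfaultfd that prefers
``/dev/userfaultfd`` and falls back to the userfaultfd(2) syscall restricted
to userspace faults could be created like this::

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/userfaultfd.h>

    static int open_uffd(void)
    {
        int devfd, uffd;

        /* Preferred: access is controlled by the permissions on
         * /dev/userfaultfd, and kernel faults may be trapped. */
        devfd = open("/dev/userfaultfd", O_RDWR | O_CLOEXEC);
        if (devfd >= 0) {
            uffd = ioctl(devfd, USERFAULTFD_IOC_NEW, O_CLOEXEC | O_NONBLOCK);
            close(devfd);
            if (uffd >= 0)
                return uffd;
        }

        /* Fallback: the syscall, trapping userspace page faults only. */
        return syscall(__NR_userfaultfd,
                       O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
    }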

Initializing a userfaultfd
--------------------------

When first opened the ``userfaultfd`` must be enabled invoking the
``UFFDIO_API`` ioctl specifying a ``uffdio_api.api`` value set to ``UFFD_API`` (or
a later API version) which will specify the ``read/POLLIN`` protocol
userland intends to speak on the ``UFFD`` and the ``uffdio_api.features``
userland requires. The ``UFFDIO_API`` ioctl if successful (i.e. if the
requested ``uffdio_api.api`` is spoken also by the running kernel and the
requested features are going to be enabled) will return into
``uffdio_api.features`` and ``uffdio_api.ioctls`` two 64bit bitmasks of
respectively all the available features of the read(2) protocol and
the generic ioctls available.

The ``uffdio_api.features`` bitmask returned by the ``UFFDIO_API`` ioctl
defines what memory types are supported by the ``userfaultfd`` and what
events, except page fault notifications, may be generated:

- The ``UFFD_FEATURE_EVENT_*`` flags indicate that various other events
  other than page faults are supported. These events are described in more
  detail below in the `Non-cooperative userfaultfd`_ section.

- ``UFFD_FEATURE_MISSING_HUGETLBFS`` and ``UFFD_FEATURE_MISSING_SHMEM``
  indicate that the kernel supports ``UFFDIO_REGISTER_MODE_MISSING``
  registrations for hugetlbfs and shared memory (covering all shmem APIs,
  i.e. tmpfs, ``IPCSHM``, ``/dev/zero``, ``MAP_SHARED``, ``memfd_create``,
  etc) virtual memory areas, respectively.

- ``UFFD_FEATURE_MINOR_HUGETLBFS`` indicates that the kernel supports
  ``UFFDIO_REGISTER_MODE_MINOR`` registration for hugetlbfs virtual memory
  areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
  support for shmem virtual memory areas.

- ``UFFD_FEATURE_MOVE`` indicates that the kernel supports moving the
  existing page contents from userspace.

The userland application should set the feature flags it intends to use
when invoking the ``UFFDIO_API`` ioctl, to request that those features be
enabled if supported.
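
For example, a minimal handshake that requests no optional features (a
sketch; error handling is reduced to err(3)) could look like::

    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>

    static void enable_uffd_api(int uffd)
    {
        struct uffdio_api api = {
            .api = UFFD_API,
            .features = 0,  /* OR in any UFFD_FEATURE_* bits needed here */
        };

        if (ioctl(uffd, UFFDIO_API, &api) == -1)
            err(1, "UFFDIO_API");

        /* api.features and api.ioctls now describe what the running
         * kernel actually supports. */
    }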

Once the ``userfaultfd`` API has been enabled the ``UFFDIO_REGISTER``
ioctl should be invoked (if present in the returned ``uffdio_api.ioctls``
bitmask) to register a memory range in the ``userfaultfd`` by setting the
uffdio_register structure accordingly. The ``uffdio_register.mode``
bitmask will specify to the kernel which kind of faults to track for
the range. The ``UFFDIO_REGISTER`` ioctl will return the
``uffdio_register.ioctls`` bitmask of ioctls that are suitable to resolve
userfaults on the range registered. Not all ioctls will necessarily be
supported for all memory types (e.g. anonymous memory vs. shmem vs.
hugetlbfs), or all types of intercepted faults.

Userland can use the ``uffdio_register.ioctls`` to manage the virtual
address space in the background (to add or potentially also remove
memory from the ``userfaultfd`` registered range). This means a userfault
could be triggered just before userland maps the user-faulted page in
the background.
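
A sketch of registering a range for missing faults (``addr`` and ``len``
are assumed to describe a page-aligned range created earlier with mmap(2))
might be::

    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>

    static void register_missing(int uffd, void *addr, unsigned long len)
    {
        struct uffdio_register reg = {
            .range = {
                .start = (unsigned long)addr,
                .len = len,
            },
            .mode = UFFDIO_REGISTER_MODE_MISSING,
        };

        if (ioctl(uffd, UFFDIO_REGISTER, &reg) == -1)
            err(1, "UFFDIO_REGISTER");

        /* reg.ioctls is now the bitmask of resolution ioctls usable on
         * this range, e.g. 1ULL << _UFFDIO_COPY. */
    }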

Resolving Userfaults
--------------------

There are three basic ways to resolve userfaults:

- ``UFFDIO_COPY`` atomically copies some existing page contents from
  userspace.

- ``UFFDIO_ZEROPAGE`` atomically zeros the new page.

- ``UFFDIO_CONTINUE`` maps an existing, previously-populated page.

These operations are atomic in the sense that they guarantee nothing can
see a half-populated page, since readers will keep userfaulting until the
operation has finished.

By default, these wake up userfaults blocked on the range in question.
They support a ``UFFDIO_*_MODE_DONTWAKE`` ``mode`` flag, which indicates
that waking will be done separately at some later time.

Which ioctl to choose depends on the kind of page fault, and what we'd
like to do to resolve it:

- For ``UFFDIO_REGISTER_MODE_MISSING`` faults, the fault needs to be
  resolved by either providing a new page (``UFFDIO_COPY``), or mapping
  the zero page (``UFFDIO_ZEROPAGE``). By default, the kernel would map
  the zero page for a missing fault. With userfaultfd, userspace can
  decide what content to provide before the faulting thread continues.

- For ``UFFDIO_REGISTER_MODE_MINOR`` faults, there is an existing page (in
  the page cache). Userspace has the option of modifying the page's
  contents before resolving the fault. Once the contents are correct
  (modified or not), userspace asks the kernel to map the page and let the
  faulting thread continue with ``UFFDIO_CONTINUE``.

Notes:

- You can tell which kind of fault occurred by examining
  ``pagefault.flags`` within the ``uffd_msg``, checking for the
  ``UFFD_PAGEFAULT_FLAG_*`` flags.

- None of the page-delivering ioctls default to the range that you
  registered with. You must fill in all fields for the appropriate
  ioctl struct including the range.

- You get the address of the access that triggered the missing page
  event out of a struct uffd_msg that you read in the thread from the
  uffd. You can supply as many pages as you want with these IOCTLs.
  Keep in mind that unless you used DONTWAKE then the first of any of
  those IOCTLs wakes up the faulting thread.

- Be sure to test for all errors including
  (``pollfd[0].revents & POLLERR``). This can happen, e.g. when ranges
  supplied were incorrect.
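
Tying the notes above together, a minimal fault-handling thread for a
``UFFDIO_REGISTER_MODE_MISSING`` registration might look like the following
sketch (``page`` is assumed to be a page-sized, page-aligned buffer already
filled with the content to install)::

    #include <err.h>
    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>

    static void handle_missing_faults(int uffd, char *page,
                                      unsigned long page_size)
    {
        struct pollfd pfd = { .fd = uffd, .events = POLLIN };

        for (;;) {
            struct uffd_msg msg;
            struct uffdio_copy copy;

            if (poll(&pfd, 1, -1) == -1)
                err(1, "poll");
            if (pfd.revents & POLLERR)
                errx(1, "POLLERR on userfaultfd");
            if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
                continue;       /* e.g. EAGAIN: fault already resolved */
            if (msg.event != UFFD_EVENT_PAGEFAULT)
                continue;

            /* Install 'page' at the faulting address, rounded down. */
            copy.dst = msg.arg.pagefault.address & ~(page_size - 1);
            copy.src = (unsigned long)page;
            copy.len = page_size;
            copy.mode = 0;      /* wake the faulting thread(s) */
            copy.copy = 0;
            if (ioctl(uffd, UFFDIO_COPY, &copy) == -1 && errno != EEXIST)
                err(1, "UFFDIO_COPY");  /* EEXIST: already resolved */
        }
    }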

Write Protect Notifications
---------------------------

This is equivalent to (but faster than) using mprotect and a SIGSEGV
signal handler.

Firstly you need to register a range with ``UFFDIO_REGISTER_MODE_WP``.
Instead of using mprotect(2) you use
``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)``
while ``mode = UFFDIO_WRITEPROTECT_MODE_WP``
in the struct passed in. The range does not default to and does not
have to be identical to the range you registered with. You can write
protect as many ranges as you like (inside the registered range).
Then, in the thread reading from uffd the struct will have
``msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP`` set. Now you send
``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)``
again while ``pagefault.mode`` does not have ``UFFDIO_WRITEPROTECT_MODE_WP``
set. This wakes up the thread which will continue to run with writes. This
allows you to do the bookkeeping about the write in the uffd reading
thread before the ioctl.

If you registered with both ``UFFDIO_REGISTER_MODE_MISSING`` and
``UFFDIO_REGISTER_MODE_WP`` then you need to think about the sequence in
which you supply a page and undo write protect. Note that there is a
difference between writes into a WP area and into a !WP area. The
former will have ``UFFD_PAGEFAULT_FLAG_WP`` set, the latter
``UFFD_PAGEFAULT_FLAG_WRITE``. The latter did not fail on protection but
you still need to supply a page when ``UFFDIO_REGISTER_MODE_MISSING`` was
used.

Userfaultfd write-protect mode currently behaves differently on none ptes
(e.g. when the page is missing) for different types of memory.

For anonymous memory, ``ioctl(UFFDIO_WRITEPROTECT)`` will ignore none ptes
(e.g. when pages are missing and not populated). For file-backed memory
like shmem and hugetlbfs, none ptes will be write protected just like a
present pte. In other words, there will be a userfaultfd write fault
message generated when writing to a missing page on file-backed memory,
as long as the page range was write-protected before. Such a message will
not be generated on anonymous memory by default.

If the application wants to be able to write protect none ptes on anonymous
memory, one can pre-populate the memory with e.g. MADV_POPULATE_READ. On
newer kernels, one can also detect the feature UFFD_FEATURE_WP_UNPOPULATED
and set the feature bit in advance to make sure none ptes will also be
write protected even upon anonymous memory.

When using ``UFFDIO_REGISTER_MODE_WP`` in combination with either
``UFFDIO_REGISTER_MODE_MISSING`` or ``UFFDIO_REGISTER_MODE_MINOR``, when
resolving missing / minor faults with ``UFFDIO_COPY`` or ``UFFDIO_CONTINUE``
respectively, it may be desirable for the new page / mapping to be
write-protected (so future writes will also result in a WP fault). These
ioctls support a mode flag (``UFFDIO_COPY_MODE_WP`` or
``UFFDIO_CONTINUE_MODE_WP`` respectively) to configure the mapping this way.

If the userfaultfd context has the ``UFFD_FEATURE_WP_ASYNC`` feature bit set,
any vma registered with write-protection will work in async mode rather
than the default sync mode.

In async mode, there will be no message generated when a write operation
happens, meanwhile the write-protection will be resolved automatically by
the kernel. It can be seen as a more accurate version of soft-dirty
tracking and it can be different in a few ways:

  - The dirty result will not be affected by vma changes (e.g. vma
    merging) because the dirty is only tracked by the pte.

  - It supports range operations by default, so one can enable tracking on
    any range of memory as long as page aligned.

  - Dirty information will not get lost if the pte was zapped due to
    various reasons (e.g. during split of a shmem transparent huge page).

  - Due to a reverted meaning of soft-dirty (page clean when uffd-wp bit
    set; dirty when uffd-wp bit cleared), it has different semantics on
    some of the memory operations. For example: ``MADV_DONTNEED`` on
    anonymous (or ``MADV_REMOVE`` on a file mapping) will be treated as
    dirtying of memory by dropping the uffd-wp bit during the procedure.

The user app can collect the "written/dirty" status by looking up the
uffd-wp bit for the pages of interest in ``/proc/pagemap``.

The page will not be under track of uffd-wp async mode until the page is
explicitly write-protected by ``ioctl(UFFDIO_WRITEPROTECT)`` with the mode
flag ``UFFDIO_WRITEPROTECT_MODE_WP`` set. Trying to resolve a page fault
that was tracked by async mode userfaultfd-wp is invalid.

When userfaultfd-wp async mode is used alone, it can be applied to all
kinds of memory.
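
As a sketch of the basic synchronous write-protect round trip described
above (assuming the range is already registered with
``UFFDIO_REGISTER_MODE_WP``; the helper names are hypothetical)::

    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>

    /* Write protect [addr, addr + len). */
    static void wp_range(int uffd, unsigned long addr, unsigned long len)
    {
        struct uffdio_writeprotect wp = {
            .range = { .start = addr, .len = len },
            .mode = UFFDIO_WRITEPROTECT_MODE_WP,
        };

        if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp) == -1)
            err(1, "UFFDIO_WRITEPROTECT (set)");
    }

    /* Remove the protection and wake the blocked writer, typically after
     * a UFFD_PAGEFAULT_FLAG_WP fault has been read and accounted for. */
    static void unwp_range(int uffd, unsigned long addr, unsigned long len)
    {
        struct uffdio_writeprotect wp = {
            .range = { .start = addr, .len = len },
            .mode = 0,  /* no _MODE_WP: unprotect and wake */
        };

        if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp) == -1)
            err(1, "UFFDIO_WRITEPROTECT (clear)");
    }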

Memory Poisoning Emulation
--------------------------

In response to a fault (either missing or minor), an action userspace can
take to "resolve" it is to issue a ``UFFDIO_POISON``. This will cause any
future faulters to either get a SIGBUS, or in KVM's case the guest will
receive an MCE as if there were hardware memory poisoning.

This is used to emulate hardware memory poisoning. Imagine a VM running on
a machine which experiences a real hardware memory error. Later, we live
migrate the VM to another physical machine. Since we want the migration to
be transparent to the guest, we want that same address range to act as if
it was still poisoned, even though it's on a new physical host which
ostensibly doesn't have a memory error in the exact same spot.
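
On kernels that support it, a sketch of poisoning a single page in response
to a fault (hypothetical helper; ``addr`` is assumed page aligned) could
be::

    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>

    static void poison_page(int uffd, unsigned long addr,
                            unsigned long page_size)
    {
        struct uffdio_poison poison = {
            .range = { .start = addr, .len = page_size },
            .mode = 0,  /* or UFFDIO_POISON_MODE_DONTWAKE */
        };

        if (ioctl(uffd, UFFDIO_POISON, &poison) == -1)
            err(1, "UFFDIO_POISON");
    }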

QEMU/KVM
========

QEMU/KVM is using the ``userfaultfd`` syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
``userfaultfd`` abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Guest async page faults, ``FOLL_NOWAIT`` and all other ``GUP*`` features work
just fine in combination with userfaults. Userfaults trigger async
page faults in the guest scheduler so those guest processes that
aren't waiting for userfaults (i.e. network bound) can keep running in
the guest vcpus.

It is generally beneficial to run one pass of precopy live migration
just before starting postcopy live migration, in order to avoid
generating userfaults for readonly guest regions.

The implementation of postcopy live migration currently uses one
single bidirectional socket but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease ``/proc/sys/net/ipv4/tcp_wmem``).

The QEMU in the source node writes all pages that it knows are missing
in the destination node, into the socket, and the migration thread of
the QEMU running in the destination node runs ``UFFDIO_COPY|ZEROPAGE``
ioctls on the ``userfaultfd`` in order to map the received pages into the
guest (``UFFDIO_ZEROPAGE`` is used if the source page was a zero page).

A different postcopy thread in the destination node listens with
poll() to the ``userfaultfd`` in parallel. When a ``POLLIN`` event is
generated after a userfault triggers, the postcopy thread read()s from
the ``userfaultfd`` and receives the fault address (or ``-EAGAIN`` in case the
userfault was already resolved and woken by a ``UFFDIO_COPY|ZEROPAGE`` run
by the parallel QEMU migration thread).

After the QEMU postcopy thread (running in the destination node) gets
the userfault address it writes the information about the missing page
into the socket. The QEMU source node receives the information and
roughly "seeks" to that page address and continues sending all
remaining missing pages from that new page offset. Soon after that
(just the time to flush the tcp_wmem queue through the network) the
migration thread in the QEMU running in the destination node will
receive the page that triggered the userfault and it'll map it as
usual with the ``UFFDIO_COPY|ZEROPAGE`` (without actually knowing if it
was spontaneously sent by the source or if it was an urgent page
requested through a userfault).

By the time the userfaults start, the QEMU in the destination node
doesn't need to keep any per-page state bitmap relative to the live
migration around and a single per-page bitmap has to be maintained in
the QEMU running in the source node to know which pages are still
missing in the destination node. The bitmap in the source node is
checked to find which missing pages to send in round robin and we seek
over it when receiving incoming userfaults. After sending each page of
course the bitmap is updated accordingly. It's also useful to avoid
sending the same page twice (in case the userfault is read by the
postcopy thread just before ``UFFDIO_COPY|ZEROPAGE`` runs in the migration
thread).

Non-cooperative userfaultfd
===========================

When the ``userfaultfd`` is monitored by an external manager, the manager
must be able to track changes in the process virtual memory
layout. Userfaultfd can notify the manager about such changes using
the same read(2) protocol as for the page fault notifications. The
manager has to explicitly enable these events by setting appropriate
bits in ``uffdio_api.features`` passed to the ``UFFDIO_API`` ioctl:

``UFFD_FEATURE_EVENT_FORK``
        enable ``userfaultfd`` hooks for fork(). When this feature is
        enabled, the ``userfaultfd`` context of the parent process is
        duplicated into the newly created process. The manager
        receives ``UFFD_EVENT_FORK`` with the file descriptor of the new
        ``userfaultfd`` context in the ``uffd_msg.fork``.

``UFFD_FEATURE_EVENT_REMAP``
        enable notifications about mremap() calls. When the
        non-cooperative process moves a virtual memory area to a
        different location, the manager will receive
        ``UFFD_EVENT_REMAP``. The ``uffd_msg.remap`` will contain the old and
        new addresses of the area and its original length.

``UFFD_FEATURE_EVENT_REMOVE``
        enable notifications about madvise(MADV_REMOVE) and
        madvise(MADV_DONTNEED) calls. The event ``UFFD_EVENT_REMOVE`` will
        be generated upon these calls to madvise(). The ``uffd_msg.remove``
        will contain start and end addresses of the removed area.

``UFFD_FEATURE_EVENT_UNMAP``
        enable notifications about memory unmapping. The manager will
        get ``UFFD_EVENT_UNMAP`` with ``uffd_msg.remove`` containing start and
        end addresses of the unmapped area.

Although the ``UFFD_FEATURE_EVENT_REMOVE`` and ``UFFD_FEATURE_EVENT_UNMAP``
are pretty similar, they quite differ in the action expected from the
``userfaultfd`` manager. In the former case, the virtual memory is
removed, but the area is not, the area remains monitored by the
``userfaultfd``, and if a page fault occurs in that area it will be
delivered to the manager. The proper resolution for such page fault is
to zeromap the faulting address. However, in the latter case, when an
area is unmapped, either explicitly (with munmap() system call), or
implicitly (e.g. during mremap()), the area is removed and in turn the
``userfaultfd`` context for such area disappears too and the manager will
not get further userland page faults from the removed area. Still, the
notification is required in order to prevent the manager from using
``UFFDIO_COPY`` on the unmapped area.
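
For illustration, the dispatch part of a manager's read loop extended to
handle these events might look like the sketch below (only the event
handling skeleton is shown; the reactions are application specific)::

    #include <linux/userfaultfd.h>

    static void handle_event(const struct uffd_msg *msg)
    {
        switch (msg->event) {
        case UFFD_EVENT_PAGEFAULT:
            /* resolve with UFFDIO_COPY/ZEROPAGE/CONTINUE as above */
            break;
        case UFFD_EVENT_FORK:
            /* msg->arg.fork.ufd is the child's userfaultfd; start
             * monitoring it as well */
            break;
        case UFFD_EVENT_REMAP:
            /* msg->arg.remap.from, .to and .len describe the move */
            break;
        case UFFD_EVENT_REMOVE:
        case UFFD_EVENT_UNMAP:
            /* msg->arg.remove.start and .end bound the affected area */
            break;
        }
    }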

Unlike userland page faults which have to be synchronous and require
explicit or implicit wakeup, all the events are delivered
asynchronously and the non-cooperative process resumes execution as
soon as the manager executes read(). The ``userfaultfd`` manager should
carefully synchronize calls to ``UFFDIO_COPY`` with the events
processing. To aid the synchronization, the ``UFFDIO_COPY`` ioctl will
return ``-ENOSPC`` when the monitored process exits at the time of
``UFFDIO_COPY``, and ``-ENOENT``, when the non-cooperative process has changed
its virtual memory layout simultaneously with an outstanding ``UFFDIO_COPY``
operation. A sketch of classifying these errors is given at the end of
this section.

The current asynchronous model of the event delivery is optimal for
single threaded non-cooperative ``userfaultfd`` manager implementations. A
synchronous event delivery model can be added later as a new
``userfaultfd`` feature to facilitate multithreading enhancements of the
non cooperative manager, for example to allow ``UFFDIO_COPY`` ioctls to
run in parallel to the event reception. Single threaded
implementations should continue to use the current async event
delivery model instead.
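
A sketch of how a non-cooperative manager might classify these errors when
issuing ``UFFDIO_COPY`` (the helper and the reactions are illustrative
assumptions)::

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>

    /* Returns 0 on success, or a negative errno explaining why the copy
     * could not be applied. */
    static int noncoop_copy(int uffd, struct uffdio_copy *copy)
    {
        if (ioctl(uffd, UFFDIO_COPY, copy) == 0)
            return 0;

        switch (errno) {
        case ENOSPC:        /* the monitored process has exited */
        case ENOENT:        /* its memory layout changed meanwhile */
        case EEXIST:        /* the destination page is already mapped */
            return -errno;  /* drop or re-queue this request */
        default:
            return -errno;  /* genuine failure */
        }
    }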