==============
Page migration
==============

Page migration allows moving the physical location of pages between
nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.

Also see Documentation/mm/hmm.rst for migrating pages to or from device
private memory.

The main intent of page migration is to reduce the latency of memory accesses
by moving pages closer to the processor where the process accessing that
memory is running.

Page migration allows a process to manually relocate the node on which its
pages are located through the MPOL_MF_MOVE and MPOL_MF_MOVE_ALL options while
setting a new memory policy via mbind(). The pages of a process can also be
relocated from another process using the migrate_pages() system call, which
takes two sets of nodes and moves pages of a process that are located on the
from nodes to the destination nodes.
Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
https://github.com/numactl/numactl.git). numactl provides libnuma, which
offers an interface to page migration similar to its other NUMA functionality.
``cat /proc/<pid>/numa_maps`` allows an easy review of where the pages of a
process are located. See also the numa_maps documentation in the proc(5) man
page.
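
For example, a minimal sketch of moving all pages of a process from one node
to another with libnuma's wrapper around the migrate_pages() system call (the
PID and the node numbers below are illustrative assumptions)::

  /* Build with: gcc -o migrate demo.c -lnuma */
  #include <numa.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
          if (numa_available() < 0) {
                  fprintf(stderr, "NUMA is not available on this system\n");
                  return 1;
          }

          /* PID to act on: first argument, or this demo process itself. */
          int pid = (argc > 1) ? atoi(argv[1]) : getpid();
          struct bitmask *from = numa_parse_nodestring("0");
          struct bitmask *to   = numa_parse_nodestring("1");

          /* Returns the number of pages that could not be moved,
           * or a negative value on error. */
          int left = numa_migrate_pages(pid, from, to);
          if (left < 0)
                  perror("numa_migrate_pages");
          else
                  printf("%d pages could not be moved\n", left);

          numa_free_nodemask(from);
          numa_free_nodemask(to);
          return 0;
  }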

Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages. The move_pages() system call
allows the moving of individual pages within a process.
For example, a NUMA profiler may obtain a log showing frequent off-node
accesses and may use the result to move pages to more advantageous
locations.
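
A minimal sketch of moving a single page of the current process with
move_pages() (the destination node is an illustrative assumption)::

  /* Build with: gcc -o movepage demo.c -lnuma */
  #include <numaif.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
          long page_size = sysconf(_SC_PAGESIZE);
          void *buf = aligned_alloc(page_size, page_size);

          if (!buf)
                  return 1;
          /* Touch the page so it is actually allocated before the move. */
          *(volatile char *)buf = 1;

          void *pages[1]  = { buf };
          int   nodes[1]  = { 1 };    /* desired destination node */
          int   status[1] = { -1 };   /* resulting node, or a negative errno */

          /* pid 0 means "the calling process". */
          if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) < 0)
                  perror("move_pages");
          else
                  printf("page is now on node %d\n", status[0]);

          free(buf);
          return 0;
  }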

Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson equipped cpusets with the ability to
move pages when a task is moved to another cpuset (see
:ref:`CPUSETS <cpusets>`).
Cpusets allow the automation of process locality. If a task is moved to
a new cpuset then all its pages are moved with it so that the
performance of the process does not drop dramatically. The pages
of processes in a cpuset are also moved if the allowed memory nodes of a
cpuset are changed.

All of these migration techniques preserve the relative location of pages
within a group of nodes: the memory allocation pattern a process has
generated is kept even after the process is migrated. This is necessary in
order to preserve memory latencies, so that processes run with similar
performance after migration.

Page migration occurs in several steps. First comes a high level
description for those trying to use migrate_pages() from the kernel
(for userspace usage see Andi Kleen's numactl package mentioned above),
followed by a low level description of how the details work.

In kernel use of migrate_pages()
================================

1. Remove folios from the LRU.

   Lists of folios to be migrated are generated by scanning over
   folios and moving them into lists. This is done by
   calling folio_isolate_lru().
   Calling folio_isolate_lru() takes a reference to the folio
   so that it cannot vanish while the folio migration occurs.
   It also prevents the swapper or other scans from encountering
   the folio.

2. We need to have a function of type new_folio_t that can be
   passed to migrate_pages(). This function should figure out
   how to allocate the correct new folio given the old folio.

3. The migrate_pages() function is called, which attempts
   to do the migration. It will call the function to allocate
   the new folio for each folio that is considered for moving.
   (A sketch that puts these three steps together follows below.)
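
A minimal, hypothetical sketch of these three steps for moving one folio to
a chosen node. The helper name move_folio_to_node() is made up, and the
migrate_pages() and allocation signatures follow include/linux/migrate.h but
may differ between kernel versions::

  #include <linux/migrate.h>
  #include <linux/swap.h>
  #include <linux/gfp.h>
  #include <linux/list.h>

  /* new_folio_t callback: allocate the destination folio on the target
   * node, preserving the order of the source folio. */
  static struct folio *alloc_dst_folio(struct folio *src, unsigned long private)
  {
          int nid = (int)private;

          return __folio_alloc_node(GFP_HIGHUSER_MOVABLE, folio_order(src), nid);
  }

  static int move_folio_to_node(struct folio *folio, int nid)
  {
          LIST_HEAD(pagelist);
          unsigned int succeeded = 0;
          int err;

          /* Step 1: take the folio off the LRU (this also pins it). */
          if (!folio_isolate_lru(folio))
                  return -EBUSY;
          list_add_tail(&folio->lru, &pagelist);

          /* Steps 2 and 3: pass the list and the allocation callback
           * to migrate_pages(). */
          err = migrate_pages(&pagelist, alloc_dst_folio, NULL,
                              (unsigned long)nid, MIGRATE_SYNC, MR_SYSCALL,
                              &succeeded);
          if (err)
                  /* Folios that could not be moved go back to the LRU. */
                  putback_movable_pages(&pagelist);

          return err;
  }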

How migrate_pages() works
=========================

migrate_pages() does several passes over its list of folios. A folio is moved
if all references to the folio are removable at the time. The folio has
already been removed from the LRU via folio_isolate_lru() and the refcount
is increased so that the folio cannot be freed while folio migration occurs.

Steps:

1. Lock the page to be migrated.

2. Ensure that writeback is complete.

3. Lock the new page that we want to move to. It is locked so that accesses to
   this (not yet up-to-date) page immediately block while the move is in progress.

4. All the page table references to the page are converted to migration
   entries. This decreases the mapcount of a page. If the resulting
   mapcount is not zero then we do not migrate the page. All user space
   processes that attempt to access the page will now wait on the page lock
   or wait for the migration page table entry to be removed.

5. The i_pages lock is taken. This will cause all processes trying
   to access the page via the mapping to block on the spinlock.

6. The refcount of the page is examined and we back out if references remain.
   Otherwise, we know that we are the only one referencing this page.

7. The i_pages XArray is checked and if it does not contain the pointer to
   this page then we back out because someone else modified the mapping.

8. The new page is prepped with some settings from the old page so that
   accesses to the new page will discover a page with the correct settings.

9. The i_pages XArray is changed to point to the new page.

10. The reference count of the old page is dropped because the address space
    reference is gone. A reference to the new page is established because
    the new page is referenced by the address space.

11. The i_pages lock is dropped. With that, lookups in the mapping
    become possible again. Processes will move from spinning on the lock
    to sleeping on the locked new page.

12. The page contents are copied to the new page.

13. The remaining page flags are copied to the new page.

14. The old page flags are cleared to indicate that the page no longer
    provides any information.

15. Queued-up writeback on the new page is triggered.

16. If migration entries were inserted into the page table, then replace them
    with real ptes. Doing so will enable access for user space processes not
    already waiting for the page lock.

17. The page locks are dropped from the old and new page.
    Processes waiting on the page lock will redo their page faults
    and will reach the new page.

18. The new page is moved to the LRU and can be scanned by the swapper,
    etc. again.
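
The per-mapping part of this work is dispatched through the mapping's
migrate_folio address space operation. A hypothetical filesystem ("myfs" is a
made-up name) whose folios carry no private data can simply wire up the
generic helper::

  #include <linux/fs.h>
  #include <linux/migrate.h>

  static const struct address_space_operations myfs_aops = {
          /* other operations omitted for brevity */

          /*
           * migrate_folio() performs the generic copy and bookkeeping
           * described in the steps above for folios without private data.
           */
          .migrate_folio  = migrate_folio,
  };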

Non-LRU page migration
======================

Although migration originally aimed at reducing the latency of memory
accesses for NUMA, compaction also uses migration to create high-order
pages.  For compaction purposes, it is also useful to be able to move
non-LRU pages, such as zsmalloc and virtio-balloon pages.

If a driver wants to make its pages movable, it should define a struct
movable_operations.  It then needs to call __SetPageMovable() on each
page that it may be able to move.  This uses the ``page->mapping`` field,
so this field is not available for the driver to use for other purposes.
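
A hypothetical driver sketch (the "mydrv" names are made up; the callback
signatures follow struct movable_operations in include/linux/migrate.h)::

  #include <linux/migrate.h>
  #include <linux/mm.h>

  static bool mydrv_isolate_page(struct page *page, isolate_mode_t mode)
  {
          /* Pin driver-private state so the page cannot be freed or
           * reused while the migration is in flight. */
          return true;
  }

  static int mydrv_migrate_page(struct page *dst, struct page *src,
                                enum migrate_mode mode)
  {
          /* Copy the contents and driver metadata from src to dst and
           * retarget any driver-internal references from src to dst. */
          return 0;
  }

  static void mydrv_putback_page(struct page *page)
  {
          /* Undo mydrv_isolate_page() when migration fails or is aborted. */
  }

  static const struct movable_operations mydrv_movable_ops = {
          .isolate_page   = mydrv_isolate_page,
          .migrate_page   = mydrv_migrate_page,
          .putback_page   = mydrv_putback_page,
  };

  /* Called by the driver on each page it may later want to move. */
  static void mydrv_mark_movable(struct page *page)
  {
          __SetPageMovable(page, &mydrv_movable_ops);
  }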

Monitoring Migration
====================

The following events (counters), exposed in ``/proc/vmstat``, can be used to
monitor page migration.

1. PGMIGRATE_SUCCESS: Normal page migration success. Each count means that a
   page was migrated. If the page was a non-THP and non-hugetlb page, then
   this counter is increased by one. If the page was a THP or hugetlb, then
   this counter is increased by the number of THP or hugetlb subpages.
   For example, migration of a single 2MB THP that has 4KB-size base pages
   (subpages) will cause this counter to increase by 512.

2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for
   PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages,
   if it was a THP or hugetlb.

3. THP_MIGRATION_SUCCESS: A THP was migrated without being split.

4. THP_MIGRATION_FAIL: A THP could not be migrated, nor could it be split.

5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP had
   to be split. After splitting, a migration retry was used for its subpages.

THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
THP_MIGRATION_FAIL and PGMIGRATE_FAIL to increase.
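
A minimal sketch for reading these counters from ``/proc/vmstat`` (the
counter names there are the lowercase forms of the events above)::

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          static const char * const names[] = {
                  "pgmigrate_success", "pgmigrate_fail",
                  "thp_migration_success", "thp_migration_fail",
                  "thp_migration_split",
          };
          char line[256];
          FILE *f = fopen("/proc/vmstat", "r");
          unsigned int i;

          if (!f) {
                  perror("/proc/vmstat");
                  return 1;
          }
          /* Print only the lines whose key matches one of the counters. */
          while (fgets(line, sizeof(line), f)) {
                  for (i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
                          size_t len = strlen(names[i]);

                          if (!strncmp(line, names[i], len) && line[len] == ' ')
                                  fputs(line, stdout);
                  }
          }
          fclose(f);
          return 0;
  }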

Christoph Lameter, May 8, 2006.
Minchan Kim, Mar 28, 2016.

.. kernel-doc:: include/linux/migrate.h
