===============================
Documentation for /proc/sys/vm/
===============================

kernel version 2.6.29

Copyright (c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>

Copyright (c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- admin_reserve_kbytes
- compact_memory
- compaction_proactiveness
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- enable_soft_offline
- extfrag_threshold
- highmem_is_dirtyable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- page_lock_unfairness
- panic_on_oom
- percpu_pagelist_high_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- unprivileged_userfaultfd
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode


admin_reserve_kbytes
====================

The amount of free memory in the system that should be reserved for users
with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB)

That should provide enough for the admin to log in and kill a process,
if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to account
for the full Virtual Memory Size of programs used to recover. Otherwise,
root may not be able to log in to recover the system.

How do you calculate a minimum useful reserve?

sshd or login + bash (or some other shell) + top (or ps, kill, etc.)

For overcommit 'guess', we can sum resident set sizes (RSS).
On x86_64 this is about 8MB.

For overcommit 'never', we can take the max of their virtual sizes (VSZ)
and add the sum of their RSS.
On x86_64 this is about 128MB.

Changing this takes effect whenever an application requests memory.
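
As a rough illustration of the overcommit 'never' calculation above (the
process names and the final value are only examples, not recommendations),
the reserve could be estimated and applied like this::

    # Largest VSZ among the recovery tools plus the sum of their RSS,
    # both reported by ps in kilobytes.
    ps -o vsz=,rss= -C sshd,bash,top | awk '
        { if ($1 > vsz) vsz = $1; rss += $2 }
        END { printf "suggested reserve: %d kB\n", vsz + rss }'

    # Apply the new reserve (in kilobytes); it takes effect the next time
    # an application requests memory.
    echo 131072 > /proc/sys/vm/admin_reserve_kbytes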


compact_memory
==============

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important, for example, in the allocation
of huge pages, although processes will also directly compact memory as
required.


compaction_proactiveness
========================

This tunable takes a value in the range [0, 100] with a default value of
20. This tunable determines how aggressively compaction is done in the
background. Writing a non-zero value to this tunable will immediately
trigger proactive compaction. Setting it to 0 disables proactive compaction.

Note that compaction has a non-trivial system-wide impact as pages
belonging to different processes are moved around, which could also lead
to latency spikes in unsuspecting applications. The kernel employs
various heuristics to avoid wasting CPU cycles if it detects that
proactive compaction is not being effective.

Be careful when setting it to extreme values like 100, as that may
cause excessive background compaction activity.
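
For example (the proactiveness value below is only illustrative)::

    # One-off compaction of all zones (requires CONFIG_COMPACTION):
    echo 1 > /proc/sys/vm/compact_memory

    # Make background compaction more aggressive than the default of 20;
    # the write itself also triggers an immediate proactive compaction pass.
    echo 40 > /proc/sys/vm/compaction_proactiveness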


compact_unevictable_allowed
===========================

Available only when CONFIG_COMPACTION is set. When set to 1, compaction is
allowed to examine the unevictable lru (mlocked pages) for pages to compact.
This should be used on systems where stalls for minor page faults are an
acceptable trade for large contiguous free memory. Set to 0 to prevent
compaction from moving pages that are unevictable. Default value is 1.
On CONFIG_PREEMPT_RT the default value is 0, in order to avoid page faults
caused by compaction, which would block the task from becoming active until
the fault is resolved.


dirty_background_bytes
======================

Contains the amount of dirty memory at which the background kernel
flusher threads will start writeback.

Note:
  dirty_background_bytes is the counterpart of dirty_background_ratio. Only
  one of them may be specified at a time. When one sysctl is written it is
  immediately taken into account to evaluate the dirty memory limits and the
  other appears as 0 when read.


dirty_background_ratio
======================

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which the background kernel
flusher threads will start writing out dirty data.

The total available memory is not equal to total system memory.


dirty_bytes
===========

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.


dirty_expire_centisecs
======================

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads.  It is expressed in 100'ths
of a second.  Data which has been dirty in-memory for longer than this
interval will be written out next time a flusher thread wakes up.


dirty_ratio
===========

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which a process which is
generating disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory.
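
For instance, to start background writeback earlier and throttle writers
sooner (the percentages and byte values below are purely illustrative)::

    # Ratio-based limits; writing these makes the *_bytes counterparts
    # read back as 0.
    echo 5  > /proc/sys/vm/dirty_background_ratio
    echo 20 > /proc/sys/vm/dirty_ratio

    # Or absolute limits in bytes; these in turn disable the ratio files.
    echo $((64 * 1024 * 1024))  > /proc/sys/vm/dirty_background_bytes
    echo $((256 * 1024 * 1024)) > /proc/sys/vm/dirty_bytes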


dirtytime_expire_seconds
========================

When a lazytime inode is constantly having its pages dirtied, the inode with
an updated timestamp will never get a chance to be written out. And, if the
only thing that has happened on the file system is a dirtytime inode caused
by an atime update, a worker will be scheduled to make sure that inode
eventually gets pushed out to disk. This tunable is used to define when a
dirty inode is old enough to be eligible for writeback by the kernel flusher
threads. It is also used as the interval at which the dirtytime_writeback
thread wakes up.


dirty_writeback_centisecs
=========================

The kernel flusher threads will periodically wake up and write `old` data
out to disk.  This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.


drop_caches
===========

Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes.  Once dropped, their
memory becomes free.

To free pagecache::

    echo 1 > /proc/sys/vm/drop_caches

To free reclaimable slab objects (includes dentries and inodes)::

    echo 2 > /proc/sys/vm/drop_caches

To free slab objects and pagecache::

    echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
`sync` prior to writing to /proc/sys/vm/drop_caches.  This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.

This file is not a means to control the growth of the various kernel caches
(inodes, dentries, pagecache, etc...)  These objects are automatically
reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems.  Since it discards cached
objects, it may cost a significant amount of I/O and CPU to recreate the
dropped objects, especially if they were under heavy use.  Because of this,
use outside of a testing or debugging environment is not recommended.

You may see informational messages in your kernel log when this file is
used::

    cat (1234): drop_caches: 3

These are informational only.  They do not mean that anything is wrong
with your system.  To disable them, echo 4 (bit 2) into drop_caches.
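
As an example of the sequence described above (a debugging aid only, not
something to run routinely)::

    # Write dirty data back first so that more objects are clean and can
    # actually be dropped, then drop pagecache, dentries and inodes.
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # Optionally silence the informational log messages for later writes.
    echo 4 > /proc/sys/vm/drop_caches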


enable_soft_offline
===================

Correctable memory errors are very common on servers. Soft-offline is the
kernel's solution for memory pages having (excessive) corrected memory errors.

For different types of pages, soft-offline has different behaviors and costs.

- For a raw error page, soft-offline migrates the in-use page's content to
  a new raw page.

- For a page that is part of a transparent hugepage, soft-offline splits the
  transparent hugepage into raw pages, then migrates only the raw error page.
  As a result, the user is transparently backed by 1 less hugepage, impacting
  memory access performance.

- For a page that is part of a HugeTLB hugepage, soft-offline first migrates
  the entire HugeTLB hugepage, during which a free hugepage will be consumed
  as the migration target.  Then the original hugepage is dissolved into raw
  pages without compensation, reducing the capacity of the HugeTLB pool by 1.

It is the user's call to choose between reliability (staying away from fragile
physical memory) and the performance / capacity implications in the
transparent and HugeTLB cases.

For all architectures, enable_soft_offline controls whether to soft offline
memory pages.  When set to 1, the kernel attempts to soft offline the pages
whenever it thinks it is needed.  When set to 0, the kernel returns EOPNOTSUPP
to requests to soft offline the pages.  Its default value is 1.

It is worth mentioning that after setting enable_soft_offline to 0, the
following requests to soft offline pages will not be performed:

- Request to soft offline pages from RAS Correctable Errors Collector.

- On ARM, the request to soft offline pages from GHES driver.

- On PARISC, the request to soft offline pages from Page Deallocation Table.


extfrag_threshold
=================

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
debugfs shows what the fragmentation index for each order is in each zone in
the system. Values tending towards 0 imply allocations would fail due to lack
of memory, values towards 1000 imply failures are due to fragmentation and -1
implies that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold. The default value is 500.
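
For example, assuming debugfs is mounted at the usual /sys/kernel/debug
location (the threshold value below is only illustrative)::

    # Inspect the per-zone, per-order fragmentation indexes:
    cat /sys/kernel/debug/extfrag/extfrag_index

    # Lower the threshold so compaction is preferred over direct reclaim
    # for a wider range of fragmentation indexes:
    echo 400 > /proc/sys/vm/extfrag_threshold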


highmem_is_dirtyable
====================

Available only for systems with CONFIG_HIGHMEM enabled (32-bit systems).

This parameter controls whether the high memory is considered for dirty
writers throttling.  This is not the case by default, which means that
only the amount of memory directly visible/usable by the kernel can
be dirtied. As a result, on systems with a large amount of memory and
lowmem basically depleted, writers might be throttled too early and
streaming writes can get very slow.

Changing the value to non zero would allow more memory to be dirtied
and thus allow writers to write more data which can be flushed to the
storage more effectively. Note this also comes with a risk of premature
OOM killer invocation because some writers (e.g. direct block device
writes) can only use the low memory and they can fill it up with dirty
data without any throttling.


hugetlb_shm_group
=================

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.


laptop_mode
===========

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in
Documentation/admin-guide/laptops/laptop-mode.rst.


legacy_va_layout
================

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.


lowmem_reserve_ratio
====================

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which *could* use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio` tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap then
you probably should change the lowmem_reserve_ratio setting.

The lowmem_reserve_ratio is an array. You can see them by reading this file::

    % cat /proc/sys/vm/lowmem_reserve_ratio
    256     256     32

But, these values are not used directly. The kernel calculates the number of
protection pages for each zone from them. These are shown as an array of
protection pages in /proc/zoneinfo like the following. (This is an example
from an x86-64 box.) Each zone has an array of protection pages like this::

    Node 0, zone      DMA
      pages free     1355
            min      3
            low      3
            high     4
            :
            :
        numa_other   0
            protection: (0, 2004, 2004, 2004)
                        ^^^^^^^^^^^^^^^^^^^^
      pagesets
        cpu: 0 pcp: 0
            :

These protection values are added to the watermark when judging whether this
zone should be used for a page allocation or should be reclaimed instead.

In this example, if normal pages (index=2) are requested from this DMA zone
and watermark[WMARK_HIGH] is used as the watermark, the kernel judges this
zone should not be used because pages_free(1355) is smaller than watermark +
protection[2] (4 + 2004 = 2008). If this protection value is 0, this zone
would be used for a normal page request. If the request is for the DMA zone
itself (index=0), protection[0] (=0) is used.

zone[i]'s protection[j] is calculated by the following expression::

  (i < j):
    zone[i]->protection[j]
    = (total sums of managed_pages from zone[i+1] to zone[j] on the node)
      / lowmem_reserve_ratio[i];
  (i = j):
    (should not be protected. = 0)
  (i > j):
    (not necessary, but looks 0)

The default values of lowmem_reserve_ratio[i] are

    === ====================================
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others)
    === ====================================

As the expression shows, these values are reciprocals of the ratio: 256 means
1/256, so the number of protection pages becomes about "0.39%" of the total
managed pages of the higher zones on the node.

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
disables protection of the pages.


max_map_count:
==============

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap, mprotect, and madvise, and also when loading
shared libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65530.
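
Applications such as malloc debuggers or memory-intensive databases sometimes
need a larger limit; an illustrative way to inspect and raise it (the value
is only an example)::

    # Current limit and the number of map areas used by the current
    # process (one line per mapping):
    cat /proc/sys/vm/max_map_count
    wc -l /proc/self/maps

    # Raise the limit:
    echo 262144 > /proc/sys/vm/max_map_count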


mem_profiling
==============

Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)

1: Enable memory profiling.

0: Disable memory profiling.

Enabling memory profiling introduces a small performance overhead for all
memory allocations.

The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
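
For example, to turn profiling on and look at the collected per-callsite
allocation data (reported in /proc/allocinfo when this feature is built in;
see Documentation/mm/allocation-profiling.rst for the output format)::

    echo 1 > /proc/sys/vm/mem_profiling

    # Show the call sites with the largest outstanding allocations:
    sort -g /proc/allocinfo | tail -n 10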


memory_failure_early_kill:
==========================

Control how to kill processes when an uncorrected memory error (typically
a 2bit error in a memory module) is detected in the background by hardware
that cannot be handled by the kernel. In some cases (like the page
still having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications. But if there is
no other up-to-date copy of the data it will kill processes to prevent any
data corruption from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected.  Note this is not supported
for a few types of pages, like kernel internally allocated data or
the swap cache, but works for the majority of user pages.

0: Only unmap the corrupted page from all processes and only kill a process
who tries to access it.

The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
handle this if they want to.

This is only active on architectures/platforms with advanced machine
check handling and depends on the hardware capabilities.

Applications can override this setting individually with the PR_MCE_KILL prctl.


memory_failure_recovery
=======================

Enable memory failure recovery (when supported by the platform)

1: Attempt recovery.

0: Always panic on a memory failure.


min_free_kbytes
===============

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.


min_slab_ratio
==============

This is available only on NUMA kernels.

A percentage of the total pages in each zone. During zone reclaim (i.e.
when allocation falls back from the local zone), slabs will be reclaimed if
more than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.


min_unmapped_ratio
==================

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files. Otherwise, only unmapped pages backed by normal files but not tmpfs
files and similar are considered.

The default is 1 percent.


mmap_min_addr
=============

This file indicates the amount of address space which a user process will
be restricted from mmapping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.
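
For example, to establish the commonly used 64k floor (written here as an
explicit byte value)::

    echo 65536 > /proc/sys/vm/mmap_min_addr
    cat /proc/sys/vm/mmap_min_addr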


mmap_rnd_bits
=============

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations on architectures which support
tuning address space randomization.  This value will be bounded
by the architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_bits tunable.


mmap_rnd_compat_bits
====================

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations for applications run in
compatibility mode on architectures which support tuning address
space randomization.  This value will be bounded by the
architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_compat_bits tunable.


nr_hugepages
============

Change the minimum size of the hugepage pool.

See Documentation/admin-guide/mm/hugetlbpage.rst
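
For example, to reserve a pool of default-sized huge pages and verify the
result (the pool size is illustrative, and the allocation may fall short if
memory is fragmented)::

    echo 128 > /proc/sys/vm/nr_hugepages

    # The pool counters are reported in /proc/meminfo:
    grep -i hugepages /proc/meminfo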


hugetlb_optimize_vmemmap
========================

This knob is not available when the size of 'struct page' (a structure
defined in include/linux/mm_types.h) is not a power of two (an unusual
system configuration could result in this).

Enable (set to 1) or disable (set to 0) HugeTLB Vmemmap Optimization (HVO).

Once enabled, the vmemmap pages of subsequent allocation of HugeTLB pages
from the buddy allocator will be optimized (7 pages per 2MB HugeTLB page and
4095 pages per 1GB HugeTLB page), whereas already allocated HugeTLB pages
will not be optimized.  When those optimized HugeTLB pages are freed from the
HugeTLB pool back to the buddy allocator, the vmemmap pages representing that
range need to be remapped again and the vmemmap pages discarded earlier need
to be reallocated again.  If your use case is that HugeTLB pages are
allocated 'on the fly' (e.g. never explicitly allocating HugeTLB pages with
'nr_hugepages' but only setting 'nr_overcommit_hugepages', so that those
overcommitted HugeTLB pages are allocated 'on the fly') instead of being
pulled from the HugeTLB pool, you should weigh the benefits of memory savings
against the extra overhead (~2x slower than before) of allocating or freeing
HugeTLB pages between the HugeTLB pool and the buddy allocator.  Another
behavior to note is that if the system is under heavy memory pressure, it
could prevent the user from freeing HugeTLB pages from the HugeTLB pool to
the buddy allocator, since the allocation of vmemmap pages could fail; you
have to retry later if your system encounters this situation.

Once disabled, the vmemmap pages of subsequent allocation of HugeTLB pages
from the buddy allocator will not be optimized, meaning the extra overhead at
allocation time from the buddy allocator disappears, whereas already
optimized HugeTLB pages will not be affected.  If you want to make sure there
are no optimized HugeTLB pages, you can set "nr_hugepages" to 0 first and
then disable this, as sketched below.  Note that writing 0 to nr_hugepages
will make any "in use" HugeTLB pages become surplus pages.  So, those surplus
pages are still optimized until they are no longer in use.  You would need to
wait for those surplus pages to be released before there are no optimized
pages in the system.
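
A minimal sketch of the "make sure there are no optimized pages" procedure
described above (assuming no HugeTLB pages are in use by applications at this
point, and an illustrative pool size)::

    # Drain the pool first so that no optimized pages remain ...
    echo 0 > /proc/sys/vm/nr_hugepages

    # ... then disable the optimization and rebuild the pool if desired.
    echo 0   > /proc/sys/vm/hugetlb_optimize_vmemmap
    echo 128 > /proc/sys/vm/nr_hugepages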


nr_hugepages_mempolicy
======================

Change the size of the hugepage pool at run-time on a specific
set of NUMA nodes.

See Documentation/admin-guide/mm/hugetlbpage.rst


nr_overcommit_hugepages
=======================

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/admin-guide/mm/hugetlbpage.rst


nr_trim_pages
=============

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/admin-guide/mm/nommu-mmap.rst for more information.


numa_zonelist_order
===================

This sysctl is only for NUMA and it is deprecated. Anything but
Node order will fail!

'where the memory is allocated from' is controlled by zonelists.

(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for simplicity;
you may be able to read ZONE_DMA as ZONE_DMA32 where applicable.)

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows::

  ZONE_NORMAL -> ZONE_DMA

This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order.
Assume a 2 node NUMA system; below is the zonelist of Node(0)'s GFP_KERNEL::

  (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
  (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type(A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL exhaustion. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.

Type(B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type(A) is called "Node" order. Type (B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone Order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.

On 32-bit, the Normal zone needs to be preserved for allocations accessible
by the kernel, so "zone" order will be selected.

On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.

Default order is recommended unless this is causing problems for your
system/application.


oom_dump_tasks
==============

Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing and includes such information as
pid, uid, tgid, vm size, rss, pgtables_bytes, swapents, oom_score_adj
score, and name.  This is helpful to determine why the OOM killer was
invoked, to identify the rogue task that caused it, and to determine why
the OOM killer chose the task it did to kill.

If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 1 (enabled).


oom_kill_allocating_task
========================

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.


overcommit_kbytes
=================

When overcommit_memory is set to 2, the committed address space is not
permitted to exceed swap plus this amount of physical RAM. See below.

Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one
of them may be specified at a time. Setting one disables the other (which
then appears as 0 when read).


overcommit_memory
=================

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel compares the userspace memory request
size against total memory plus swap and rejects obvious overcommits.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/mm/overcommit-accounting.rst and
mm/util.c::__vm_enough_memory() for more information.


overcommit_ratio
================

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM. See above.
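
For example, to switch to the "never overcommit" policy with a 75% RAM
contribution and check the resulting commit limit (the percentage is only an
example)::

    echo 2  > /proc/sys/vm/overcommit_memory
    echo 75 > /proc/sys/vm/overcommit_ratio

    # CommitLimit and Committed_AS reflect the new policy:
    grep -i commit /proc/meminfo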


page-cluster
============

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt. This is the swap counterpart
to page cache readahead.
The mentioned consecutivity is not in terms of virtual/physical addresses,
but consecutive on swap space - that means they were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
Zero disables swap readahead completely.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

Lower values mean lower latencies for initial faults, but at the same time
extra faults and I/O delays for subsequent faults that would have been
covered by the consecutive pages readahead would have brought in.
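
For example (remember the value is logarithmic: 2 means 2^2 = 4 pages per
swap readahead attempt)::

    # Read up to 4 consecutive pages from swap per attempt:
    echo 2 > /proc/sys/vm/page-cluster

    # Disable swap readahead entirely:
    echo 0 > /proc/sys/vm/page-cluster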


page_lock_unfairness
====================

This value determines the number of times that the page lock can be
stolen from under a waiter. After the lock has been stolen the number
of times specified in this file (the default is 5), the "fair lock
handoff" semantics will apply, and the waiter will only be awakened if
the lock can be taken.


panic_on_oom
============

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will invoke the OOM killer to kill
some rogue process when out of memory. Usually the OOM killer is able
to kill a rogue process and the system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes using
mempolicy or cpusets, and those nodes run out of memory, one process
may be killed by the OOM killer and no panic occurs, because memory on
other nodes may still be free and the system as a whole may not yet be
in a fatal state.

If this is set to 2, the kernel always panics on out-of-memory, even
in the situations described above and even when the OOM occurs inside
a memory cgroup.

The default value is 0.

Values 1 and 2 are intended for cluster failover; select whichever
matches your failover policy.

panic_on_oom=2 combined with kdump is a very powerful tool for
investigating why an OOM happened, since it lets you capture a memory
snapshot at that point.


percpu_pagelist_high_fraction
=============================

This is the fraction of pages in each zone that can be stored on
per-cpu page lists. It is an upper boundary that is divided depending
on the number of online CPUs. The minimum value is 8, which means that
no more than 1/8th of the pages in each zone may be stored on per-cpu
page lists. This entry only changes the value of hot per-cpu page
lists. A user can specify a number like 100 to allow 1/100th of each
zone to be distributed between the per-cpu lists.

The batch value of each per-cpu page list remains the same regardless
of the value of the high fraction, so allocation latencies are
unaffected.

The initial value is zero. In that case, the kernel sets the pcp->high
mark based on the low watermark for the zone and the number of local
online CPUs. If the user writes '0' to this sysctl, it reverts to this
default behavior.
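
A minimal sketch of adjusting this limit (the fractions shown are only
examples)::

	# cap per-cpu page lists at 1/8th of each zone (the minimum)
	echo 8 > /proc/sys/vm/percpu_pagelist_high_fraction

	# restore the default, kernel-computed pcp->high behavior
	echo 0 > /proc/sys/vm/percpu_pagelist_high_fraction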


stat_interval
=============

The time interval at which vm statistics are updated. The default is
1 second.


stat_refresh
============

Any read or write (by root only) flushes all the per-cpu vm statistics
into their global totals, for more accurate reports when testing,
e.g.::

	cat /proc/sys/vm/stat_refresh /proc/meminfo

As a side-effect, it also checks for negative totals (elsewhere
reported as 0) and "fails" with EINVAL if any are found, with a
warning in dmesg. (At time of writing, a few stats are known sometimes
to be found negative, with no ill effects: errors and warnings on
these stats are suppressed.)


numa_stat
=========

This interface allows runtime configuration of numa statistics.

When page allocation performance becomes a bottleneck and you can
tolerate some possible tool breakage and decreased numa counter
precision, you can do::

	echo 0 > /proc/sys/vm/numa_stat

When page allocation performance is not a bottleneck and you want all
tooling to work, you can do::

	echo 1 > /proc/sys/vm/numa_stat


swappiness
==========

This control is used to define the rough relative IO cost of swapping
and filesystem paging, as a value between 0 and 200. At 100, the VM
assumes equal IO cost and will thus apply memory pressure to the page
cache and swap-backed pages equally; lower values signify more
expensive swap IO, higher values signify cheaper swap IO.

Keep in mind that filesystem IO patterns under memory pressure tend to
be more efficient than swap's random IO. An optimal value will require
experimentation and will also be workload-dependent.

The default value is 60.

For in-memory swap, like zram or zswap, as well as hybrid setups that
have swap on faster devices than the filesystem, values beyond 100 can
be considered. For example, if the random IO against the swap device
is on average 2x faster than IO from the filesystem, swappiness should
be 133 (x + 2x = 200, 2x = 133.33).

At 0, the kernel will not initiate swap until the amount of free and
file-backed pages is less than the high watermark in a zone.
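
To spell out that arithmetic: if swap IO is assumed to be twice as
cheap as filesystem IO, the 0-200 range is split 2:1 in favour of
swap, i.e. 200 * 2/3 = 133.33, rounded to 133. Applying that
(illustrative) value::

	# swap device assumed ~2x faster than the filesystem (see above)
	echo 133 > /proc/sys/vm/swappiness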


unprivileged_userfaultfd
========================

This flag controls the mode in which unprivileged users can use the
userfaultfd system calls. Set this to 0 to restrict unprivileged users
to handling page faults in user mode only. In this case, users without
CAP_SYS_PTRACE must pass UFFD_USER_MODE_ONLY in order for userfaultfd
to succeed. Prohibiting the use of userfaultfd for handling faults
from kernel mode may make certain vulnerabilities more difficult to
exploit.

Set this to 1 to allow unprivileged users to use the userfaultfd
system calls without any restrictions.

The default value is 0.

Another way to control permissions for userfaultfd is to use
/dev/userfaultfd instead of userfaultfd(2). See
Documentation/admin-guide/mm/userfaultfd.rst.


user_reserve_kbytes
===================

When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single
memory-hogging process that leaves them unable to recover (i.e. unable
to kill the hog).

user_reserve_kbytes defaults to min(3% of the current process size,
128MB).

If this is reduced to zero, then the user will be allowed to allocate
all free memory with a single process, minus admin_reserve_kbytes. Any
subsequent attempts to execute a command will then result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.
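
A minimal sketch of inspecting and raising this reserve under "never
overcommit" mode; the 256MB figure is purely illustrative and should
be sized to the virtual memory needs of your recovery tools::

	# current reserve, in kilobytes
	cat /proc/sys/vm/user_reserve_kbytes

	# reserve roughly 256MB (262144 kB) of free memory
	echo 262144 > /proc/sys/vm/user_reserve_kbytes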


vfs_cache_pressure
==================

This percentage value controls the tendency of the kernel to reclaim
the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt
to reclaim dentries and inodes at a "fair" rate with respect to
pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes
the kernel to prefer to retain dentry and inode caches. When
vfs_cache_pressure=0, the kernel will never reclaim dentries and
inodes due to memory pressure and this can easily lead to
out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have
negative performance impact. Reclaim code needs to take various locks
to find freeable directory and inode objects. With
vfs_cache_pressure=1000, it will look for ten times more freeable
objects than there are.
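
A minimal sketch of the two directions described above (the values are
only examples)::

	# prefer to keep dentry and inode caches, e.g. on a file server
	echo 50 > /proc/sys/vm/vfs_cache_pressure

	# reclaim dentries and inodes more aggressively than pagecache
	echo 200 > /proc/sys/vm/vfs_cache_pressure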


watermark_boost_factor
======================

This factor controls the level of reclaim when memory is being
fragmented. It defines the percentage of the high watermark of a zone
that will be reclaimed if pages of different mobility are being mixed
within pageblocks. The intent is that compaction has less work to do
in the future and to increase the success rate of future high-order
allocations such as SLUB allocations, THP and hugetlbfs pages.

To make it sensible with respect to the watermark_scale_factor
parameter, the unit is in fractions of 10,000. The default value of
15,000 means that up to 150% of the high watermark will be reclaimed
in the event of a pageblock being mixed due to fragmentation. The
level of reclaim is determined by the number of fragmentation events
that occurred in the recent past. If the computed amount is smaller
than a pageblock, then a full pageblock's worth of pages will be
reclaimed (e.g. 2MB on 64-bit x86). A boost factor of 0 will disable
the feature.


watermark_scale_factor
======================

This factor controls the aggressiveness of kswapd. It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 3000, or 30% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system. This knob
can then be used to tune kswapd aggressiveness accordingly.
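
A minimal sketch of widening the gap between watermarks so that kswapd
wakes up earlier (the 1% figure is only an example)::

	# check for stalls before and after tuning
	grep -E 'allocstall|kswapd_low_wmark_hit_quickly' /proc/vmstat

	# 100 / 10,000 = 1% of node memory between the watermarks
	echo 100 > /proc/sys/vm/watermark_scale_factor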


zone_reclaim_mode
=================

Zone_reclaim_mode allows someone to set more or less aggressive
approaches to reclaim memory when a zone runs out of memory. If it is
set to zero then no zone reclaim occurs. Allocations will be satisfied
from other zones / nodes in the system.

The value is formed by OR'ing together:

= ===================================
1 Zone reclaim on
2 Zone reclaim writes dirty pages out
4 Zone reclaim swaps pages
= ===================================

zone_reclaim_mode is disabled by default. For file servers or
workloads that benefit from having their data cached,
zone_reclaim_mode should be left disabled as the caching effect is
likely to be more important than data locality.

Consider enabling one or more zone_reclaim mode bits if it's known
that the workload is partitioned such that each partition fits within
a NUMA node and that accessing remote memory would cause a measurable
performance reduction. The page allocator will take additional actions
before allocating off node pages.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up and so
effectively throttles the process. This may decrease the performance
of a single process, since it can no longer use all of system memory
to buffer the outgoing writes, but it preserves the memory on other
nodes so that the performance of other processes running on other
nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
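
A minimal sketch of the bitmask usage described above (which bits are
appropriate depends entirely on how the workload is partitioned)::

	# enable plain zone reclaim (bit 1)
	echo 1 > /proc/sys/vm/zone_reclaim_mode

	# also write out dirty pages during zone reclaim (1 | 2 = 3)
	echo 3 > /proc/sys/vm/zone_reclaim_mode

	# disable zone reclaim again (the default)
	echo 0 > /proc/sys/vm/zone_reclaim_mode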