
  1 .. _cgroup-v2:
  2 
  3 ================
  4 Control Group v2
  5 ================
  6 
  7 :Date: October, 2015
  8 :Author: Tejun Heo <tj@kernel.org>
  9 
 10 This is the authoritative documentation on the design, interface and
 11 conventions of cgroup v2.  It describes all userland-visible aspects
 12 of cgroup including core and specific controller behaviors.  All
 13 future changes must be reflected in this document.  Documentation for
 14 v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
 15 
 16 .. CONTENTS
 17 
 18    1. Introduction
 19      1-1. Terminology
 20      1-2. What is cgroup?
 21    2. Basic Operations
 22      2-1. Mounting
 23      2-2. Organizing Processes and Threads
 24        2-2-1. Processes
 25        2-2-2. Threads
 26      2-3. [Un]populated Notification
 27      2-4. Controlling Controllers
 28        2-4-1. Enabling and Disabling
 29        2-4-2. Top-down Constraint
 30        2-4-3. No Internal Process Constraint
 31      2-5. Delegation
 32        2-5-1. Model of Delegation
 33        2-5-2. Delegation Containment
 34      2-6. Guidelines
 35        2-6-1. Organize Once and Control
 36        2-6-2. Avoid Name Collisions
 37    3. Resource Distribution Models
 38      3-1. Weights
 39      3-2. Limits
 40      3-3. Protections
 41      3-4. Allocations
 42    4. Interface Files
 43      4-1. Format
 44      4-2. Conventions
 45      4-3. Core Interface Files
 46    5. Controllers
 47      5-1. CPU
 48        5-1-1. CPU Interface Files
 49      5-2. Memory
 50        5-2-1. Memory Interface Files
 51        5-2-2. Usage Guidelines
 52        5-2-3. Memory Ownership
 53      5-3. IO
 54        5-3-1. IO Interface Files
 55        5-3-2. Writeback
 56        5-3-3. IO Latency
 57          5-3-3-1. How IO Latency Throttling Works
 58          5-3-3-2. IO Latency Interface Files
 59        5-3-4. IO Priority
 60      5-4. PID
 61        5-4-1. PID Interface Files
 62      5-5. Cpuset
  63        5-5-1. Cpuset Interface Files
 64      5-6. Device
 65      5-7. RDMA
 66        5-7-1. RDMA Interface Files
 67      5-8. HugeTLB
  68        5-8-1. HugeTLB Interface Files
 69      5-9. Misc
  70        5-9-1. Miscellaneous cgroup Interface Files
  71        5-9-2. Migration and Ownership
 72      5-10. Others
 73        5-10-1. perf_event
 74      5-N. Non-normative information
 75        5-N-1. CPU controller root cgroup process behaviour
 76        5-N-2. IO controller root cgroup process behaviour
 77    6. Namespace
 78      6-1. Basics
 79      6-2. The Root and Views
 80      6-3. Migration and setns(2)
 81      6-4. Interaction with Other Namespaces
 82    P. Information on Kernel Programming
 83      P-1. Filesystem Support for Writeback
 84    D. Deprecated v1 Core Features
 85    R. Issues with v1 and Rationales for v2
 86      R-1. Multiple Hierarchies
 87      R-2. Thread Granularity
 88      R-3. Competition Between Inner Nodes and Threads
 89      R-4. Other Interface Issues
 90      R-5. Controller Issues and Remedies
 91        R-5-1. Memory
 92 
 93 
 94 Introduction
 95 ============
 96 
 97 Terminology
 98 -----------
 99 
100 "cgroup" stands for "control group" and is never capitalized.  The
101 singular form is used to designate the whole feature and also as a
102 qualifier as in "cgroup controllers".  When explicitly referring to
103 multiple individual control groups, the plural form "cgroups" is used.
104 
105 
106 What is cgroup?
107 ---------------
108 
109 cgroup is a mechanism to organize processes hierarchically and
110 distribute system resources along the hierarchy in a controlled and
111 configurable manner.
112 
113 cgroup is largely composed of two parts - the core and controllers.
114 cgroup core is primarily responsible for hierarchically organizing
115 processes.  A cgroup controller is usually responsible for
116 distributing a specific type of system resource along the hierarchy
117 although there are utility controllers which serve purposes other than
118 resource distribution.
119 
120 cgroups form a tree structure and every process in the system belongs
121 to one and only one cgroup.  All threads of a process belong to the
122 same cgroup.  On creation, all processes are put in the cgroup that
123 the parent process belongs to at the time.  A process can be migrated
124 to another cgroup.  Migration of a process doesn't affect already
125 existing descendant processes.
126 
127 Following certain structural constraints, controllers may be enabled or
128 disabled selectively on a cgroup.  All controller behaviors are
129 hierarchical - if a controller is enabled on a cgroup, it affects all
 130 processes which belong to the cgroups comprising the inclusive
131 sub-hierarchy of the cgroup.  When a controller is enabled on a nested
132 cgroup, it always restricts the resource distribution further.  The
133 restrictions set closer to the root in the hierarchy can not be
134 overridden from further away.
135 
136 
137 Basic Operations
138 ================
139 
140 Mounting
141 --------
142 
 143 Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
144 hierarchy can be mounted with the following mount command::
145 
146   # mount -t cgroup2 none $MOUNT_POINT
147 
148 cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
149 controllers which support v2 and are not bound to a v1 hierarchy are
150 automatically bound to the v2 hierarchy and show up at the root.
151 Controllers which are not in active use in the v2 hierarchy can be
152 bound to other hierarchies.  This allows mixing v2 hierarchy with the
153 legacy v1 multiple hierarchies in a fully backward compatible way.
154 
155 A controller can be moved across hierarchies only after the controller
156 is no longer referenced in its current hierarchy.  Because per-cgroup
157 controller states are destroyed asynchronously and controllers may
158 have lingering references, a controller may not show up immediately on
159 the v2 hierarchy after the final umount of the previous hierarchy.
160 Similarly, a controller should be fully disabled to be moved out of
161 the unified hierarchy and it may take some time for the disabled
162 controller to become available for other hierarchies; furthermore, due
163 to inter-controller dependencies, other controllers may need to be
164 disabled too.
165 
166 While useful for development and manual configurations, moving
167 controllers dynamically between the v2 and other hierarchies is
168 strongly discouraged for production use.  It is recommended to decide
 169 the hierarchies and controller associations before starting to use
 170 the controllers after system boot.
171 
172 During transition to v2, system management software might still
173 automount the v1 cgroup filesystem and so hijack all controllers
174 during boot, before manual intervention is possible. To make testing
175 and experimenting easier, the kernel parameter cgroup_no_v1= allows
 176 disabling controllers in v1 and making them always available in v2.
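
For example, booting with the following kernel command-line setting (a
sketch; the controller list is illustrative, and "all" can be used to
disable every v1 controller) keeps the cpu and memory controllers out
of the v1 hierarchies::

  cgroup_no_v1=cpu,memory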
177 
 178 cgroup v2 currently supports the following mount options; an example mount line follows the list.
179 
180   nsdelegate
181         Consider cgroup namespaces as delegation boundaries.  This
182         option is system wide and can only be set on mount or modified
183         through remount from the init namespace.  The mount option is
184         ignored on non-init namespace mounts.  Please refer to the
185         Delegation section for details.
186 
187   favordynmods
188         Reduce the latencies of dynamic cgroup modifications such as
189         task migrations and controller on/offs at the cost of making
190         hot path operations such as forks and exits more expensive.
191         The static usage pattern of creating a cgroup, enabling
192         controllers, and then seeding it with CLONE_INTO_CGROUP is
193         not affected by this option.
194 
195   memory_localevents
196         Only populate memory.events with data for the current cgroup,
 197         and not any subtrees.  This is the legacy behaviour; the default
 198         behaviour without this option is to include subtree counts.
199         This option is system wide and can only be set on mount or
200         modified through remount from the init namespace. The mount
201         option is ignored on non-init namespace mounts.
202 
203   memory_recursiveprot
204         Recursively apply memory.min and memory.low protection to
205         entire subtrees, without requiring explicit downward
206         propagation into leaf cgroups.  This allows protecting entire
207         subtrees from one another, while retaining free competition
208         within those subtrees.  This should have been the default
209         behavior but is a mount-option to avoid regressing setups
210         relying on the original semantics (e.g. specifying bogusly
211         high 'bypass' protection values at higher tree levels).
212 
213   memory_hugetlb_accounting
214         Count HugeTLB memory usage towards the cgroup's overall
215         memory usage for the memory controller (for the purpose of
 216         statistics reporting and memory protection).  This is a new
217         behavior that could regress existing setups, so it must be
218         explicitly opted in with this mount option.
219 
220         A few caveats to keep in mind:
221 
222         * There is no HugeTLB pool management involved in the memory
223           controller. The pre-allocated pool does not belong to anyone.
224           Specifically, when a new HugeTLB folio is allocated to
225           the pool, it is not accounted for from the perspective of the
226           memory controller. It is only charged to a cgroup when it is
 227           actually used (e.g. at page fault time).  Host memory
228           overcommit management has to consider this when configuring
229           hard limits. In general, HugeTLB pool management should be
230           done via other mechanisms (such as the HugeTLB controller).
231         * Failure to charge a HugeTLB folio to the memory controller
232           results in SIGBUS. This could happen even if the HugeTLB pool
233           still has pages available (but the cgroup limit is hit and
234           reclaim attempt fails).
235         * Charging HugeTLB memory towards the memory controller affects
236           memory protection and reclaim dynamics. Any userspace tuning
 237           (e.g. of the low and min limits) needs to take this into account.
238         * HugeTLB pages utilized while this option is not selected
239           will not be tracked by the memory controller (even if cgroup
240           v2 is remounted later on).
241 
242   pids_localevents
 243         This option restores the v1-like behavior of pids.events:max,
 244         that is, only local (inside the cgroup proper) fork failures are
 245         counted.  Without this option, pids.events:max represents any
 246         pids.max enforcement across the cgroup's subtree.
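
For example, the hierarchy may be mounted with several of these options
at once; the option set and mount point below are illustrative::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup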
247 
248 
249 
250 Organizing Processes and Threads
251 --------------------------------
252 
253 Processes
254 ~~~~~~~~~
255 
256 Initially, only the root cgroup exists to which all processes belong.
257 A child cgroup can be created by creating a sub-directory::
258 
259   # mkdir $CGROUP_NAME
260 
261 A given cgroup may have multiple child cgroups forming a tree
262 structure.  Each cgroup has a read-writable interface file
263 "cgroup.procs".  When read, it lists the PIDs of all processes which
264 belong to the cgroup one-per-line.  The PIDs are not ordered and the
265 same PID may show up more than once if the process got moved to
266 another cgroup and then back or the PID got recycled while reading.
267 
268 A process can be migrated into a cgroup by writing its PID to the
269 target cgroup's "cgroup.procs" file.  Only one process can be migrated
270 on a single write(2) call.  If a process is composed of multiple
271 threads, writing the PID of any thread migrates all threads of the
272 process.
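
For example, assuming the target cgroup directory already exists (the
path and PID below are illustrative)::

  # echo 1234 > /sys/fs/cgroup/test-cgroup/cgroup.procs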
273 
274 When a process forks a child process, the new process is born into the
275 cgroup that the forking process belongs to at the time of the
276 operation.  After exit, a process stays associated with the cgroup
277 that it belonged to at the time of exit until it's reaped; however, a
278 zombie process does not appear in "cgroup.procs" and thus can't be
279 moved to another cgroup.
280 
281 A cgroup which doesn't have any children or live processes can be
282 destroyed by removing the directory.  Note that a cgroup which doesn't
283 have any children and is associated only with zombie processes is
284 considered empty and can be removed::
285 
286   # rmdir $CGROUP_NAME
287 
288 "/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
289 cgroup is in use in the system, this file may contain multiple lines,
290 one for each hierarchy.  The entry for cgroup v2 is always in the
291 format "0::$PATH"::
292 
293   # cat /proc/842/cgroup
294   ...
295   0::/test-cgroup/test-cgroup-nested
296 
297 If the process becomes a zombie and the cgroup it was associated with
298 is removed subsequently, " (deleted)" is appended to the path::
299 
300   # cat /proc/842/cgroup
301   ...
302   0::/test-cgroup/test-cgroup-nested (deleted)
303 
304 
305 Threads
306 ~~~~~~~
307 
308 cgroup v2 supports thread granularity for a subset of controllers to
309 support use cases requiring hierarchical resource distribution across
310 the threads of a group of processes.  By default, all threads of a
311 process belong to the same cgroup, which also serves as the resource
312 domain to host resource consumptions which are not specific to a
313 process or thread.  The thread mode allows threads to be spread across
314 a subtree while still maintaining the common resource domain for them.
315 
316 Controllers which support thread mode are called threaded controllers.
317 The ones which don't are called domain controllers.
318 
319 Marking a cgroup threaded makes it join the resource domain of its
320 parent as a threaded cgroup.  The parent may be another threaded
321 cgroup whose resource domain is further up in the hierarchy.  The root
322 of a threaded subtree, that is, the nearest ancestor which is not
323 threaded, is called threaded domain or thread root interchangeably and
324 serves as the resource domain for the entire subtree.
325 
326 Inside a threaded subtree, threads of a process can be put in
327 different cgroups and are not subject to the no internal process
328 constraint - threaded controllers can be enabled on non-leaf cgroups
329 whether they have threads in them or not.
330 
331 As the threaded domain cgroup hosts all the domain resource
332 consumptions of the subtree, it is considered to have internal
333 resource consumptions whether there are processes in it or not and
334 can't have populated child cgroups which aren't threaded.  Because the
 335 root cgroup is not subject to the no internal process constraint, it can
336 serve both as a threaded domain and a parent to domain cgroups.
337 
338 The current operation mode or type of the cgroup is shown in the
339 "cgroup.type" file which indicates whether the cgroup is a normal
340 domain, a domain which is serving as the domain of a threaded subtree,
341 or a threaded cgroup.
342 
343 On creation, a cgroup is always a domain cgroup and can be made
344 threaded by writing "threaded" to the "cgroup.type" file.  The
 345 operation is one-way::
346 
347   # echo threaded > cgroup.type
348 
349 Once threaded, the cgroup can't be made a domain again.  To enable the
350 thread mode, the following conditions must be met.
351 
 352 - As the cgroup will join the parent's resource domain, the parent
 353   must either be a valid (threaded) domain or a threaded cgroup.
354 
355 - When the parent is an unthreaded domain, it must not have any domain
356   controllers enabled or populated domain children.  The root is
357   exempt from this requirement.
358 
359 Topology-wise, a cgroup can be in an invalid state.  Please consider
360 the following topology::
361 
362   A (threaded domain) - B (threaded) - C (domain, just created)
363 
364 C is created as a domain but isn't connected to a parent which can
365 host child domains.  C can't be used until it is turned into a
 366 threaded cgroup.  "cgroup.type" file will report "domain invalid" in
367 these cases.  Operations which fail due to invalid topology use
368 EOPNOTSUPP as the errno.
369 
370 A domain cgroup is turned into a threaded domain when one of its child
 371 cgroups becomes threaded or threaded controllers are enabled in the
372 "cgroup.subtree_control" file while there are processes in the cgroup.
373 A threaded domain reverts to a normal domain when the conditions
374 clear.
375 
376 When read, "cgroup.threads" contains the list of the thread IDs of all
377 threads in the cgroup.  Except that the operations are per-thread
378 instead of per-process, "cgroup.threads" has the same format and
379 behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
380 written to in any cgroup, as it can only move threads inside the same
381 threaded domain, its operations are confined inside each threaded
382 subtree.
383 
384 The threaded domain cgroup serves as the resource domain for the whole
385 subtree, and, while the threads can be scattered across the subtree,
386 all the processes are considered to be in the threaded domain cgroup.
387 "cgroup.procs" in a threaded domain cgroup contains the PIDs of all
388 processes in the subtree and is not readable in the subtree proper.
389 However, "cgroup.procs" can be written to from anywhere in the subtree
390 to migrate all threads of the matching process to the cgroup.
391 
392 Only threaded controllers can be enabled in a threaded subtree.  When
393 a threaded controller is enabled inside a threaded subtree, it only
394 accounts for and controls resource consumptions associated with the
395 threads in the cgroup and its descendants.  All consumptions which
396 aren't tied to a specific thread belong to the threaded domain cgroup.
397 
 398 Because a threaded subtree is exempt from the no internal process
399 constraint, a threaded controller must be able to handle competition
400 between threads in a non-leaf cgroup and its child cgroups.  Each
401 threaded controller defines how such competitions are handled.
402 
403 Currently, the following controllers are threaded and can be enabled
 404 in a threaded cgroup:
405 
406 - cpu
407 - cpuset
408 - perf_event
409 - pids
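
As a sketch of setting up a threaded subtree (all names are
illustrative, the cpu controller is assumed to be available to tgroup,
$PID is a process to be spread across the subtree and $TID is one of
its threads)::

  # mkdir tgroup
  # echo $PID > tgroup/cgroup.procs
  # mkdir tgroup/reader tgroup/writer
  # echo threaded > tgroup/reader/cgroup.type
  # echo threaded > tgroup/writer/cgroup.type
  # echo "+cpu" > tgroup/cgroup.subtree_control
  # echo $TID > tgroup/reader/cgroup.threads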
410 
411 [Un]populated Notification
412 --------------------------
413 
414 Each non-root cgroup has a "cgroup.events" file which contains
415 "populated" field indicating whether the cgroup's sub-hierarchy has
416 live processes in it.  Its value is 0 if there is no live process in
417 the cgroup and its descendants; otherwise, 1.  poll and [id]notify
418 events are triggered when the value changes.  This can be used, for
419 example, to start a clean-up operation after all processes of a given
420 sub-hierarchy have exited.  The populated state updates and
421 notifications are recursive.  Consider the following sub-hierarchy
422 where the numbers in the parentheses represent the numbers of processes
423 in each cgroup::
424 
425   A(4) - B(0) - C(1)
426               \ D(0)
427 
428 A, B and C's "populated" fields would be 1 while D's 0.  After the one
429 process in C exits, B and C's "populated" fields would flip to "0" and
430 file modified events will be generated on the "cgroup.events" files of
431 both cgroups.
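
Because a file modified event is generated, the state change can be
watched with inotify; a sketch using the inotifywait(1) utility from
inotify-tools (assumed to be installed; the path is illustrative)::

  # inotifywait -e modify /sys/fs/cgroup/A/cgroup.events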
432 
433 
434 Controlling Controllers
435 -----------------------
436 
437 Enabling and Disabling
438 ~~~~~~~~~~~~~~~~~~~~~~
439 
440 Each cgroup has a "cgroup.controllers" file which lists all
441 controllers available for the cgroup to enable::
442 
443   # cat cgroup.controllers
444   cpu io memory
445 
446 No controller is enabled by default.  Controllers can be enabled and
447 disabled by writing to the "cgroup.subtree_control" file::
448 
449   # echo "+cpu +memory -io" > cgroup.subtree_control
450 
451 Only controllers which are listed in "cgroup.controllers" can be
452 enabled.  When multiple operations are specified as above, either they
 453 all succeed or all fail.  If multiple operations on the same controller
454 are specified, the last one is effective.
455 
456 Enabling a controller in a cgroup indicates that the distribution of
457 the target resource across its immediate children will be controlled.
458 Consider the following sub-hierarchy.  The enabled controllers are
459 listed in parentheses::
460 
461   A(cpu,memory) - B(memory) - C()
462                             \ D()
463 
464 As A has "cpu" and "memory" enabled, A will control the distribution
465 of CPU cycles and memory to its children, in this case, B.  As B has
466 "memory" enabled but not "CPU", C and D will compete freely on CPU
467 cycles but their division of memory available to B will be controlled.
468 
469 As a controller regulates the distribution of the target resource to
470 the cgroup's children, enabling it creates the controller's interface
471 files in the child cgroups.  In the above example, enabling "cpu" on B
472 would create the "cpu." prefixed controller interface files in C and
473 D.  Likewise, disabling "memory" from B would remove the "memory."
474 prefixed controller interface files from C and D.  This means that the
 475 controller interface files - anything which doesn't start with
 476 "cgroup." - are owned by the parent rather than the cgroup itself.
477 
478 
479 Top-down Constraint
480 ~~~~~~~~~~~~~~~~~~~
481 
482 Resources are distributed top-down and a cgroup can further distribute
483 a resource only if the resource has been distributed to it from the
484 parent.  This means that all non-root "cgroup.subtree_control" files
485 can only contain controllers which are enabled in the parent's
486 "cgroup.subtree_control" file.  A controller can be enabled only if
487 the parent has the controller enabled and a controller can't be
488 disabled if one or more children have it enabled.
489 
490 
491 No Internal Process Constraint
492 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
493 
494 Non-root cgroups can distribute domain resources to their children
495 only when they don't have any processes of their own.  In other words,
496 only domain cgroups which don't contain any processes can have domain
497 controllers enabled in their "cgroup.subtree_control" files.
498 
499 This guarantees that, when a domain controller is looking at the part
500 of the hierarchy which has it enabled, processes are always only on
501 the leaves.  This rules out situations where child cgroups compete
502 against internal processes of the parent.
503 
504 The root cgroup is exempt from this restriction.  Root contains
505 processes and anonymous resource consumption which can't be associated
506 with any other cgroups and requires special treatment from most
507 controllers.  How resource consumption in the root cgroup is governed
508 is up to each controller (for more information on this topic please
509 refer to the Non-normative information section in the Controllers
510 chapter).
511 
512 Note that the restriction doesn't get in the way if there is no
513 enabled controller in the cgroup's "cgroup.subtree_control".  This is
514 important as otherwise it wouldn't be possible to create children of a
515 populated cgroup.  To control resource distribution of a cgroup, the
516 cgroup must create children and transfer all its processes to the
517 children before enabling controllers in its "cgroup.subtree_control"
518 file.
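
For example, to enable the memory controller for the children of a
cgroup that currently hosts a process (the directory name and PID are
illustrative)::

  # mkdir leaf
  # echo 1234 > leaf/cgroup.procs
  # echo "+memory" > cgroup.subtree_control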
519 
520 
521 Delegation
522 ----------
523 
524 Model of Delegation
525 ~~~~~~~~~~~~~~~~~~~
526 
527 A cgroup can be delegated in two ways.  First, to a less privileged
528 user by granting write access of the directory and its "cgroup.procs",
529 "cgroup.threads" and "cgroup.subtree_control" files to the user.
530 Second, if the "nsdelegate" mount option is set, automatically to a
531 cgroup namespace on namespace creation.
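
A sketch of the first method, delegating a sub-hierarchy to user U0
(the path is illustrative)::

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown U0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown U0 /sys/fs/cgroup/delegated/cgroup.subtree_control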
532 
533 Because the resource control interface files in a given directory
534 control the distribution of the parent's resources, the delegatee
535 shouldn't be allowed to write to them.  For the first method, this is
536 achieved by not granting access to these files.  For the second, the
537 kernel rejects writes to all files other than "cgroup.procs" and
538 "cgroup.subtree_control" on a namespace root from inside the
539 namespace.
540 
541 The end results are equivalent for both delegation types.  Once
542 delegated, the user can build sub-hierarchy under the directory,
543 organize processes inside it as it sees fit and further distribute the
544 resources it received from the parent.  The limits and other settings
545 of all resource controllers are hierarchical and regardless of what
546 happens in the delegated sub-hierarchy, nothing can escape the
547 resource restrictions imposed by the parent.
548 
549 Currently, cgroup doesn't impose any restrictions on the number of
550 cgroups in or nesting depth of a delegated sub-hierarchy; however,
551 this may be limited explicitly in the future.
552 
553 
554 Delegation Containment
555 ~~~~~~~~~~~~~~~~~~~~~~
556 
557 A delegated sub-hierarchy is contained in the sense that processes
558 can't be moved into or out of the sub-hierarchy by the delegatee.
559 
560 For delegations to a less privileged user, this is achieved by
561 requiring the following conditions for a process with a non-root euid
562 to migrate a target process into a cgroup by writing its PID to the
563 "cgroup.procs" file.
564 
565 - The writer must have write access to the "cgroup.procs" file.
566 
567 - The writer must have write access to the "cgroup.procs" file of the
568   common ancestor of the source and destination cgroups.
569 
570 The above two constraints ensure that while a delegatee may migrate
571 processes around freely in the delegated sub-hierarchy it can't pull
572 in from or push out to outside the sub-hierarchy.
573 
574 For an example, let's assume cgroups C0 and C1 have been delegated to
575 user U0 who created C00, C01 under C0 and C10 under C1 as follows and
576 all processes under C0 and C1 belong to U0::
577 
578   ~~~~~~~~~~~~~ - C0 - C00
579   ~ cgroup    ~      \ C01
580   ~ hierarchy ~
581   ~~~~~~~~~~~~~ - C1 - C10
582 
583 Let's also say U0 wants to write the PID of a process which is
584 currently in C10 into "C00/cgroup.procs".  U0 has write access to the
585 file; however, the common ancestor of the source cgroup C10 and the
586 destination cgroup C00 is above the points of delegation and U0 would
587 not have write access to its "cgroup.procs" files and thus the write
588 will be denied with -EACCES.
589 
590 For delegations to namespaces, containment is achieved by requiring
591 that both the source and destination cgroups are reachable from the
592 namespace of the process which is attempting the migration.  If either
593 is not reachable, the migration is rejected with -ENOENT.
594 
595 
596 Guidelines
597 ----------
598 
599 Organize Once and Control
600 ~~~~~~~~~~~~~~~~~~~~~~~~~
601 
602 Migrating a process across cgroups is a relatively expensive operation
603 and stateful resources such as memory are not moved together with the
604 process.  This is an explicit design decision as there often exist
605 inherent trade-offs between migration and various hot paths in terms
606 of synchronization cost.
607 
608 As such, migrating processes across cgroups frequently as a means to
609 apply different resource restrictions is discouraged.  A workload
610 should be assigned to a cgroup according to the system's logical and
611 resource structure once on start-up.  Dynamic adjustments to resource
612 distribution can be made by changing controller configuration through
613 the interface files.
614 
615 
616 Avoid Name Collisions
617 ~~~~~~~~~~~~~~~~~~~~~
618 
 619 Interface files for a cgroup and its child cgroups occupy the same
 620 directory and it is possible to create child cgroups which collide
621 with interface files.
622 
623 All cgroup core interface files are prefixed with "cgroup." and each
624 controller's interface files are prefixed with the controller name and
 625 a dot.  A controller's name is composed of lowercase letters and
 626 '_'s but never begins with an '_' so it can be used as the prefix
627 character for collision avoidance.  Also, interface file names won't
628 start or end with terms which are often used in categorizing workloads
629 such as job, service, slice, unit or workload.
630 
631 cgroup doesn't do anything to prevent name collisions and it's the
632 user's responsibility to avoid them.
633 
634 
635 Resource Distribution Models
636 ============================
637 
638 cgroup controllers implement several resource distribution schemes
639 depending on the resource type and expected use cases.  This section
640 describes major schemes in use along with their expected behaviors.
641 
642 
643 Weights
644 -------
645 
646 A parent's resource is distributed by adding up the weights of all
647 active children and giving each the fraction matching the ratio of its
648 weight against the sum.  As only children which can make use of the
649 resource at the moment participate in the distribution, this is
650 work-conserving.  Due to the dynamic nature, this model is usually
651 used for stateless resources.
652 
653 All weights are in the range [1, 10000] with the default at 100.  This
654 allows symmetric multiplicative biases in both directions at fine
655 enough granularity while staying in the intuitive range.
656 
657 As long as the weight is in range, all configuration combinations are
658 valid and there is no reason to reject configuration changes or
659 process migrations.
660 
661 "cpu.weight" proportionally distributes CPU cycles to active children
662 and is an example of this type.
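
As a worked sketch with two hypothetical sibling cgroups A and B,
setting A's weight to 200 while B keeps the default 100 gives A
200 / (200 + 100), i.e. roughly 67% of the CPU cycles whenever both
are runnable::

  # echo 200 > A/cpu.weight
  # cat B/cpu.weight
  100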
663 
664 
665 .. _cgroupv2-limits-distributor:
666 
667 Limits
668 ------
669 
670 A child can only consume up to the configured amount of the resource.
671 Limits can be over-committed - the sum of the limits of children can
672 exceed the amount of resource available to the parent.
673 
 674 Limits are in the range [0, max] and default to "max", which is noop.
675 
676 As limits can be over-committed, all configuration combinations are
677 valid and there is no reason to reject configuration changes or
678 process migrations.
679 
680 "io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
681 on an IO device and is an example of this type.
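
For instance, a sketch limiting reads from a hypothetical device 8:0
to about 2MB/s using the nested-keyed "io.max" format (keys not
specified remain at "max")::

  # echo "8:0 rbps=2097152" > io.max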
682 
683 .. _cgroupv2-protections-distributor:
684 
685 Protections
686 -----------
687 
688 A cgroup is protected up to the configured amount of the resource
689 as long as the usages of all its ancestors are under their
690 protected levels.  Protections can be hard guarantees or best effort
691 soft boundaries.  Protections can also be over-committed in which case
692 only up to the amount available to the parent is protected among
693 children.
694 
 695 Protections are in the range [0, max] and default to 0, which is
696 noop.
697 
698 As protections can be over-committed, all configuration combinations
699 are valid and there is no reason to reject configuration changes or
700 process migrations.
701 
702 "memory.low" implements best-effort memory protection and is an
703 example of this type.
704 
705 
706 Allocations
707 -----------
708 
709 A cgroup is exclusively allocated a certain amount of a finite
710 resource.  Allocations can't be over-committed - the sum of the
711 allocations of children can not exceed the amount of resource
712 available to the parent.
713 
 714 Allocations are in the range [0, max] and default to 0, which is no
715 resource.
716 
717 As allocations can't be over-committed, some configuration
718 combinations are invalid and should be rejected.  Also, if the
719 resource is mandatory for execution of processes, process migrations
720 may be rejected.
721 
722 "cpu.rt.max" hard-allocates realtime slices and is an example of this
723 type.
724 
725 
726 Interface Files
727 ===============
728 
729 Format
730 ------
731 
732 All interface files should be in one of the following formats whenever
733 possible::
734 
735   New-line separated values
736   (when only one value can be written at once)
737 
738         VAL0\n
739         VAL1\n
740         ...
741 
742   Space separated values
743   (when read-only or multiple values can be written at once)
744 
745         VAL0 VAL1 ...\n
746 
747   Flat keyed
748 
749         KEY0 VAL0\n
750         KEY1 VAL1\n
751         ...
752 
753   Nested keyed
754 
755         KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
756         KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
757         ...
758 
759 For a writable file, the format for writing should generally match
760 reading; however, controllers may allow omitting later fields or
761 implement restricted shortcuts for most common use cases.
762 
763 For both flat and nested keyed files, only the values for a single key
764 can be written at a time.  For nested keyed files, the sub key pairs
765 may be specified in any order and not all pairs have to be specified.
766 
767 
768 Conventions
769 -----------
770 
771 - Settings for a single feature should be contained in a single file.
772 
773 - The root cgroup should be exempt from resource control and thus
774   shouldn't have resource control interface files.
775 
776 - The default time unit is microseconds.  If a different unit is ever
777   used, an explicit unit suffix must be present.
778 
 779 - A parts-per quantity should use a percentage decimal with at least a
 780   two-digit fractional part - e.g. 13.40.
781 
782 - If a controller implements weight based resource distribution, its
783   interface file should be named "weight" and have the range [1,
784   10000] with 100 as the default.  The values are chosen to allow
785   enough and symmetric bias in both directions while keeping it
786   intuitive (the default is 100%).
787 
788 - If a controller implements an absolute resource guarantee and/or
789   limit, the interface files should be named "min" and "max"
790   respectively.  If a controller implements best effort resource
791   guarantee and/or limit, the interface files should be named "low"
792   and "high" respectively.
793 
794   In the above four control files, the special token "max" should be
795   used to represent upward infinity for both reading and writing.
796 
797 - If a setting has a configurable default value and keyed specific
798   overrides, the default entry should be keyed with "default" and
799   appear as the first entry in the file.
800 
801   The default value can be updated by writing either "default $VAL" or
802   "$VAL".
803 
804   When writing to update a specific override, "default" can be used as
805   the value to indicate removal of the override.  Override entries
806   with "default" as the value must not appear when read.
807 
808   For example, a setting which is keyed by major:minor device numbers
809   with integer values may look like the following::
810 
811     # cat cgroup-example-interface-file
812     default 150
813     8:0 300
814 
815   The default value can be updated by::
816 
817     # echo 125 > cgroup-example-interface-file
818 
819   or::
820 
821     # echo "default 125" > cgroup-example-interface-file
822 
823   An override can be set by::
824 
825     # echo "8:16 170" > cgroup-example-interface-file
826 
827   and cleared by::
828 
829     # echo "8:0 default" > cgroup-example-interface-file
830     # cat cgroup-example-interface-file
831     default 125
832     8:16 170
833 
834 - For events which are not very high frequency, an interface file
835   "events" should be created which lists event key value pairs.
 836   Whenever a notifiable event happens, a file modified event should be
837   generated on the file.
838 
839 
840 Core Interface Files
841 --------------------
842 
843 All cgroup core files are prefixed with "cgroup."
844 
845   cgroup.type
846         A read-write single value file which exists on non-root
847         cgroups.
848 
849         When read, it indicates the current type of the cgroup, which
850         can be one of the following values.
851 
852         - "domain" : A normal valid domain cgroup.
853 
854         - "domain threaded" : A threaded domain cgroup which is
855           serving as the root of a threaded subtree.
856 
857         - "domain invalid" : A cgroup which is in an invalid state.
858           It can't be populated or have controllers enabled.  It may
859           be allowed to become a threaded cgroup.
860 
861         - "threaded" : A threaded cgroup which is a member of a
862           threaded subtree.
863 
864         A cgroup can be turned into a threaded cgroup by writing
865         "threaded" to this file.
866 
867   cgroup.procs
868         A read-write new-line separated values file which exists on
869         all cgroups.
870 
871         When read, it lists the PIDs of all processes which belong to
872         the cgroup one-per-line.  The PIDs are not ordered and the
873         same PID may show up more than once if the process got moved
874         to another cgroup and then back or the PID got recycled while
875         reading.
876 
877         A PID can be written to migrate the process associated with
878         the PID to the cgroup.  The writer should match all of the
879         following conditions.
880 
881         - It must have write access to the "cgroup.procs" file.
882 
883         - It must have write access to the "cgroup.procs" file of the
884           common ancestor of the source and destination cgroups.
885 
886         When delegating a sub-hierarchy, write access to this file
887         should be granted along with the containing directory.
888 
889         In a threaded cgroup, reading this file fails with EOPNOTSUPP
890         as all the processes belong to the thread root.  Writing is
891         supported and moves every thread of the process to the cgroup.
892 
893   cgroup.threads
894         A read-write new-line separated values file which exists on
895         all cgroups.
896 
897         When read, it lists the TIDs of all threads which belong to
898         the cgroup one-per-line.  The TIDs are not ordered and the
899         same TID may show up more than once if the thread got moved to
900         another cgroup and then back or the TID got recycled while
901         reading.
902 
903         A TID can be written to migrate the thread associated with the
904         TID to the cgroup.  The writer should match all of the
905         following conditions.
906 
907         - It must have write access to the "cgroup.threads" file.
908 
909         - The cgroup that the thread is currently in must be in the
910           same resource domain as the destination cgroup.
911 
912         - It must have write access to the "cgroup.procs" file of the
913           common ancestor of the source and destination cgroups.
914 
915         When delegating a sub-hierarchy, write access to this file
916         should be granted along with the containing directory.
917 
918   cgroup.controllers
919         A read-only space separated values file which exists on all
920         cgroups.
921 
 922         It shows a space separated list of all controllers available to
923         the cgroup.  The controllers are not ordered.
924 
925   cgroup.subtree_control
926         A read-write space separated values file which exists on all
927         cgroups.  Starts out empty.
928 
 929         When read, it shows a space separated list of the controllers
930         which are enabled to control resource distribution from the
931         cgroup to its children.
932 
 933         A space separated list of controllers prefixed with '+' or '-'
934         can be written to enable or disable controllers.  A controller
935         name prefixed with '+' enables the controller and '-'
936         disables.  If a controller appears more than once on the list,
937         the last one is effective.  When multiple enable and disable
938         operations are specified, either all succeed or all fail.
939 
940   cgroup.events
941         A read-only flat-keyed file which exists on non-root cgroups.
942         The following entries are defined.  Unless specified
943         otherwise, a value change in this file generates a file
944         modified event.
945 
946           populated
947                 1 if the cgroup or its descendants contains any live
948                 processes; otherwise, 0.
949           frozen
950                 1 if the cgroup is frozen; otherwise, 0.
951 
952   cgroup.max.descendants
 953         A read-write single value file.  The default is "max".
954 
 955         Maximum allowed number of descendant cgroups.
 956         If the actual number of descendants is equal to or larger,
957         an attempt to create a new cgroup in the hierarchy will fail.
958 
959   cgroup.max.depth
 960         A read-write single value file.  The default is "max".
961 
962         Maximum allowed descent depth below the current cgroup.
 963         If the actual descent depth is equal to or larger,
964         an attempt to create a new child cgroup will fail.
965 
966   cgroup.stat
967         A read-only flat-keyed file with the following entries:
968 
969           nr_descendants
970                 Total number of visible descendant cgroups.
971 
972           nr_dying_descendants
973                 Total number of dying descendant cgroups. A cgroup becomes
974                 dying after being deleted by a user. The cgroup will remain
 975                 in the dying state for some undefined time (which can depend
976                 on system load) before being completely destroyed.
977 
 978                 A process can't enter a dying cgroup under any circumstances,
 979                 and a dying cgroup can't revive.
980 
 981                 A dying cgroup can consume system resources not exceeding
 982                 the limits that were active at the moment of cgroup deletion.
983 
984   cgroup.freeze
985         A read-write single value file which exists on non-root cgroups.
986         Allowed values are "0" and "1". The default is "0".
987 
988         Writing "1" to the file causes freezing of the cgroup and all
989         descendant cgroups. This means that all belonging processes will
 990         be stopped and will not run until the cgroup is explicitly
991         unfrozen. Freezing of the cgroup may take some time; when this action
992         is completed, the "frozen" value in the cgroup.events control file
993         will be updated to "1" and the corresponding notification will be
994         issued.
995 
 996         A cgroup can be frozen either by its own settings, or by the
 997         settings of any ancestor cgroup.  If any ancestor cgroup is frozen,
 998         the cgroup will remain frozen.
999 
1000         Processes in the frozen cgroup can be killed by a fatal signal.
1001         They also can enter and leave a frozen cgroup: either by an explicit
1002         move by a user, or if freezing of the cgroup races with fork().
1003         If a process is moved to a frozen cgroup, it stops. If a process is
1004         moved out of a frozen cgroup, it becomes running.
1005 
1006         Frozen status of a cgroup doesn't affect any cgroup tree operations:
1007         it's possible to delete a frozen (and empty) cgroup, as well as
1008         create new sub-cgroups.
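
        A sketch of freezing a cgroup and then confirming the
        transition (the "frozen" flag flips asynchronously, so
        userspace should watch "cgroup.events" rather than assume the
        write completed the freeze)::

          # echo 1 > cgroup.freeze
          # cat cgroup.events
          populated 1
          frozen 1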
1009 
1010   cgroup.kill
1011         A write-only single value file which exists on non-root cgroups.
1012         The only allowed value is "1".
1013 
1014         Writing "1" to the file causes the cgroup and all descendant cgroups to
1015         be killed. This means that all processes located in the affected cgroup
1016         tree will be killed via SIGKILL.
1017 
1018         Killing a cgroup tree will deal with concurrent forks appropriately and
1019         is protected against migrations.
1020 
1021         In a threaded cgroup, writing this file fails with EOPNOTSUPP as
1022         killing cgroups is a process directed operation, i.e. it affects
1023         the whole thread-group.
1024 
1025   cgroup.pressure
1026         A read-write single value file.  Allowed values are "0" and "1".
1027         The default is "1".
1028 
1029         Writing "0" to the file will disable the cgroup PSI accounting.
1030         Writing "1" to the file will re-enable the cgroup PSI accounting.
1031 
1032         This control attribute is not hierarchical, so disabling or enabling
1033         PSI accounting in a cgroup does not affect PSI accounting in its
1034         descendants and doesn't require enablement to be passed down from root.
1035 
1036         The reason this control attribute exists is that PSI accounts stalls
1037         for each cgroup separately and aggregates them at each level of the
1038         hierarchy.  This may cause non-negligible overhead for some workloads
1039         deep in the hierarchy, in which case this control attribute can
1040         be used to disable PSI accounting in the non-leaf cgroups.
1041 
1042   irq.pressure
1043         A read-write nested-keyed file.
1044 
1045         Shows pressure stall information for IRQ/SOFTIRQ. See
1046         :ref:`Documentation/accounting/psi.rst <psi>` for details.
1047 
1048 Controllers
1049 ===========
1050 
1051 .. _cgroup-v2-cpu:
1052 
1053 CPU
1054 ---
1055 
1056 The "cpu" controllers regulates distribution of CPU cycles.  This
1057 controller implements weight and absolute bandwidth limit models for
1058 normal scheduling policy and absolute bandwidth allocation model for
1059 realtime scheduling policy.
1060 
1061 In all the above models, the distribution of cycles is defined only on
1062 a temporal base and it does not account for the frequency at which tasks
1063 are executed.  The (optional) utilization clamping support allows hinting
1064 the schedutil cpufreq governor about the minimum desired frequency which
1065 should always be provided by a CPU, as well as the maximum desired
1066 frequency, which should not be exceeded by a CPU.
1067 
1068 WARNING: cgroup2 doesn't yet support control of realtime processes. For
1069 a kernel built with the CONFIG_RT_GROUP_SCHED option enabled for group
1070 scheduling of realtime processes, the cpu controller can only be enabled
1071 when all RT processes are in the root cgroup.  This limitation does
1072 not apply if CONFIG_RT_GROUP_SCHED is disabled.  Be aware that system
1073 management software may already have placed RT processes into nonroot
1074 cgroups during the system boot process, and these processes may need
1075 to be moved to the root cgroup before the cpu controller can be enabled
1076 with a CONFIG_RT_GROUP_SCHED enabled kernel.
1077 
1078 
1079 CPU Interface Files
1080 ~~~~~~~~~~~~~~~~~~~
1081 
1082 All time durations are in microseconds.
1083 
1084   cpu.stat
1085         A read-only flat-keyed file.
1086         This file exists whether the controller is enabled or not.
1087 
1088         It always reports the following three stats:
1089 
1090         - usage_usec
1091         - user_usec
1092         - system_usec
1093 
1094         and the following five when the controller is enabled:
1095 
1096         - nr_periods
1097         - nr_throttled
1098         - throttled_usec
1099         - nr_bursts
1100         - burst_usec
1101 
1102   cpu.weight
1103         A read-write single value file which exists on non-root
1104         cgroups.  The default is "100".
1105 
1106         For non idle groups (cpu.idle = 0), the weight is in the
1107         range [1, 10000].
1108 
1109         If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
1110         then the weight will show as 0.
1111 
1112   cpu.weight.nice
1113         A read-write single value file which exists on non-root
1114         cgroups.  The default is "0".
1115 
1116         The nice value is in the range [-20, 19].
1117 
1118         This interface file is an alternative interface for
1119         "cpu.weight" and allows reading and setting weight using the
1120         same values used by nice(2).  Because the range is smaller and
1121         granularity is coarser for the nice values, the read value is
1122         the closest approximation of the current weight.
1123 
1124   cpu.max
1125         A read-write two value file which exists on non-root cgroups.
1126         The default is "max 100000".
1127 
1128         The maximum bandwidth limit.  It's in the following format::
1129 
1130           $MAX $PERIOD
1131 
1132         which indicates that the group may consume up to $MAX in each
1133         $PERIOD duration.  "max" for $MAX indicates no limit.  If only
1134         one number is written, $MAX is updated.
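
        For example, a sketch allowing the group at most half a CPU's
        worth of bandwidth, i.e. 50ms of runtime in every 100ms period
        (values illustrative)::

          # echo "50000 100000" > cpu.max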
1135 
1136   cpu.max.burst
1137         A read-write single value file which exists on non-root
1138         cgroups.  The default is "0".
1139 
1140         The burst in the range [0, $MAX].
1141 
1142   cpu.pressure
1143         A read-write nested-keyed file.
1144 
1145         Shows pressure stall information for CPU. See
1146         :ref:`Documentation/accounting/psi.rst <psi>` for details.
1147 
1148   cpu.uclamp.min
1149         A read-write single value file which exists on non-root cgroups.
1150         The default is "0", i.e. no utilization boosting.
1151 
1152         The requested minimum utilization (protection) as a percentage
1153         rational number, e.g. 12.34 for 12.34%.
1154 
1155         This interface allows reading and setting minimum utilization clamp
1156         values similar to sched_setattr(2).  This minimum utilization
1157         value is used to clamp the task specific minimum utilization clamp.
1158 
1159         The requested minimum utilization (protection) is always capped by
1160         the current value for the maximum utilization (limit), i.e.
1161         `cpu.uclamp.max`.
1162 
1163   cpu.uclamp.max
1164         A read-write single value file which exists on non-root cgroups.
1165         The default is "max", i.e. no utilization capping.
1166 
1167         The requested maximum utilization (limit) as a percentage rational
1168         number, e.g. 98.76 for 98.76%.
1169 
1170         This interface allows reading and setting maximum utilization clamp
1171         values similar to sched_setattr(2).  This maximum utilization
1172         value is used to clamp the task specific maximum utilization clamp.
1173 
1174   cpu.idle
1175         A read-write single value file which exists on non-root cgroups.
1176         The default is 0.
1177 
1178         This is the cgroup analog of the per-task SCHED_IDLE sched policy.
1179         Setting this value to 1 will make the scheduling policy of the
1180         cgroup SCHED_IDLE. The threads inside the cgroup will retain their
1181         own relative priorities, but the cgroup itself will be treated as
1182         very low priority relative to its peers.
1183 
1184 
1185 
1186 Memory
1187 ------
1188 
1189 The "memory" controller regulates distribution of memory.  Memory is
1190 stateful and implements both limit and protection models.  Due to the
1191 intertwining between memory usage and reclaim pressure and the
1192 stateful nature of memory, the distribution model is relatively
1193 complex.
1194 
1195 While not completely water-tight, all major memory usages by a given
1196 cgroup are tracked so that the total memory consumption can be
1197 accounted and controlled to a reasonable extent.  Currently, the
1198 following types of memory usages are tracked.
1199 
1200 - Userland memory - page cache and anonymous memory.
1201 
1202 - Kernel data structures such as dentries and inodes.
1203 
1204 - TCP socket buffers.
1205 
1206 The above list may expand in the future for better coverage.
1207 
1208 
1209 Memory Interface Files
1210 ~~~~~~~~~~~~~~~~~~~~~~
1211 
1212 All memory amounts are in bytes.  If a value which is not aligned to
1213 PAGE_SIZE is written, the value may be rounded up to the closest
1214 PAGE_SIZE multiple when read back.
1215 
1216   memory.current
1217         A read-only single value file which exists on non-root
1218         cgroups.
1219 
1220         The total amount of memory currently being used by the cgroup
1221         and its descendants.
1222 
1223   memory.min
1224         A read-write single value file which exists on non-root
1225         cgroups.  The default is "0".
1226 
1227         Hard memory protection.  If the memory usage of a cgroup
1228         is within its effective min boundary, the cgroup's memory
1229         won't be reclaimed under any conditions. If there is no
1230         unprotected reclaimable memory available, the OOM killer
1231         is invoked. Above the effective min boundary (or
1232         effective low boundary if it is higher), pages are reclaimed
1233         proportionally to the overage, reducing reclaim pressure for
1234         smaller overages.
1235 
1236         Effective min boundary is limited by memory.min values of
1237         all ancestor cgroups.  If there is memory.min overcommitment
1238         (a child cgroup or cgroups require more protected memory
1239         than the parent will allow), then each child cgroup will get
1240         the part of parent's protection proportional to its
1241         actual memory usage below memory.min.
1242 
1243         Putting more memory than generally available under this
1244         protection is discouraged and may lead to constant OOMs.
1245 
1246         If a memory cgroup is not populated with processes,
1247         its memory.min is ignored.
1248 
1249   memory.low
1250         A read-write single value file which exists on non-root
1251         cgroups.  The default is "0".
1252 
1253         Best-effort memory protection.  If the memory usage of a
1254         cgroup is within its effective low boundary, the cgroup's
1255         memory won't be reclaimed unless there is no reclaimable
1256         memory available in unprotected cgroups.
1257         Above the effective low boundary (or 
1258         effective min boundary if it is higher), pages are reclaimed
1259         proportionally to the overage, reducing reclaim pressure for
1260         smaller overages.
1261 
1262         Effective low boundary is limited by memory.low values of
1263         all ancestor cgroups.  If there is memory.low overcommitment
1264         (a child cgroup or cgroups require more protected memory
1265         than the parent will allow), then each child cgroup will get
1266         the part of parent's protection proportional to its
1267         actual memory usage below memory.low.
1268 
1269         Putting more memory than generally available under this
1270         protection is discouraged.
1271 
1272   memory.high
1273         A read-write single value file which exists on non-root
1274         cgroups.  The default is "max".
1275 
1276         Memory usage throttle limit.  If a cgroup's usage goes
1277         over the high boundary, the processes of the cgroup are
1278         throttled and put under heavy reclaim pressure.
1279 
1280         Going over the high limit never invokes the OOM killer and
1281         under extreme conditions the limit may be breached. The high
1282         limit should be used in scenarios where an external process
1283         monitors the limited cgroup to alleviate heavy reclaim
1284         pressure.
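
        For example, a throttle limit of 1G (the value is
        illustrative) can be set with::

          echo 1G > memory.high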
1285 
1286   memory.max
1287         A read-write single value file which exists on non-root
1288         cgroups.  The default is "max".
1289 
1290         Memory usage hard limit.  This is the main mechanism to limit
1291         memory usage of a cgroup.  If a cgroup's memory usage reaches
1292         this limit and can't be reduced, the OOM killer is invoked in
1293         the cgroup. Under certain circumstances, the usage may go
1294         over the limit temporarily.
1295 
        In the default configuration, regular 0-order allocations
        always succeed unless the OOM killer chooses the current
        task as a victim.

        Some kinds of allocations don't invoke the OOM killer.  The
        caller could retry them differently, return -ENOMEM to
        userspace, or silently ignore them in cases like disk
        readahead.
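
        For example, a hard limit can be set and later removed as
        follows (the value is illustrative)::

          echo 2G > memory.max
          echo max > memory.max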
1302 
1303   memory.reclaim
1304         A write-only nested-keyed file which exists for all cgroups.
1305 
1306         This is a simple interface to trigger memory reclaim in the
1307         target cgroup.
1308 
1309         Example::
1310 
1311           echo "1G" > memory.reclaim
1312 
1313         Please note that the kernel can over or under reclaim from
        the target cgroup. If fewer bytes are reclaimed than the
        specified amount, -EAGAIN is returned.
1316 
1317         Please note that the proactive reclaim (triggered by this
1318         interface) is not meant to indicate memory pressure on the
1319         memory cgroup. Therefore socket memory balancing triggered by
        the memory reclaim is normally not exercised in this case.
1321         This means that the networking layer will not adapt based on
1322         reclaim induced by memory.reclaim.
1323 
        The following nested keys are defined.
1325 
1326           ==========            ================================
1327           swappiness            Swappiness value to reclaim with
1328           ==========            ================================
1329 
1330         Specifying a swappiness value instructs the kernel to perform
1331         the reclaim with that swappiness value. Note that this has the
1332         same semantics as vm.swappiness applied to memcg reclaim with
1333         all the existing limitations and potential future extensions.
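
        For example, reclaim biased away from swap can be requested
        as follows (the amount is illustrative)::

          echo "512M swappiness=0" > memory.reclaim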
1334 
1335   memory.peak
1336         A read-only single value file which exists on non-root
1337         cgroups.
1338 
1339         The max memory usage recorded for the cgroup and its
1340         descendants since the creation of the cgroup.
1341 
1342   memory.oom.group
1343         A read-write single value file which exists on non-root
1344         cgroups.  The default value is "0".
1345 
1346         Determines whether the cgroup should be treated as
1347         an indivisible workload by the OOM killer. If set,
1348         all tasks belonging to the cgroup or to its descendants
1349         (if the memory cgroup is not a leaf cgroup) are killed
1350         together or not at all. This can be used to avoid
1351         partial kills to guarantee workload integrity.
1352 
1353         Tasks with the OOM protection (oom_score_adj set to -1000)
1354         are treated as an exception and are never killed.
1355 
1356         If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of
        the memory.oom.group values of ancestor cgroups.
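
        For example, group killing can be enabled with::

          echo 1 > memory.oom.group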
1359 
1360   memory.events
1361         A read-only flat-keyed file which exists on non-root cgroups.
1362         The following entries are defined.  Unless specified
1363         otherwise, a value change in this file generates a file
1364         modified event.
1365 
1366         Note that all fields in this file are hierarchical and the
1367         file modified event can be generated due to an event down the
1368         hierarchy. For the local events at the cgroup level see
1369         memory.events.local.
1370 
1371           low
1372                 The number of times the cgroup is reclaimed due to
1373                 high memory pressure even though its usage is under
1374                 the low boundary.  This usually indicates that the low
1375                 boundary is over-committed.
1376 
1377           high
1378                 The number of times processes of the cgroup are
1379                 throttled and routed to perform direct memory reclaim
1380                 because the high memory boundary was exceeded.  For a
1381                 cgroup whose memory usage is capped by the high limit
1382                 rather than global memory pressure, this event's
1383                 occurrences are expected.
1384 
1385           max
1386                 The number of times the cgroup's memory usage was
1387                 about to go over the max boundary.  If direct reclaim
1388                 fails to bring it down, the cgroup goes to OOM state.
1389 
1390           oom
                The number of times the cgroup's memory usage
                reached the limit and allocation was about to fail.
1393 
1394                 This event is not raised if the OOM killer is not
1395                 considered as an option, e.g. for failed high-order
                allocations or if the caller asked not to retry attempts.
1397 
1398           oom_kill
1399                 The number of processes belonging to this cgroup
1400                 killed by any kind of OOM killer.
1401 
1402           oom_group_kill
1403                 The number of times a group OOM has occurred.
1404 
1405   memory.events.local
1406         Similar to memory.events but the fields in the file are local
1407         to the cgroup i.e. not hierarchical. The file modified event
1408         generated on this file reflects only the local events.
1409 
1410   memory.stat
1411         A read-only flat-keyed file which exists on non-root cgroups.
1412 
1413         This breaks down the cgroup's memory footprint into different
1414         types of memory, type-specific details, and other information
1415         on the state and past events of the memory management system.
1416 
1417         All memory amounts are in bytes.
1418 
1419         The entries are ordered to be human readable, and new entries
1420         can show up in the middle. Don't rely on items remaining in a
1421         fixed position; use the keys to look up specific values!
1422 
        Entries which have no per-node counter are tagged with 'npn'
        (non-per-node) to indicate that they will not show up in
        memory.numa_stat.
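
        For example, specific entries can be looked up by key (the
        output values are illustrative)::

          # grep -E '^(anon|file) ' memory.stat
          anon 1572864
          file 58720256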
1426 
1427           anon
1428                 Amount of memory used in anonymous mappings such as
1429                 brk(), sbrk(), and mmap(MAP_ANONYMOUS)
1430 
1431           file
1432                 Amount of memory used to cache filesystem data,
1433                 including tmpfs and shared memory.
1434 
1435           kernel (npn)
1436                 Amount of total kernel memory, including
1437                 (kernel_stack, pagetables, percpu, vmalloc, slab) in
1438                 addition to other kernel memory use cases.
1439 
1440           kernel_stack
1441                 Amount of memory allocated to kernel stacks.
1442 
1443           pagetables
1444                 Amount of memory allocated for page tables.
1445 
          sec_pagetables
                Amount of memory allocated for secondary page tables;
                this currently includes KVM mmu allocations on x86
                and arm64 and IOMMU page tables.
1450 
1451           percpu (npn)
1452                 Amount of memory used for storing per-cpu kernel
1453                 data structures.
1454 
1455           sock (npn)
1456                 Amount of memory used in network transmission buffers
1457 
1458           vmalloc (npn)
1459                 Amount of memory used for vmap backed memory.
1460 
1461           shmem
1462                 Amount of cached filesystem data that is swap-backed,
1463                 such as tmpfs, shm segments, shared anonymous mmap()s
1464 
1465           zswap
1466                 Amount of memory consumed by the zswap compression backend.
1467 
1468           zswapped
1469                 Amount of application memory swapped out to zswap.
1470 
1471           file_mapped
1472                 Amount of cached filesystem data mapped with mmap()
1473 
1474           file_dirty
1475                 Amount of cached filesystem data that was modified but
1476                 not yet written back to disk
1477 
1478           file_writeback
1479                 Amount of cached filesystem data that was modified and
1480                 is currently being written back to disk
1481 
1482           swapcached
1483                 Amount of swap cached in memory. The swapcache is accounted
1484                 against both memory and swap usage.
1485 
1486           anon_thp
1487                 Amount of memory used in anonymous mappings backed by
1488                 transparent hugepages
1489 
1490           file_thp
1491                 Amount of cached filesystem data backed by transparent
1492                 hugepages
1493 
1494           shmem_thp
1495                 Amount of shm, tmpfs, shared anonymous mmap()s backed by
1496                 transparent hugepages
1497 
1498           inactive_anon, active_anon, inactive_file, active_file, unevictable
1499                 Amount of memory, swap-backed and filesystem-backed,
1500                 on the internal memory management lists used by the
1501                 page reclaim algorithm.
1502 
                As these represent internal list state (e.g. shmem pages are on anon
1504                 memory management lists), inactive_foo + active_foo may not be equal to
1505                 the value for the foo counter, since the foo counter is type-based, not
1506                 list-based.
1507 
1508           slab_reclaimable
1509                 Part of "slab" that might be reclaimed, such as
1510                 dentries and inodes.
1511 
1512           slab_unreclaimable
1513                 Part of "slab" that cannot be reclaimed on memory
1514                 pressure.
1515 
1516           slab (npn)
1517                 Amount of memory used for storing in-kernel data
1518                 structures.
1519 
1520           workingset_refault_anon
1521                 Number of refaults of previously evicted anonymous pages.
1522 
1523           workingset_refault_file
1524                 Number of refaults of previously evicted file pages.
1525 
1526           workingset_activate_anon
1527                 Number of refaulted anonymous pages that were immediately
1528                 activated.
1529 
1530           workingset_activate_file
1531                 Number of refaulted file pages that were immediately activated.
1532 
1533           workingset_restore_anon
1534                 Number of restored anonymous pages which have been detected as
1535                 an active workingset before they got reclaimed.
1536 
1537           workingset_restore_file
1538                 Number of restored file pages which have been detected as an
1539                 active workingset before they got reclaimed.
1540 
1541           workingset_nodereclaim
1542                 Number of times a shadow node has been reclaimed
1543 
1544           pgscan (npn)
1545                 Amount of scanned pages (in an inactive LRU list)
1546 
1547           pgsteal (npn)
1548                 Amount of reclaimed pages
1549 
          pgscan_kswapd (npn)
                Amount of pages scanned by kswapd (in an inactive LRU list)

          pgscan_direct (npn)
                Amount of pages scanned directly (in an inactive LRU list)

          pgscan_khugepaged (npn)
                Amount of pages scanned by khugepaged (in an inactive LRU list)

          pgsteal_kswapd (npn)
                Amount of pages reclaimed by kswapd

          pgsteal_direct (npn)
                Amount of pages reclaimed directly

          pgsteal_khugepaged (npn)
                Amount of pages reclaimed by khugepaged
1567 
1568           pgfault (npn)
1569                 Total number of page faults incurred
1570 
1571           pgmajfault (npn)
1572                 Number of major page faults incurred
1573 
1574           pgrefill (npn)
1575                 Amount of scanned pages (in an active LRU list)
1576 
1577           pgactivate (npn)
1578                 Amount of pages moved to the active LRU list
1579 
1580           pgdeactivate (npn)
1581                 Amount of pages moved to the inactive LRU list
1582 
1583           pglazyfree (npn)
1584                 Amount of pages postponed to be freed under memory pressure
1585 
1586           pglazyfreed (npn)
1587                 Amount of reclaimed lazyfree pages
1588 
1589           zswpin
1590                 Number of pages moved in to memory from zswap.
1591 
1592           zswpout
1593                 Number of pages moved out of memory to zswap.
1594 
1595           zswpwb
1596                 Number of pages written from zswap to swap.
1597 
1598           thp_fault_alloc (npn)
1599                 Number of transparent hugepages which were allocated to satisfy
1600                 a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
1601                 is not set.
1602 
1603           thp_collapse_alloc (npn)
1604                 Number of transparent hugepages which were allocated to allow
1605                 collapsing an existing range of pages. This counter is not
1606                 present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
1607 
1608           thp_swpout (npn)
                Number of transparent hugepages which were swapped out
                in one piece without splitting.
1611 
1612           thp_swpout_fallback (npn)
                Number of transparent hugepages which were split before
                swapout, usually because contiguous swap space could not
                be allocated for the huge page.
1616 
1617   memory.numa_stat
1618         A read-only nested-keyed file which exists on non-root cgroups.
1619 
1620         This breaks down the cgroup's memory footprint into different
1621         types of memory, type-specific details, and other information
1622         per node on the state of the memory management system.
1623 
        This is useful for providing visibility into the NUMA locality
        information within a memcg since the pages are allowed to be
        allocated from any physical node. One of the use cases is
        evaluating application performance by combining this information
        with the application's CPU allocation.
1629 
1630         All memory amounts are in bytes.
1631 
1632         The output format of memory.numa_stat is::
1633 
1634           type N0=<bytes in node 0> N1=<bytes in node 1> ...
1635 
1636         The entries are ordered to be human readable, and new entries
1637         can show up in the middle. Don't rely on items remaining in a
1638         fixed position; use the keys to look up specific values!
1639 
        The entries correspond to those described in memory.stat.
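
        An example read output follows (the values are illustrative)::

          # grep '^anon ' memory.numa_stat
          anon N0=1056768 N1=4096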
1641 
1642   memory.swap.current
1643         A read-only single value file which exists on non-root
1644         cgroups.
1645 
1646         The total amount of swap currently being used by the cgroup
1647         and its descendants.
1648 
1649   memory.swap.high
1650         A read-write single value file which exists on non-root
1651         cgroups.  The default is "max".
1652 
1653         Swap usage throttle limit.  If a cgroup's swap usage exceeds
1654         this limit, all its further allocations will be throttled to
1655         allow userspace to implement custom out-of-memory procedures.
1656 
1657         This limit marks a point of no return for the cgroup. It is NOT
1658         designed to manage the amount of swapping a workload does
1659         during regular operation. Compare to memory.swap.max, which
1660         prohibits swapping past a set amount, but lets the cgroup
1661         continue unimpeded as long as other memory can be reclaimed.
1662 
1663         Healthy workloads are not expected to reach this limit.
1664 
1665   memory.swap.peak
1666         A read-only single value file which exists on non-root
1667         cgroups.
1668 
1669         The max swap usage recorded for the cgroup and its
1670         descendants since the creation of the cgroup.
1671 
1672   memory.swap.max
1673         A read-write single value file which exists on non-root
1674         cgroups.  The default is "max".
1675 
1676         Swap usage hard limit.  If a cgroup's swap usage reaches this
1677         limit, anonymous memory of the cgroup will not be swapped out.
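
        For example, swapping can be disabled for a cgroup entirely
        by writing "0"::

          echo 0 > memory.swap.max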
1678 
1679   memory.swap.events
1680         A read-only flat-keyed file which exists on non-root cgroups.
1681         The following entries are defined.  Unless specified
1682         otherwise, a value change in this file generates a file
1683         modified event.
1684 
1685           high
1686                 The number of times the cgroup's swap usage was over
1687                 the high threshold.
1688 
1689           max
1690                 The number of times the cgroup's swap usage was about
1691                 to go over the max boundary and swap allocation
1692                 failed.
1693 
1694           fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or reaching
                the max limit.
1698 
        When "memory.swap.max" is reduced below the current usage, the
        existing swap entries are reclaimed gradually and the swap
        usage may stay higher than the limit for an extended period of
        time.  This reduces the impact on the workload and memory
        management.
1703 
1704   memory.zswap.current
1705         A read-only single value file which exists on non-root
1706         cgroups.
1707 
1708         The total amount of memory consumed by the zswap compression
1709         backend.
1710 
1711   memory.zswap.max
1712         A read-write single value file which exists on non-root
1713         cgroups.  The default is "max".
1714 
1715         Zswap usage hard limit. If a cgroup's zswap pool reaches this
1716         limit, it will refuse to take any more stores before existing
1717         entries fault back in or are written out to disk.
1718 
1719   memory.zswap.writeback
1720         A read-write single value file. The default value is "1".
1721         Note that this setting is hierarchical, i.e. the writeback would be
1722         implicitly disabled for child cgroups if the upper hierarchy
1723         does so.
1724 
        When this is set to 0, all swapping attempts to swapping devices
        are disabled. This includes both zswap writebacks, and swapping
        due to zswap store failures. If the zswap store failures are
        recurring (e.g. if the pages are incompressible), users can observe
1729         reclaim inefficiency after disabling writeback (because the same
1730         pages might be rejected again and again).
1731 
1732         Note that this is subtly different from setting memory.swap.max to
1733         0, as it still allows for pages to be written to the zswap pool.
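
        For example, zswap writeback can be disabled for a cgroup
        with::

          echo 0 > memory.zswap.writeback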
1734 
1735   memory.pressure
1736         A read-only nested-keyed file.
1737 
1738         Shows pressure stall information for memory. See
1739         :ref:`Documentation/accounting/psi.rst <psi>` for details.
1740 
1741 
1742 Usage Guidelines
1743 ~~~~~~~~~~~~~~~~
1744 
1745 "memory.high" is the main mechanism to control memory usage.
1746 Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
1748 usage is a viable strategy.
1749 
1750 Because breach of the high limit doesn't trigger the OOM killer but
1751 throttles the offending cgroup, a management agent has ample
1752 opportunities to monitor and take appropriate actions such as granting
1753 more memory or terminating the workload.
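
A minimal sketch of such a monitoring loop (the cgroup path, interval
and action are hypothetical) could watch the "high" counter in
memory.events::

  prev=0
  while sleep 5; do
          cur=$(awk '/^high /{print $2}' /sys/fs/cgroup/job/memory.events)
          [ "$cur" -gt "$prev" ] && echo "job is being throttled"
          prev=$cur
  done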
1754 
1755 Determining whether a cgroup has enough memory is not trivial as
1756 memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
the network to a file can use all available memory but can also
perform equally well with a small amount of memory.  A measure of
memory pressure - how much the workload is being impacted due to lack
of memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" interface described above provides such
a measure (see Documentation/accounting/psi.rst).
1764 
1765 
1766 Memory Ownership
1767 ~~~~~~~~~~~~~~~~
1768 
1769 A memory area is charged to the cgroup which instantiated it and stays
1770 charged to the cgroup until the area is released.  Migrating a process
1771 to a different cgroup doesn't move the memory usages that it
1772 instantiated while in the previous cgroup to the new cgroup.
1773 
1774 A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is indeterminate; however,
1776 over time, the memory area is likely to end up in a cgroup which has
1777 enough memory allowance to avoid high reclaim pressure.
1778 
1779 If a cgroup sweeps a considerable amount of memory which is expected
1780 to be accessed repeatedly by other cgroups, it may make sense to use
1781 POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1782 belonging to the affected files to ensure correct memory ownership.
1783 
1784 
1785 IO
1786 --
1787 
1788 The "io" controller regulates the distribution of IO resources.  This
1789 controller implements both weight based and absolute bandwidth or IOPS
limit distribution; weight based distribution is implemented by the
IO cost model based controller (CONFIG_BLK_CGROUP_IOCOST) described
under "io.cost.qos" and "io.weight" below.
1793 
1794 
1795 IO Interface Files
1796 ~~~~~~~~~~~~~~~~~~
1797 
1798   io.stat
1799         A read-only nested-keyed file.
1800 
1801         Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1802         The following nested keys are defined.
1803 
1804           ======        =====================
1805           rbytes        Bytes read
1806           wbytes        Bytes written
1807           rios          Number of read IOs
1808           wios          Number of write IOs
1809           dbytes        Bytes discarded
1810           dios          Number of discard IOs
1811           ======        =====================
1812 
1813         An example read output follows::
1814 
1815           8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1816           8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1817 
1818   io.cost.qos
1819         A read-write nested-keyed file which exists only on the root
1820         cgroup.
1821 
1822         This file configures the Quality of Service of the IO cost
1823         model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1824         currently implements "io.weight" proportional control.  Lines
1825         are keyed by $MAJ:$MIN device numbers and not ordered.  The
1826         line for a given device is populated on the first write for
1827         the device on "io.cost.qos" or "io.cost.model".  The following
1828         nested keys are defined.
1829 
1830           ======        =====================================
1831           enable        Weight-based control enable
1832           ctrl          "auto" or "user"
1833           rpct          Read latency percentile    [0, 100]
1834           rlat          Read latency threshold
1835           wpct          Write latency percentile   [0, 100]
1836           wlat          Write latency threshold
1837           min           Minimum scaling percentage [1, 10000]
1838           max           Maximum scaling percentage [1, 10000]
1839           ======        =====================================
1840 
1841         The controller is disabled by default and can be enabled by
1842         setting "enable" to 1.  "rpct" and "wpct" parameters default
1843         to zero and the controller uses internal device saturation
1844         state to adjust the overall IO rate between "min" and "max".
1845 
1846         When a better control quality is needed, latency QoS
1847         parameters can be configured.  For example::
1848 
1849           8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0
1850 
        shows that on sdb, the controller is enabled, will consider
        the device saturated if the 95th percentile of read completion
        latencies is above 75ms or that of write completion latencies
        is above 150ms, and will adjust the overall IO issue rate
        between 50% and 150% accordingly.
1855 
1856         The lower the saturation point, the better the latency QoS at
1857         the cost of aggregate bandwidth.  The narrower the allowed
1858         adjustment range between "min" and "max", the more conformant
1859         to the cost model the IO behavior.  Note that the IO issue
1860         base rate may be far off from 100% and setting "min" and "max"
1861         blindly can lead to a significant loss of device capacity or
1862         control quality.  "min" and "max" are useful for regulating
        devices which show wide temporary behavior changes - e.g. an
        SSD which accepts writes at the line speed for a while and
        then completely stalls for multiple seconds.
1866 
1867         When "ctrl" is "auto", the parameters are controlled by the
1868         kernel and may change automatically.  Setting "ctrl" to "user"
1869         or setting any of the percentile and latency parameters puts
1870         it into "user" mode and disables the automatic changes.  The
1871         automatic mode can be restored by setting "ctrl" to "auto".
1872 
1873   io.cost.model
1874         A read-write nested-keyed file which exists only on the root
1875         cgroup.
1876 
1877         This file configures the cost model of the IO cost model based
1878         controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1879         implements "io.weight" proportional control.  Lines are keyed
1880         by $MAJ:$MIN device numbers and not ordered.  The line for a
1881         given device is populated on the first write for the device on
1882         "io.cost.qos" or "io.cost.model".  The following nested keys
1883         are defined.
1884 
1885           =====         ================================
1886           ctrl          "auto" or "user"
1887           model         The cost model in use - "linear"
1888           =====         ================================
1889 
1890         When "ctrl" is "auto", the kernel may change all parameters
1891         dynamically.  When "ctrl" is set to "user" or any other
        parameters are written to, "ctrl" becomes "user" and the
1893         automatic changes are disabled.
1894 
1895         When "model" is "linear", the following model parameters are
1896         defined.
1897 
1898           ============= ========================================
1899           [r|w]bps      The maximum sequential IO throughput
1900           [r|w]seqiops  The maximum 4k sequential IOs per second
1901           [r|w]randiops The maximum 4k random IOs per second
1902           ============= ========================================
1903 
1904         From the above, the builtin linear model determines the base
1905         costs of a sequential and random IO and the cost coefficient
1906         for the IO size.  While simple, this model can cover most
1907         common device classes acceptably.
1908 
        The IO cost model isn't expected to be accurate in the
        absolute sense and is scaled to the device behavior dynamically.
1911 
1912         If needed, tools/cgroup/iocost_coef_gen.py can be used to
1913         generate device-specific coefficients.
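
        For example, generated coefficients (all numbers are
        illustrative) can be applied with::

          echo "8:16 rbps=2000000000 rseqiops=200000 rrandiops=100000 wbps=1000000000 wseqiops=100000 wrandiops=50000" > io.cost.model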
1914 
1915   io.weight
1916         A read-write flat-keyed file which exists on non-root cgroups.
1917         The default is "default 100".
1918 
1919         The first line is the default weight applied to devices
1920         without specific override.  The rest are overrides keyed by
1921         $MAJ:$MIN device numbers and not ordered.  The weights are in
        the range [1, 10000] and specify the relative amount of IO
        time the cgroup can use in relation to its siblings.
1924 
1925         The default weight can be updated by writing either "default
1926         $WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
1927         "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1928 
1929         An example read output follows::
1930 
1931           default 100
1932           8:16 200
1933           8:0 50
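
        The above configuration (device numbers are hypothetical)
        could be set up with::

          echo "default 100" > io.weight
          echo "8:16 200" > io.weight
          echo "8:0 50" > io.weight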
1934 
1935   io.max
1936         A read-write nested-keyed file which exists on non-root
1937         cgroups.
1938 
1939         BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
1940         device numbers and not ordered.  The following nested keys are
1941         defined.
1942 
1943           =====         ==================================
1944           rbps          Max read bytes per second
1945           wbps          Max write bytes per second
1946           riops         Max read IO operations per second
1947           wiops         Max write IO operations per second
1948           =====         ==================================
1949 
1950         When writing, any number of nested key-value pairs can be
1951         specified in any order.  "max" can be specified as the value
1952         to remove a specific limit.  If the same key is specified
1953         multiple times, the outcome is undefined.
1954 
1955         BPS and IOPS are measured in each IO direction and IOs are
1956         delayed if limit is reached.  Temporary bursts are allowed.
1957 
1958         Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1959 
1960           echo "8:16 rbps=2097152 wiops=120" > io.max
1961 
1962         Reading returns the following::
1963 
1964           8:16 rbps=2097152 wbps=max riops=max wiops=120
1965 
1966         Write IOPS limit can be removed by writing the following::
1967 
1968           echo "8:16 wiops=max" > io.max
1969 
1970         Reading now returns the following::
1971 
1972           8:16 rbps=2097152 wbps=max riops=max wiops=max
1973 
1974   io.pressure
1975         A read-only nested-keyed file.
1976 
1977         Shows pressure stall information for IO. See
1978         :ref:`Documentation/accounting/psi.rst <psi>` for details.
1979 
1980 
1981 Writeback
1982 ~~~~~~~~~
1983 
1984 Page cache is dirtied through buffered writes and shared mmaps and
1985 written asynchronously to the backing filesystem by the writeback
1986 mechanism.  Writeback sits between the memory and IO domains and
1987 regulates the proportion of dirty memory by balancing dirtying and
1988 write IOs.
1989 
1990 The io controller, in conjunction with the memory controller,
1991 implements control of page cache writeback IOs.  The memory controller
1992 defines the memory domain that dirty memory ratio is calculated and
1993 maintained for and the io controller defines the io domain which
1994 writes out dirty pages for the memory domain.  Both system-wide and
1995 per-cgroup dirty memory states are examined and the more restrictive
1996 of the two is enforced.
1997 
1998 cgroup writeback requires explicit support from the underlying
1999 filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
2001 attributed to the root cgroup.
2002 
There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked
per page while writeback is tracked per inode.  For the purpose of
writeback, an
2006 inode is assigned to a cgroup and all IO requests to write dirty pages
2007 from the inode are attributed to that cgroup.
2008 
2009 As cgroup ownership for memory is tracked per page, there can be pages
2010 which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
mechanism constantly keeps track of foreign pages and, if a
particular foreign
2013 cgroup becomes the majority over a certain period of time, switches
2014 the ownership of the inode to that cgroup.
2015 
2016 While this model is enough for most use cases where a given inode is
2017 mostly dirtied by a single cgroup even when the main writing cgroup
2018 changes over time, use cases where multiple cgroups write to a single
2019 inode simultaneously are not supported well.  In such circumstances, a
2020 significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
2022 doesn't update it until the page is released, even if writeback
2023 strictly follows page ownership, multiple cgroups dirtying overlapping
2024 areas wouldn't work as expected.  It's recommended to avoid such usage
2025 patterns.
2026 
2027 The sysctl knobs which affect writeback behavior are applied to cgroup
2028 writeback as follows.
2029 
2030   vm.dirty_background_ratio, vm.dirty_ratio
2031         These ratios apply the same to cgroup writeback with the
2032         amount of available memory capped by limits imposed by the
2033         memory controller and system-wide clean memory.
2034 
2035   vm.dirty_background_bytes, vm.dirty_bytes
2036         For cgroup writeback, this is calculated into ratio against
2037         total available memory and applied the same way as
2038         vm.dirty[_background]_ratio.
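
        For example (hypothetical numbers), a vm.dirty_bytes of 400MB
        on a system with 8GB of available memory behaves like a
        vm.dirty_ratio of 5% applied against the amount of memory
        available to the cgroup.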
2039 
2040 
2041 IO Latency
2042 ~~~~~~~~~~
2043 
2044 This is a cgroup v2 controller for IO workload protection.  You provide a group
2045 with a latency target, and if the average latency exceeds that target the
2046 controller will throttle any peers that have a lower latency target than the
2047 protected workload.
2048 
2049 The limits are only applied at the peer level in the hierarchy.  This means that
2050 in the diagram below, only groups A, B, and C will influence each other, and
2051 groups D and F will influence each other.  Group G will influence nobody::
2052 
2053                         [root]
2054                 /          |            \
2055                 A          B            C
2056                /  \        |
2057               D    F       G
2058 
2059 
2060 So the ideal way to configure this is to set io.latency in groups A, B, and C.
2061 Generally you do not want to set a value lower than the latency your device
2062 supports.  Experiment to find the value that works best for your workload.
2063 Start at higher than the expected latency for your device and watch the
2064 avg_lat value in io.stat for your workload group to get an idea of the
2065 latency you see during normal operation.  Use the avg_lat value as a basis for
2066 your real setting, setting at 10-15% higher than the value in io.stat.
2067 
2068 How IO Latency Throttling Works
2069 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2070 
io.latency is work conserving, so as long as everybody is meeting their latency
2072 target the controller doesn't do anything.  Once a group starts missing its
2073 target it begins throttling any peer group that has a higher target than itself.
2074 This throttling takes 2 forms:
2075 
- Queue depth throttling.  This is the number of outstanding IOs a group is
2077   allowed to have.  We will clamp down relatively quickly, starting at no limit
2078   and going all the way down to 1 IO at a time.
2079 
2080 - Artificial delay induction.  There are certain types of IO that cannot be
2081   throttled without possibly adversely affecting higher priority groups.  This
2082   includes swapping and metadata IO.  These types of IO are allowed to occur
2083   normally, however they are "charged" to the originating group.  If the
2084   originating group is being throttled you will see the use_delay and delay
  fields in io.stat increase.  The delay value is the number of
  microseconds being added to any process that runs in this group.
  Because this number can grow quite large if there is a lot of
  swapping or metadata IO occurring, we limit the individual delay
  events to 1 second at a time.
2089 
2090 Once the victimized group starts meeting its latency target again it will start
2091 unthrottling any peer groups that were throttled previously.  If the victimized
2092 group simply stops doing IO the global counter will unthrottle appropriately.
2093 
2094 IO Latency Interface Files
2095 ~~~~~~~~~~~~~~~~~~~~~~~~~~
2096 
2097   io.latency
        This takes a format similar to that of the other controllers.
2099 
2100                 "MAJOR:MINOR target=<target time in microseconds>"
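
        For example, a 75ms latency target can be set as follows (the
        device numbers are hypothetical)::

          echo "8:16 target=75000" > io.latency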
2101 
2102   io.stat
2103         If the controller is enabled you will see extra stats in io.stat in
2104         addition to the normal ones.
2105 
2106           depth
2107                 This is the current queue depth for the group.
2108 
2109           avg_lat
2110                 This is an exponential moving average with a decay rate of 1/exp
2111                 bound by the sampling interval.  The decay rate interval can be
2112                 calculated by multiplying the win value in io.stat by the
2113                 corresponding number of samples based on the win value.
2114 
2115           win
2116                 The sampling window size in milliseconds.  This is the minimum
2117                 duration of time between evaluation events.  Windows only elapse
2118                 with IO activity.  Idle periods extend the most recent window.
2119 
2120 IO Priority
2121 ~~~~~~~~~~~
2122 
2123 A single attribute controls the behavior of the I/O priority cgroup policy,
2124 namely the io.prio.class attribute. The following values are accepted for
2125 that attribute:
2126 
2127   no-change
2128         Do not modify the I/O priority class.
2129 
2130   promote-to-rt
2131         For requests that have a non-RT I/O priority class, change it into RT.
2132         Also change the priority level of these requests to 4. Do not modify
2133         the I/O priority of requests that have priority class RT.
2134 
2135   restrict-to-be
2136         For requests that do not have an I/O priority class or that have I/O
2137         priority class RT, change it into BE. Also change the priority level
2138         of these requests to 0. Do not modify the I/O priority class of
2139         requests that have priority class IDLE.
2140 
2141   idle
2142         Change the I/O priority class of all requests into IDLE, the lowest
2143         I/O priority class.
2144 
2145   none-to-rt
2146         Deprecated. Just an alias for promote-to-rt.
2147 
2148 The following numerical values are associated with the I/O priority policies:
2149 
2150 +----------------+---+
2151 | no-change      | 0 |
2152 +----------------+---+
2153 | promote-to-rt  | 1 |
2154 +----------------+---+
2155 | restrict-to-be | 2 |
2156 +----------------+---+
2157 | idle           | 3 |
2158 +----------------+---+
2159 
2160 The numerical value that corresponds to each I/O priority class is as follows:
2161 
2162 +-------------------------------+---+
2163 | IOPRIO_CLASS_NONE             | 0 |
2164 +-------------------------------+---+
2165 | IOPRIO_CLASS_RT (real-time)   | 1 |
2166 +-------------------------------+---+
2167 | IOPRIO_CLASS_BE (best effort) | 2 |
2168 +-------------------------------+---+
2169 | IOPRIO_CLASS_IDLE             | 3 |
2170 +-------------------------------+---+
2171 
2172 The algorithm to set the I/O priority class for a request is as follows:
2173 
2174 - If I/O priority class policy is promote-to-rt, change the request I/O
2175   priority class to IOPRIO_CLASS_RT and change the request I/O priority
2176   level to 4.
2177 - If I/O priority class policy is not promote-to-rt, translate the I/O priority
2178   class policy into a number, then change the request I/O priority class
2179   into the maximum of the I/O priority class policy number and the numerical
2180   I/O priority class.
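
For example, under the restrict-to-be policy (value 2), a request with
I/O priority class IOPRIO_CLASS_RT (value 1) is changed to
max(2, 1) = 2, i.e. IOPRIO_CLASS_BE, while a request with class
IOPRIO_CLASS_IDLE (value 3) keeps max(2, 3) = 3, i.e. stays IDLE.  The
policy itself is selected by writing the attribute, for example::

  echo restrict-to-be > io.prio.class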
2181 
2182 PID
2183 ---
2184 
2185 The process number controller is used to allow a cgroup to stop any
2186 new tasks from being fork()'d or clone()'d after a specified limit is
2187 reached.
2188 
2189 The number of tasks in a cgroup can be exhausted in ways which other
2190 controllers cannot prevent, thus warranting its own controller.  For
2191 example, a fork bomb is likely to exhaust the number of tasks before
2192 hitting memory restrictions.
2193 
2194 Note that PIDs used in this controller refer to TIDs, process IDs as
2195 used by the kernel.
2196 
2197 
2198 PID Interface Files
2199 ~~~~~~~~~~~~~~~~~~~
2200 
2201   pids.max
2202         A read-write single value file which exists on non-root
2203         cgroups.  The default is "max".
2204 
2205         Hard limit of number of processes.
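
        For example, a limit of 32 processes (an illustrative value)
        can be set with::

          echo 32 > pids.max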
2206 
2207   pids.current
2208         A read-only single value file which exists on non-root cgroups.
2209 
2210         The number of processes currently in the cgroup and its
2211         descendants.
2212 
2213   pids.peak
2214         A read-only single value file which exists on non-root cgroups.
2215 
2216         The maximum value that the number of processes in the cgroup and its
2217         descendants has ever reached.
2218 
2219   pids.events
2220         A read-only flat-keyed file which exists on non-root cgroups. Unless
2221         specified otherwise, a value change in this file generates a file
2222         modified event. The following entries are defined.
2223 
2224           max
2225                 The number of times the cgroup's total number of processes hit the pids.max
2226                 limit (see also pids_localevents).
2227 
2228   pids.events.local
2229         Similar to pids.events but the fields in the file are local
2230         to the cgroup i.e. not hierarchical. The file modified event
2231         generated on this file reflects only the local events.
2232 
2233 Organisational operations are not blocked by cgroup policies, so it is
2234 possible to have pids.current > pids.max.  This can be done by either
2235 setting the limit to be smaller than pids.current, or attaching enough
2236 processes to the cgroup such that pids.current is larger than
2237 pids.max.  However, it is not possible to violate a cgroup PID policy
2238 through fork() or clone(). These will return -EAGAIN if the creation
2239 of a new process would cause a cgroup policy to be violated.
2240 
2241 
2242 Cpuset
2243 ------
2244 
2245 The "cpuset" controller provides a mechanism for constraining
2246 the CPU and memory node placement of tasks to only the resources
2247 specified in the cpuset interface files in a task's current cgroup.
2248 This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
2250 memory placement to reduce cross-node memory access and contention
2251 can improve overall system performance.
2252 
The "cpuset" controller is hierarchical.  That means a cgroup
cannot use CPUs or memory nodes not allowed in its parent.
2255 
2256 
2257 Cpuset Interface Files
2258 ~~~~~~~~~~~~~~~~~~~~~~
2259 
2260   cpuset.cpus
2261         A read-write multiple values file which exists on non-root
2262         cpuset-enabled cgroups.
2263 
2264         It lists the requested CPUs to be used by tasks within this
2265         cgroup.  The actual list of CPUs to be granted, however, is
        subject to constraints imposed by its parent and can differ
2267         from the requested CPUs.
2268 
2269         The CPU numbers are comma-separated numbers or ranges.
2270         For example::
2271 
2272           # cat cpuset.cpus
2273           0-4,6,8-10
2274 
2275         An empty value indicates that the cgroup is using the same
2276         setting as the nearest cgroup ancestor with a non-empty
2277         "cpuset.cpus" or all the available CPUs if none is found.
2278 
2279         The value of "cpuset.cpus" stays constant until the next update
2280         and won't be affected by any CPU hotplug events.
2281 
2282   cpuset.cpus.effective
2283         A read-only multiple values file which exists on all
2284         cpuset-enabled cgroups.
2285 
2286         It lists the onlined CPUs that are actually granted to this
2287         cgroup by its parent.  These CPUs are allowed to be used by
2288         tasks within the current cgroup.
2289 
2290         If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2291         all the CPUs from the parent cgroup that can be available to
2292         be used by this cgroup.  Otherwise, it should be a subset of
2293         "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2294         can be granted.  In this case, it will be treated just like an
2295         empty "cpuset.cpus".
2296 
2297         Its value will be affected by CPU hotplug events.
2298 
2299   cpuset.mems
2300         A read-write multiple values file which exists on non-root
2301         cpuset-enabled cgroups.
2302 
2303         It lists the requested memory nodes to be used by tasks within
2304         this cgroup.  The actual list of memory nodes granted, however,
        is subject to constraints imposed by its parent and can differ
2306         from the requested memory nodes.
2307 
2308         The memory node numbers are comma-separated numbers or ranges.
2309         For example::
2310 
2311           # cat cpuset.mems
2312           0-1,3
2313 
2314         An empty value indicates that the cgroup is using the same
2315         setting as the nearest cgroup ancestor with a non-empty
2316         "cpuset.mems" or all the available memory nodes if none
2317         is found.
2318 
2319         The value of "cpuset.mems" stays constant until the next update
2320         and won't be affected by any memory nodes hotplug events.
2321 
2322         Setting a non-empty value to "cpuset.mems" causes memory of
2323         tasks within the cgroup to be migrated to the designated nodes if
2324         they are currently using memory outside of the designated nodes.
2325 
2326         There is a cost for this memory migration.  The migration
2327         may not be complete and some memory pages may be left behind.
2328         So it is recommended that "cpuset.mems" should be set properly
2329         before spawning new tasks into the cpuset.  Even if there is
2330         a need to change "cpuset.mems" with active tasks, it shouldn't
2331         be done frequently.
2332 
2333   cpuset.mems.effective
2334         A read-only multiple values file which exists on all
2335         cpuset-enabled cgroups.
2336 
2337         It lists the onlined memory nodes that are actually granted to
2338         this cgroup by its parent. These memory nodes are allowed to
2339         be used by tasks within the current cgroup.
2340 
2341         If "cpuset.mems" is empty, it shows all the memory nodes from the
2342         parent cgroup that will be available to be used by this cgroup.
2343         Otherwise, it should be a subset of "cpuset.mems" unless none of
2344         the memory nodes listed in "cpuset.mems" can be granted.  In this
2345         case, it will be treated just like an empty "cpuset.mems".
2346 
2347         Its value will be affected by memory nodes hotplug events.
2348 
2349   cpuset.cpus.exclusive
2350         A read-write multiple values file which exists on non-root
2351         cpuset-enabled cgroups.
2352 
2353         It lists all the exclusive CPUs that are allowed to be used
2354         to create a new cpuset partition.  Its value is not used
2355         unless the cgroup becomes a valid partition root.  See the
2356         "cpuset.cpus.partition" section below for a description of what
2357         a cpuset partition is.
2358 
2359         When the cgroup becomes a partition root, the actual exclusive
2360         CPUs that are allocated to that partition are listed in
2361         "cpuset.cpus.exclusive.effective" which may be different
2362         from "cpuset.cpus.exclusive".  If "cpuset.cpus.exclusive"
2363         has previously been set, "cpuset.cpus.exclusive.effective"
2364         is always a subset of it.
2365 
2366         Users can manually set it to a value that is different from
2367         "cpuset.cpus".  One constraint in setting it is that the list of
2368         CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
        CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
        of its siblings.  If "cpuset.cpus.exclusive" of a sibling cgroup
        isn't set, its "cpuset.cpus" value, if set, cannot be a subset
        of it, so that at least one CPU remains available when the exclusive
2373 
2374         For a parent cgroup, any one of its exclusive CPUs can only
2375         be distributed to at most one of its child cgroups.  Having an
2376         exclusive CPU appearing in two or more of its child cgroups is
2377         not allowed (the exclusivity rule).  A value that violates the
2378         exclusivity rule will be rejected with a write error.
2379 
2380         The root cgroup is a partition root and all its available CPUs
2381         are in its exclusive CPU set.
2382 
2383   cpuset.cpus.exclusive.effective
2384         A read-only multiple values file which exists on all non-root
2385         cpuset-enabled cgroups.
2386 
2387         This file shows the effective set of exclusive CPUs that
2388         can be used to create a partition root.  The content
2389         of this file will always be a subset of its parent's
2390         "cpuset.cpus.exclusive.effective" if its parent is not the root
2391         cgroup.  It will also be a subset of "cpuset.cpus.exclusive"
        if it is set.  If "cpuset.cpus.exclusive" is not set, it is
        treated as having an implicit value of "cpuset.cpus" in the
        formation of a local partition.
2395 
2396   cpuset.cpus.isolated
        A read-only multiple values file which exists only on the
        root cgroup.
2398 
2399         This file shows the set of all isolated CPUs used in existing
2400         isolated partitions. It will be empty if no isolated partition
2401         is created.
2402 
2403   cpuset.cpus.partition
2404         A read-write single value file which exists on non-root
2405         cpuset-enabled cgroups.  This flag is owned by the parent cgroup
2406         and is not delegatable.
2407 
2408         It accepts only the following input values when written to.
2409 
2410           ==========    =====================================
2411           "member"      Non-root member of a partition
2412           "root"        Partition root
2413           "isolated"    Partition root without load balancing
2414           ==========    =====================================
2415 
2416         A cpuset partition is a collection of cpuset-enabled cgroups with
2417         a partition root at the top of the hierarchy and its descendants
2418         except those that are separate partition roots themselves and
2419         their descendants.  A partition has exclusive access to the
2420         set of exclusive CPUs allocated to it.  Other cgroups outside
2421         of that partition cannot use any CPUs in that set.
2422 
2423         There are two types of partitions - local and remote.  A local
2424         partition is one whose parent cgroup is also a valid partition
2425         root.  A remote partition is one whose parent cgroup is not a
2426         valid partition root itself.  Writing to "cpuset.cpus.exclusive"
2427         is optional for the creation of a local partition as its
2428         "cpuset.cpus.exclusive" file will assume an implicit value that
2429         is the same as "cpuset.cpus" if it is not set.  Writing the
2430         proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2431         before the target partition root is mandatory for the creation
2432         of a remote partition.
2433 
2434         Currently, a remote partition cannot be created under a local
2435         partition.  All the ancestors of a remote partition root except
2436         the root cgroup cannot be a partition root.
2437 
2438         The root cgroup is always a partition root and its state cannot
2439         be changed.  All other non-root cgroups start out as "member".
2440 
2441         When set to "root", the current cgroup is the root of a new
2442         partition or scheduling domain.  The set of exclusive CPUs is
2443         determined by the value of its "cpuset.cpus.exclusive.effective".
2444 
2445         When set to "isolated", the CPUs in that partition will be in
2446         an isolated state without any load balancing from the scheduler
2447         and excluded from the unbound workqueues.  Tasks placed in such
2448         a partition with multiple CPUs should be carefully distributed
2449         and bound to each of the individual CPUs for optimal performance.
2450 
2451         A partition root ("root" or "isolated") can be in one of the
2452         two possible states - valid or invalid.  An invalid partition
2453         root is in a degraded state where some state information may
2454         be retained, but behaves more like a "member".
2455 
2456         All possible state transitions among "member", "root" and
2457         "isolated" are allowed.
2458 
2459         On read, the "cpuset.cpus.partition" file can show the following
2460         values.
2461 
2462           ============================= =====================================
2463           "member"                      Non-root member of a partition
2464           "root"                        Partition root
2465           "isolated"                    Partition root without load balancing
2466           "root invalid (<reason>)"     Invalid partition root
2467           "isolated invalid (<reason>)" Invalid isolated partition root
2468           ============================= =====================================
2469 
2470         In the case of an invalid partition root, a descriptive string on
2471         why the partition is invalid is included within parentheses.
2472 
2473         For a local partition root to be valid, the following conditions
2474         must be met.
2475 
2476         1) The parent cgroup is a valid partition root.
2477         2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2478            though it may contain offline CPUs.
2479         3) The "cpuset.cpus.effective" cannot be empty unless there is
2480            no task associated with this partition.
2481 
2482         For a remote partition root to be valid, all the above conditions
2483         except the first one must be met.
2484 
2485         External events like hotplug or changes to "cpuset.cpus" or
2486         "cpuset.cpus.exclusive" can cause a valid partition root to
2487         become invalid and vice versa.  Note that a task cannot be
2488         moved to a cgroup with empty "cpuset.cpus.effective".
2489 
2490         A valid non-root parent partition may distribute out all its CPUs
2491         to its child local partitions when there is no task associated
2492         with it.
2493 
        Care must be taken when changing a valid partition root to
        "member", as all its child local partitions, if present, will
        become invalid, causing disruption to tasks running in those child
2497         partitions. These inactivated partitions could be recovered if
2498         their parent is switched back to a partition root with a proper
2499         value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2500 
2501         Poll and inotify events are triggered whenever the state of
2502         "cpuset.cpus.partition" changes.  That includes changes caused
        by a write to "cpuset.cpus.partition", CPU hotplug or other
2504         changes that modify the validity status of the partition.
2505         This will allow user space agents to monitor unexpected changes
2506         to "cpuset.cpus.partition" without the need to do continuous
2507         polling.
2508 
        A user can pre-configure certain CPUs to an isolated state
        with load balancing disabled at boot time with the "isolcpus"
        kernel boot command line option.  If those CPUs are to be put
        into a partition, they must be placed in an isolated
        partition.
2513 
2514 
2515 Device controller
2516 -----------------
2517 
The device controller manages access to device files.  It covers both
the creation of new device files (using mknod) and access to existing
device files.
2521 
The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create BPF programs of type BPF_PROG_TYPE_CGROUP_DEVICE
and attach them to cgroups with the BPF_CGROUP_DEVICE flag.  On an
attempt to access a device file, the corresponding BPF programs will
be executed, and depending on the return value the attempt will
succeed or fail with -EPERM.
2528 
2529 A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2530 bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2531 access type (mknod/read/write) and device (type, major and minor numbers).
2532 If the program returns 0, the attempt fails with -EPERM, otherwise it
2533 succeeds.
2534 
2535 An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2536 tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
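
As a sketch of how such a program might be deployed, a compiled BPF
object can be loaded, pinned and attached with bpftool (the object
name, pin path and cgroup path below are illustrative)::

  # bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/devcg
  # bpftool cgroup attach /sys/fs/cgroup/cg1 device pinned /sys/fs/bpf/devcg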
2537 
2538 
2539 RDMA
2540 ----
2541 
2542 The "rdma" controller regulates the distribution and accounting of
2543 RDMA resources.
2544 
2545 RDMA Interface Files
2546 ~~~~~~~~~~~~~~~~~~~~
2547 
2548   rdma.max
        A read-write nested-keyed file that exists for all cgroups
        except the root.  It describes the currently configured
        resource limit for an RDMA/IB device.

        Lines are keyed by device name and are not ordered.  Each line
        contains a space-separated resource name and its configured
        limit that can be distributed.
2556 
2557         The following nested keys are defined.
2558 
2559           ==========    =============================
2560           hca_handle    Maximum number of HCA Handles
2561           hca_object    Maximum number of HCA Objects
2562           ==========    =============================
2563 
        An example for mlx4 and ocrdma devices follows::
2565 
2566           mlx4_0 hca_handle=2 hca_object=2000
2567           ocrdma1 hca_handle=3 hca_object=max
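
        Limits can be configured by writing a line in the same nested
        key format, e.g. (the device name is illustrative)::

          # echo mlx4_0 hca_handle=2 hca_object=2000 > rdma.max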
2568 
2569   rdma.current
        A read-only file that describes current resource usage.
        It exists for all cgroups except the root.
2572 
        An example for mlx4 and ocrdma devices follows::
2574 
2575           mlx4_0 hca_handle=1 hca_object=20
2576           ocrdma1 hca_handle=1 hca_object=23
2577 
2578 HugeTLB
2579 -------
2580 
The HugeTLB controller allows limiting HugeTLB usage per control
group and enforces the limit at page fault time.
2583 
2584 HugeTLB Interface Files
2585 ~~~~~~~~~~~~~~~~~~~~~~~
2586 
2587   hugetlb.<hugepagesize>.current
        Shows the current usage of "hugepagesize" hugetlb pages.  It
        exists for all cgroups except the root.
2590 
2591   hugetlb.<hugepagesize>.max
        Set/show the hard limit of "hugepagesize" hugetlb usage.
        The default value is "max".  It exists for all cgroups except
        the root.
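
        For example, on a system with 2MB hugepages, a limit might be
        set and the resulting usage inspected as follows (the values
        are illustrative)::

          # echo 1G > hugetlb.2MB.max
          # cat hugetlb.2MB.current
          104857600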
2594 
2595   hugetlb.<hugepagesize>.events
2596         A read-only flat-keyed file which exists on non-root cgroups.
2597 
2598           max
                The number of allocation failures due to the HugeTLB
                limit
2600 
2601   hugetlb.<hugepagesize>.events.local
        Similar to hugetlb.<hugepagesize>.events but the fields in the
        file are local to the cgroup, i.e. not hierarchical.  The file
        modified event generated on this file reflects only the local
        events.
2605 
2606   hugetlb.<hugepagesize>.numa_stat
        Similar to memory.numa_stat, it shows the NUMA information of
        the hugetlb pages of <hugepagesize> in this cgroup.  Only
        active (in-use) hugetlb pages are included.  The per-node
        values are in bytes.
2610 
2611 Misc
2612 ----
2613 
The miscellaneous cgroup controller provides a resource limiting and
tracking mechanism for scalar resources which cannot be abstracted
like the other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.
2618 
A resource can be added to the controller via the enum
misc_res_type{} in the include/linux/misc_cgroup.h file, with the
corresponding name added to misc_res_name[] in the
kernel/cgroup/misc.c file.  The provider of the resource must set its
capacity by calling misc_cg_set_capacity() before the resource can be
used.

Once a capacity is set, the resource usage can be updated using the
charge and uncharge APIs.  All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.
2627 
2628 Misc Interface Files
2629 ~~~~~~~~~~~~~~~~~~~~
2630 
The miscellaneous controller provides the following interface files.
Assuming two misc resources (res_a and res_b) are registered:
2632 
2633   misc.capacity
2634         A read-only flat-keyed file shown only in the root cgroup.  It shows
2635         miscellaneous scalar resources available on the platform along with
2636         their quantities::
2637 
2638           $ cat misc.capacity
2639           res_a 50
2640           res_b 10
2641 
2642   misc.current
        A read-only flat-keyed file shown in all cgroups.  It shows
        the current usage of the resources in the cgroup and its
        children::
2645 
2646           $ cat misc.current
2647           res_a 3
2648           res_b 0
2649 
2650   misc.peak
2651         A read-only flat-keyed file shown in all cgroups.  It shows the
2652         historical maximum usage of the resources in the cgroup and its
        children::
2654 
2655           $ cat misc.peak
2656           res_a 10
2657           res_b 8
2658 
2659   misc.max
        A read-write flat-keyed file shown in the non-root cgroups.
        Allowed maximum usage of the resources in the cgroup and its
        children::
2662 
2663           $ cat misc.max
2664           res_a max
2665           res_b 4
2666 
        A limit can be set by::
2668 
2669           # echo res_a 1 > misc.max
2670 
        The limit can be set to "max" by::
2672 
2673           # echo res_a max > misc.max
2674 
2675         Limits can be set higher than the capacity value in the misc.capacity
2676         file.
2677 
2678   misc.events
2679         A read-only flat-keyed file which exists on non-root cgroups. The
2680         following entries are defined. Unless specified otherwise, a value
2681         change in this file generates a file modified event. All fields in
2682         this file are hierarchical.
2683 
2684           max
2685                 The number of times the cgroup's resource usage was
2686                 about to go over the max boundary.
2687 
2688   misc.events.local
2689         Similar to misc.events but the fields in the file are local to the
2690         cgroup i.e. not hierarchical. The file modified event generated on
2691         this file reflects only the local events.
2692 
2693 Migration and Ownership
2694 ~~~~~~~~~~~~~~~~~~~~~~~
2695 
2696 A miscellaneous scalar resource is charged to the cgroup in which it is used
2697 first, and stays charged to that cgroup until that resource is freed. Migrating
2698 a process to a different cgroup does not move the charge to the destination
2699 cgroup where the process has moved.
2700 
2701 Others
2702 ------
2703 
2704 perf_event
2705 ~~~~~~~~~~
2706 
The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.
2711 
2712 
2713 Non-normative information
2714 -------------------------
2715 
2716 This section contains information that isn't considered to be a part of
2717 the stable kernel API and so is subject to change.
2718 
2719 
2720 CPU controller root cgroup process behaviour
2721 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2722 
When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  The weight of this child cgroup depends on the
thread's nice level.
2727 
For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so that the neutral - nice 0 - value is 100 instead of
1024).
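
As a sketch, the effective weight of a thread can be thought of as
(this formula is illustrative, derived from the scaling rule above)::

  weight = sched_prio_to_weight[nice + 20] * 100 / 1024

so a nice 0 thread maps to weight 100 while a nice -20 thread maps to
roughly 8668.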
2731 
2732 
2733 IO controller root cgroup process behaviour
2734 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2735 
2736 Root cgroup processes are hosted in an implicit leaf child node.
2737 When distributing IO resources this implicit child node is taken into
2738 account as if it was a normal child cgroup of the root cgroup with a
2739 weight value of 200.
2740 
2741 
2742 Namespace
2743 =========
2744 
2745 Basics
2746 ------
2747 
2748 cgroup namespace provides a mechanism to virtualize the view of the
2749 "/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
2750 flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root.  The
2753 cgroupns root is the cgroup of the process at the time of creation of
2754 the cgroup namespace.
2755 
2756 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes,
the "/proc/$PID/cgroup" file may leak potential system-level
information to the isolated processes.  For example::
2761 
2762   # cat /proc/self/cgroup
2763   0::/batchjobs/container_id1
2764 
The path '/batchjobs/container_id1' can be considered system data
that is undesirable to expose to the isolated processes.  A cgroup
namespace can be used to restrict visibility of this path.  For
example, before creating a cgroup namespace, one would see::
2769 
2770   # ls -l /proc/self/ns/cgroup
2771   lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2772   # cat /proc/self/cgroup
2773   0::/batchjobs/container_id1
2774 
2775 After unsharing a new namespace, the view changes::
2776 
2777   # ls -l /proc/self/ns/cgroup
2778   lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2779   # cat /proc/self/cgroup
2780   0::/
2781 
2782 When some thread from a multi-threaded process unshares its cgroup
2783 namespace, the new cgroupns gets applied to the entire process (all
2784 the threads).  This is natural for the v2 hierarchy; however, for the
2785 legacy hierarchies, this may be unexpected.
2786 
2787 A cgroup namespace is alive as long as there are processes inside or
2788 mounts pinning it.  When the last usage goes away, the cgroup
2789 namespace is destroyed.  The cgroupns root and the actual cgroups
2790 remain.
2791 
2792 
2793 The Root and Views
2794 ------------------
2795 
2796 The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2797 process calling unshare(2) is running.  For example, if a process in
2798 /batchjobs/container_id1 cgroup calls unshare, cgroup
2799 /batchjobs/container_id1 becomes the cgroupns root.  For the
2800 init_cgroup_ns, this is the real root ('/') cgroup.
2801 
2802 The cgroupns root cgroup does not change even if the namespace creator
2803 process later moves to a different cgroup::
2804 
2805   # ~/unshare -c # unshare cgroupns in some cgroup
2806   # cat /proc/self/cgroup
2807   0::/
2808   # mkdir sub_cgrp_1
2809   # echo 0 > sub_cgrp_1/cgroup.procs
2810   # cat /proc/self/cgroup
2811   0::/sub_cgrp_1
2812 
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
2814 
2815 Processes running inside the cgroup namespace will be able to see
2816 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
2817 From within an unshared cgroupns::
2818 
2819   # sleep 100000 &
2820   [1] 7353
2821   # echo 7353 > sub_cgrp_1/cgroup.procs
2822   # cat /proc/7353/cgroup
2823   0::/sub_cgrp_1
2824 
2825 From the initial cgroup namespace, the real cgroup path will be
2826 visible::
2827 
2828   $ cat /proc/7353/cgroup
2829   0::/batchjobs/container_id1/sub_cgrp_1
2830 
2831 From a sibling cgroup namespace (that is, a namespace rooted at a
2832 different cgroup), the cgroup path relative to its own cgroup
2833 namespace root will be shown.  For instance, if PID 7353's cgroup
2834 namespace root is at '/batchjobs/container_id2', then it will see::
2835 
2836   # cat /proc/7353/cgroup
2837   0::/../container_id2/sub_cgrp_1
2838 
Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
2841 
2842 
2843 Migration and setns(2)
2844 ----------------------
2845 
2846 Processes inside a cgroup namespace can move into and out of the
2847 namespace root if they have proper access to external cgroups.  For
2848 example, from inside a namespace with cgroupns root at
2849 /batchjobs/container_id1, and assuming that the global hierarchy is
2850 still accessible inside cgroupns::
2851 
2852   # cat /proc/7353/cgroup
2853   0::/sub_cgrp_1
2854   # echo 7353 > batchjobs/container_id2/cgroup.procs
2855   # cat /proc/7353/cgroup
2856   0::/../container_id2
2857 
Note that this kind of setup is not encouraged.  A task inside a
cgroup namespace should only be exposed to its own cgroupns
hierarchy.
2860 
2861 setns(2) to another cgroup namespace is allowed when:
2862 
2863 (a) the process has CAP_SYS_ADMIN against its current user namespace
2864 (b) the process has CAP_SYS_ADMIN against the target cgroup
2865     namespace's userns
2866 
No implicit cgroup changes happen when attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
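
For example, with util-linux, a process can be spawned inside another
process's cgroup namespace via nsenter (a sketch; the PID is
illustrative)::

  # nsenter --cgroup=/proc/7353/ns/cgroup sh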
2870 
2871 
2872 Interaction with Other Namespaces
2873 ---------------------------------
2874 
A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::
2877 
2878   # mount -t cgroup2 none $MOUNT_POINT
2879 
2880 This will mount the unified cgroup hierarchy with cgroupns root as the
2881 filesystem root.  The process needs CAP_SYS_ADMIN against its user and
2882 mount namespaces.
2883 
The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.
2887 
2888 
2889 Information on Kernel Programming
2890 =================================
2891 
2892 This section contains kernel programming information in the areas
2893 where interacting with cgroup is necessary.  cgroup core and
2894 controllers are not covered.
2895 
2896 
2897 Filesystem Support for Writeback
2898 --------------------------------
2899 
A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.
2903 
2904   wbc_init_bio(@wbc, @bio)
2905         Should be called for each bio carrying writeback data and
2906         associates the bio with the inode's owner cgroup and the
2907         corresponding request queue.  This must be called after
2908         a queue (device) has been associated with the bio and
2909         before submission.
2910 
2911   wbc_account_cgroup_owner(@wbc, @page, @bytes)
2912         Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it is easiest and most natural
        to call it as data segments are added to a bio.
2916 
With writeback bios annotated, cgroup support can be enabled per
2918 super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
2919 selective disabling of cgroup writeback support which is helpful when
2920 certain filesystem features, e.g. journaled data mode, are
2921 incompatible.
2922 
2923 wbc_init_bio() binds the specified bio to its cgroup.  Depending on
2924 the configuration, the bio may be executed at a lower priority and if
2925 the writeback session is holding shared resources, e.g. a journal
2926 entry, may lead to priority inversion.  There is no one easy solution
2927 for the problem.  Filesystems can try to work around specific problem
2928 cases by skipping wbc_init_bio() and using bio_associate_blkg()
2929 directly.
2930 
2931 
2932 Deprecated v1 Core Features
2933 ===========================
2934 
2935 - Multiple hierarchies including named ones are not supported.
2936 
- None of the v1 mount options are supported.
2938 
2939 - The "tasks" file is removed and "cgroup.procs" is not sorted.
2940 
2941 - "cgroup.clone_children" is removed.
2942 
2943 - /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" file
2944   at the root instead.
2945 
2946 
2947 Issues with v1 and Rationales for v2
2948 ====================================
2949 
2950 Multiple Hierarchies
2951 --------------------
2952 
2953 cgroup v1 allowed an arbitrary number of hierarchies and each
2954 hierarchy could host any number of controllers.  While this seemed to
2955 provide a high level of flexibility, it wasn't useful in practice.
2956 
For example, as there is only one instance of each controller, utility
type controllers such as freezer, which can be useful in all
hierarchies, could only be used in one.  The issue is exacerbated by
2960 the fact that controllers couldn't be moved to another hierarchy once
2961 hierarchies were populated.  Another issue was that all controllers
2962 bound to a hierarchy were forced to have exactly the same view of the
2963 hierarchy.  It wasn't possible to vary the granularity depending on
2964 the specific controller.
2965 
2966 In practice, these issues heavily limited which controllers could be
2967 put on the same hierarchy and most configurations resorted to putting
2968 each controller on its own hierarchy.  Only closely related ones, such
2969 as the cpu and cpuacct controllers, made sense to be put on the same
2970 hierarchy.  This often meant that userland ended up managing multiple
2971 similar hierarchies repeating the same steps on each hierarchy
2972 whenever a hierarchy management operation was necessary.
2973 
2974 Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation, but more
importantly the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.
2978 
2979 There was no limit on how many hierarchies there might be, which meant
2980 that a thread's cgroup membership couldn't be described in finite
2981 length.  The key might contain any number of entries and was unlimited
2982 in length, which made it highly awkward to manipulate and led to
2983 addition of controllers which existed only to identify membership,
2984 which in turn exacerbated the original problem of proliferating number
2985 of hierarchies.
2986 
2987 Also, as a controller couldn't have any expectation regarding the
2988 topologies of hierarchies other controllers might be on, each
2989 controller had to assume that all other controllers were attached to
2990 completely orthogonal hierarchies.  This made it impossible, or at
2991 least very cumbersome, for controllers to cooperate with each other.
2992 
2993 In most use cases, putting controllers on hierarchies which are
2994 completely orthogonal to each other isn't necessary.  What usually is
2995 called for is the ability to have differing levels of granularity
2996 depending on the specific controller.  In other words, hierarchy may
2997 be collapsed from leaf towards root when viewed from specific
2998 controllers.  For example, a given configuration might not care about
2999 how memory is distributed beyond a certain level while still wanting
3000 to control how CPU cycles are distributed.
3001 
3002 
3003 Thread Granularity
3004 ------------------
3005 
3006 cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations; much
more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.
3011 
3012 Generally, in-process knowledge is available only to the process
3013 itself; thus, unlike service-level organization of processes,
3014 categorizing threads of a process requires active participation from
3015 the application which owns the target process.
3016 
3017 cgroup v1 had an ambiguously defined delegation model which got abused
3018 in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
3020 sub-hierarchies and control resource distributions along them.  This
3021 effectively raised cgroup to the status of a syscall-like API exposed
3022 to lay programs.
3023 
3024 First of all, cgroup has a fundamentally inadequate interface to be
3025 exposed this way.  For a process to access its own knobs, it has to
3026 extract the path on the target hierarchy from /proc/self/cgroup,
3027 construct the path by appending the name of the knob to the path, open
3028 and then read and/or write to it.  This is not only extremely clunky
3029 and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can
guarantee
3031 that the process would actually be operating on its own sub-hierarchy.
3032 
3033 cgroup controllers implemented a number of knobs which would never be
3034 accepted as public APIs because they were just adding control knobs to
3035 system-management pseudo filesystem.  cgroup ended up with interface
3036 knobs which were not properly abstracted or refined and directly
3037 revealed kernel internal details.  These knobs got exposed to
3038 individual applications through the ill-defined delegation mechanism
3039 effectively abusing cgroup as a shortcut to implementing public APIs
3040 without going through the required scrutiny.
3041 
This was painful for both userland and kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces and the kernel
ended up inadvertently exposing and getting locked into constructs.
3045 
3046 
3047 Competition Between Inner Nodes and Threads
3048 -------------------------------------------
3049 
cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and
its child cgroups competed for resources.  This was nasty as two
3053 different types of entities competed and there was no obvious way to
3054 settle it.  Different controllers did different things.
3055 
3056 The cpu controller considered threads and cgroups as equivalents and
3057 mapped nice levels to cgroup weights.  This worked for some cases but
3058 fell flat when children wanted to be allocated specific ratios of CPU
3059 cycles and the number of internal threads fluctuated - the ratios
3060 constantly changed as the number of competing entities fluctuated.
3061 There also were other issues.  The mapping from nice level to weight
3062 wasn't obvious or universal, and there were various other knobs which
3063 simply weren't available for threads.
3064 
3065 The io controller implicitly created a hidden leaf node for each
3066 cgroup to host the threads.  The hidden leaf had its own copies of all
3067 the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
3069 always added an extra layer of nesting which wouldn't be necessary
3070 otherwise, made the interface messy and significantly complicated the
3071 implementation.
3072 
3073 The memory controller didn't have a way to control what happened
3074 between internal tasks and child cgroups and the behavior was not
3075 clearly defined.  There were attempts to add ad-hoc behaviors and
3076 knobs to tailor the behavior to specific workloads which would have
3077 led to problems extremely difficult to resolve in the long term.
3078 
3079 Multiple controllers struggled with internal tasks and came up with
3080 different ways to deal with it; unfortunately, all the approaches were
3081 severely flawed and, furthermore, the widely different behaviors
3082 made cgroup as a whole highly inconsistent.
3083 
This clearly is a problem which needs to be addressed by the cgroup
core in a uniform way.
3086 
3087 
3088 Other Interface Issues
3089 ----------------------
3090 
3091 cgroup v1 grew without oversight and developed a large number of
3092 idiosyncrasies and inconsistencies.  One issue on the cgroup core side
3093 was how an empty cgroup was notified - a userland helper binary was
3094 forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.
3098 
3099 Controller interfaces were problematic too.  An extreme example is
3100 controllers completely ignoring hierarchical organization and treating
3101 all cgroups as if they were all located directly under the root
3102 cgroup.  Some controllers exposed a large amount of inconsistent
3103 implementation details to userland.
3104 
3105 There also was no consistency across controllers.  When a new cgroup
3106 was created, some controllers defaulted to not imposing extra
3107 restrictions while others disallowed any resource usage until
3108 explicitly configured.  Configuration knobs for the same type of
3109 control used widely differing naming schemes and formats.  Statistics
3110 and information knobs were named arbitrarily and used different
3111 formats and units even in the same controller.
3112 
3113 cgroup v2 establishes common conventions where appropriate and updates
3114 controllers so that they expose minimal and consistent interfaces.
3115 
3116 
3117 Controller Issues and Remedies
3118 ------------------------------
3119 
3120 Memory
3121 ~~~~~~
3122 
3123 The original lower boundary, the soft limit, is defined as a limit
3124 that is per default unset.  As a result, the set of cgroups that
3125 global reclaim prefers is opt-in, rather than opt-out.  The costs for
3126 optimizing these mostly negative lookups are so high that the
3127 implementation, despite its enormous size, does not even provide the
3128 basic desirable behavior.  First off, the soft limit has no
3129 hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located
3131 in the hierarchy.  This makes subtree delegation impossible.  Second,
3132 the soft limit reclaim pass is so aggressive that it not just
3133 introduces high allocation latencies into the system, but also impacts
3134 system performance due to overreclaim, to the point where the feature
3135 becomes self-defeating.
3136 
3137 The memory.low boundary on the other hand is a top-down allocated
3138 reserve.  A cgroup enjoys reclaim protection when it's within its
3139 effective low, which makes delegation of subtrees possible. It also
3140 enjoys having reclaim pressure proportional to its overage when
3141 above its effective low.
3142 
3143 The original high boundary, the hard limit, is defined as a strict
3144 limit that can not budge, even if the OOM killer has to be called.
3145 But this generally goes against the goal of making the most out of the
3146 available memory.  The memory consumption of workloads varies during
3147 runtime, and that requires users to overcommit.  But doing that with a
3148 strict upper limit requires either a fairly accurate prediction of the
3149 working set size or adding slack to the limit.  Since working set size
3150 estimation is hard and error prone, and getting it wrong results in
3151 OOM kills, most users tend to err on the side of a looser limit and
3152 end up wasting precious resources.
3153 
3154 The memory.high boundary on the other hand can be set much more
3155 conservatively.  When hit, it throttles allocations by forcing them
3156 into direct reclaim to work off the excess, but it never invokes the
3157 OOM killer.  As a result, a high boundary that is chosen too
3158 aggressively will not terminate the processes, but instead it will
3159 lead to gradual performance degradation.  The user can monitor this
3160 and make corrections until the minimal memory footprint that still
3161 gives acceptable performance is found.
3162 
3163 In extreme cases, with many concurrent allocations and a complete
3164 breakdown of reclaim progress within the group, the high boundary can
3165 be exceeded.  But even then it's mostly better to satisfy the
3166 allocation from the slack available in other groups or the rest of the
3167 system than killing the group.  Otherwise, memory.max is there to
3168 limit this type of spillover and ultimately contain buggy or even
3169 malicious applications.
3170 
3171 Setting the original memory.limit_in_bytes below the current usage was
3172 subject to a race condition, where concurrent charges could cause the
3173 limit setting to fail. memory.max on the other hand will first set the
3174 limit to prevent new charges, and then reclaim and OOM kill until the
3175 new limit is met - or the task writing to memory.max is killed.
3176 
3177 The combined memory+swap accounting and limiting is replaced by real
3178 control over swap space.
3179 
3180 The main argument for a combined memory+swap facility in the original
3181 cgroup design was that global or parental pressure would always be
3182 able to swap all anonymous memory of a child group, regardless of the
3183 child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.
3187 
3188 For trusted jobs, on the other hand, a combined counter is not an
3189 intuitive userspace interface, and it flies in the face of the idea
3190 that cgroup controllers should account and limit specific physical
3191 resources.  Swap space is a resource like all others in the system,
3192 and that's why unified hierarchy allows distributing it separately.
