.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>

.. |struct cpuidle_state| replace:: :c:type:`struct cpuidle_state <cpuidle_state>`
.. |cpufreq| replace:: :doc:`CPU Performance Scaling <cpufreq>`

========================
CPU Idle Time Management
========================

:Copyright: |copy| 2018 Intel Corporation

:Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


Concepts
========

Modern processors are generally able to enter states in which the execution of
a program is suspended and instructions belonging to it are not fetched from
memory or executed. Those states are the *idle* states of the processor.

Since part of the processor hardware is not used in idle states, entering them
generally allows power drawn by the processor to be reduced and, in
consequence, it is an opportunity to save energy.

CPU idle time management is an energy-efficiency feature concerned with using
the idle states of processors for this purpose.

Logical CPUs
------------

CPU idle time management operates on CPUs as seen by the *CPU scheduler* (that
is the part of the kernel responsible for the distribution of computational
work in the system). In its view, CPUs are *logical* units. That is, they need
not be separate physical entities and may just be interfaces appearing to
software as individual single-core processors. In other words, a CPU is an
entity which appears to be fetching instructions that belong to one sequence
(program) from memory and executing them, but it need not work this way
physically. Generally, three different cases can be considered here.

First, if the whole processor can only follow one sequence of instructions (one
program) at a time, it is a CPU. In that case, if the hardware is asked to
enter an idle state, that applies to the processor as a whole.

Second, if the processor is multi-core, each core in it is able to follow at
least one program at a time. The cores need not be entirely independent of each
other (for example, they may share caches), but still most of the time they
work physically in parallel with each other, so if each of them executes only
one program, those programs run mostly independently of each other at the same
time. The entire cores are CPUs in that case and if the hardware is asked to
enter an idle state, that applies to the core that asked for it in the first
place, but it also may apply to a larger unit (say a "package" or a "cluster")
that the core belongs to (in fact, it may apply to an entire hierarchy of
larger units containing the core). Namely, if all of the cores in the larger
unit except for one have been put into idle states at the "core level" and the
remaining core asks the processor to enter an idle state, that may trigger it
to put the whole larger unit into an idle state which also will affect the
other cores in that unit.

Finally, each core in a multi-core processor may be able to follow more than
one program in the same time frame (that is, each core may be able to fetch
instructions from multiple locations in memory and execute them in the same
time frame, but not necessarily entirely in parallel with each other). In that
case the cores present themselves to software as "bundles" each consisting of
multiple individual single-core "processors", referred to as *hardware
threads* (or hyper-threads specifically on Intel hardware), that each can
follow one sequence of instructions. Then, the hardware threads are CPUs from
the CPU idle time management perspective and if the processor is asked to
enter an idle state by one of them, the hardware thread (or CPU) that asked
for it is stopped, but nothing more happens, unless all of the other hardware
threads within the same core also have asked the processor to enter an idle
state. In that situation, the core may be put into an idle state individually
or a larger unit containing it may be put into an idle state as a whole (if
the other cores within the larger unit are in idle states already).
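For illustration, which logical CPUs are hardware threads sharing one core can
be checked from user space through the CPU topology information in ``sysfs``.
The minimal sketch below (an illustration only, assuming that the kernel
exposes the legacy :file:`topology/thread_siblings_list` attribute) prints the
list of logical CPUs sharing a core with CPU0::

  #include <stdio.h>

  int main(void)
  {
          char buf[64];
          /* Logical CPUs in the same core as CPU0, e.g. "0,4". */
          FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "r");

          if (!f)
                  return 1;
          if (fgets(buf, sizeof(buf), f))
                  printf("CPU0 shares a core with CPUs: %s", buf);
          fclose(f);
          return 0;
  }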
Idle CPUs
---------

Logical CPUs, simply referred to as "CPUs" in what follows, are regarded as
*idle* by the Linux kernel when there are no tasks to run on them except for
the special "idle" task.

Tasks are the CPU scheduler's representation of work. Each task consists of a
sequence of instructions to execute, or code, data to be manipulated while
running that code, and some context information that needs to be loaded into
the processor every time the task's code is run by a CPU. The CPU scheduler
distributes work by assigning tasks to run to the CPUs present in the system.

Tasks can be in various states. In particular, they are *runnable* if there
are no specific conditions preventing their code from being run by a CPU as
long as there is a CPU available for that (for example, they are not waiting
for any events to occur or similar). When a task becomes runnable, the CPU
scheduler assigns it to one of the available CPUs to run and if there are no
more runnable tasks assigned to it, the CPU will load the given task's context
and run its code (from the instruction following the last one executed so far,
possibly by another CPU). [If there are multiple runnable tasks assigned to
one CPU simultaneously, they will be subject to prioritization and time
sharing in order to allow them to make some progress over time.]

The special "idle" task becomes runnable if there are no other runnable tasks
assigned to the given CPU and the CPU is then regarded as idle. In other
words, in Linux idle CPUs run the code of the "idle" task called *the idle
loop*. That code may cause the processor to be put into one of its idle
states, if they are supported, in order to save energy, but if the processor
does not support any idle states, or there is not enough time to spend in an
idle state before the next wakeup event, or there are strict latency
constraints preventing any of the available idle states from being used, the
CPU will simply execute more or less useless instructions in a loop until it
is assigned a new task to run.


.. _idle-loop:

The Idle Loop
=============

The idle loop code takes two major steps in every iteration of it. First, it
calls into a code module referred to as the *governor* that belongs to the CPU
idle time management subsystem called ``CPUIdle`` to select an idle state for
the CPU to ask the hardware to enter. Second, it invokes another code module
from the ``CPUIdle`` subsystem, called the *driver*, to actually ask the
processor hardware to enter the idle state selected by the governor.

The role of the governor is to find an idle state most suitable for the
conditions at hand. For this purpose, idle states that the hardware can be
asked to enter by logical CPUs are represented in an abstract way independent
of the platform or the processor architecture and organized in a
one-dimensional (linear) array. That array has to be prepared and supplied by
the ``CPUIdle`` driver matching the platform the kernel is running on at the
initialization time. This allows ``CPUIdle`` governors to be independent of
the underlying hardware and to work with any platforms that the Linux kernel
can run on.

Each idle state present in that array is characterized by two parameters to be
taken into account by the governor, the *target residency* and the
(worst-case) *exit latency*. The target residency is the minimum time the
hardware must spend in the given state, including the time needed to enter it
(which may be substantial), in order to save more energy than it would save by
entering one of the shallower idle states instead. [The "depth" of an idle
state roughly corresponds to the power drawn by the processor in that state.]
The exit latency, in turn, is the maximum time it will take a CPU asking the
processor hardware to enter an idle state to start executing the first
instruction after a wakeup from that state. Note that in general the exit
latency also must cover the time needed to enter the given state in case the
wakeup occurs when the hardware is entering it and it must be entered
completely to be exited in an ordered manner.
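To illustrate, such an array might look like the minimal sketch below (the
state names and the residency and latency numbers are made up for illustration
and do not describe any particular processor)::

  #include <stdio.h>

  struct idle_state {
          const char *name;
          unsigned int target_residency_us;  /* minimum worthwhile stay */
          unsigned int exit_latency_us;      /* worst-case wakeup delay */
  };

  /* Deeper states save more power, but cost more to enter and exit. */
  static const struct idle_state states[] = {
          { "STATE0",   1,   1 },
          { "STATE1",  20,  10 },
          { "STATE2", 600, 100 },
  };

  int main(void)
  {
          for (int i = 0; i < 3; i++)
                  printf("%s: residency >= %u us, exit <= %u us\n",
                         states[i].name, states[i].target_residency_us,
                         states[i].exit_latency_us);
          return 0;
  }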
There are two types of information that can influence the governor's
decisions. First of all, the governor knows the time until the closest timer
event. That time is known exactly, because the kernel programs timers and it
knows exactly when they will trigger, and it is the maximum time the hardware
that the given CPU depends on can spend in an idle state, including the time
necessary to enter and exit it. However, the CPU may be woken up by a
non-timer event at any time (in particular, before the closest timer triggers)
and it generally is not known when that may happen. The governor can only see
how much time the CPU actually was idle after it has been woken up (that time
will be referred to as the *idle duration* from now on) and it can use that
information somehow along with the time until the closest timer to estimate
the idle duration in the future. How the governor uses that information
depends on what algorithm is implemented by it and that is the primary reason
for having more than one governor in the ``CPUIdle`` subsystem.

There are four ``CPUIdle`` governors available, ``menu``, `TEO <teo-gov_>`_,
``ladder`` and ``haltpoll``. Which of them is used by default depends on the
configuration of the kernel and in particular on whether or not the scheduler
tick can be `stopped by the idle loop <idle-cpus-and-tick_>`_. The list of
available governors can be read from the :file:`available_governors` file, and
the governor can be changed at runtime by writing the name of the new one to
the :file:`current_governor` file. The name of the ``CPUIdle`` governor
currently used by the kernel can be read from the :file:`current_governor_ro`
or :file:`current_governor` file under :file:`/sys/devices/system/cpu/cpuidle/`
in ``sysfs``.

Which ``CPUIdle`` driver is used, on the other hand, usually depends on the
platform the kernel is running on, but there are platforms with more than one
matching driver. For example, there are two drivers that can work with the
majority of Intel platforms, ``intel_idle`` and ``acpi_idle``, one with
hardcoded idle states information and the other able to read that information
from the system's ACPI tables, respectively. Still, even in those cases, the
driver chosen at the system initialization time cannot be replaced later, so
the decision on which one of them to use has to be made early (on Intel
platforms the ``acpi_idle`` driver will be used if ``intel_idle`` is disabled
for some reason or if it does not recognize the processor). The name of the
``CPUIdle`` driver currently used by the kernel can be read from the
:file:`current_driver` file under :file:`/sys/devices/system/cpu/cpuidle/` in
``sysfs``.
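For example, the governor and driver in use can be checked from user space by
reading those files; a minimal sketch (assuming a kernel with ``CPUIdle``
enabled, so that the files exist)::

  #include <stdio.h>

  static void print_file(const char *path)
  {
          char buf[64];
          FILE *f = fopen(path, "r");

          if (f && fgets(buf, sizeof(buf), f))
                  printf("%s: %s", path, buf);
          if (f)
                  fclose(f);
  }

  int main(void)
  {
          print_file("/sys/devices/system/cpu/cpuidle/current_governor");
          print_file("/sys/devices/system/cpu/cpuidle/current_driver");
          return 0;
  }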
.. _idle-cpus-and-tick:

Idle CPUs and The Scheduler Tick
================================

The scheduler tick is a timer that triggers periodically in order to implement
the time sharing strategy of the CPU scheduler. Of course, if there are
multiple runnable tasks assigned to one CPU at the same time, the only way to
allow them to make reasonable progress in a given time frame is to make them
share the available CPU time. Namely, in rough approximation, each task is
given a slice of the CPU time to run its code, subject to the scheduling
class, prioritization and so on and when that time slice is used up, the CPU
should be switched over to running (the code of) another task. The currently
running task may not want to give the CPU away voluntarily, however, and the
scheduler tick is there to make the switch happen regardless. That is not the
only role of the tick, but it is the primary reason for using it.

The scheduler tick is problematic from the CPU idle time management
perspective, because it triggers periodically and relatively often (depending
on the kernel configuration, the length of the tick period is between 1 ms and
10 ms). Thus, if the tick is allowed to trigger on idle CPUs, it will not make
sense for them to ask the hardware to enter idle states with target
residencies above the tick period length. Moreover, in that case the idle
duration of any CPU will never exceed the tick period length and the energy
used for entering and exiting idle states due to the tick wakeups on idle CPUs
will be wasted.

Fortunately, it is not really necessary to allow the tick to trigger on idle
CPUs, because (by definition) they have no tasks to run except for the special
"idle" one. In other words, from the CPU scheduler perspective, the only user
of the CPU time on them is the idle loop. Since the time of an idle CPU need
not be shared between multiple runnable tasks, the primary reason for using
the tick goes away if the given CPU is idle. Consequently, it is possible to
stop the scheduler tick entirely on idle CPUs in principle, even though that
may not always be worth the effort.
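The tick period length mentioned above follows directly from the kernel's
``CONFIG_HZ`` build-time option (the tick triggers ``HZ`` times per second),
as this simple computation illustrates::

  #include <stdio.h>

  int main(void)
  {
          /* Common CONFIG_HZ choices and the resulting tick periods. */
          const unsigned int hz[] = { 100, 250, 300, 1000 };

          for (int i = 0; i < 4; i++)
                  printf("HZ=%u -> tick period %u us\n",
                         hz[i], 1000000 / hz[i]);
          return 0;
  }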
Whether or not it makes sense to stop the scheduler tick in the idle loop
depends on what is expected by the governor. First, if there is another
(non-tick) timer due to trigger within the tick range, stopping the tick
clearly would be a waste of time, even though the timer hardware may not need
to be reprogrammed in that case. Second, if the governor is expecting a
non-timer wakeup within the tick range, stopping the tick is not necessary and
it may even be harmful. Namely, in that case the governor will select an idle
state with the target residency within the time until the expected wakeup, so
that state is going to be relatively shallow. The governor really cannot
select a deep idle state then, as that would contradict its own expectation of
a wakeup in short order. Now, if the wakeup really occurs shortly, stopping
the tick would be a waste of time and in this case the timer hardware would
need to be reprogrammed, which is expensive. On the other hand, if the tick is
stopped and the wakeup does not occur any time soon, the hardware may spend an
indefinite amount of time in the shallow idle state selected by the governor,
which will be a waste of energy. Hence, if the governor is expecting a wakeup
of any kind within the tick range, it is better to allow the tick to trigger.
Otherwise, however, the governor will select a relatively deep idle state, so
the tick should be stopped so that it does not wake up the CPU too early.

In any case, the governor knows what it is expecting and the decision on
whether or not to stop the scheduler tick belongs to it. Still, if the tick
has been stopped already (in one of the previous iterations of the loop), it
is better to leave it as is and the governor needs to take that into account.

The kernel can be configured to disable stopping the scheduler tick in the
idle loop altogether. That can be done through the build-time configuration of
it (by unsetting the ``CONFIG_NO_HZ_IDLE`` configuration option) or by passing
``nohz=off`` to it in the command line. In both cases, as the stopping of the
scheduler tick is disabled, the governor's decisions regarding it are simply
ignored by the idle loop code and the tick is never stopped.

The systems that run kernels configured to allow the scheduler tick to be
stopped on idle CPUs are referred to as *tickless* systems and they are
generally regarded as more energy-efficient than the systems running kernels
in which the tick cannot be stopped. If the given system is tickless, it will
use the ``menu`` governor by default and if it is not tickless, the default
``CPUIdle`` governor on it will be ``ladder``.
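The reasoning above can be distilled into a simple rule of thumb, shown as a
sketch below (hypothetical code, not the kernel's actual decision logic)::

  #include <stdbool.h>
  #include <stdio.h>

  /*
   * Decide whether the tick should be stopped, given the expected time
   * until the next wakeup (timer or predicted) and the tick period.
   */
  static bool should_stop_tick(unsigned int expected_wakeup_us,
                               unsigned int tick_period_us)
  {
          /*
           * A wakeup within the tick range implies a shallow state, so
           * keep the tick running to bound the time spent in it.
           * Otherwise a deep state will be selected; do not let the tick
           * cut the idle period short.
           */
          return expected_wakeup_us > tick_period_us;
  }

  int main(void)
  {
          printf("wakeup in 500 us, tick 4000 us: stop? %d\n",
                 should_stop_tick(500, 4000));
          printf("wakeup in 9000 us, tick 4000 us: stop? %d\n",
                 should_stop_tick(9000, 4000));
          return 0;
  }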
.. _menu-gov:

The ``menu`` Governor
=====================

The ``menu`` governor is the default ``CPUIdle`` governor for tickless
systems. It is quite complex, but the basic principle of its design is
straightforward. Namely, when invoked to select an idle state for a CPU (i.e.
an idle state that the CPU will ask the processor hardware to enter), it
attempts to predict the idle duration and uses the predicted value for idle
state selection.

It first obtains the time until the closest timer event with the assumption
that the scheduler tick will be stopped. That time, referred to as the *sleep
length* in what follows, is the upper bound on the time before the next CPU
wakeup. It is used to determine the sleep length range, which in turn is
needed to get the sleep length correction factor.

The ``menu`` governor maintains two arrays of sleep length correction factors.
One of them is used when tasks previously running on the given CPU are waiting
for some I/O operations to complete and the other one is used when that is not
the case. Each array contains several correction factor values that correspond
to different sleep length ranges organized so that each range represented in
the array is approximately 10 times wider than the previous one.

The correction factor for the given sleep length range (determined before
selecting the idle state for the CPU) is updated after the CPU has been woken
up and the closer the sleep length is to the observed idle duration, the
closer to 1 the correction factor becomes (it must fall between 0 and 1
inclusive). The sleep length is multiplied by the correction factor for the
range that it falls into to obtain the first approximation of the predicted
idle duration.

Next, the governor uses a simple pattern recognition algorithm to refine its
idle duration prediction. Namely, it saves the last 8 observed idle duration
values and, when predicting the idle duration next time, it computes the
average and variance of them. If the variance is small (smaller than 400
square milliseconds) or it is small relative to the average (the average is
greater than 6 times the standard deviation), the average is regarded as the
"typical interval" value. Otherwise, the longest of the saved observed idle
duration values is discarded and the computation is repeated for the remaining
ones. Again, if the variance of them is small (in the above sense), the
average is taken as the "typical interval" value and so on, until either the
"typical interval" is determined or too many data points are disregarded, in
which case the "typical interval" is assumed to equal "infinity" (the maximum
unsigned integer value). The "typical interval" computed this way is compared
with the sleep length multiplied by the correction factor and the minimum of
the two is taken as the predicted idle duration.
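A sketch of the "typical interval" detection described above follows, with
idle durations in milliseconds (illustrative code only; in particular, the
cutoff for "too many data points disregarded" is a simplification here)::

  #include <limits.h>
  #include <math.h>
  #include <stdio.h>

  #define NSAMPLES 8

  static unsigned int typical_interval(double s[], int n)
  {
          while (n > NSAMPLES / 2) {
                  double sum = 0.0, var = 0.0;
                  int i, imax = 0;

                  for (i = 0; i < n; i++)
                          sum += s[i];
                  double avg = sum / n;

                  for (i = 0; i < n; i++) {
                          var += (s[i] - avg) * (s[i] - avg);
                          if (s[i] > s[imax])
                                  imax = i;
                  }
                  var /= n;

                  /* Small variance, or average dominating the spread. */
                  if (var < 400.0 || avg > 6 * sqrt(var))
                          return (unsigned int)avg;

                  /* Discard the longest sample and try again. */
                  s[imax] = s[--n];
          }
          return UINT_MAX;  /* "infinity": no typical interval found */
  }

  int main(void)
  {
          double s[NSAMPLES] = { 10, 12, 11, 9, 10, 300, 11, 10 };

          /* The 300 ms outlier gets discarded; prints 10. */
          printf("typical interval: %u ms\n", typical_interval(s, NSAMPLES));
          return 0;
  }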
Then, the governor computes an extra latency limit to help "interactive"
workloads. It uses the observation that if the exit latency of the selected
idle state is comparable with the predicted idle duration, the total time
spent in that state probably will be very short and the amount of energy to
save by entering it will be relatively small, so likely it is better to avoid
the overhead related to entering that state and exiting it. Thus selecting a
shallower state is likely to be a better option then. The first approximation
of the extra latency limit is the predicted idle duration itself which
additionally is divided by a value depending on the number of tasks that
previously ran on the given CPU and are now waiting for I/O operations to
complete. The result of that division is compared with the latency limit
coming from the power management quality of service, or
`PM QoS <cpu-pm-qos_>`_, framework and the minimum of the two is taken as the
limit for the idle states' exit latency.

Now, the governor is ready to walk the list of idle states and choose one of
them. For this purpose, it compares the target residency of each state with
the predicted idle duration and its exit latency with the computed latency
limit. It selects the state with the target residency closest to the predicted
idle duration, but still below it, and an exit latency that does not exceed
the limit.
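A sketch of that selection walk over an array like the one shown earlier
(illustrative code, not the governor's actual implementation)::

  #include <stdio.h>

  struct idle_state {
          unsigned int target_residency_us;
          unsigned int exit_latency_us;
  };

  /*
   * Pick the deepest state (states[] is assumed to be ordered from the
   * shallowest to the deepest) whose target residency does not exceed
   * the predicted idle duration and whose exit latency fits the limit.
   */
  static int select_state(const struct idle_state *states, int count,
                          unsigned int predicted_us,
                          unsigned int latency_limit_us)
  {
          int i, idx = 0;

          for (i = 1; i < count; i++) {
                  if (states[i].target_residency_us > predicted_us)
                          break;
                  if (states[i].exit_latency_us <= latency_limit_us)
                          idx = i;
          }
          return idx;
  }

  int main(void)
  {
          const struct idle_state states[] = {
                  { 1, 1 }, { 20, 10 }, { 600, 100 },
          };

          /* Predicted 800 us of idleness, but only 50 us latency budget,
           * so the deepest state is skipped and index 1 is selected. */
          printf("selected state index: %d\n",
                 select_state(states, 3, 800, 50));
          return 0;
  }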
In the final step the governor may still need to refine the idle state
selection if it has not decided to `stop the scheduler tick
<idle-cpus-and-tick_>`_. That happens if the idle duration predicted by it is
less than the tick period and the tick has not been stopped already (in a
previous iteration of the idle loop). Then, the sleep length used in the
previous computations may not reflect the real time until the closest timer
event and if it really is greater than that time, the governor may need to
select a shallower state with a suitable target residency.


.. _teo-gov:

The Timer Events Oriented (TEO) Governor
========================================

The timer events oriented (TEO) governor is an alternative ``CPUIdle``
governor for tickless systems. It follows the same basic strategy as the
``menu`` `one <menu-gov_>`_: it always tries to find the deepest idle state
suitable for the given conditions. However, it applies a different approach to
that problem.

.. kernel-doc:: drivers/cpuidle/governors/teo.c
   :doc: teo-description

.. _idle-states-representation:

Representation of Idle States
=============================

For the CPU idle time management purposes all of the physical idle states
supported by the processor have to be represented as a one-dimensional array
of |struct cpuidle_state| objects each allowing an individual (logical) CPU to
ask the processor hardware to enter an idle state of certain properties. If
there is a hierarchy of units in the processor, one |struct cpuidle_state|
object can cover a combination of idle states supported by the units at
different levels of the hierarchy. In that case, the `target residency and
exit latency parameters of it <idle-loop_>`_ must reflect the properties of
the idle state at the deepest level (i.e. the idle state of the unit
containing all of the other units).

For example, take a processor with two cores in a larger unit referred to as
a "module" and suppose that asking the hardware to enter a specific idle state
(say "X") at the "core" level by one core will trigger the module to try to
enter a specific idle state of its own (say "MX") if the other core is in idle
state "X" already. In other words, asking for idle state "X" at the "core"
level gives the hardware a license to go as deep as to idle state "MX" at the
"module" level, but there is no guarantee that this is going to happen (the
core asking for idle state "X" may just end up in that state by itself
instead). Then, the target residency of the |struct cpuidle_state| object
representing idle state "X" must reflect the minimum time to spend in idle
state "MX" of the module (including the time needed to enter it), because that
is the minimum time the CPU needs to be idle to save any energy in case the
hardware enters that state. Analogously, the exit latency parameter of that
object must cover the exit time of idle state "MX" of the module (and usually
its entry time too), because that is the maximum delay between a wakeup signal
and the time the CPU will start to execute the first new instruction (assuming
that both cores in the module will always be ready to execute instructions as
soon as the module becomes operational as a whole).

There are processors without direct coordination between different levels of
the hierarchy of units inside them, however. In those cases asking for an idle
state at the "core" level does not automatically affect the "module" level,
for example, in any way and the ``CPUIdle`` driver is responsible for the
entire handling of the hierarchy. Then, the definition of the idle state
objects is entirely up to the driver, but still the physical properties of the
idle state that the processor hardware finally goes into must always follow
the parameters used by the governor for idle state selection (for instance,
the actual exit latency of that idle state must not exceed the exit latency
parameter of the idle state object selected by the governor).

In addition to the target residency and exit latency idle state parameters
discussed above, the objects representing idle states each contain a few other
parameters describing the idle state and a pointer to the function to run in
order to ask the hardware to enter that state. Also, for each
|struct cpuidle_state| object, there is a corresponding
:c:type:`struct cpuidle_state_usage <cpuidle_state_usage>` one containing
usage statistics of the given idle state. That information is exposed by the
kernel via ``sysfs``.
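How a ``CPUIdle`` driver might populate that array is sketched below (a
minimal illustration, not a complete driver: the driver name, the state names
and numbers, and the empty enter callback are placeholders)::

  #include <linux/cpuidle.h>
  #include <linux/module.h>

  /* Placeholder: a real callback would ask the hardware to enter the state. */
  static int my_enter(struct cpuidle_device *dev,
                      struct cpuidle_driver *drv, int index)
  {
          return index;
  }

  static struct cpuidle_driver my_idle_driver = {
          .name = "my_idle",
          .owner = THIS_MODULE,
          .states = {
                  {
                          .name = "STATE0",
                          .desc = "shallow state",
                          .exit_latency = 2,       /* microseconds */
                          .target_residency = 2,   /* microseconds */
                          .enter = my_enter,
                  },
                  {
                          .name = "STATE1",
                          .desc = "deep state",
                          .exit_latency = 100,
                          .target_residency = 600,
                          .enter = my_enter,
                  },
          },
          .state_count = 2,
  };

  /* Registered at probe time with cpuidle_register(&my_idle_driver, NULL). */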
For each CPU in the system, there is a
:file:`/sys/devices/system/cpu/cpu<N>/cpuidle/` directory in ``sysfs``, where
the number ``<N>`` is assigned to the given CPU at the initialization time.
That directory contains a set of subdirectories called :file:`state0`,
:file:`state1` and so on, up to the number of idle state objects defined for
the given CPU minus one. Each of these directories corresponds to one idle
state object and the larger the number in its name, the deeper the (effective)
idle state represented by it. Each of them contains a number of files
(attributes) representing the properties of the idle state object
corresponding to it, as follows:

``above``
	Total number of times this idle state had been asked for, but the
	observed idle duration was certainly too short to match its target
	residency.

``below``
	Total number of times this idle state had been asked for, but
	certainly a deeper idle state would have been a better match for the
	observed idle duration.

``desc``
	Description of the idle state.

``disable``
	Whether or not this idle state is disabled.

``default_status``
	The default status of this state, "enabled" or "disabled".

``latency``
	Exit latency of the idle state in microseconds.

``name``
	Name of the idle state.

``power``
	Power drawn by hardware in this idle state in milliwatts (if
	specified, 0 otherwise).

``residency``
	Target residency of the idle state in microseconds.

``time``
	Total time spent in this idle state by the given CPU (as measured by
	the kernel) in microseconds.

``usage``
	Total number of times the hardware has been asked by the given CPU to
	enter this idle state.

``rejected``
	Total number of times a request to enter this idle state on the given
	CPU was rejected.
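For illustration, a minimal sketch that prints the name, exit latency and
target residency of every idle state of CPU0 by reading the attributes above
(assuming the :file:`cpuidle` directory exists for that CPU)::

  #include <stdio.h>
  #include <string.h>

  static int read_attr(int state, const char *attr, char *buf, int len)
  {
          char path[128];
          FILE *f;

          snprintf(path, sizeof(path),
                   "/sys/devices/system/cpu/cpu0/cpuidle/state%d/%s",
                   state, attr);
          f = fopen(path, "r");
          if (!f)
                  return -1;
          if (!fgets(buf, len, f))
                  buf[0] = '\0';
          buf[strcspn(buf, "\n")] = '\0';  /* strip trailing newline */
          fclose(f);
          return 0;
  }

  int main(void)
  {
          char name[32], lat[32], res[32];
          int i;

          /* Walk state0, state1, ... until a directory is missing. */
          for (i = 0; !read_attr(i, "name", name, sizeof(name)); i++) {
                  read_attr(i, "latency", lat, sizeof(lat));
                  read_attr(i, "residency", res, sizeof(res));
                  printf("state%d: %s latency=%s us residency=%s us\n",
                         i, name, lat, res);
          }
          return 0;
  }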
The :file:`desc` and :file:`name` files both contain strings. The difference
between them is that the name is expected to be more concise, while the
description may be longer and it may contain white space or special
characters. The other files listed above contain integer numbers.

The :file:`disable` attribute is the only writeable one. If it contains 1, the
given idle state is disabled for this particular CPU, which means that the
governor will never select it for this particular CPU and the ``CPUIdle``
driver will never ask the hardware to enter it for that CPU as a result.
However, disabling an idle state for one CPU does not prevent it from being
asked for by the other CPUs, so it must be disabled for all of them in order
to never be asked for by any of them. [Note that, due to the way the
``ladder`` governor is implemented, disabling an idle state prevents that
governor from selecting any idle states deeper than the disabled one too.]

If the :file:`disable` attribute contains 0, the given idle state is enabled
for this particular CPU, but it still may be disabled for some or all of the
other CPUs in the system at the same time. Writing 1 to it causes the idle
state to be disabled for this particular CPU and writing 0 to it allows the
governor to take it into consideration for the given CPU and the driver to
ask for it, unless that state was disabled globally in the driver (in which
case it cannot be used at all).
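For example, an idle state of CPU0 could be disabled as in the sketch below
(:file:`state2` here stands for whatever state is to be disabled on the given
system, and root privileges are required)::

  #include <stdio.h>

  int main(void)
  {
          FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpuidle/state2/disable", "w");

          if (!f)
                  return 1;
          fputs("1", f);  /* write "0" to enable the state again */
          fclose(f);
          return 0;
  }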
The :file:`power` attribute is not defined very well, especially for idle
state objects representing combinations of idle states at different levels of
the hierarchy of units in the processor, and it generally is hard to obtain
idle state power numbers for complex hardware, so :file:`power` often contains
0 (not available) and if it contains a nonzero number, that number may not be
very accurate and it should not be relied on for anything meaningful.

The number in the :file:`time` file generally may be greater than the total
time really spent by the given CPU in the given idle state, because it is
measured by the kernel and it may not cover the cases in which the hardware
refused to enter this idle state and entered a shallower one instead of it (or
even did not enter any idle state at all). The kernel can only measure the
time span between asking the hardware to enter an idle state and the
subsequent wakeup of the CPU and it cannot say what really happened in the
meantime at the hardware level. Moreover, if the idle state object in question
represents a combination of idle states at different levels of the hierarchy
of units in the processor, the kernel can never say how deep the hardware went
down the hierarchy in any particular case. For these reasons, the only
reliable way to find out how much time has been spent by the hardware in
different idle states supported by it is to use idle state residency counters
in the hardware, if available.

Generally, an interrupt received when trying to enter an idle state causes the
idle state entry request to be rejected, in which case the ``CPUIdle`` driver
may return an error code to indicate that this was the case. The :file:`usage`
and :file:`rejected` files report the number of times the given idle state
was entered successfully or rejected, respectively.


.. _cpu-pm-qos:

Power Management Quality of Service for CPUs
============================================

The power management quality of service (PM QoS) framework in the Linux kernel
allows kernel code and user space processes to set constraints on various
energy-efficiency features of the kernel to prevent performance from dropping
below a required level.

CPU idle time management can be affected by PM QoS in two ways, through the
global CPU latency limit and through the resume latency constraints for
individual CPUs. Kernel code (e.g. device drivers) can set both of them with
the help of special internal interfaces provided by the PM QoS framework. User
space can modify the former by opening the :file:`cpu_dma_latency` special
device file under :file:`/dev/` and writing a binary value (interpreted as a
signed 32-bit integer) to it. In turn, the resume latency constraint for a CPU
can be modified from user space by writing a string (representing a signed
32-bit integer) to the :file:`power/pm_qos_resume_latency_us` file under
:file:`/sys/devices/system/cpu/cpu<N>/` in ``sysfs``, where the CPU number
``<N>`` is allocated at the system initialization time. Negative values will
be rejected in both cases and, also in both cases, the written integer number
will be interpreted as a requested PM QoS constraint in microseconds.

The requested value is not automatically applied as a new constraint, however,
as it may be less restrictive (greater in this particular case) than another
constraint previously requested by someone else. For this reason, the PM QoS
framework maintains a list of requests that have been made so far for the
global CPU latency limit and for each individual CPU, aggregates them and
applies the effective (minimum in this particular case) value as the new
constraint.

In fact, opening the :file:`cpu_dma_latency` special device file causes a new
PM QoS request to be created and added to a global priority list of CPU
latency limit requests and the file descriptor coming from the "open"
operation represents that request. If that file descriptor is then used for
writing, the number written to it will be associated with the PM QoS request
represented by it as a new requested limit value. Next, the priority list
mechanism will be used to determine the new effective value of the entire list
of requests and that effective value will be set as a new CPU latency limit.
Thus requesting a new limit value will only change the real limit if the
effective "list" value is affected by it, which is the case if it is the
minimum of the requested values in the list.

The process holding a file descriptor obtained by opening the
:file:`cpu_dma_latency` special device file controls the PM QoS request
associated with that file descriptor, but it controls this particular PM QoS
request only.
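For example, a process wanting to limit the CPU latency to 10 microseconds for
as long as it runs might do something like the sketch below (the request stays
in effect only while the file descriptor is held open)::

  #include <fcntl.h>
  #include <stdint.h>
  #include <unistd.h>

  int main(void)
  {
          int32_t latency_us = 10;  /* requested limit, microseconds */
          int fd = open("/dev/cpu_dma_latency", O_WRONLY);

          if (fd < 0)
                  return 1;
          if (write(fd, &latency_us, sizeof(latency_us)) != sizeof(latency_us))
                  return 1;

          /* ... do the latency-sensitive work here ... */
          pause();  /* stand-in for the real workload */

          /* Closing the file descriptor removes the request (see below). */
          close(fd);
          return 0;
  }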
Closing the :file:`cpu_dma_latency` special device file or, more precisely,
the file descriptor obtained while opening it, causes the PM QoS request
associated with that file descriptor to be removed from the global priority
list of CPU latency limit requests and destroyed. If that happens, the
priority list mechanism will be used again, to determine the new effective
value for the whole list and that value will become the new limit.

In turn, for each CPU there is one resume latency PM QoS request associated
with the :file:`power/pm_qos_resume_latency_us` file under
:file:`/sys/devices/system/cpu/cpu<N>/` in ``sysfs`` and writing to it causes
this single PM QoS request to be updated regardless of which user space
process does that. In other words, this PM QoS request is shared by the entire
user space, so access to the file associated with it needs to be arbitrated to
avoid confusion. [Arguably, the only legitimate use of this mechanism in
practice is to pin a process to the CPU in question and let it use the
``sysfs`` interface to control the resume latency constraint for it.] It is
still only a request, however. It is an entry in a priority list used to
determine the effective value to be set as the resume latency constraint for
the CPU in question every time the list of requests is updated this way or
another (there may be other requests coming from kernel code in that list).

CPU idle time governors are expected to regard the minimum of the global
(effective) CPU latency limit and the effective resume latency constraint for
the given CPU as the upper limit for the exit latency of the idle states that
they are allowed to select for that CPU. They should never select any idle
states with exit latency beyond that limit.


Idle States Control Via Kernel Command Line
===========================================

In addition to the ``sysfs`` interface allowing individual idle states to be
`disabled for individual CPUs <idle-states-representation_>`_, there are
kernel command line parameters affecting CPU idle time management.

The ``cpuidle.off=1`` kernel command line option can be used to disable the
CPU idle time management entirely. It does not prevent the idle loop from
running on idle CPUs, but it prevents the CPU idle time governors and drivers
from being invoked. If it is added to the kernel command line, the idle loop
will ask the hardware to enter idle states on idle CPUs via the CPU
architecture support code that is expected to provide a default mechanism for
this purpose. That default mechanism usually is the least common denominator
for all of the processors implementing the architecture (i.e. CPU instruction
set) in question, however, so it is rather crude and not very
energy-efficient. For this reason, it is not recommended for production use.

The ``cpuidle.governor=`` kernel command line switch allows the ``CPUIdle``
governor to be specified. It has to be appended with a string matching the
name of an available governor (e.g. ``cpuidle.governor=menu``) and that
governor will be used instead of the default one. It is possible to force the
``menu`` governor to be used on the systems that use the ``ladder`` governor
by default this way, for example.

The other kernel command line parameters controlling CPU idle time management
described below are only relevant for the *x86* architecture and references to
``intel_idle`` affect Intel processors only.

The *x86* architecture support code recognizes three kernel command line
options related to CPU idle time management: ``idle=poll``, ``idle=halt``,
and ``idle=nomwait``. The first two of them disable the ``acpi_idle`` and
``intel_idle`` drivers altogether, which effectively causes the entire
``CPUIdle`` subsystem to be disabled and makes the idle loop invoke the
architecture support code to deal with idle CPUs. How it does that depends on
which of the two parameters is added to the kernel command line. In the
``idle=halt`` case, the architecture support code will use the ``HLT``
instruction of the CPUs (which, as a rule, suspends the execution of the
program and causes the hardware to attempt to enter the shallowest available
idle state) for this purpose, and if ``idle=poll`` is used, idle CPUs will
execute a more or less "lightweight" sequence of instructions in a tight loop.
[Note that using ``idle=poll`` is somewhat drastic in many cases, as
preventing idle CPUs from saving almost any energy at all may not be the only
effect of it. For example, on Intel hardware it effectively prevents CPUs from
using P-states (see |cpufreq|) that require any number of CPUs in a package to
be idle, so it very well may hurt single-threaded computation performance as
well as energy-efficiency. Thus using it for performance reasons may not be a
good idea at all.]

The ``idle=nomwait`` option prevents the use of the ``MWAIT`` instruction of
the CPU to enter idle states. When this option is used, the ``acpi_idle``
driver will use the ``HLT`` instruction instead of ``MWAIT``. On systems
running Intel processors, this option disables the ``intel_idle`` driver and
forces the use of the ``acpi_idle`` driver instead. Note that in either case,
the ``acpi_idle`` driver will function only if all the information needed by
it is in the system's ACPI tables.

In addition to the architecture-level kernel command line options affecting
CPU idle time management, there are parameters affecting individual
``CPUIdle`` drivers that can be passed to them via the kernel command line.
Specifically, the ``intel_idle.max_cstate=<n>`` and
``processor.max_cstate=<n>`` parameters, where ``<n>`` is an idle state index
also used in the name of the given state's directory in ``sysfs`` (see
`Representation of Idle States <idle-states-representation_>`_), cause the
``intel_idle`` and ``acpi_idle`` drivers, respectively, to discard all of the
idle states deeper than idle state ``<n>``. In that case, they will never ask
for any of those idle states or expose them to the governor. [The behavior of
the two drivers is different for ``<n>`` equal to ``0``. Adding
``intel_idle.max_cstate=0`` to the kernel command line disables the
``intel_idle`` driver and allows ``acpi_idle`` to be used, whereas
``processor.max_cstate=0`` is equivalent to ``processor.max_cstate=1``.
Also, the ``acpi_idle`` driver is part of the ``processor`` kernel module that
can be loaded separately and ``max_cstate=<n>`` can be passed to it as a
module parameter when it is loaded.]