  1 .. SPDX-License-Identifier: GPL-2.0
  2 .. include:: <isonum.txt>
  3 
  4 ===========================================
  5 User Interface for Resource Control feature
  6 ===========================================
  7 
  8 :Copyright: |copy| 2016 Intel Corporation
  9 :Authors: - Fenghua Yu <fenghua.yu@intel.com>
 10           - Tony Luck <tony.luck@intel.com>
 11           - Vikas Shivappa <vikas.shivappa@intel.com>
 12 
 13 
Intel refers to this feature as Intel Resource Director Technology (Intel(R) RDT).
AMD refers to this feature as AMD Platform Quality of Service (AMD QoS).
 16 
This feature is enabled by the CONFIG_X86_CPU_RESCTRL kernel configuration
option, and the sub-features are indicated by the following x86 /proc/cpuinfo
flag bits:
 19 
 20 =============================================== ================================
 21 RDT (Resource Director Technology) Allocation   "rdt_a"
 22 CAT (Cache Allocation Technology)               "cat_l3", "cat_l2"
 23 CDP (Code and Data Prioritization)              "cdp_l3", "cdp_l2"
 24 CQM (Cache QoS Monitoring)                      "cqm_llc", "cqm_occup_llc"
 25 MBM (Memory Bandwidth Monitoring)               "cqm_mbm_total", "cqm_mbm_local"
 26 MBA (Memory Bandwidth Allocation)               "mba"
 27 SMBA (Slow Memory Bandwidth Allocation)         ""
 28 BMEC (Bandwidth Monitoring Event Configuration) ""
 29 =============================================== ================================
 30 
 31 Historically, new features were made visible by default in /proc/cpuinfo. This
 32 resulted in the feature flags becoming hard to parse by humans. Adding a new
 33 flag to /proc/cpuinfo should be avoided if user space can obtain information
 34 about the feature from resctrl's info directory.
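
As an illustration, one quick way to check which of these flags a CPU
advertises is to grep /proc/cpuinfo. This is a minimal sketch; the output
shown is illustrative and depends entirely on the CPU::

  # grep -E -o 'rdt_a|cat_l3|cat_l2|cdp_l3|cdp_l2|cqm_llc|cqm_occup_llc|cqm_mbm_total|cqm_mbm_local|mba' /proc/cpuinfo | sort -u
  cat_l3
  cdp_l3
  cqm_llc
  cqm_mbm_local
  cqm_mbm_total
  cqm_occup_llc
  mba
  rdt_a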
 35 
To use the feature, mount the file system::
 37 
 38  # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps][,debug]] /sys/fs/resctrl
 39 
 40 mount options are:
 41 
 42 "cdp":
 43         Enable code/data prioritization in L3 cache allocations.
 44 "cdpl2":
 45         Enable code/data prioritization in L2 cache allocations.
"mba_MBps":
        Enable the MBA Software Controller (mba_sc) to specify MBA
        bandwidth in MiBps.
 49 "debug":
 50         Make debug files accessible. Available debug files are annotated with
 51         "Available only with debug option".
 52 
 53 L2 and L3 CDP are controlled separately.
 54 
 55 RDT features are orthogonal. A particular system may support only
 56 monitoring, only control, or both monitoring and control.  Cache
 57 pseudo-locking is a unique way of using cache control to "pin" or
 58 "lock" data in the cache. Details can be found in
 59 "Cache Pseudo-Locking".
 60 
 61 
The mount succeeds if either allocation or monitoring is present, but
 63 only those files and directories supported by the system will be created.
 64 For more details on the behavior of the interface during monitoring
 65 and allocation, see the "Resource alloc and monitor groups" section.
 66 
 67 Info directory
 68 ==============
 69 
 70 The 'info' directory contains information about the enabled
 71 resources. Each resource has its own subdirectory. The subdirectory
 72 names reflect the resource names.
 73 
 74 Each subdirectory contains the following files with respect to
 75 allocation:
 76 
The cache resource (L3/L2) subdirectory contains the following files
related to allocation:
 79 
 80 "num_closids":
 81                 The number of CLOSIDs which are valid for this
 82                 resource. The kernel uses the smallest number of
                CLOSIDs of all enabled resources as the limit.
 84 "cbm_mask":
 85                 The bitmask which is valid for this resource.
 86                 This mask is equivalent to 100%.
 87 "min_cbm_bits":
 88                 The minimum number of consecutive bits which
 89                 must be set when writing a mask.
 90 
 91 "shareable_bits":
                Bitmask of shareable resource with other executing
                entities (e.g. I/O). The user can use this when
                setting up exclusive cache partitions. Note that
                some platforms support devices that have their
                own settings for cache use which can override
                these bits.
 98 "bit_usage":
 99                 Annotated capacity bitmasks showing how all
100                 instances of the resource are used. The legend is:
101 
102                         "0":
103                               Corresponding region is unused. When the system's
104                               resources have been allocated and a "0" is found
105                               in "bit_usage" it is a sign that resources are
106                               wasted.
107 
                        "H":
                              Corresponding region is used by hardware only
                              but available for software use. If a resource
                              has bits set in "shareable_bits" but not all
                              of these bits appear in the resource groups'
                              schematas then the bits appearing in
                              "shareable_bits" but in no resource group will
                              be marked as "H".
116                         "X":
117                               Corresponding region is available for sharing and
118                               used by hardware and software. These are the
119                               bits that appear in "shareable_bits" as
120                               well as a resource group's allocation.
121                         "S":
122                               Corresponding region is used by software
123                               and available for sharing.
124                         "E":
125                               Corresponding region is used exclusively by
126                               one resource group. No sharing allowed.
127                         "P":
128                               Corresponding region is pseudo-locked. No
129                               sharing allowed.
"sparse_masks":
                Indicates whether non-contiguous 1s in a CBM are supported.

                        "0":
                              Only contiguous 1s in a CBM are supported.
                        "1":
                              Non-contiguous 1s in a CBM are supported.
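
As an illustration, these files can be read directly from the info
directory. The values below are purely illustrative, for a hypothetical
system with a 16-bit L3 capacity bitmask; real values vary by CPU model::

  # cat /sys/fs/resctrl/info/L3/num_closids
  16
  # cat /sys/fs/resctrl/info/L3/cbm_mask
  ffff
  # cat /sys/fs/resctrl/info/L3/min_cbm_bits
  1
  # cat /sys/fs/resctrl/info/L3/sparse_masks
  0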
137 
The memory bandwidth (MB) subdirectory contains the following files
with respect to allocation:
140 
141 "min_bandwidth":
142                 The minimum memory bandwidth percentage which
143                 user can request.
144 
145 "bandwidth_gran":
146                 The granularity in which the memory bandwidth
147                 percentage is allocated. The allocated
148                 b/w percentage is rounded off to the next
149                 control step available on the hardware. The
150                 available bandwidth control steps are:
151                 min_bandwidth + N * bandwidth_gran.
152 
"delay_linear":
                Indicates if the delay scale is linear or
                non-linear. This field is purely informational.
157 
158 "thread_throttle_mode":
159                 Indicator on Intel systems of how tasks running on threads
160                 of a physical core are throttled in cases where they
161                 request different memory bandwidth percentages:
162 
163                 "max":
164                         the smallest percentage is applied
165                         to all threads
166                 "per-thread":
167                         bandwidth percentages are directly applied to
168                         the threads running on the core
169 
170 If RDT monitoring is available there will be an "L3_MON" directory
171 with the following files:
172 
173 "num_rmids":
174                 The number of RMIDs available. This is the
175                 upper bound for how many "CTRL_MON" + "MON"
176                 groups can be created.
177 
178 "mon_features":
179                 Lists the monitoring events if
180                 monitoring is enabled for the resource.
181                 Example::
182 
183                         # cat /sys/fs/resctrl/info/L3_MON/mon_features
184                         llc_occupancy
185                         mbm_total_bytes
186                         mbm_local_bytes
187 
188                 If the system supports Bandwidth Monitoring Event
189                 Configuration (BMEC), then the bandwidth events will
190                 be configurable. The output will be::
191 
192                         # cat /sys/fs/resctrl/info/L3_MON/mon_features
193                         llc_occupancy
194                         mbm_total_bytes
195                         mbm_total_bytes_config
196                         mbm_local_bytes
197                         mbm_local_bytes_config
198 
199 "mbm_total_bytes_config", "mbm_local_bytes_config":
200         Read/write files containing the configuration for the mbm_total_bytes
201         and mbm_local_bytes events, respectively, when the Bandwidth
202         Monitoring Event Configuration (BMEC) feature is supported.
203         The event configuration settings are domain specific and affect
204         all the CPUs in the domain. When either event configuration is
205         changed, the bandwidth counters for all RMIDs of both events
206         (mbm_total_bytes as well as mbm_local_bytes) are cleared for that
207         domain. The next read for every RMID will report "Unavailable"
208         and subsequent reads will report the valid value.
209 
210         Following are the types of events supported:
211 
212         ====    ========================================================
213         Bits    Description
214         ====    ========================================================
215         6       Dirty Victims from the QOS domain to all types of memory
216         5       Reads to slow memory in the non-local NUMA domain
217         4       Reads to slow memory in the local NUMA domain
218         3       Non-temporal writes to non-local NUMA domain
219         2       Non-temporal writes to local NUMA domain
220         1       Reads to memory in the non-local NUMA domain
221         0       Reads to memory in the local NUMA domain
222         ====    ========================================================
223 
224         By default, the mbm_total_bytes configuration is set to 0x7f to count
225         all the event types and the mbm_local_bytes configuration is set to
226         0x15 to count all the local memory events.
227 
228         Examples:
229 
        * To view the current configuration::
232 
233             # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
234             0=0x7f;1=0x7f;2=0x7f;3=0x7f
235 
236             # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
237             0=0x15;1=0x15;3=0x15;4=0x15
238 
        * To change mbm_total_bytes to count only reads on domain 0,
          bits 0, 1, 4 and 5 need to be set, which is 110011b in binary
          (0x33 in hexadecimal):
242           ::
243 
244             # echo  "0=0x33" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
245 
246             # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
247             0=0x33;1=0x7f;2=0x7f;3=0x7f
248 
        * To change mbm_local_bytes to count all the slow memory reads on
          domains 0 and 1, bits 4 and 5 need to be set, which is 110000b
          in binary (0x30 in hexadecimal):
252           ::
253 
254             # echo  "0=0x30;1=0x30" > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
255 
256             # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
257             0=0x30;1=0x30;3=0x15;4=0x15
258 
259 "max_threshold_occupancy":
260                 Read/write file provides the largest value (in
261                 bytes) at which a previously used LLC_occupancy
262                 counter can be considered for re-use.
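
For example, the threshold can be read and lowered so that RMIDs are
considered for re-use sooner. The values shown are illustrative; the
default is system dependent::

  # cat /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
  540672
  # echo 65536 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy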
263 
264 Finally, in the top level of the "info" directory there is a file
265 named "last_cmd_status". This is reset with every "command" issued
266 via the file system (making new directories or writing to any of the
267 control files). If the command was successful, it will read as "ok".
If the command failed, it will provide more information than can be
269 conveyed in the error returns from file operations. E.g.
270 ::
271 
272         # echo L3:0=f7 > schemata
273         bash: echo: write error: Invalid argument
274         # cat info/last_cmd_status
275         mask f7 has non-consecutive 1-bits
276 
277 Resource alloc and monitor groups
278 =================================
279 
280 Resource groups are represented as directories in the resctrl file
281 system.  The default group is the root directory which, immediately
282 after mounting, owns all the tasks and cpus in the system and can make
283 full use of all resources.
284 
285 On a system with RDT control features additional directories can be
286 created in the root directory that specify different amounts of each
287 resource (see "schemata" below). The root and these additional top level
288 directories are referred to as "CTRL_MON" groups below.
289 
290 On a system with RDT monitoring the root directory and other top level
291 directories contain a directory named "mon_groups" in which additional
292 directories can be created to monitor subsets of tasks in the CTRL_MON
293 group that is their ancestor. These are called "MON" groups in the rest
294 of this document.
295 
296 Removing a directory will move all tasks and cpus owned by the group it
297 represents to the parent. Removing one of the created CTRL_MON groups
298 will automatically remove all MON groups below it.
299 
300 Moving MON group directories to a new parent CTRL_MON group is supported
301 for the purpose of changing the resource allocations of a MON group
302 without impacting its monitoring data or assigned tasks. This operation
303 is not allowed for MON groups which monitor CPUs. No other move
304 operation is currently allowed other than simply renaming a CTRL_MON or
305 MON group.
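
For example, a MON group can be moved to a different parent CTRL_MON group
with a plain rename (a sketch with hypothetical group names "p0", "p1" and
"m01")::

  # mv /sys/fs/resctrl/p0/mon_groups/m01 /sys/fs/resctrl/p1/mon_groups/m01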
306 
307 All groups contain the following files:
308 
309 "tasks":
310         Reading this file shows the list of all tasks that belong to
311         this group. Writing a task id to the file will add a task to the
312         group. Multiple tasks can be added by separating the task ids
        with commas. Tasks will be assigned sequentially. Multiple
        failures are not supported. A single failure encountered while
        attempting to assign a task will cause the operation to abort;
        tasks already added before the failure will remain in the group.
        Failures will be logged to /sys/fs/resctrl/info/last_cmd_status.
318 
319         If the group is a CTRL_MON group the task is removed from
320         whichever previous CTRL_MON group owned the task and also from
321         any MON group that owned the task. If the group is a MON group,
322         then the task must already belong to the CTRL_MON parent of this
323         group. The task is removed from any previous MON group.
324 
325 
326 "cpus":
327         Reading this file shows a bitmask of the logical CPUs owned by
328         this group. Writing a mask to this file will add and remove
329         CPUs to/from this group. As with the tasks file a hierarchy is
330         maintained where MON groups may only include CPUs owned by the
331         parent CTRL_MON group.
332         When the resource group is in pseudo-locked mode this file will
333         only be readable, reflecting the CPUs associated with the
334         pseudo-locked region.
335 
336 
337 "cpus_list":
338         Just like "cpus", only using ranges of CPUs instead of bitmasks.
339 
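Both files present the same ownership information. For example, on a group
that owns CPUs 0-3 (a sketch with a hypothetical group "p0")::

  # cat /sys/fs/resctrl/p0/cpus
  f
  # cat /sys/fs/resctrl/p0/cpus_list
  0-3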
340 
341 When control is enabled all CTRL_MON groups will also contain:
342 
343 "schemata":
344         A list of all the resources available to this group.
345         Each resource has its own line and format - see below for details.
346 
347 "size":
348         Mirrors the display of the "schemata" file to display the size in
349         bytes of each allocation instead of the bits representing the
350         allocation.
351 
352 "mode":
353         The "mode" of the resource group dictates the sharing of its
354         allocations. A "shareable" resource group allows sharing of its
355         allocations while an "exclusive" resource group does not. A
356         cache pseudo-locked region is created by first writing
357         "pseudo-locksetup" to the "mode" file before writing the cache
358         pseudo-locked region's schemata to the resource group's "schemata"
359         file. On successful pseudo-locked region creation the mode will
360         automatically change to "pseudo-locked".
361 
362 "ctrl_hw_id":
363         Available only with debug option. The identifier used by hardware
364         for the control group. On x86 this is the CLOSID.
365 
366 When monitoring is enabled all MON groups will also contain:
367 
368 "mon_data":
369         This contains a set of files organized by L3 domain and by
370         RDT event. E.g. on a system with two L3 domains there will
371         be subdirectories "mon_L3_00" and "mon_L3_01".  Each of these
372         directories have one file per event (e.g. "llc_occupancy",
373         "mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
374         files provide a read out of the current value of the event for
375         all tasks in the group. In CTRL_MON groups these files provide
376         the sum for all tasks in the CTRL_MON group and all tasks in
377         MON groups. Please see example section for more details on usage.
378         On systems with Sub-NUMA Cluster (SNC) enabled there are extra
379         directories for each node (located within the "mon_L3_XX" directory
380         for the L3 cache they occupy). These are named "mon_sub_L3_YY"
381         where "YY" is the node number.
382 
383 "mon_hw_id":
384         Available only with debug option. The identifier used by hardware
385         for the monitor group. On x86 this is the RMID.
386 
387 Resource allocation rules
388 -------------------------
389 
390 When a task is running the following rules define which resources are
391 available to it:
392 
393 1) If the task is a member of a non-default group, then the schemata
394    for that group is used.
395 
396 2) Else if the task belongs to the default group, but is running on a
397    CPU that is assigned to some specific group, then the schemata for the
398    CPU's group is used.
399 
400 3) Otherwise the schemata for the default group is used.
401 
402 Resource monitoring rules
403 -------------------------
404 1) If a task is a member of a MON group, or non-default CTRL_MON group
405    then RDT events for the task will be reported in that group.
406 
407 2) If a task is a member of the default CTRL_MON group, but is running
408    on a CPU that is assigned to some specific group, then the RDT events
409    for the task will be reported in that group.
410 
411 3) Otherwise RDT events for the task will be reported in the root level
412    "mon_data" group.
413 
414 
415 Notes on cache occupancy monitoring and control
416 ===============================================
417 When moving a task from one group to another you should remember that
418 this only affects *new* cache allocations by the task. E.g. you may have
419 a task in a monitor group showing 3 MB of cache occupancy. If you move
420 to a new group and immediately check the occupancy of the old and new
421 groups you will likely see that the old group is still showing 3 MB and
422 the new group zero. When the task accesses locations still in cache from
423 before the move, the h/w does not update any counters. On a busy system
424 you will likely see the occupancy in the old group go down as cache lines
425 are evicted and re-used while the occupancy in the new group rises as
426 the task accesses memory and loads into the cache are counted based on
427 membership in the new group.
428 
429 The same applies to cache allocation control. Moving a task to a group
430 with a smaller cache partition will not evict any cache lines. The
431 process may continue to use them from the old partition.
432 
Hardware uses a CLOSid (Class of Service ID) and an RMID (Resource
Monitoring ID) to identify a control group and a monitoring group
respectively. Each of the resource groups is mapped to these IDs based on
the kind of group. The number of CLOSids and RMIDs is limited by the
hardware and hence the creation of a "CTRL_MON" directory may fail if we
run out of either CLOSids or RMIDs, and creation of a "MON" group may fail
if we run out of RMIDs.
439 
440 max_threshold_occupancy - generic concepts
441 ------------------------------------------
442 
Note that an RMID once freed may not be immediately available for use as
the RMID is still tagged to the cache lines of its previous user.
Hence such RMIDs are placed on a limbo list and checked periodically to
see if the cache occupancy has gone down. If at some point the system has
many limbo RMIDs that are not yet ready to be used, the user may see an
-EBUSY during mkdir.
449 
450 max_threshold_occupancy is a user configurable value to determine the
451 occupancy at which an RMID can be freed.
452 
The mon_llc_occupancy_limbo tracepoint gives the precise occupancy in bytes
for a subset of RMIDs that are not immediately available for allocation.
This can't be relied on to produce output every second; it may be necessary
to attempt to create an empty monitor group to force an update. Output may
only be produced if creation of a control or monitor group fails.
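
A sketch of capturing this tracepoint, assuming it is exposed in the
"resctrl" trace event group like the pseudo-lock tracepoints shown later
in this document::

  # echo 1 > /sys/kernel/tracing/events/resctrl/mon_llc_occupancy_limbo/enable
  # mkdir /sys/fs/resctrl/mon_groups/probe    # force an update; may fail with -EBUSY
  # cat /sys/kernel/tracing/trace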
458 
459 Schemata files - general concepts
460 ---------------------------------
461 Each line in the file describes one resource. The line starts with
462 the name of the resource, followed by specific values to be applied
463 in each of the instances of that resource on the system.
464 
465 Cache IDs
466 ---------
On current generation systems there is one L3 cache per socket and L2
caches are generally just shared by the hyperthreads on a core, but this
isn't an architectural requirement. There could be multiple separate L3
caches on a socket, and multiple cores could share an L2 cache. So instead
of using "socket" or "core" to define the set of logical CPUs sharing
a resource we use a "Cache ID". At a given cache level this will be a
unique number across the whole system (but it isn't guaranteed to be a
contiguous sequence, there may be gaps). To find the ID for each logical
CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id
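
For example (output illustrative), the L3 cache ID for CPU 0 can be read
via sysfs; on typical x86 systems cache index 3 is the L3::

  # cat /sys/devices/system/cpu/cpu0/cache/index3/level
  3
  # cat /sys/devices/system/cpu/cpu0/cache/index3/id
  0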
476 
477 Cache Bit Masks (CBM)
478 ---------------------
479 For cache resources we describe the portion of the cache that is available
480 for allocation using a bitmask. The maximum value of the mask is defined
481 by each cpu model (and may be different for different cache levels). It
482 is found using CPUID, but is also provided in the "info" directory of
483 the resctrl file system in "info/{resource}/cbm_mask". Some Intel hardware
484 requires that these masks have all the '1' bits in a contiguous block. So
485 0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
and 0xA are not. Check /sys/fs/resctrl/info/{resource}/sparse_masks
to see whether non-contiguous 1s are supported. On a system with a 20-bit mask
488 each bit represents 5% of the capacity of the cache. You could partition
489 the cache into four equal parts with masks: 0x1f, 0x3e0, 0x7c00, 0xf8000.
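
A sketch of that four-way split, using hypothetical group names. Each mask
covers five of the twenty bits, i.e. 25% of the cache on domain 0::

  # cd /sys/fs/resctrl
  # mkdir part0 part1 part2 part3
  # echo "L3:0=1f"    > part0/schemata
  # echo "L3:0=3e0"   > part1/schemata
  # echo "L3:0=7c00"  > part2/schemata
  # echo "L3:0=f8000" > part3/schemata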
490 
491 Notes on Sub-NUMA Cluster mode
492 ==============================
493 When SNC mode is enabled, Linux may load balance tasks between Sub-NUMA
494 nodes much more readily than between regular NUMA nodes since the CPUs
495 on Sub-NUMA nodes share the same L3 cache and the system may report
496 the NUMA distance between Sub-NUMA nodes with a lower value than used
497 for regular NUMA nodes.
498 
499 The top-level monitoring files in each "mon_L3_XX" directory provide
500 the sum of data across all SNC nodes sharing an L3 cache instance.
501 Users who bind tasks to the CPUs of a specific Sub-NUMA node can read
502 the "llc_occupancy", "mbm_total_bytes", and "mbm_local_bytes" in the
503 "mon_sub_L3_YY" directories to get node local data.
504 
505 Memory bandwidth allocation is still performed at the L3 cache
506 level. I.e. throttling controls are applied to all SNC nodes.
507 
508 L3 cache allocation bitmaps also apply to all SNC nodes. But note that
509 the amount of L3 cache represented by each bit is divided by the number
510 of SNC nodes per L3 cache. E.g. with a 100MB cache on a system with 10-bit
511 allocation masks each bit normally represents 10MB. With SNC mode enabled
512 with two SNC nodes per L3 cache, each bit only represents 5MB.
513 
514 Memory bandwidth Allocation and monitoring
515 ==========================================
516 
517 For Memory bandwidth resource, by default the user controls the resource
518 by indicating the percentage of total memory bandwidth.
519 
520 The minimum bandwidth percentage value for each cpu model is predefined
521 and can be looked up through "info/MB/min_bandwidth". The bandwidth
522 granularity that is allocated is also dependent on the cpu model and can
523 be looked up at "info/MB/bandwidth_gran". The available bandwidth
524 control steps are: min_bw + N * bw_gran. Intermediate values are rounded
525 to the next control step available on the hardware.
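
For example, if the info files read as below (illustrative values), the
valid control steps are 10, 20, ..., 100, and a request such as "MB:0=35"
is rounded to a supported step::

  # cat /sys/fs/resctrl/info/MB/min_bandwidth
  10
  # cat /sys/fs/resctrl/info/MB/bandwidth_gran
  10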
526 
Bandwidth throttling is a core specific mechanism on some Intel
SKUs. Using a high bandwidth and a low bandwidth setting on two threads
sharing a core may result in both threads being throttled to use the
low bandwidth (see "thread_throttle_mode").
531 
The fact that Memory bandwidth allocation (MBA) may be a core
specific mechanism whereas memory bandwidth monitoring (MBM) is done at
the package level may lead to confusion when users try to apply control
via MBA and then monitor the bandwidth to see if the controls are
effective. Below are such scenarios:
537 
1. User may *not* see an increase in actual bandwidth when percentage
   values are increased:

This can occur when aggregate L2 external bandwidth is more than L3
external bandwidth. Consider an SKL SKU with 24 cores on a package and
where L2 external bandwidth is 10GBps (hence aggregate L2 external
bandwidth is 240GBps) and L3 external bandwidth is 100GBps. Now a workload
with '20 threads, having 50% bandwidth, each consuming 5GBps' consumes the
max L3 bandwidth of 100GBps although the percentage value specified is only
50% << 100%. Hence increasing the bandwidth percentage will not yield any
more bandwidth. This is because although the L2 external bandwidth still
has capacity, the L3 external bandwidth is fully used. Also note that
this depends on the number of cores the benchmark is run on.
551 
2. The same bandwidth percentage may mean different actual bandwidth
   depending on the number of threads:

For the same SKU as in #1, a 'single thread, with 10% bandwidth' and a '4
thread, with 10% bandwidth' can consume up to 10GBps and 40GBps although
they have the same percentage bandwidth of 10%. This is simply because as
threads start using more cores in an rdtgroup, the actual bandwidth may
increase or vary although the user specified bandwidth percentage is the same.
560 
In order to mitigate this and make the interface more user friendly,
resctrl added support for specifying the bandwidth in MiBps as well. The
kernel underneath uses a software feedback mechanism, the "Software
Controller (mba_sc)", which reads the actual bandwidth using MBM counters
and adjusts the memory bandwidth percentages to ensure::

        "actual bandwidth < user specified bandwidth".

By default, the schemata takes bandwidth percentage values,
whereas the user can switch to the "MBA software controller" mode using
the mount option 'mba_MBps'. The schemata format is specified in the
sections below.
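
For example, to switch an already mounted resctrl file system over to the
MBA software controller mode::

  # umount /sys/fs/resctrl
  # mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl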
573 
574 L3 schemata file details (code and data prioritization disabled)
575 ----------------------------------------------------------------
576 With CDP disabled the L3 schemata format is::
577 
578         L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
579 
580 L3 schemata file details (CDP enabled via mount option to resctrl)
581 ------------------------------------------------------------------
582 When CDP is enabled L3 control is split into two separate resources
583 so you can specify independent masks for code and data like this::
584 
585         L3DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
586         L3CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
587 
588 L2 schemata file details
589 ------------------------
590 CDP is supported at L2 using the 'cdpl2' mount option. The schemata
591 format is either::
592 
593         L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
594 
595 or
596 
597         L2DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
598         L2CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
599 
600 
601 Memory bandwidth Allocation (default mode)
602 ------------------------------------------
603 
604 Memory b/w domain is L3 cache.
605 ::
606 
607         MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
608 
609 Memory bandwidth Allocation specified in MiBps
610 ----------------------------------------------
611 
612 Memory bandwidth domain is L3 cache.
613 ::
614 
615         MB:<cache_id0>=bw_MiBps0;<cache_id1>=bw_MiBps1;...
616 
617 Slow Memory Bandwidth Allocation (SMBA)
618 ---------------------------------------
619 AMD hardware supports Slow Memory Bandwidth Allocation (SMBA).
620 CXL.memory is the only supported "slow" memory device. With the
621 support of SMBA, the hardware enables bandwidth allocation on
622 the slow memory devices. If there are multiple such devices in
623 the system, the throttling logic groups all the slow sources
624 together and applies the limit on them as a whole.
625 
The presence of SMBA (with CXL.memory) is independent of the presence of
slow memory devices. If there are no such devices on the system, then
configuring SMBA will have no impact on the performance of the system.
629 
630 The bandwidth domain for slow memory is L3 cache. Its schemata file
631 is formatted as:
632 ::
633 
634         SMBA:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
635 
636 Reading/writing the schemata file
637 ---------------------------------
638 Reading the schemata file will show the state of all resources
639 on all domains. When writing you only need to specify those values
640 which you wish to change.  E.g.
641 ::
642 
643   # cat schemata
644   L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
645   L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
646   # echo "L3DATA:2=3c0;" > schemata
647   # cat schemata
648   L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
649   L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
650 
651 Reading/writing the schemata file (on AMD systems)
652 --------------------------------------------------
Reading the schemata file will show the current bandwidth limit on all
domains. The allocated resources are in multiples of one eighth GB/s.
When writing to the file, you need to specify the cache id for which you
wish to configure the bandwidth limit.

For example, to allocate a 2GB/s limit (a value of 16, i.e. 16/8 GB/s) on
cache id 1:
659 
660 ::
661 
662   # cat schemata
663     MB:0=2048;1=2048;2=2048;3=2048
664     L3:0=ffff;1=ffff;2=ffff;3=ffff
665 
666   # echo "MB:1=16" > schemata
667   # cat schemata
668     MB:0=2048;1=  16;2=2048;3=2048
669     L3:0=ffff;1=ffff;2=ffff;3=ffff
670 
671 Reading/writing the schemata file (on AMD systems) with SMBA feature
672 --------------------------------------------------------------------
673 Reading and writing the schemata file is the same as without SMBA in
674 above section.
675 
For example, to allocate an 8GB/s limit (a value of 64) on cache id 1:
677 
678 ::
679 
680   # cat schemata
681     SMBA:0=2048;1=2048;2=2048;3=2048
682       MB:0=2048;1=2048;2=2048;3=2048
683       L3:0=ffff;1=ffff;2=ffff;3=ffff
684 
685   # echo "SMBA:1=64" > schemata
686   # cat schemata
687     SMBA:0=2048;1=  64;2=2048;3=2048
688       MB:0=2048;1=2048;2=2048;3=2048
689       L3:0=ffff;1=ffff;2=ffff;3=ffff
690 
691 Cache Pseudo-Locking
692 ====================
693 CAT enables a user to specify the amount of cache space that an
694 application can fill. Cache pseudo-locking builds on the fact that a
695 CPU can still read and write data pre-allocated outside its current
696 allocated area on a cache hit. With cache pseudo-locking, data can be
697 preloaded into a reserved portion of cache that no application can
698 fill, and from that point on will only serve cache hits. The cache
699 pseudo-locked memory is made accessible to user space where an
700 application can map it into its virtual address space and thus have
701 a region of memory with reduced average read latency.
702 
The creation of a cache pseudo-locked region is triggered by a user request
accompanied by a schemata of the region to be pseudo-locked. The cache
pseudo-locked region is created as follows:
706 
707 - Create a CAT allocation CLOSNEW with a CBM matching the schemata
708   from the user of the cache region that will contain the pseudo-locked
709   memory. This region must not overlap with any current CAT allocation/CLOS
710   on the system and no future overlap with this cache region is allowed
711   while the pseudo-locked region exists.
712 - Create a contiguous region of memory of the same size as the cache
713   region.
714 - Flush the cache, disable hardware prefetchers, disable preemption.
715 - Make CLOSNEW the active CLOS and touch the allocated memory to load
716   it into the cache.
717 - Set the previous CLOS as active.
718 - At this point the closid CLOSNEW can be released - the cache
719   pseudo-locked region is protected as long as its CBM does not appear in
720   any CAT allocation. Even though the cache pseudo-locked region will from
721   this point on not appear in any CBM of any CLOS an application running with
722   any CLOS will be able to access the memory in the pseudo-locked region since
723   the region continues to serve cache hits.
724 - The contiguous region of memory loaded into the cache is exposed to
725   user-space as a character device.
726 
727 Cache pseudo-locking increases the probability that data will remain
728 in the cache via carefully configuring the CAT feature and controlling
729 application behavior. There is no guarantee that data is placed in
730 cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
731 “locked” data from cache. Power management C-states may shrink or
732 power off cache. Deeper C-states will automatically be restricted on
733 pseudo-locked region creation.
734 
735 It is required that an application using a pseudo-locked region runs
736 with affinity to the cores (or a subset of the cores) associated
737 with the cache on which the pseudo-locked region resides. A sanity check
738 within the code will not allow an application to map pseudo-locked memory
739 unless it runs with affinity to cores associated with the cache on which the
pseudo-locked region resides. The sanity check is only done during the
initial mmap() handling; there is no enforcement afterwards and the
application itself needs to ensure it remains affine to the correct cores.
743 
744 Pseudo-locking is accomplished in two stages:
745 
746 1) During the first stage the system administrator allocates a portion
747    of cache that should be dedicated to pseudo-locking. At this time an
748    equivalent portion of memory is allocated, loaded into allocated
749    cache portion, and exposed as a character device.
750 2) During the second stage a user-space application maps (mmap()) the
751    pseudo-locked memory into its address space.
752 
753 Cache Pseudo-Locking Interface
754 ------------------------------
755 A pseudo-locked region is created using the resctrl interface as follows:
756 
757 1) Create a new resource group by creating a new directory in /sys/fs/resctrl.
758 2) Change the new resource group's mode to "pseudo-locksetup" by writing
759    "pseudo-locksetup" to the "mode" file.
760 3) Write the schemata of the pseudo-locked region to the "schemata" file. All
761    bits within the schemata should be "unused" according to the "bit_usage"
762    file.
763 
764 On successful pseudo-locked region creation the "mode" file will contain
765 "pseudo-locked" and a new character device with the same name as the resource
766 group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
767 by user space in order to obtain access to the pseudo-locked memory region.
768 
769 An example of cache pseudo-locked region creation and usage can be found below.
770 
771 Cache Pseudo-Locking Debugging Interface
772 ----------------------------------------
773 The pseudo-locking debugging interface is enabled by default (if
774 CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.
775 
776 There is no explicit way for the kernel to test if a provided memory
777 location is present in the cache. The pseudo-locking debugging interface uses
778 the tracing infrastructure to provide two ways to measure cache residency of
779 the pseudo-locked region:
780 
781 1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
782    from these measurements are best visualized using a hist trigger (see
783    example below). In this test the pseudo-locked region is traversed at
784    a stride of 32 bytes while hardware prefetchers and preemption
785    are disabled. This also provides a substitute visualization of cache
786    hits and misses.
787 2) Cache hit and miss measurements using model specific precision counters if
788    available. Depending on the levels of cache on the system the pseudo_lock_l2
789    and pseudo_lock_l3 tracepoints are available.
790 
791 When a pseudo-locked region is created a new debugfs directory is created for
792 it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
793 write-only file, pseudo_lock_measure, is present in this directory. The
794 measurement of the pseudo-locked region depends on the number written to this
795 debugfs file:
796 
797 1:
798      writing "1" to the pseudo_lock_measure file will trigger the latency
799      measurement captured in the pseudo_lock_mem_latency tracepoint. See
800      example below.
801 2:
802      writing "2" to the pseudo_lock_measure file will trigger the L2 cache
803      residency (cache hits and misses) measurement captured in the
804      pseudo_lock_l2 tracepoint. See example below.
805 3:
806      writing "3" to the pseudo_lock_measure file will trigger the L3 cache
807      residency (cache hits and misses) measurement captured in the
808      pseudo_lock_l3 tracepoint.
809 
810 All measurements are recorded with the tracing infrastructure. This requires
811 the relevant tracepoints to be enabled before the measurement is triggered.
812 
813 Example of latency debugging interface
814 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
815 In this example a pseudo-locked region named "newlock" was created. Here is
816 how we can measure the latency in cycles of reading from this region and
817 visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
818 is set::
819 
820   # :> /sys/kernel/tracing/trace
821   # echo 'hist:keys=latency' > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
822   # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
823   # echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
824   # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
825   # cat /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/hist
826 
827   # event histogram
828   #
829   # trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
830   #
831 
832   { latency:        456 } hitcount:          1
833   { latency:         50 } hitcount:         83
834   { latency:         36 } hitcount:         96
835   { latency:         44 } hitcount:        174
836   { latency:         48 } hitcount:        195
837   { latency:         46 } hitcount:        262
838   { latency:         42 } hitcount:        693
839   { latency:         40 } hitcount:       3204
840   { latency:         38 } hitcount:       3484
841 
842   Totals:
843       Hits: 8192
844       Entries: 9
845     Dropped: 0
846 
847 Example of cache hits/misses debugging
848 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
849 In this example a pseudo-locked region named "newlock" was created on the L2
850 cache of a platform. Here is how we can obtain details of the cache hits
851 and misses using the platform's precision counters.
852 ::
853 
854   # :> /sys/kernel/tracing/trace
855   # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
856   # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
857   # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
858   # cat /sys/kernel/tracing/trace
859 
860   # tracer: nop
861   #
862   #                              _-----=> irqs-off
863   #                             / _----=> need-resched
864   #                            | / _---=> hardirq/softirq
865   #                            || / _--=> preempt-depth
866   #                            ||| /     delay
867   #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
868   #              | |       |   ||||       |         |
869   pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0
870 
871 
872 Examples for RDT allocation usage
873 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
874 
875 1) Example 1
876 
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks, a minimum b/w of 10%, and a memory bandwidth
granularity of 10%.
880 ::
881 
882   # mount -t resctrl resctrl /sys/fs/resctrl
883   # cd /sys/fs/resctrl
884   # mkdir p0 p1
  # echo -e "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata
  # echo -e "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata
887 
888 The default resource group is unmodified, so we have access to all parts
889 of all caches (its schemata file reads "L3:0=f;1=f").
890 
891 Tasks that are under the control of group "p0" may only allocate from the
892 "lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
893 Tasks in group "p1" use the "lower" 50% of cache on both sockets.
894 
895 Similarly, tasks that are under the control of group "p0" may use a
896 maximum memory b/w of 50% on socket0 and 50% on socket 1.
897 Tasks in group "p1" may also use 50% memory b/w on both sockets.
Note that unlike cache masks, memory b/w cannot specify whether these
allocations can overlap or not. The allocations specify the maximum
b/w that the group may be able to use and the system admin can configure
the b/w accordingly.
902 
If resctrl is using the software controller (mba_sc) then the user can
enter the max b/w in MiBps rather than the percentage values.
::

  # echo -e "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
  # echo -e "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata

In the above example the tasks in "p1" and "p0" on socket 0 would use a max
b/w of 1024MiBps whereas on socket 1 they would use 500MiBps.
912 
913 2) Example 2
914 
915 Again two sockets, but this time with a more realistic 20-bit mask.
916 
Two real time tasks, pid=1234 running on processor 0 and pid=5678 running
on processor 1, on socket 0 of a 2-socket dual-core machine. To avoid noisy
neighbors, each of the two real-time tasks exclusively occupies one quarter
of the L3 cache on socket 0.
921 ::
922 
923   # mount -t resctrl resctrl /sys/fs/resctrl
924   # cd /sys/fs/resctrl
925 
926 First we reset the schemata for the default group so that the "upper"
927 50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
928 ordinary tasks::
929 
  # echo -e "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata
931 
932 Next we make a resource group for our first real time task and give
933 it access to the "top" 25% of the cache on socket 0.
934 ::
935 
936   # mkdir p0
937   # echo "L3:0=f8000;1=fffff" > p0/schemata
938 
939 Finally we move our first real time task into this resource group. We
940 also use taskset(1) to ensure the task always runs on a dedicated CPU
941 on socket 0. Most uses of resource groups will also constrain which
942 processors tasks run on.
943 ::
944 
945   # echo 1234 > p0/tasks
946   # taskset -cp 1 1234
947 
948 Ditto for the second real time task (with the remaining 25% of cache)::
949 
950   # mkdir p1
951   # echo "L3:0=7c00;1=fffff" > p1/schemata
952   # echo 5678 > p1/tasks
953   # taskset -cp 2 5678
954 
For the same 2 socket system with a memory b/w resource and CAT L3 the
schemata would look like this (assume min_bandwidth is 10 and
bandwidth_gran is 10):
958 
959 For our first real time task this would request 20% memory b/w on socket 0.
960 ::
961 
962   # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata
963 
For our second real time task this would request another 20% memory b/w
on socket 0.
966 ::
967 
  # echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata
969 
970 3) Example 3
971 
A single socket system which has real-time tasks running on cores 4-7 and
a non real-time workload assigned to cores 0-3. The real-time tasks share
text and data, so a per task association is not required and due to
interaction with the kernel it's desired that the kernel on these cores
shares L3 with the tasks.
977 ::
978 
979   # mount -t resctrl resctrl /sys/fs/resctrl
980   # cd /sys/fs/resctrl
981 
982 First we reset the schemata for the default group so that the "upper"
983 50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
984 cannot be used by ordinary tasks::
985 
  # echo -e "L3:0=3ff\nMB:0=50" > schemata
987 
988 Next we make a resource group for our real time cores and give it access
989 to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
990 socket 0.
991 ::
992 
993   # mkdir p0
  # echo -e "L3:0=ffc00\nMB:0=50" > p0/schemata
995 
996 Finally we move core 4-7 over to the new group and make sure that the
997 kernel and the tasks running there get 50% of the cache. They should
998 also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
999 siblings and only the real time threads are scheduled on the cores 4-7.
1000 ::
1001 
1002   # echo F0 > p0/cpus
1003 
1004 4) Example 4
1005 
The resource groups in previous examples were all in the default "shareable"
mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
from overlapping with that allocation.
1010 
1011 In this example a new exclusive resource group will be created on a L2 CAT
1012 system with two L2 cache instances that can be configured with an 8-bit
1013 capacity bitmask. The new exclusive resource group will be configured to use
1014 25% of each cache instance.
1015 ::
1016 
1017   # mount -t resctrl resctrl /sys/fs/resctrl/
1018   # cd /sys/fs/resctrl
1019 
1020 First, we observe that the default group is configured to allocate to all L2
1021 cache::
1022 
1023   # cat schemata
1024   L2:0=ff;1=ff
1025 
1026 We could attempt to create the new resource group at this point, but it will
1027 fail because of the overlap with the schemata of the default group::
1028 
1029   # mkdir p0
1030   # echo 'L2:0=0x3;1=0x3' > p0/schemata
1031   # cat p0/mode
1032   shareable
1033   # echo exclusive > p0/mode
1034   -sh: echo: write error: Invalid argument
1035   # cat info/last_cmd_status
1036   schemata overlaps
1037 
1038 To ensure that there is no overlap with another resource group the default
1039 resource group's schemata has to change, making it possible for the new
1040 resource group to become exclusive.
1041 ::
1042 
1043   # echo 'L2:0=0xfc;1=0xfc' > schemata
1044   # echo exclusive > p0/mode
1045   # grep . p0/*
1046   p0/cpus:0
1047   p0/mode:exclusive
1048   p0/schemata:L2:0=03;1=03
1049   p0/size:L2:0=262144;1=262144
1050 
A newly created resource group will not overlap with an exclusive resource
group::
1053 
1054   # mkdir p1
1055   # grep . p1/*
1056   p1/cpus:0
1057   p1/mode:shareable
1058   p1/schemata:L2:0=fc;1=fc
1059   p1/size:L2:0=786432;1=786432
1060 
1061 The bit_usage will reflect how the cache is used::
1062 
1063   # cat info/L2/bit_usage
1064   0=SSSSSSEE;1=SSSSSSEE
1065 
1066 A resource group cannot be forced to overlap with an exclusive resource group::
1067 
1068   # echo 'L2:0=0x1;1=0x1' > p1/schemata
1069   -sh: echo: write error: Invalid argument
1070   # cat info/last_cmd_status
1071   overlaps with exclusive group
1072 
1073 Example of Cache Pseudo-Locking
1074 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lock a portion of the L2 cache from cache id 1 using CBM 0x3. The
pseudo-locked region is exposed at /dev/pseudo_lock/newlock and can be
provided to an application as an argument to mmap().
1078 ::
1079 
1080   # mount -t resctrl resctrl /sys/fs/resctrl/
1081   # cd /sys/fs/resctrl
1082 
Ensure that there are bits available that can be pseudo-locked. Since only
unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
removed from the default resource group's schemata::
1086 
1087   # cat info/L2/bit_usage
1088   0=SSSSSSSS;1=SSSSSSSS
1089   # echo 'L2:1=0xfc' > schemata
1090   # cat info/L2/bit_usage
1091   0=SSSSSSSS;1=SSSSSS00
1092 
1093 Create a new resource group that will be associated with the pseudo-locked
1094 region, indicate that it will be used for a pseudo-locked region, and
1095 configure the requested pseudo-locked region capacity bitmask::
1096 
1097   # mkdir newlock
1098   # echo pseudo-locksetup > newlock/mode
1099   # echo 'L2:1=0x3' > newlock/schemata
1100 
1101 On success the resource group's mode will change to pseudo-locked, the
1102 bit_usage will reflect the pseudo-locked region, and the character device
1103 exposing the pseudo-locked region will exist::
1104 
1105   # cat newlock/mode
1106   pseudo-locked
1107   # cat info/L2/bit_usage
1108   0=SSSSSSSS;1=SSSSSSPP
1109   # ls -l /dev/pseudo_lock/newlock
1110   crw------- 1 root root 243, 0 Apr  3 05:01 /dev/pseudo_lock/newlock
1111 
1112 ::
1113 
  /*
   * Example code to access one page of pseudo-locked cache region
   * from user space.
   */
1118   #define _GNU_SOURCE
1119   #include <fcntl.h>
1120   #include <sched.h>
1121   #include <stdio.h>
1122   #include <stdlib.h>
1123   #include <unistd.h>
1124   #include <sys/mman.h>
1125 
  /*
   * It is required that the application runs with affinity to only
   * cores associated with the pseudo-locked region. Here the cpu
   * is hardcoded for convenience of example.
   */
1131   static int cpuid = 2;
1132 
1133   int main(int argc, char *argv[])
1134   {
1135     cpu_set_t cpuset;
1136     long page_size;
1137     void *mapping;
1138     int dev_fd;
1139     int ret;
1140 
1141     page_size = sysconf(_SC_PAGESIZE);
1142 
1143     CPU_ZERO(&cpuset);
1144     CPU_SET(cpuid, &cpuset);
1145     ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
1146     if (ret < 0) {
1147       perror("sched_setaffinity");
1148       exit(EXIT_FAILURE);
1149     }
1150 
1151     dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
1152     if (dev_fd < 0) {
1153       perror("open");
1154       exit(EXIT_FAILURE);
1155     }
1156 
1157     mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
1158             dev_fd, 0);
1159     if (mapping == MAP_FAILED) {
1160       perror("mmap");
1161       close(dev_fd);
1162       exit(EXIT_FAILURE);
1163     }
1164 
1165     /* Application interacts with pseudo-locked memory @mapping */
1166 
1167     ret = munmap(mapping, page_size);
1168     if (ret < 0) {
1169       perror("munmap");
1170       close(dev_fd);
1171       exit(EXIT_FAILURE);
1172     }
1173 
1174     close(dev_fd);
1175     exit(EXIT_SUCCESS);
1176   }
1177 
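The example above is an ordinary C program; it can be built and run as
follows (the source file name is arbitrary)::

  # gcc -o pseudo_lock_example pseudo_lock_example.c
  # ./pseudo_lock_example
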
1178 Locking between applications
1179 ----------------------------
1180 
1181 Certain operations on the resctrl filesystem, composed of read/writes
1182 to/from multiple files, must be atomic.
1183 
1184 As an example, the allocation of an exclusive reservation of L3 cache
1185 involves:
1186 
1187   1. Read the cbmmasks from each directory or the per-resource "bit_usage"
1188   2. Find a contiguous set of bits in the global CBM bitmask that is clear
1189      in any of the directory cbmmasks
1190   3. Create a new directory
1191   4. Set the bits found in step 2 to the new directory "schemata" file
1192 
1193 If two applications attempt to allocate space concurrently then they can
1194 end up allocating the same bits so the reservations are shared instead of
1195 exclusive.
1196 
1197 To coordinate atomic operations on the resctrlfs and to avoid the problem
1198 above, the following locking procedure is recommended:
1199 
1200 Locking is based on flock, which is available in libc and also as a shell
1201 script command
1202 
Write lock:

 A) Take flock(LOCK_EX) on /sys/fs/resctrl
 B) Read/write the directory structure.
 C) Release the lock with flock(LOCK_UN)

Read lock:

 A) Take flock(LOCK_SH) on /sys/fs/resctrl
 B) On success, read the directory structure.
 C) Release the lock with flock(LOCK_UN)
1214 
1215 Example with bash::
1216 
1217   # Atomically read directory structure
1218   $ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl
1219 
1220   # Read directory contents and create new subdirectory
1221 
  $ cat create-dir.sh
  find /sys/fs/resctrl/ > output.txt
  mask=$(function-of output.txt)
  mkdir /sys/fs/resctrl/newres/
  echo "$mask" > /sys/fs/resctrl/newres/schemata
1227 
1228   $ flock /sys/fs/resctrl/ ./create-dir.sh
1229 
1230 Example with C::
1231 
  /*
   * Example code to take advisory locks
   * before accessing the resctrl filesystem
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/file.h>

  void resctrl_take_shared_lock(int fd)
  {
    int ret;

    /* take shared lock on resctrl filesystem */
    ret = flock(fd, LOCK_SH);
    if (ret) {
      perror("flock");
      exit(-1);
    }
  }

  void resctrl_take_exclusive_lock(int fd)
  {
    int ret;

    /* take exclusive lock on resctrl filesystem */
    ret = flock(fd, LOCK_EX);
    if (ret) {
      perror("flock");
      exit(-1);
    }
  }

  void resctrl_release_lock(int fd)
  {
    int ret;

    /* release lock on resctrl filesystem */
    ret = flock(fd, LOCK_UN);
    if (ret) {
      perror("flock");
      exit(-1);
    }
  }

  int main(void)
  {
    int fd;

    fd = open("/sys/fs/resctrl", O_RDONLY | O_DIRECTORY);
    if (fd == -1) {
      perror("open");
      exit(-1);
    }
    resctrl_take_shared_lock(fd);
    /* code to read directory contents */
    resctrl_release_lock(fd);

    resctrl_take_exclusive_lock(fd);
    /* code to read and write directory contents */
    resctrl_release_lock(fd);

    close(fd);
    return 0;
  }
1292 
Examples for RDT Monitoring along with allocation usage
=======================================================

Reading monitored data
----------------------

Reading an event file (for example, mon_data/mon_L3_00/llc_occupancy)
shows the current snapshot of LLC occupancy of the corresponding MON
group or CTRL_MON group.

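The event files contain plain decimal byte counts, so they can be read
from a program as well as from the shell. A minimal sketch, assuming a
group "p1" and domain directory mon_L3_00 (an event file may also read
"Unavailable", which the sketch simply treats as a read failure)::

  /* Illustrative sketch: read one monitoring event value */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
    unsigned long long bytes;
    FILE *f;

    f = fopen("/sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy", "r");
    if (!f || fscanf(f, "%llu", &bytes) != 1) {
      /* missing file, or a non-numeric reading such as "Unavailable" */
      fprintf(stderr, "could not read llc_occupancy\n");
      exit(EXIT_FAILURE);
    }
    fclose(f);
    printf("LLC occupancy: %llu bytes\n", bytes);
    return 0;
  }
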
Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
------------------------------------------------------------------------

On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1
  # echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
  # echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
  # echo 5678 > p1/tasks
  # echo 5679 > p1/tasks

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% of cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Create monitor groups and assign a subset of tasks to each monitor
group::

  # cd /sys/fs/resctrl/p1/mon_groups
  # mkdir m11 m12
  # echo 5678 > m11/tasks
  # echo 5679 > m12/tasks

Fetch the data (shown in bytes)::

  # cat m11/mon_data/mon_L3_00/llc_occupancy
  16234000
  # cat m11/mon_data/mon_L3_01/llc_occupancy
  14789000
  # cat m12/mon_data/mon_L3_00/llc_occupancy
  16789000

The parent CTRL_MON group shows the aggregated data::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  31234000

Example 2 (Monitor a task from its creation)
--------------------------------------------

On a two socket machine (one L3 cache per socket)::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p0 p1

An RMID is allocated to the group once it is created and hence the <cmd>
below is monitored from its creation::

  # echo $$ > /sys/fs/resctrl/p1/tasks
  # <cmd>

Fetch the data::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  31789000

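The same idiom can be used from C. The sketch below (illustrative only;
the group "p1" matches the example above) writes its own PID to the
group's "tasks" file and then execs the command, so the new program runs
under the group's RMID from its first instruction::

  /* Illustrative sketch: run a command inside group "p1" */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
    FILE *f;

    if (argc < 2) {
      fprintf(stderr, "usage: %s <cmd> [args...]\n", argv[0]);
      exit(EXIT_FAILURE);
    }

    f = fopen("/sys/fs/resctrl/p1/tasks", "w");
    if (!f || fprintf(f, "%d\n", getpid()) < 0 || fclose(f) == EOF) {
      perror("tasks");
      exit(EXIT_FAILURE);
    }

    /* exec keeps the PID, so <cmd> stays in group "p1" */
    execvp(argv[1], &argv[1]);
    perror("execvp");
    exit(EXIT_FAILURE);
  }
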
Example 3 (Monitor without CAT support or before creating CAT groups)
---------------------------------------------------------------------

Assume a system like Haswell (HSW) that has only CQM and no CAT support.
In this case resctrl will still mount but cannot create CTRL_MON
directories. The user can, however, create different MON groups within
the root group and thereby monitor all tasks, including kernel threads.

This can also be used to profile jobs' cache size footprints before they
are assigned to different allocation groups::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir mon_groups/m01
  # mkdir mon_groups/m02

  # echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
  # echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks

Monitor the groups separately and also get per-domain data. From the
output below it is apparent that the tasks are mostly doing work on
domain (socket) 0::

  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_01/llc_occupancy
  34555
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_01/llc_occupancy
  32789


Example 4 (Monitor real-time tasks)
-----------------------------------

Consider a single socket system with real-time tasks running on cores 4-7
and non-real-time tasks on the other CPUs. We want to monitor the cache
occupancy of the real-time threads on these cores.
::

  # mount -t resctrl resctrl /sys/fs/resctrl
  # cd /sys/fs/resctrl
  # mkdir p1

Move CPUs 4-7 over to p1 ("f0" is a hexadecimal CPU mask, binary
11110000, i.e. CPUs 4-7)::

  # echo f0 > p1/cpus

View the llc occupancy snapshot::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
  11234000

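Such masks can also be constructed programmatically. A minimal sketch
(illustrative only; it covers CPUs 0-31, larger systems typically need
the comma-separated cpumask format)::

  /*
   * Illustrative sketch: build a CPU bitmask for CPUs first..last
   * and write it to a group's "cpus" file
   */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
    int cpu, first = 4, last = 7;   /* CPUs 4-7, as in the example */
    unsigned int mask = 0;
    FILE *f;

    for (cpu = first; cpu <= last; cpu++)
      mask |= 1u << cpu;            /* mask == 0xf0 */

    f = fopen("/sys/fs/resctrl/p1/cpus", "w");
    if (!f || fprintf(f, "%x\n", mask) < 0 || fclose(f) == EOF) {
      perror("cpus");
      exit(EXIT_FAILURE);
    }
    return 0;
  }
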
Intel RDT Errata
================

Intel MBM Counters May Report System Memory Bandwidth Incorrectly
-----------------------------------------------------------------

Errata SKX99 for Skylake server and BDF102 for Broadwell server.

Problem: Intel Memory Bandwidth Monitoring (MBM) counters track metrics
according to the assigned Resource Monitor ID (RMID) for that logical
core. The IA32_QM_CTR register (MSR 0xC8E), used to report these
metrics, may report incorrect system bandwidth for certain RMID values.

Implication: Due to the errata, system memory bandwidth may not match
what is reported.

Workaround: MBM total and local readings are corrected according to the
following correction factor table:

+---------------+---------------+---------------+-----------------+
|core count     |rmid count     |rmid threshold |correction factor|
+---------------+---------------+---------------+-----------------+
|1              |8              |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|2              |16             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|3              |24             |15             |0.969650         |
+---------------+---------------+---------------+-----------------+
|4              |32             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|6              |48             |31             |0.969650         |
+---------------+---------------+---------------+-----------------+
|7              |56             |47             |1.142857         |
+---------------+---------------+---------------+-----------------+
|8              |64             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|9              |72             |63             |1.185115         |
+---------------+---------------+---------------+-----------------+
|10             |80             |63             |1.066553         |
+---------------+---------------+---------------+-----------------+
|11             |88             |79             |1.454545         |
+---------------+---------------+---------------+-----------------+
|12             |96             |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|13             |104            |95             |1.230769         |
+---------------+---------------+---------------+-----------------+
|14             |112            |95             |1.142857         |
+---------------+---------------+---------------+-----------------+
|15             |120            |95             |1.066667         |
+---------------+---------------+---------------+-----------------+
|16             |128            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|17             |136            |127            |1.254863         |
+---------------+---------------+---------------+-----------------+
|18             |144            |127            |1.185255         |
+---------------+---------------+---------------+-----------------+
|19             |152            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|20             |160            |127            |1.066667         |
+---------------+---------------+---------------+-----------------+
|21             |168            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|22             |176            |159            |1.454334         |
+---------------+---------------+---------------+-----------------+
|23             |184            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|24             |192            |127            |0.969744         |
+---------------+---------------+---------------+-----------------+
|25             |200            |191            |1.280246         |
+---------------+---------------+---------------+-----------------+
|26             |208            |191            |1.230921         |
+---------------+---------------+---------------+-----------------+
|27             |216            |0              |1.000000         |
+---------------+---------------+---------------+-----------------+
|28             |224            |191            |1.143118         |
+---------------+---------------+---------------+-----------------+

If rmid > rmid threshold, MBM total and local values should be multiplied
by the correction factor.
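
resctrl applies this correction when it reports MBM values. Software
that reads the raw counters directly would have to do the same lookup; a
minimal sketch follows (illustrative only, with just a few table rows
transcribed and a hypothetical mbm_corrected() helper)::

  #include <stdio.h>

  struct mbm_fixup {
    unsigned int rmid_threshold;
    double factor;
  };

  /*
   * Indexed by core count; values transcribed from the table above.
   * Only the first rows are shown; fill in the rest from the table.
   * Gaps (factor 0.0) mean "no entry; leave the reading unchanged".
   */
  static const struct mbm_fixup fixup[] = {
    [1] = { 0,  1.000000 },
    [2] = { 0,  1.000000 },
    [3] = { 15, 0.969650 },
    [4] = { 0,  1.000000 },
    /* ... remaining core counts elided ... */
  };

  static double mbm_corrected(double raw, unsigned int cores,
                              unsigned int rmid)
  {
    if (cores < sizeof(fixup) / sizeof(fixup[0]) &&
        fixup[cores].factor != 0.0 &&
        rmid > fixup[cores].rmid_threshold)
      return raw * fixup[cores].factor;
    return raw;
  }

  int main(void)
  {
    /* A 3-core part, RMID 20: 20 > threshold 15, so correct */
    printf("%.0f\n", mbm_corrected(1000000.0, 3, 20));
    return 0;
  }
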

For further information, see:

1. Erratum SKX99 in Intel Xeon Processor Scalable Family Specification Update:
   http://web.archive.org/web/20200716124958/https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-spec-update.html

2. Erratum BDF102 in Intel Xeon E5-2600 v4 Processor Product Family Specification Update:
   http://web.archive.org/web/20191125200531/https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v4-spec-update.pdf

3. The errata in Intel Resource Director Technology (Intel RDT) on 2nd Generation Intel Xeon Scalable Processors Reference Manual:
   https://software.intel.com/content/www/us/en/develop/articles/intel-resource-director-technology-rdt-reference-manual.html
