=======================
NUMA Memory Performance
=======================

NUMA Locality
=============

Some platforms may have multiple types of memory attached to a compute
node. These disparate memory ranges may share some characteristics, such
as CPU cache coherence, but may have different performance. For example,
different media types and buses affect bandwidth and latency.

A system supports such heterogeneous memory by grouping each memory type
under different domains, or "nodes", based on locality and performance
characteristics. Some memory may share the same node as a CPU, and others
are provided as memory only nodes. While memory only nodes do not provide
CPUs, they may still be local to one or more compute nodes relative to
other nodes. The following diagram shows one such example of two compute
nodes with local memory and a memory only node for each compute node::

 +------------------+     +------------------+
 | Compute Node 0   +-----+ Compute Node 1   |
 | Local Node0 Mem  |     | Local Node1 Mem  |
 +--------+---------+     +--------+---------+
          |                        |
 +--------+---------+     +--------+---------+
 | Slower Node2 Mem |     | Slower Node3 Mem |
 +------------------+     +------------------+

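One way to tell the two kinds of node apart is each node's "cpulist"
file, which is empty for a memory only node. For example, with the
topology above (the CPU range shown is illustrative)::

        # cat /sys/devices/system/node/node2/cpulist

        # cat /sys/devices/system/node/node0/cpulist
        0-15
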
A "memory initiator" is a node containing one or more devices such as
CPUs or separate memory I/O devices that can initiate memory requests.
A "memory target" is a node containing one or more physical address
ranges accessible from one or more memory initiators.

When multiple memory initiators exist, they may not all have the same
performance when accessing a given memory target. Each initiator-target
pair may be organized into different ranked access classes to represent
this relationship. The highest performing initiator to a given target
is considered to be one of that target's local initiators, and given
the highest access class, 0. Any given target may have one or more
local initiators, and any given initiator may have multiple local
memory targets.

To aid applications in matching memory targets with their initiators,
the kernel provides symlinks between the two. The following example
lists the relationship for the access class "0" memory initiators and
targets::

        # symlinks -v /sys/devices/system/node/nodeX/access0/targets/
        relative: /sys/devices/system/node/nodeX/access0/targets/nodeY -> ../../nodeY

        # symlinks -v /sys/devices/system/node/nodeY/access0/initiators/
        relative: /sys/devices/system/node/nodeY/access0/initiators/nodeX -> ../../nodeX

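Globbing the targets directories enumerates every class "0" relationship
at once. With the illustrative topology above, each compute node would be
the local initiator for both its own memory and its slower memory only
node::

        # ls -d /sys/devices/system/node/node*/access0/targets/node*
        /sys/devices/system/node/node0/access0/targets/node0
        /sys/devices/system/node/node0/access0/targets/node2
        /sys/devices/system/node/node1/access0/targets/node1
        /sys/devices/system/node/node1/access0/targets/node3
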
A memory initiator may have multiple memory targets in the same access
class. All of a target's initiators in a given class share the same
performance when accessing that target, relative to other linked
initiator nodes. The targets within an initiator's access class, though,
do not necessarily perform the same as each other.

The access class "1" is used to allow differentiation between initiators
that are CPUs and hence suitable for generic task scheduling, and
IO initiators such as GPUs and NICs. Unlike access class 0, only
nodes containing CPUs are considered.
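
For example, on a hypothetical system where target node2's fastest
initiators are a CPU node (node0) and a GPU node (node1), the class "1"
view would omit the GPU (node numbers are illustrative)::

        # ls /sys/devices/system/node/node2/access0/initiators/
        node0  node1
        # ls /sys/devices/system/node/node2/access1/initiators/
        node0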

NUMA Performance
================

Applications may wish to consider which node they want their memory to
be allocated from based on the node's performance characteristics. If
the system provides these attributes, the kernel exports them under the
node sysfs hierarchy by appending the attributes directory under the
memory node's access class 0 initiators as follows::

        /sys/devices/system/node/nodeY/access0/initiators/

These attributes apply only when the memory is accessed from nodes that
are linked under this access class's initiators.

The performance characteristics the kernel provides for the local
initiators are exported as follows::

        # tree -P "read*|write*" /sys/devices/system/node/nodeY/access0/initiators/
        /sys/devices/system/node/nodeY/access0/initiators/
        |-- read_bandwidth
        |-- read_latency
        |-- write_bandwidth
        `-- write_latency

The bandwidth attributes are provided in MiB/second.

The latency attributes are provided in nanoseconds.

The values reported here correspond to the rated latency and bandwidth
for the platform.

Access class 1 takes the same form but only includes values for CPU to
memory activity.
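
The attributes are plain text files each holding a single value; a
sketch of reading them all at once, with illustrative values for one
target, might look like::

        # cd /sys/devices/system/node/nodeY/access0/initiators
        # grep . read_bandwidth read_latency write_bandwidth write_latency
        read_bandwidth:107374
        read_latency:100
        write_bandwidth:107374
        write_latency:100

Such a node would be rated at roughly 105 GiB/second of bandwidth with
100 nanoseconds of latency in each direction.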

NUMA Cache
==========

System memory may be constructed in a hierarchy of elements with various
performance characteristics in order to provide a large address space of
slower performing memory cached by smaller, higher performing memory. The
system physical addresses that memory initiators are aware of are provided
by the last memory level in the hierarchy. The system meanwhile uses
higher performing memory to transparently cache access to progressively
slower levels.

The term "far memory" is used to denote the last level memory in the
hierarchy. Each increasing cache level provides higher performing
initiator access, and the term "near memory" represents the fastest
cache provided by the system.

This numbering is different from CPU caches, where the cache level (ex:
L1, L2, L3) uses the CPU-side view in which each increased level is
lower performing. In contrast, the memory cache level is centric to the
last level memory, so the higher numbered cache level corresponds to
memory nearer to the CPU, and further from far memory.

The memory-side caches are not directly addressable by software. When
software accesses a system address, the system will return it from the
near memory cache if it is present. If it is not present, the system
accesses the next level of memory until there is either a hit in that
cache level, or it reaches far memory.

An application does not need to know about caching attributes in order
to use the system. Software may optionally query the memory cache
attributes in order to maximize the performance out of such a setup.
If the system provides a way for the kernel to discover this information,
for example with ACPI HMAT (Heterogeneous Memory Attribute Table),
the kernel will append these attributes to the NUMA node memory target.

When the kernel first registers a memory cache with a node, the kernel
will create the following directory::

        /sys/devices/system/node/nodeX/memory_side_cache/

If that directory is not present, the system either does not provide
a memory-side cache, or that information is not accessible to the kernel.
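
A script can therefore probe for the feature by testing for the
directory; a minimal sketch, using node0 as an example::

        # [ -d /sys/devices/system/node/node0/memory_side_cache ] && \
                echo "node0 reports a memory-side cache"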

The attributes for each level of cache are provided under its cache
level index::

        /sys/devices/system/node/nodeX/memory_side_cache/indexA/
        /sys/devices/system/node/nodeX/memory_side_cache/indexB/
        /sys/devices/system/node/nodeX/memory_side_cache/indexC/

Each cache level's directory provides its attributes. For example, the
following shows a single cache level and the attributes available for
software to query::

        # tree /sys/devices/system/node/node0/memory_side_cache/
        /sys/devices/system/node/node0/memory_side_cache/
        |-- index1
        |   |-- indexing
        |   |-- line_size
        |   |-- size
        |   `-- write_policy

The "indexing" will be 0 if it is a direct-mapped cache, and non-zero
for any other indexing, such as a multi-way set-associative cache.

The "line_size" is the number of bytes accessed from the next cache
level on a miss.

The "size" is the number of bytes provided by this cache level.

The "write_policy" will be 0 for write-back, and non-zero for
write-through caching.
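
Putting these attributes together, a dump of one cache level might look
like the following, with illustrative values::

        # cd /sys/devices/system/node/node0/memory_side_cache/index1
        # grep . indexing line_size size write_policy
        indexing:0
        line_size:64
        size:137438953472
        write_policy:0

Those values would describe a direct-mapped, write-back cache providing
128 GiB (137438953472 bytes) in 64 byte lines.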

See Also
========

[1] https://www.uefi.org/sites/default/files/resources/ACPI_6_2.pdf
- Section 5.2.27
