=========================================================
NVIDIA Tegra SoC Uncore Performance Monitoring Unit (PMU)
=========================================================

The NVIDIA Tegra SoC includes various system PMUs to measure key performance
metrics like memory bandwidth, latency, and utilization:

* Scalable Coherency Fabric (SCF)
* NVLink-C2C0
* NVLink-C2C1
* CNVLink
* PCIE

PMU Driver
----------

The PMUs in this document are based on the ARM CoreSight PMU Architecture, as
described in ARM document IHI 0091. Since this is a standard architecture, the
PMUs are managed by a common driver, "arm-cs-arch-pmu". This driver describes
the available events and configuration of each PMU in sysfs. Please see the
sections below to get the sysfs path of each PMU. Like other uncore PMU
drivers, the driver provides a "cpumask" sysfs attribute to show the CPU id
used to handle the PMU event. There is also an "associated_cpus" sysfs
attribute, which contains a list of CPUs associated with the PMU instance.
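
For example, the sysfs attributes of the socket 0 SCF PMU (see
:ref:`SCF_PMU_Section`) can be inspected from the shell. This is a minimal
sketch; the set of devices present depends on the chip::

   # List the NVIDIA uncore PMU devices registered on this system.
   ls -d /sys/bus/event_source/devices/nvidia_*

   # CPU id that handles events for the socket 0 SCF PMU ...
   cat /sys/bus/event_source/devices/nvidia_scf_pmu_0/cpumask

   # ... and the CPUs associated with that PMU instance.
   cat /sys/bus/event_source/devices/nvidia_scf_pmu_0/associated_cpus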

.. _SCF_PMU_Section:

SCF PMU
-------

The SCF PMU monitors system level cache events, CPU traffic, and
strongly-ordered (SO) PCIE write traffic to local/remote memory. Please see
:ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about the PMU
traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_scf_pmu_<socket-id>.
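
The available events can be browsed either from that sysfs directory or through
the perf tool. A small sketch; the exported event names vary by chip::

   # Named events exported by the driver, if any.
   ls /sys/bus/event_source/devices/nvidia_scf_pmu_0/events/

   # The same events as listed by perf.
   perf list | grep nvidia_scf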

Example usage:

* Count event id 0x0 in socket 0::

   perf stat -a -e nvidia_scf_pmu_0/event=0x0/

* Count event id 0x0 in socket 1::

   perf stat -a -e nvidia_scf_pmu_1/event=0x0/

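Like any other perf event, the count can be taken over a fixed time window by
giving perf a workload to time. A minimal sketch that counts event id 0x0 on
socket 0 for 10 seconds::

   perf stat -a -e nvidia_scf_pmu_0/event=0x0/ -- sleep 10
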
NVLink-C2C0 PMU
---------------

The NVLink-C2C0 PMU monitors incoming traffic from a GPU/CPU connected with the
NVLink-C2C (Chip-2-Chip) interconnect. The type of traffic captured by this PMU
varies depending on the chip configuration:

* NVIDIA Grace Hopper Superchip: a Hopper GPU is connected with a Grace SoC.

  In this config, the PMU captures ATS-translated or EGM traffic from the GPU.

* NVIDIA Grace CPU Superchip: two Grace CPU SoCs are connected.

  In this config, the PMU captures reads and relaxed ordered (RO) writes from
  the PCIE device of the remote SoC.

Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about
the PMU traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_nvlink_c2c0_pmu_<socket-id>.

Example usage:

* Count event id 0x0 from the GPU/CPU connected with socket 0::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_0/event=0x0/

* Count event id 0x0 from the GPU/CPU connected with socket 1::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_1/event=0x0/

* Count event id 0x0 from the GPU/CPU connected with socket 2::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_2/event=0x0/

* Count event id 0x0 from the GPU/CPU connected with socket 3::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_3/event=0x0/

NVLink-C2C1 PMU
---------------

The NVLink-C2C1 PMU monitors incoming traffic from a GPU connected with the
NVLink-C2C (Chip-2-Chip) interconnect. This PMU captures untranslated GPU
traffic, in contrast with the NVLink-C2C0 PMU, which captures ATS-translated
traffic. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more
info about the PMU traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_nvlink_c2c1_pmu_<socket-id>.

Example usage:

* Count event id 0x0 from the GPU connected with socket 0::

   perf stat -a -e nvidia_nvlink_c2c1_pmu_0/event=0x0/

* Count event id 0x0 from the GPU connected with socket 1::

   perf stat -a -e nvidia_nvlink_c2c1_pmu_1/event=0x0/

* Count event id 0x0 from the GPU connected with socket 2::

   perf stat -a -e nvidia_nvlink_c2c1_pmu_2/event=0x0/

* Count event id 0x0 from the GPU connected with socket 3::

   perf stat -a -e nvidia_nvlink_c2c1_pmu_3/event=0x0/

CNVLink PMU
-----------

The CNVLink PMU monitors traffic from GPUs and PCIE devices on remote sockets
to local memory. For PCIE traffic, this PMU captures read and relaxed ordered
(RO) write traffic. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section`
for more info about the PMU traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_cnvlink_pmu_<socket-id>.

Each SoC socket can be connected to one or more sockets via CNVLink. The user
can use the "rem_socket" bitmap parameter to select the remote socket(s) to
monitor. Each bit represents a socket number, e.g. "rem_socket=0xE" corresponds
to sockets 1 to 3.
/sys/bus/event_source/devices/nvidia_cnvlink_pmu_<socket-id>/format/rem_socket
shows the valid bits that can be set in the "rem_socket" parameter.

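For instance, monitoring only remote sockets 1 and 3 means setting bits 1 and 3
of the mask. A minimal sketch, assuming a configuration where those sockets are
reachable from socket 0::

   # Check which rem_socket bits are valid for this instance.
   cat /sys/bus/event_source/devices/nvidia_cnvlink_pmu_0/format/rem_socket

   # Bits 1 and 3 set: (1 << 1) | (1 << 3) = 0xA.
   perf stat -a -e nvidia_cnvlink_pmu_0/event=0x0,rem_socket=0xA/
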
The PMU cannot distinguish the initiator of remote traffic, and therefore does
not provide a filter to select the traffic source to monitor. It reports
combined traffic from remote GPU and PCIE devices.

Example usage:

* Count event id 0x0 for the traffic from remote sockets 1, 2, and 3 to socket 0::

   perf stat -a -e nvidia_cnvlink_pmu_0/event=0x0,rem_socket=0xE/

* Count event id 0x0 for the traffic from remote sockets 0, 2, and 3 to socket 1::

   perf stat -a -e nvidia_cnvlink_pmu_1/event=0x0,rem_socket=0xD/

* Count event id 0x0 for the traffic from remote sockets 0, 1, and 3 to socket 2::

   perf stat -a -e nvidia_cnvlink_pmu_2/event=0x0,rem_socket=0xB/

* Count event id 0x0 for the traffic from remote sockets 0, 1, and 2 to socket 3::

   perf stat -a -e nvidia_cnvlink_pmu_3/event=0x0,rem_socket=0x7/

PCIE PMU
--------

The PCIE PMU monitors all read/write traffic from PCIE root ports to
local/remote memory. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section`
for more info about the PMU traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_pcie_pmu_<socket-id>.

Each SoC socket can support multiple root ports. The user can use the
"root_port" bitmap parameter to select the port(s) to monitor, e.g.
"root_port=0xF" corresponds to root ports 0 to 3.
/sys/bus/event_source/devices/nvidia_pcie_pmu_<socket-id>/format/root_port
shows the valid bits that can be set in the "root_port" parameter.

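As with "rem_socket", the mask is simply a bitwise OR of the selected bits. A
minimal sketch, assuming only root ports 0 and 2 of socket 0 are of interest::

   # Bits 0 and 2 set: (1 << 0) | (1 << 2) = 0x5.
   perf stat -a -e nvidia_pcie_pmu_0/event=0x0,root_port=0x5/
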
Example usage:

* Count event id 0x0 from root ports 0 and 1 of socket 0::

   perf stat -a -e nvidia_pcie_pmu_0/event=0x0,root_port=0x3/

* Count event id 0x0 from root ports 0 and 1 of socket 1::

   perf stat -a -e nvidia_pcie_pmu_1/event=0x0,root_port=0x3/

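Several PMU instances can be counted in one invocation by giving perf multiple
"-e" options. A minimal sketch that counts root ports 0 and 1 on both sockets
at once::

   perf stat -a -e nvidia_pcie_pmu_0/event=0x0,root_port=0x3/ \
                 -e nvidia_pcie_pmu_1/event=0x0,root_port=0x3/
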
.. _NVIDIA_Uncore_PMU_Traffic_Coverage_Section:

Traffic Coverage
----------------

The PMU traffic coverage may vary depending on the chip configuration (a worked
example of applying the tables follows them):

* **NVIDIA Grace Hopper Superchip**: Hopper GPU is connected with Grace SoC.

  Example configuration with two Grace SoCs::

   *********************************          *********************************
   * SOCKET-A                      *          * SOCKET-B                      *
   *                               *          *                               *
   *                     ::::::::  *          *  ::::::::                     *
   *                     : PCIE :  *          *  : PCIE :                     *
   *                     ::::::::  *          *  ::::::::                     *
   *                         |     *          *      |                        *
   *                         |     *          *      |                        *
   *  :::::::            ::::::::: *          *  :::::::::            ::::::: *
   *  :     :            :       : *          *  :       :            :     : *
   *  : GPU :<--NVLink-->: Grace :<---CNVLink--->: Grace :<--NVLink-->: GPU : *
   *  :     :    C2C     :  SoC  : *          *  :  SoC  :    C2C     :     : *
   *  :::::::            ::::::::: *          *  :::::::::            ::::::: *
   *     |                   |     *          *      |                   |    *
   *     |                   |     *          *      |                   |    *
   *  &&&&&&&&           &&&&&&&&  *          *   &&&&&&&&           &&&&&&&& *
   *  & GMEM &           & CMEM &  *          *   & CMEM &           & GMEM & *
   *  &&&&&&&&           &&&&&&&&  *          *   &&&&&&&&           &&&&&&&& *
   *                               *          *                               *
   *********************************          *********************************

   GMEM = GPU Memory (e.g. HBM)
   CMEM = CPU Memory (e.g. LPDDR5X)

  |
  | The following table shows the Grace SoC PMU traffic coverage in socket-A:

  ::

   +--------------+-------+-----------+-----------+-----+----------+----------+
   |              |                        Source                             |
   +              +-------+-----------+-----------+-----+----------+----------+
   | Destination  |       |GPU ATS    |GPU Not-ATS|     | Socket-B | Socket-B |
   |              |PCI R/W|Translated,|Translated | CPU | CPU/PCIE1| GPU/PCIE2|
   |              |       |EGM        |           |     |          |          |
   +==============+=======+===========+===========+=====+==========+==========+
   | Local        | PCIE  |NVLink-C2C0|NVLink-C2C1| SCF | SCF PMU  | CNVLink  |
   | SYSRAM/CMEM  | PMU   |PMU        |PMU        | PMU |          | PMU      |
   +--------------+-------+-----------+-----------+-----+----------+----------+
   | Local GMEM   | PCIE  |    N/A    |NVLink-C2C1| SCF | SCF PMU  | CNVLink  |
   |              | PMU   |           |PMU        | PMU |          | PMU      |
   +--------------+-------+-----------+-----------+-----+----------+----------+
   | Remote       | PCIE  |NVLink-C2C0|NVLink-C2C1| SCF |          |          |
   | SYSRAM/CMEM  | PMU   |PMU        |PMU        | PMU |   N/A    |   N/A    |
   | over CNVLink |       |           |           |     |          |          |
   +--------------+-------+-----------+-----------+-----+----------+----------+
   | Remote GMEM  | PCIE  |NVLink-C2C0|NVLink-C2C1| SCF |          |          |
   | over CNVLink | PMU   |PMU        |PMU        | PMU |   N/A    |   N/A    |
   +--------------+-------+-----------+-----------+-----+----------+----------+

   PCIE1 traffic represents strongly ordered (SO) writes.
   PCIE2 traffic represents reads and relaxed ordered (RO) writes.

* **NVIDIA Grace CPU Superchip**: two Grace CPU SoCs are connected.

  Example configuration with two Grace SoCs::

   *******************             *******************
   * SOCKET-A        *             * SOCKET-B        *
   *                 *             *                 *
   *    ::::::::     *             *    ::::::::     *
   *    : PCIE :     *             *    : PCIE :     *
   *    ::::::::     *             *    ::::::::     *
   *        |        *             *        |        *
   *        |        *             *        |        *
   *    :::::::::    *             *    :::::::::    *
   *    :       :    *             *    :       :    *
   *    : Grace :<--------NVLink------->: Grace :    *
   *    :  SoC  :    *     C2C     *    :  SoC  :    *
   *    :::::::::    *             *    :::::::::    *
   *        |        *             *        |        *
   *        |        *             *        |        *
   *     &&&&&&&&    *             *     &&&&&&&&    *
   *     & CMEM &    *             *     & CMEM &    *
   *     &&&&&&&&    *             *     &&&&&&&&    *
   *                 *             *                 *
   *******************             *******************

   CMEM = CPU Memory (e.g. LPDDR5X)

  |
  | The following table shows the Grace SoC PMU traffic coverage in socket-A:

  ::

   +-----------------+-----------+---------+----------+-------------+
   |                 |                      Source                  |
   +                 +-----------+---------+----------+-------------+
   | Destination     |           |         | Socket-B | Socket-B    |
   |                 |  PCI R/W  |   CPU   | CPU/PCIE1| PCIE2       |
   |                 |           |         |          |             |
   +=================+===========+=========+==========+=============+
   | Local           |  PCIE PMU | SCF PMU | SCF PMU  | NVLink-C2C0 |
   | SYSRAM/CMEM     |           |         |          | PMU         |
   +-----------------+-----------+---------+----------+-------------+
   | Remote          |           |         |          |             |
   | SYSRAM/CMEM     |  PCIE PMU | SCF PMU |   N/A    |     N/A     |
   | over NVLink-C2C |           |         |          |             |
   +-----------------+-----------+---------+----------+-------------+

   PCIE1 traffic represents strongly ordered (SO) writes.
   PCIE2 traffic represents reads and relaxed ordered (RO) writes.
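
As a worked example of applying these tables: on a Grace CPU Superchip, Socket-B
PCIE2 traffic (reads and relaxed ordered writes) destined for Socket-A
SYSRAM/CMEM is covered by the NVLink-C2C0 PMU of socket A, so it would be
counted with::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_0/event=0x0/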
