perf-c2c(1)
===========

NAME
----
perf-c2c - Shared Data C2C/HITM Analyzer.

SYNOPSIS
--------
[verse]
'perf c2c record' [<options>] <command>
'perf c2c record' [<options>] \-- [<record command options>] <command>
'perf c2c report' [<options>]

DESCRIPTION
-----------
C2C stands for Cache To Cache.

The perf c2c tool provides means for Shared Data C2C/HITM analysis. It allows
you to track down cacheline contention.

On Intel, the tool is based on the load latency and precise store facility
events provided by Intel CPUs. On PowerPC, the tool uses random instruction
sampling with the thresholding feature. On AMD, the tool uses the IBS op PMU
(due to hardware limitations, perf c2c is not supported on Zen3 CPUs). On
Arm64 it uses SPE to sample load and store operations, therefore hardware and
kernel support is required. See linkperf:perf-arm-spe[1] for a setup guide.
Due to the statistical nature of Arm SPE sampling, not every memory operation
will be sampled.

These events provide:
  - memory address of the access
  - type of the access (load and store details)
  - latency (in cycles) of the load access

The c2c tool provides a means to record this data and report back access
details for the cachelines with the highest contention - the highest number
of HITM accesses.

The basic workflow with this tool follows the standard record/report phases.
The user uses the record command to record the event data and the report
command to display it.


RECORD OPTIONS
--------------
-e::
--event=::
    Select the PMU event. Use 'perf c2c record -e list'
    to list available events.

-v::
--verbose::
    Be more verbose (show counter open errors, etc).

-l::
--ldlat::
    Configure mem-loads latency. Supported on Intel and Arm64 processors
    only. Ignored on other archs.

-k::
--all-kernel::
    Configure all used events to run in kernel space.

-u::
--all-user::
    Configure all used events to run in user space.
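
For example, to restrict sampling to user space and raise the load latency
threshold to 60 cycles on an Intel or Arm64 machine (a usage sketch;
./my_workload is a placeholder for the command you want to monitor):

  $ perf c2c record --ldlat 60 --all-user -- ./my_workload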

REPORT OPTIONS
--------------
-k::
--vmlinux=<file>::
    vmlinux pathname

-v::
--verbose::
    Be more verbose (show counter open errors, etc).

-i::
--input::
    Specify the input file to process.

-N::
--node-info::
    Show extra node info in the report (see NODE INFO section).

-c::
--coalesce::
    Specify sorting fields for single cacheline display.
    The following fields are available: tid,pid,iaddr,dso
    (see COALESCE).

-g::
--call-graph::
    Set up callchain parameters.
    Please refer to the perf-report man page for details.

--stdio::
    Force the stdio output (see STDIO OUTPUT).

--stats::
    Display only statistic tables and force stdio mode.

--full-symbols::
    Display the full length of symbols.

--no-source::
    Do not display the Source:Line column.

--show-all::
    Show all captured HITM lines, regardless of the 0.0005% HITM limit.

-f::
--force::
    Don't do ownership validation.

-d::
--display::
    Switch to the HITM type (rmt, lcl) or the peer snooping type (peer) to
    display and sort on. Total HITMs (tot) is the default, except on Arm64,
    which uses peer mode as the default.

--stitch-lbr::
    Show the callgraph with stitched LBRs, which may yield a more complete
    callgraph. The perf.data file must have been obtained using
    perf c2c record --call-graph lbr.
    Disabled by default. In common cases with call stack overflows,
    it can recreate better call stacks than the default LBR call stack
    output. But this approach is not foolproof. There can be cases
    where it creates incorrect call stacks from incorrect matches.
    The known limitations include exception handling, e.g. with
    setjmp/longjmp, where calls/returns will not match.

--double-cl::
    Group the detection of shared cacheline events into double cacheline
    granularity. Some architectures have an Adjacent Cacheline Prefetch
    feature, which causes cacheline sharing to behave as if the cacheline
    size were doubled.
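
The report options above can be combined. For example, to display local
HITMs in stdio mode with each cacheline's offsets coalesced by thread and
code address (reading perf.data, the default input file):

  $ perf c2c report -d lcl -c tid,iaddr --stdio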

C2C RECORD
----------
The perf c2c record command sets up options related to HITM cacheline
analysis and calls the standard perf record command.

The following perf record options are configured by default
(check the perf record man page for details):

  -W,-d,--phys-data,--sample-cpu

Unless specified otherwise with the '-e' option, the following events are
monitored by default on Intel:

  cpu/mem-loads,ldlat=30/P
  cpu/mem-stores/P

the following on AMD:

  ibs_op//

and the following on PowerPC:

  cpu/mem-loads/
  cpu/mem-stores/

The user can pass any 'perf record' option after the '--' mark, like (to
enable callchains and system wide monitoring):

  $ perf c2c record -- -g -a

Please check the RECORD OPTIONS section for c2c-specific record options.

C2C REPORT
----------
The perf c2c report command displays the shared data analysis. It comes in
two display modes: stdio and tui (default).

The report command workflow is as follows:
  - sort all the data based on the cacheline address
  - store access details for each cacheline
  - sort all cachelines based on user settings
  - display data

In general the perf c2c report output consists of 2 basic views:
  1) most expensive cachelines list
  2) offsets details for each cacheline

For each cacheline in the 1) list we display the following data
(both stdio and TUI modes display the same fields):

  Index
  - zero-based index to identify the cacheline

  Cacheline
  - cacheline address (hex number)

  Rmt/Lcl Hitm (Display with HITM types)
  - cacheline percentage of all Remote/Local HITM accesses

  Peer Snoop (Display with peer type)
  - cacheline percentage of all peer accesses

  LLC Load Hitm - Total, LclHitm, RmtHitm (For display with HITM types)
  - count of Total/Local/Remote load HITMs

  Load Peer - Total, Local, Remote (For display with peer type)
  - count of Total/Local/Remote loads from a peer cache or DRAM

  Total records
  - sum of all cacheline accesses

  Total loads
  - sum of all load accesses

  Total stores
  - sum of all store accesses

  Store Reference - L1Hit, L1Miss, N/A
    L1Hit - store accesses that hit L1
    L1Miss - store accesses that missed L1
    N/A - store accesses for which the memory level is not available

  Core Load Hit - FB, L1, L2
  - count of load hits in FB (Fill Buffer), L1 and L2 cache

  LLC Load Hit - LlcHit, LclHitm
  - count of LLC load accesses, includes LLC hits and LLC HITMs

  RMT Load Hit - RmtHit, RmtHitm
  - count of remote load accesses, includes remote hits and remote HITMs;
    on Arm Neoverse cores, RmtHit is used to account for remote accesses,
    including remote DRAM or any upward cache level in the remote node

  Load Dram - Lcl, Rmt
  - count of local and remote DRAM accesses

For each offset in the 2) list we display the following data:

  HITM - Rmt, Lcl (Display with HITM types)
  - % of Remote/Local HITM accesses for the given offset within the cacheline

  Peer Snoop - Rmt, Lcl (Display with peer type)
  - % of Remote/Local peer accesses for the given offset within the cacheline

  Store Refs - L1 Hit, L1 Miss, N/A
  - % of store accesses that hit L1, missed L1, or have no available (N/A)
    memory level, for the given offset within the cacheline

  Data address - Offset
  - offset address

  Pid
  - pid of the process responsible for the accesses

  Tid
  - tid of the process responsible for the accesses

  Code address
  - code address responsible for the accesses

  cycles - rmt hitm, lcl hitm, load (Display with HITM types)
  - sum of cycles for the given accesses - Remote/Local HITM and generic load

  cycles - rmt peer, lcl peer, load (Display with peer type)
  - sum of cycles for the given accesses - Remote/Local peer load and
    generic load

  cpu cnt
  - number of cpus that participated in the access

  Symbol
  - code symbol related to the 'Code address' value

  Shared Object
  - shared object name related to the 'Code address' value

  Source:Line
  - source information related to the 'Code address' value

  Node
  - nodes participating in the access (see NODE INFO section)

NODE INFO
---------
The 'Node' field displays the nodes that access the given cacheline
offset. Its output comes in 3 flavors:
  - node IDs separated by ','
  - node IDs with stats for each ID, in the following format:
      Node{cpus %hitms %stores} (Display with HITM types)
      Node{cpus %peers %stores} (Display with peer type)
  - node IDs with a list of affected CPUs in the following format:
      Node{cpu list}

The user can switch between the above flavors with the -N option or
use the 'n' key to switch interactively in TUI mode.

COALESCE
--------
The user can specify how to sort offsets for a cacheline.

The following fields are available and govern the final
set of output fields for the cacheline offsets output:

  tid   - coalesced by process TIDs
  pid   - coalesced by process PIDs
  iaddr - coalesced by code address; the following fields are displayed:
            Code address, Code symbol, Shared Object, Source line
  dso   - coalesced by shared object

By default the coalescing is set up with 'pid,iaddr'.
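
As a sketch of overriding the default, the following groups offset details
by thread and shared object instead of 'pid,iaddr':

  $ perf c2c report -c tid,dso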

STDIO OUTPUT
------------
The stdio output displays the data on standard output.

The following tables are displayed:
  Trace Event Information
  - overall statistics of memory accesses

  Global Shared Cache Line Event Information
  - overall statistics on shared cachelines

  Shared Data Cache Line Table
  - list of the most expensive cachelines

  Shared Cache Line Distribution Pareto
  - list of all accessed offsets for each cacheline

TUI OUTPUT
----------
The TUI output provides an interactive interface to navigate
through the cacheline list and to display offset details.

For details please refer to the help window by pressing the '?' key.

CREDITS
-------
Although Don Zickus, Dick Fowles and Joe Mario worked together
to get this implemented, we got lots of early help from Arnaldo
Carvalho de Melo, Stephane Eranian, Jiri Olsa and Andi Kleen.

C2C BLOG
--------
Check Joe's blog on the c2c tool for a detailed use case explanation:
https://joemario.github.io/blog/2016/09/01/c2c-blog/

SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-mem[1], linkperf:perf-arm-spe[1]