=====================
I/O statistics fields
=====================

Since 2.4.20 (and some versions before, with patches), and 2.5.45,
more extensive disk statistics have been introduced to help measure disk
activity. Tools such as ``sar`` and ``iostat`` typically interpret these
and do the work for you, but in case you are interested in creating your
own tools, the fields are explained here.

In 2.4 now, the information is found as additional fields in
``/proc/partitions``.  In 2.6 and upper, the same information is found in two
places: one is in the file ``/proc/diskstats``, and the other is within
the sysfs file system, which must be mounted in order to obtain
the information. Throughout this document we'll assume that sysfs
is mounted on ``/sys``, although of course it may be mounted anywhere.
Both ``/proc/diskstats`` and sysfs use the same source for the information
and so should not differ.

Here are examples of these different formats::

   2.4:
      3     0   39082680 hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
      3     1    9221278 hda1 35486 0 35496 38030 0 0 0 0 0 38030 38030

   2.6+ sysfs:
      446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
      35486    38030    38030    38030

   2.6+ diskstats:
      3    0   hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
      3    1   hda1 35486 38030 38030 38030

   4.18+ diskstats:
      3    0   hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160 0 0 0 0

On 2.4 you might execute ``grep 'hda ' /proc/partitions``. On 2.6+, you have
a choice of ``cat /sys/block/hda/stat`` or ``grep 'hda ' /proc/diskstats``.

The advantage of one over the other is that the sysfs choice works well
if you are watching a known, small set of disks.  ``/proc/diskstats`` may
be a better choice if you are watching a large number of disks because
you'll avoid the overhead of 50, 100, or 500 or more opens/closes with
each snapshot of your disk statistics.

In 2.4, the statistics fields are those after the device name.  In
the above example, the first field of statistics would be 446216.
By contrast, in 2.6+ if you look at ``/sys/block/hda/stat``, you'll
find just the 15 fields, beginning with 446216.  If you look at
``/proc/diskstats``, the 15 fields will be preceded by the major and
minor device numbers, and device name.  Each of these formats provides
15 fields of statistics, each meaning exactly the same things.
All fields except field 9 are cumulative since boot.  Field 9 should
go to zero as I/Os complete; all others only increase (unless they
overflow and wrap).  Wrapping might eventually occur on a very busy
or long-lived system; so applications should be prepared to deal with
it.  Regarding wrapping, the types of the fields are either unsigned
int (32 bit) or unsigned long (32-bit or 64-bit, depending on your
machine), as noted per-field below.  Unless your observations are very
spread in time, these fields should not wrap twice before you notice it.

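
When sampling these counters periodically, a single wrap between two
samples can be absorbed by taking the difference modulo the counter
width. A minimal Python sketch of this (the helper name is illustrative;
it is not a kernel or ``iostat`` interface):

```python
# Hypothetical helper: wrap-safe delta between two samples of an
# unsigned kernel counter.  Use 1 << 32 for unsigned int fields and
# 1 << 64 for unsigned long fields on 64-bit machines.
U32 = 1 << 32

def counter_delta(cur, prev, width=U32):
    # Modular subtraction absorbs at most one wrap between samples.
    return (cur - prev) % width

print(counter_delta(5, U32 - 10))  # 15: the counter wrapped once
print(counter_delta(100, 40))      # 60: no wrap, plain difference
```

If the counter can wrap more than once between samples, no recovery is
possible, which is why the text above recommends sampling often enough
that a double wrap cannot go unnoticed.
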
Each set of stats only applies to the indicated device; if you want
system-wide stats you'll have to find all the devices and sum them all up.

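
As a sketch of what such a tool might do (the parser and names here are
illustrative, not a kernel interface; it assumes the 2.6+
``/proc/diskstats`` layout shown above), each line can be split into its
device name and statistics fields:

```python
# Parse 2.6+ /proc/diskstats lines: "major minor name field1 field2 ...".
# The sample reuses the example lines from this document; in practice
# you would read the text from /proc/diskstats itself.
SAMPLE = """\
   3    0   hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
   3    1   hda1 35486 38030 38030 38030
"""

def parse_diskstats(text):
    """Return {device_name: [statistics fields as ints]}."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 4:
            continue  # skip empty or malformed lines
        stats[parts[2]] = [int(x) for x in parts[3:]]
    return stats

stats = parse_diskstats(SAMPLE)
print(stats["hda"][0])   # field 1 of hda: 446216 reads completed
```

Note that naively summing every line would double count, since I/O to a
partition (``hda1``) is also counted in its whole-disk line (``hda``);
for system-wide totals, sum only the whole-disk entries.
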
Field  1 -- # of reads completed (unsigned long)
    This is the total number of reads completed successfully.

Field  2 -- # of reads merged, field 6 -- # of writes merged (unsigned long)
    Reads and writes which are adjacent to each other may be merged for
    efficiency.  Thus two 4K reads may become one 8K read before it is
    ultimately handed to the disk, and so it will be counted (and queued)
    as only one I/O.  This field lets you know how often this was done.

Field  3 -- # of sectors read (unsigned long)
    This is the total number of sectors read successfully.

Field  4 -- # of milliseconds spent reading (unsigned int)
    This is the total number of milliseconds spent by all reads (as
    measured from blk_mq_alloc_request() to __blk_mq_end_request()).

Field  5 -- # of writes completed (unsigned long)
    This is the total number of writes completed successfully.

Field  6 -- # of writes merged  (unsigned long)
    See the description of field 2.

Field  7 -- # of sectors written (unsigned long)
    This is the total number of sectors written successfully.

Field  8 -- # of milliseconds spent writing (unsigned int)
    This is the total number of milliseconds spent by all writes (as
    measured from blk_mq_alloc_request() to __blk_mq_end_request()).

Field  9 -- # of I/Os currently in progress (unsigned int)
    The only field that should go to zero.  Incremented as requests are
    given to the appropriate struct request_queue and decremented as they
    finish.

Field 10 -- # of milliseconds spent doing I/Os (unsigned int)
    This field increases so long as field 9 is nonzero.

    Since 5.0 this field counts jiffies during which at least one request
    was started or completed.  If a request runs for more than 2 jiffies,
    some of its I/O time might not be accounted for when requests run
    concurrently.

Field 11 -- weighted # of milliseconds spent doing I/Os (unsigned int)
    This field is incremented at each I/O start, I/O completion, I/O
    merge, or read of these stats by the number of I/Os in progress
    (field 9) times the number of milliseconds spent doing I/O since the
    last update of this field.  This can provide an easy measure of both
    I/O completion time and the backlog that may be accumulating.

Field 12 -- # of discards completed (unsigned long)
    This is the total number of discards completed successfully.

Field 13 -- # of discards merged (unsigned long)
    See the description of field 2.

Field 14 -- # of sectors discarded (unsigned long)
    This is the total number of sectors discarded successfully.

Field 15 -- # of milliseconds spent discarding (unsigned int)
    This is the total number of milliseconds spent by all discards (as
    measured from blk_mq_alloc_request() to __blk_mq_end_request()).

Field 16 -- # of flush requests completed
    This is the total number of flush requests completed successfully.

    The block layer combines flush requests and executes at most one at a
    time.  This counts flush requests executed by the disk.  Not tracked
    for partitions.

Field 17 -- # of milliseconds spent flushing
    This is the total number of milliseconds spent by all flush requests.

To avoid introducing performance bottlenecks, no locks are held while
modifying these counters.  This implies that minor inaccuracies may be
introduced when changes collide, so (for instance) adding up all the
read I/Os issued per partition should equal those made per disk ... but
due to the lack of locking it may only be very close.

In 2.6+, there are counters for each CPU, which make the lack of locking
almost a non-issue.  When the statistics are read, the per-CPU counters
are summed (possibly overflowing the unsigned long variable they are
summed to) and the result given to the user.  There is no convenient
user interface for accessing the per-CPU counters themselves.

Since 4.19 request times are measured with nanosecond precision and
truncated to milliseconds before showing in this interface.

Disks vs Partitions
-------------------

There were significant changes between 2.4 and 2.6+ in the I/O subsystem.
As a result, some statistic information disappeared.  The translation from
a disk address relative to a partition to the disk address relative to
the host disk happens much earlier.  All merges and timings now happen
at the disk level rather than at both the disk and partition level as
in 2.4.  Consequently, you'll see a different statistics output on 2.6+
for partitions from that for disks.  There are only *four* fields
available for partitions on 2.6+ machines.  This is reflected in the
examples above.

Field  1 -- # of reads issued
    This is the total number of reads issued to this partition.

Field  2 -- # of sectors read
    This is the total number of sectors requested to be read from this
    partition.

Field  3 -- # of writes issued
    This is the total number of writes issued to this partition.

Field  4 -- # of sectors written
    This is the total number of sectors requested to be written to
    this partition.

Note that since the address is translated to a disk-relative one, and no
record of the partition-relative address is kept, the subsequent success
or failure of the read cannot be attributed to the partition.  In other
words, the number of reads for partitions is counted slightly before time
of queuing for partitions, and at completion for whole disks.  This is
a subtle distinction that is probably uninteresting for most cases.

More significant is the error induced by counting the numbers of
reads/writes before merges for partitions and after for disks.  Since a
typical workload usually contains a lot of successive and adjacent
requests, the number of reads/writes issued can be several times higher
than the number of reads/writes completed.

In 2.6.25, the full statistic set is again available for partitions, and
disk and partition statistics are consistent again.  Since we still don't
keep a record of the partition-relative address, an operation is attributed
to the partition which contains the first sector of the request after the
eventual merges.  As requests can be merged across partitions, this could
lead to some (probably insignificant) inaccuracy.

Additional notes
----------------

In 2.6+, sysfs is not mounted by default.  If your distribution of
Linux hasn't added it already, here's the line you'll want to add to
your ``/etc/fstab``::

        none /sys sysfs defaults 0 0


In 2.6+, all disk statistics were removed from ``/proc/stat``.  In 2.4,
they appear in both ``/proc/partitions`` and ``/proc/stat``, although the
ones in ``/proc/stat`` take a very different format from those in
``/proc/partitions`` (see proc(5), if your system has it.)

-- ricklind@us.ibm.com
                                                      
