=====================
I/O statistics fields
=====================

Since 2.4.20 (and some versions before, with patches), and 2.5.45,
more extensive disk statistics have been introduced to help measure disk
activity. Tools such as ``sar`` and ``iostat`` typically interpret these and do
the work for you, but in case you are interested in creating your own
tools, the fields are explained here.

In 2.4, the information is found as additional fields in
``/proc/partitions``.  In 2.6 and later, the same information is found in two
places: one is in the file ``/proc/diskstats``, and the other is within
the sysfs file system, which must be mounted in order to obtain
the information. Throughout this document we'll assume that sysfs
is mounted on ``/sys``, although of course it may be mounted anywhere.
Both ``/proc/diskstats`` and sysfs use the same source for the information
and so should not differ.

Here are examples of these different formats::

   2.4:
      3     0   39082680 hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
      3     1    9221278 hda1 35486 0 35496 38030 0 0 0 0 0 38030 38030

   2.6+ sysfs:
      446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
      35486    38030    38030    38030

   2.6+ diskstats:
      3    0   hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160
      3    1   hda1 35486 38030 38030 38030

   4.18+ diskstats:
      3    0   hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160 0 0 0 0

On 2.4 you might execute ``grep 'hda ' /proc/partitions``. On 2.6+, you have
a choice of ``cat /sys/block/hda/stat`` or ``grep 'hda ' /proc/diskstats``.
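
If you are writing your own tool, a minimal sketch in Python (the device
name ``hda`` comes from the examples above; the helper itself is just an
illustration) might read the sysfs file like this::

    def read_stat(dev):
        """Return the stat fields for one block device as integers.

        Depending on the kernel version there will be 11, 15, or 17
        fields; see the field descriptions below.
        """
        with open('/sys/block/%s/stat' % dev) as f:
            return [int(x) for x in f.read().split()]

    fields = read_stat('hda')   # e.g. fields[0] is reads completed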

The advantage of one over the other is that the sysfs choice works well
if you are watching a known, small set of disks.  ``/proc/diskstats`` may
be a better choice if you are watching a large number of disks because
you'll avoid the overhead of 50, 100, or 500 or more opens/closes with
each snapshot of your disk statistics.

In 2.4, the statistics fields are those after the device name. In
the above example, the first field of statistics would be 446216.
By contrast, in 2.6+ if you look at ``/sys/block/hda/stat``, you'll
find just the statistics fields, beginning with 446216.  If you look at
``/proc/diskstats``, the same fields will be preceded by the major and
minor device numbers and the device name.  Both formats provide the
same fields, each meaning exactly the same thing.  How many fields there
are depends on the kernel version: 11 originally, 15 since 4.18 (which
added the discard fields), and 17 since 5.5 (which added the flush
fields).
All fields except field 9 are cumulative since boot.  Field 9 should
go to zero as I/Os complete; all others only increase (unless they
overflow and wrap). Wrapping might eventually occur on a very busy
or long-lived system, so applications should be prepared to deal with
it. Regarding wrapping, the types of the fields are either unsigned
int (32-bit) or unsigned long (32-bit or 64-bit, depending on your
machine), as noted per field below. Unless your observations are widely
spread in time, these fields should not wrap twice before you notice it.
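
Concretely, a delta that tolerates a single wrap can be computed by
reducing the difference modulo the counter width.  A sketch in Python,
assuming you know whether the field in question is 32-bit or 64-bit on
your machine::

    def wrapped_delta(cur, prev, bits=32):
        # Unsigned counters: a single wrap shows up as cur < prev,
        # and modular arithmetic recovers the true difference.
        return (cur - prev) % (1 << bits)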

Each set of stats only applies to the indicated device; if you want
system-wide stats you'll have to find all the devices and sum them all up.
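
For example, one way to produce such a system-wide sum, using the fact
that ``/sys/block`` lists whole disks but not partitions (so partition
lines can be filtered out of ``/proc/diskstats``), is sketched below;
this is an illustration, not an official interface::

    import os

    def system_totals():
        disks = set(os.listdir('/sys/block'))
        totals = None
        with open('/proc/diskstats') as f:
            for line in f:
                parts = line.split()
                name, fields = parts[2], [int(x) for x in parts[3:]]
                if name not in disks:
                    continue          # skip partitions
                if totals is None:
                    totals = fields
                else:
                    totals = [a + b for a, b in zip(totals, fields)]
        return totals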

Field  1 -- # of reads completed (unsigned long)
    This is the total number of reads completed successfully.

Field  2 -- # of reads merged, field 6 -- # of writes merged (unsigned long)
    Reads and writes which are adjacent to each other may be merged for
    efficiency.  Thus two 4K reads may become one 8K read before it is
    ultimately handed to the disk, and so it will be counted (and queued)
    as only one I/O.  This field lets you know how often this was done.

Field  3 -- # of sectors read (unsigned long)
    This is the total number of sectors read successfully.

Field  4 -- # of milliseconds spent reading (unsigned int)
    This is the total number of milliseconds spent by all reads (as
    measured from blk_mq_alloc_request() to __blk_mq_end_request()).

Field  5 -- # of writes completed (unsigned long)
    This is the total number of writes completed successfully.

Field  6 -- # of writes merged (unsigned long)
    See the description of field 2.

Field  7 -- # of sectors written (unsigned long)
    This is the total number of sectors written successfully.

Field  8 -- # of milliseconds spent writing (unsigned int)
    This is the total number of milliseconds spent by all writes (as
    measured from blk_mq_alloc_request() to __blk_mq_end_request()).

Field  9 -- # of I/Os currently in progress (unsigned int)
    The only field that should go to zero. Incremented as requests are
    given to the appropriate struct request_queue and decremented as
    they finish.

Field 10 -- # of milliseconds spent doing I/Os (unsigned int)
    This field increases so long as field 9 is nonzero.

    Since 5.0 this field counts jiffies when at least one request was
    started or completed. If a request runs for more than 2 jiffies then
    some of its I/O time might not be accounted for when requests run
    concurrently.

Field 11 -- weighted # of milliseconds spent doing I/Os (unsigned int)
    This field is incremented at each I/O start, I/O completion, I/O
    merge, or read of these stats by the number of I/Os in progress
    (field 9) times the number of milliseconds spent doing I/O since the
    last update of this field.  This can provide an easy measure of both
    I/O completion time and the backlog that may be accumulating (the
    sketch after this field list shows how such derived numbers can be
    computed).

Field 12 -- # of discards completed (unsigned long)
    This is the total number of discards completed successfully.

Field 13 -- # of discards merged (unsigned long)
    See the description of field 2.

Field 14 -- # of sectors discarded (unsigned long)
    This is the total number of sectors discarded successfully.

Field 15 -- # of milliseconds spent discarding (unsigned int)
    This is the total number of milliseconds spent by all discards (as
    measured from blk_mq_alloc_request() to __blk_mq_end_request()).

Field 16 -- # of flush requests completed
    This is the total number of flush requests completed successfully.

    The block layer combines flush requests and executes at most one at a
    time.  This counts flush requests executed by the disk.  Not tracked
    for partitions.

Field 17 -- # of milliseconds spent flushing
    This is the total number of milliseconds spent by all flush requests.
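
To make the field descriptions concrete, here is an illustrative sketch
(not an official tool) of how ``iostat``-style numbers can be derived
from two snapshots of ``/sys/block/<dev>/stat``, reusing the
``read_stat()`` helper sketched earlier and ignoring counter wrap for
brevity::

    import time

    def iostat_like(dev, interval=1.0):
        prev = read_stat(dev)
        time.sleep(interval)
        cur = read_stat(dev)
        d = [c - p for c, p in zip(cur, prev)]
        reads, ms_reading = d[0], d[3]    # fields 1 and 4
        writes, ms_writing = d[4], d[7]   # fields 5 and 8
        io_ticks = d[9]                   # field 10
        return {
            'r/s': reads / interval,
            'w/s': writes / interval,
            # average time per completed read/write, like iostat's await
            'r_await_ms': ms_reading / reads if reads else 0.0,
            'w_await_ms': ms_writing / writes if writes else 0.0,
            # share of the interval during which I/O was in flight
            'util_pct': 100.0 * io_ticks / (interval * 1000.0),
        }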

To avoid introducing performance bottlenecks, no locks are held while
modifying these counters.  This implies that minor inaccuracies may be
introduced when changes collide, so (for instance) adding up all the
read I/Os issued per partition should equal those made to the disks ...
but due to the lack of locking it may only be very close.

In 2.6+, there are counters for each CPU, which make the lack of locking
almost a non-issue.  When the statistics are read, the per-CPU counters
are summed (possibly overflowing the unsigned long variable they are
summed to) and the result given to the user.  There is no convenient
user interface for accessing the per-CPU counters themselves.

Since 4.19 request times are measured with nanosecond precision and
truncated to milliseconds before being shown in this interface.

Disks vs Partitions
-------------------

There were significant changes between 2.4 and 2.6+ in the I/O subsystem.
As a result, some statistics information disappeared. The translation from
a disk address relative to a partition to the disk address relative to
the host disk happens much earlier.  All merges and timings now happen
at the disk level rather than at both the disk and partition level as
in 2.4.  Consequently, you'll see different statistics output on 2.6+ for
partitions from that for disks.  There are only *four* fields available
for partitions on 2.6+ machines.  This is reflected in the examples above.

Field  1 -- # of reads issued
    This is the total number of reads issued to this partition.

Field  2 -- # of sectors read
    This is the total number of sectors requested to be read from this
    partition.

Field  3 -- # of writes issued
    This is the total number of writes issued to this partition.

Field  4 -- # of sectors written
    This is the total number of sectors requested to be written to
    this partition.

Note that since the address is translated to a disk-relative one, and no
record of the partition-relative address is kept, the subsequent success
or failure of the read cannot be attributed to the partition.  In other
words, the number of reads for partitions is counted slightly before the
time of queuing for partitions, and at completion for whole disks.  This
is a subtle distinction that is probably uninteresting in most cases.

More significant is the error induced by counting the numbers of
reads/writes before merges for partitions and after for disks. Since a
typical workload usually contains a lot of successive and adjacent requests,
the number of reads/writes issued can be several times higher than the
number of reads/writes completed.

In 2.6.25, the full statistics set became available for partitions again,
making disk and partition statistics consistent once more. Since we still
don't keep a record of the partition-relative address, an operation is
attributed to the partition which contains the first sector of the request
after the eventual merges. As requests can be merged across partitions,
this could lead to some (probably insignificant) inaccuracy.

Additional notes
----------------

In 2.6+, sysfs is not mounted by default.  If your distribution of
Linux hasn't added it already, here's the line you'll want to add to
your ``/etc/fstab``::

        none /sys sysfs defaults 0 0

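If you would rather not edit ``/etc/fstab``, running
``mount -t sysfs none /sys`` (as root) mounts it for the current boot.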

In 2.6+, all disk statistics were removed from ``/proc/stat``.  In 2.4, they
appear in both ``/proc/partitions`` and ``/proc/stat``, although the ones in
``/proc/stat`` take a very different format from those in
``/proc/partitions`` (see proc(5), if your system has it).

-- ricklind@us.ibm.com
