cgroup subsys "blkio" implements the block IO controller. There is a need
for various kinds of IO control policies (like proportional BW, max BW)
both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
The plan is to use the same cgroup based management interface for the blkio
controller and, based on user options, switch IO policies in the background.
Currently two IO control policies are implemented. The first one is
proportional weight time based division of disk time. It is implemented in
CFQ, hence this policy takes effect only on leaf nodes when CFQ is being used.
The second one is a throttling policy which can be used to specify upper IO
rate limits on devices. This policy is implemented in the generic block layer
and can be used on leaf nodes as well as on higher level logical devices like
device mapper.
Proportional Weight division of bandwidth
-----------------------------------------
You can do very simple testing by running two dd threads in two different
cgroups. Here is what you can do.
- Enable Block IO controller
	CONFIG_BLK_CGROUP=y
- Enable group scheduling in CFQ
	CONFIG_CFQ_GROUP_IOSCHED=y
- Compile and boot into the kernel and mount the IO controller (blkio).

	mount -t cgroup -o blkio none /cgroup
- Create two cgroups

	mkdir -p /cgroup/test1/ /cgroup/test2
- Set weights of group test1 and test2

	echo 1000 > /cgroup/test1/blkio.weight
	echo 500 > /cgroup/test2/blkio.weight
- Create two same size files (say 512MB each) on the same disk (file1, file2)
  and launch two dd threads in different cgroups to read those files.

	sync
	echo 3 > /proc/sys/vm/drop_caches
	dd if=/mnt/sdb/zerofile1 of=/dev/null &
	echo $! > /cgroup/test1/tasks
	cat /cgroup/test1/tasks

	dd if=/mnt/sdb/zerofile2 of=/dev/null &
	echo $! > /cgroup/test2/tasks
	cat /cgroup/test2/tasks
- At macro level, the first dd should finish first. To get more precise data,
  keep looking (with the help of a script) at the blkio.time and
  blkio.sectors files of both the test1 and test2 groups. This will tell how
  much disk time (in milliseconds) each group got and how many sectors each
  group dispatched to the disk. We provide fairness in terms of disk time, so
  ideally blkio.time of the cgroups should be in proportion to the weight.
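  The monitoring script mentioned above can be as simple as the following
  loop; this is a minimal sketch assuming the blkio controller is mounted at
  /cgroup as in the steps above.

	while true; do
		for g in test1 test2; do
			echo "== $g =="
			cat /cgroup/$g/blkio.time
			cat /cgroup/$g/blkio.sectors
		done
		sleep 1
	done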
Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
	CONFIG_BLK_CGROUP=y
- Enable throttling in block layer
	CONFIG_BLK_DEV_THROTTLING=y
- Mount blkio controller

	mount -t cgroup -o blkio none /cgroup/blkio
- Specify a bandwidth rate on a particular device for the root group. The
  format for the policy is "<major>:<minor>  <bytes_per_second>".

	echo "8:16  1048576" > /cgroup/blkio/blkio.throttle.read_bps_device
  The above will put a limit of 1MB/second on reads happening for the root
  group on the device having major/minor number 8:16.
- Run dd to read a file and see if rate is throttled to 1MB/s or not.
	# dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024
	1024+0 records in
	1024+0 records out
	4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s
  Limits for writes can be put using the blkio.throttle.write_bps_device file.
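  For example, to cap writes from the root group on the same device to 2MB/s
  (an illustrative value):

	echo "8:16  2097152" > /cgroup/blkio/blkio.throttle.write_bps_device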
Various user visible config options
===================================
CONFIG_BLK_CGROUP
	- Block IO controller.
CONFIG_DEBUG_BLK_CGROUP
	- Debug help. Right now some additional stats files show up in the
	  cgroup if this option is enabled.
CONFIG_CFQ_GROUP_IOSCHED
	- Enables group scheduling in CFQ. Currently only 1 level of group
	  creation is allowed.
CONFIG_BLK_DEV_THROTTLING
	- Enable block device throttling support in block layer.
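Putting the above together, a kernel .config fragment enabling all of the
blkio features described in this document would contain:

	CONFIG_BLK_CGROUP=y
	CONFIG_DEBUG_BLK_CGROUP=y
	CONFIG_CFQ_GROUP_IOSCHED=y
	CONFIG_BLK_DEV_THROTTLING=y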
Details of cgroup files
=======================
Proportional weight policy files
--------------------------------
- blkio.weight
	- Specifies per cgroup weight. This is the default weight of the group
	  on all the devices until and unless overridden by a per device rule
	  (see blkio.weight_device).
	  Currently allowed range of weights is from 100 to 1000.
- blkio.weight_device
	- One can specify per cgroup per device rules using this interface.
	  These rules override the default value of group weight as specified
	  by blkio.weight.

	  Following is the format.
	# echo dev_maj:dev_minor weight > /path/to/cgroup/blkio.weight_device
	Configure weight=300 on /dev/sdb (8:16) in this cgroup
	# echo 8:16 300 > blkio.weight_device
	# cat blkio.weight_device
	dev     weight
	8:16    300
	Configure weight=500 on /dev/sda (8:0) in this cgroup
	# echo 8:0 500 > blkio.weight_device
	# cat blkio.weight_device
	dev     weight
	8:0     500
	8:16    300
	Remove specific weight for /dev/sda in this cgroup
	# echo 8:0 0 > blkio.weight_device
	# cat blkio.weight_device
	dev     weight
	8:16    300
- blkio.time
	- Disk time allocated to the cgroup per device in milliseconds. First
	  two fields specify the major and minor number of the device and the
	  third field specifies the disk time allocated to the group in
	  milliseconds.
- blkio.sectors
	- Number of sectors transferred to/from disk by the group. First
	  two fields specify the major and minor number of the device and the
	  third field specifies the number of sectors transferred by the
	  group to/from the device.
- blkio.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.
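	  As an illustration (hypothetical numbers), reading this file for a
	  group doing only reads on device 8:16 might show entries along the
	  lines of:

		# cat blkio.io_service_bytes
		8:16 Read 1310720
		8:16 Write 0
		8:16 Sync 1310720
		8:16 Async 0
		8:16 Total 1310720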
- blkio.io_serviced
	- Number of IOs completed to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of IOs.
- blkio.io_service_time
	- Total amount of time between request dispatch and request completion
	  for the IOs done by this cgroup. This is in nanoseconds to make it
	  meaningful for flash devices too. For devices with queue depth of 1,
	  this time represents the actual service time. When queue_depth > 1,
	  that is no longer true as requests may be served out of order. This
	  may cause the service time for a given IO to include the service time
	  of multiple IOs when served out of order which may result in total
	  io_service_time > actual time elapsed. This time is further divided by
	  the type of operation - read or write, sync or async. First two fields
	  specify the major and minor number of the device, third field
	  specifies the operation type and the fourth field specifies the
	  io_service_time in ns.
- blkio.io_wait_time
	- Total amount of time the IOs for this cgroup spent waiting in the
	  scheduler queues for service. This can be greater than the total time
	  elapsed since it is cumulative io_wait_time for all IOs. It is not a
	  measure of total time the cgroup spent waiting but rather a measure of
	  the wait_time for its individual IOs. For devices with queue_depth > 1
	  this metric does not include the time spent waiting for service once
	  the IO is dispatched to the device but till it actually gets serviced
	  (there might be a time lag here due to re-ordering of requests by the
	  device). This is in nanoseconds to make it meaningful for flash
	  devices too. This time is further divided by the type of operation -
	  read or write, sync or async. First two fields specify the major and
	  minor number of the device, third field specifies the operation type
	  and the fourth field specifies the io_wait_time in ns.
- blkio.io_merged
	- Total number of bios/requests merged into requests belonging to this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.
- blkio.io_queued
	- Total number of requests queued up at any given instant for this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.
- blkio.avg_queue_size
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  The average queue size for this cgroup over the entire time of this
	  cgroup's existence. Queue size samples are taken each time one of the
	  queues of this cgroup gets a timeslice.
- blkio.group_wait_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time the cgroup had to wait since it became busy
	  (i.e., went from 0 to 1 request queued) to get a timeslice for one of
	  its queues. This is different from the io_wait_time which is the
	  cumulative total of the amount of time spent by each IO in that cgroup
	  waiting in the scheduler queue. This is in nanoseconds. If this is
	  read when the cgroup is in a waiting (for timeslice) state, the stat
	  will only report the group_wait_time accumulated till the last time it
	  got a timeslice and will not include the current delta.
- blkio.empty_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time a cgroup spends without any pending
	  requests when not being served, i.e., it does not include any time
	  spent idling for one of the queues of the cgroup. This is in
	  nanoseconds. If this is read when the cgroup is in an empty state,
	  the stat will only report the empty_time accumulated till the last
	  time it had a pending request and will not include the current delta.
- blkio.idle_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time spent by the IO scheduler idling for a
	  given cgroup in anticipation of a better request than the existing
	  ones from other queues/cgroups. This is in nanoseconds. If this is
	  read when the cgroup is in an idling state, the stat will only report
	  the idle_time accumulated till the last idle period and will not
	  include the current delta.
- blkio.dequeue
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
	  gives the statistics about how many times a group was dequeued
	  from the service tree of the device. First two fields specify the
	  major and minor number of the device and the third field specifies
	  the number of times a group was dequeued from a particular device.
Throttling/Upper limit policy files
-----------------------------------
- blkio.throttle.read_bps_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

	echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device
- blkio.throttle.write_bps_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

	echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device
- blkio.throttle.read_iops_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in IO per second. Rules are per device. Following is
	  the format.

	echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device
- blkio.throttle.write_iops_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in IO per second. Rules are per device. Following is
	  the format.

	echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device
Note: If both BW and IOPS rules are specified for a device, then IO is
      subjected to both the constraints.
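For example, the following (illustrative values) limits reads on device 8:16
to both 1MB/s and 100 IOPS; whichever limit is reached first throttles the IO.

	echo "8:16  1048576" > /cgrp/blkio.throttle.read_bps_device
	echo "8:16  100" > /cgrp/blkio.throttle.read_iops_device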
- blkio.throttle.io_serviced
	- Number of IOs (bio) completed to/from the disk by the group (as
	  seen by throttling policy). These are further divided by the type
	  of operation - read or write, sync or async. First two fields specify
	  the major and minor number of the device, third field specifies the
	  operation type and the fourth field specifies the number of IOs.
	  blkio.io_serviced does accounting as seen by CFQ and counts are in
	  number of requests (struct request). On the other hand,
	  blkio.throttle.io_serviced counts the number of IOs in terms of number
	  of bios as seen by the throttling policy. These bios can later be
	  merged by the elevator and the total number of requests completed can
	  be less.
- blkio.throttle.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.
	  These numbers should roughly be the same as blkio.io_service_bytes as
	  updated by CFQ. The difference between the two is that
	  blkio.io_service_bytes will not be updated if CFQ is not operating
	  on the request queue.
Common files among various policies
-----------------------------------
- blkio.reset_stats
	- Writing an int to this file will result in resetting all the stats
	  for that cgroup.
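	  For instance, to clear the accumulated statistics of the test1 group
	  created earlier:

		echo 1 > /cgroup/test1/blkio.reset_stats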
/sys/block/<disk>/queue/iosched/group_isolation
-----------------------------------------------
If group_isolation=1, it provides stronger isolation between groups at the
expense of throughput. By default group_isolation is 0. In general that
means that with group_isolation=0 you should expect fairness for sequential
workloads only. Set group_isolation=1 to see fairness for random IO
workloads also.
Generally CFQ will put a random seeky workload in the sync-noidle category.
CFQ will disable idling on these queues and it does collective idling on a
group of such queues. Generally these are slow moving queues and if there is
a sync-noidle service tree in each group, that group gets exclusive access to
the disk for a certain period. That means it will bring the throughput down
if the group does not have enough IO to drive deeper queue depths and utilize
disk capacity to the fullest in the slice allocated to it. But the flip side
is that even a random reader should get better latencies and overall
throughput if there are lots of sequential readers/sync-idle workload running
in the system.
If group_isolation=0, then CFQ automatically moves all the random seeky queues
to the root group. That means there will be no service differentiation for
that kind of workload. This leads to better throughput as we do collective
idling on the root sync-noidle tree.
By default one should run with group_isolation=0. If that is not sufficient
and one wants stronger isolation between groups, then set group_isolation=1,
but this will come at the cost of reduced throughput.
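For example, to enable stronger isolation on /dev/sdb (assuming CFQ is the
active scheduler on that disk):

	echo 1 > /sys/block/sdb/queue/iosched/group_isolation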
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue and a single queue might not
drive deep enough request queue depths to keep the storage busy. In such
scenarios one can try setting slice_idle=0; that switches CFQ to IOPS
(IO operations per second) mode on NCQ supporting hardware.
That means CFQ will not idle between cfq queues of a cfq group and hence will
be able to drive higher queue depths and achieve better throughput. That also
means that cfq provides fairness among groups in terms of IOPS and not in
terms of disk time.
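For instance, to switch CFQ to IOPS mode on /dev/sdb:

	echo 0 > /sys/block/sdb/queue/iosched/slice_idle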
/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.
By default group_idle is the same as slice_idle and does not do anything if
slice_idle is enabled.
One can experience an overall throughput drop if multiple groups have been
created and the applications placed in those groups are not driving enough
IO to keep the disk busy. In that case set group_idle=0, and CFQ will not
idle on individual groups and throughput should improve.
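For example, to disable group idling on /dev/sdb:

	echo 0 > /sys/block/sdb/queue/iosched/group_idle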
What works
==========
- Currently only sync IO queues are supported. All the buffered writes are
  still system wide and not per group. Hence we will not see service
  differentiation between buffered writes of different groups.