diff options
| author | Shaohua Li <shaohua.li@intel.com> | 2013-10-20 22:42:38 -0500 |
|---|---|---|
| committer | Moyster <oysterized@gmail.com> | 2016-09-13 13:13:13 +0200 |
| commit | c8a191d4fcdf0881a2a36d7b38f80eb7dc9897cb (patch) | |
| tree | f39d1e33e5b370c350ed2b8a43bf9443d2226081 /init | |
| parent | 1000d8e24f8c9b991469ea90303f769b28636344 (diff) | |
block: fiops ioscheduler core
FIOPS (Fair IOPS) is an IOPS-based I/O scheduler, so it only targets
drives without an I/O seek penalty. It's quite similar to CFQ, but the
dispatch decision is made according to IOPS instead of time slices.
The algorithm is simple. The drive has a service tree, and each task lives
in the tree. The key into the tree is called vios (virtual I/O). Every
request has a vios, which is calculated according to its ioprio, request
size and so on. A task's vios is the sum of the vios of all requests it
dispatches. FIOPS always selects the task with the minimum vios in the
service tree and lets that task dispatch a request. The dispatched
request's vios is then added to the task's vios and the task is
repositioned in the service tree.
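The selection-and-charge loop above can be sketched in userspace C. This is a minimal illustration, not the kernel code: the struct and function names are invented here, and a linear array scan stands in for the rbtree-keyed service tree the scheduler would actually use.

```c
#include <stddef.h>

/* Hypothetical sketch of FIOPS vios accounting (names invented here).
 * The real scheduler keys tasks by vios in an rbtree service tree;
 * a linear scan over an array stands in for that lookup. */

struct fiops_task {
	unsigned long vios;	/* accumulated virtual I/O cost */
	int active;		/* task has requests queued */
};

/* Pick the task with the minimum vios: the next one allowed to dispatch. */
static struct fiops_task *fiops_select(struct fiops_task *tasks, size_t n)
{
	struct fiops_task *min = NULL;
	size_t i;

	for (i = 0; i < n; i++)
		if (tasks[i].active && (!min || tasks[i].vios < min->vios))
			min = &tasks[i];
	return min;
}

/* Charge a dispatched request's vios to its task; in the tree-based
 * version this is where the task would be repositioned. */
static void fiops_charge(struct fiops_task *task, unsigned long req_vios)
{
	task->vios += req_vios;
}
```

With two active tasks whose requests cost 1 and 4 vios respectively, the cheap-request task wins the minimum-vios selection roughly four times as often, which is the IOPS-proportional sharing described above.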
Unlike CFQ, FIOPS doesn't have separate sync/async queues, because with
I/O-less writeback a task usually dispatches either only sync or only async
requests. Biasing read or write requests can still be done with a
read/write scale.
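One way such a read/write scale could enter the per-request vios is sketched below; the constants and names are illustrative examples, not the actual FIOPS tunables.

```c
/* Illustrative vios cost model (constants and names are examples only,
 * not the actual FIOPS parameters). */
#define VIOS_READ_SCALE		1
#define VIOS_WRITE_SCALE	4	/* charge writes 4x to bias reads */

static unsigned long request_vios(unsigned long sectors, int is_write)
{
	unsigned long base = 1 + sectors / 8;	/* larger requests cost more */

	return base * (is_write ? VIOS_WRITE_SCALE : VIOS_READ_SCALE);
}
```

Under this example model an 8-sector write is charged four times the vios of an 8-sector read, so tasks issuing reads accumulate vios more slowly and get dispatched more often.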
One issue is that if the workload's iodepth is lower than the drive's
queue_depth, a task's IOPS share might not strictly follow its priority,
request size and so on. In this case the drive actually has idle capacity;
solving the problem would require idling the drive, which would hurt
performance. I believe CFQ isn't completely fair between tasks in such
cases either.
Change-Id: I1f86b964ada1e06ac979899ca05f1082d0d8228d
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Diffstat (limited to 'init')
0 files changed, 0 insertions, 0 deletions
