path: root/block/fiops-iosched.c
Commit message | Author | Age | Files | Lines
* FIOPS: forward port for use on 3.10 Linux (Paul Reioux, 2016-09-13, 1 file, -6/+17)
Change-Id: I1ae7f50feda51b2aacb15c7b632cd38937b1edb5 Signed-off-by: Paul Reioux <reioux@gmail.com>
* block: fiops add some trace information (Shaohua Li, 2016-09-13, 1 file, -1/+18)
Add some trace information, which is helpful when debugging. Change-Id: Ib1082fc2547fd56c2fadbb7a9596a3dc4c7b15c8 Signed-off-by: Shaohua Li <shaohua.li@intel.com>
* block: fiops bias sync workload (Shaohua Li, 2016-09-13, 1 file, -0/+12)
If there are async requests already running, delay further async workload; otherwise the async workload (which usually drives a very deep iodepth) will use the whole queue iodepth and later sync requests will be delayed for a long time. The idea is taken from CFQ. Change-Id: I66b8b87ca33c9e92ed52067cead54a4fc48c6426 Signed-off-by: Shaohua Li <shaohua.li@intel.com>
* block: fiops preserve vios key for deep queue depth workload (Shaohua Li, 2016-09-13, 1 file, -3/+6)
If a task has running requests, we preserve its vios key even when it is newly added to the service tree, so it does not lose its share. This works for tasks driving a big queue depth; for a single-depth task there is no way to preserve its vios key. Change-Id: I40bdaff6430b783b965ca434ffc46b7205b554cd Signed-off-by: Shaohua Li <shaohua.li@intel.com>
* block: fiops add ioprio support (Shaohua Li, 2016-09-13, 1 file, -12/+93)
Add CFQ-like ioprio support. Priority A will get 20% more share than priority A+1, which matches CFQ. Change-Id: I0d6f145810e3f0979440063c030cddf30ad4179c Signed-off-by: Shaohua Li <shaohua.li@intel.com>
* block: fiops sync/async scale (Shaohua Li, 2016-09-13, 1 file, -0/+15)
Give sync workload 2.5 times more share, which matches CFQ. Note this is different from the read/write scale. We have three types of requests: 1. read, 2. sync write, 3. write. CFQ doesn't differentiate between types 1 and 2, but the request costs of 1 and 2 usually differ on flash-based storage, so we have both a sync/async scale and a read/write scale here. Change-Id: I3b36c94ba63df6d7a823c941a34a479da6243f20 Signed-off-by: Shaohua Li <shaohua.li@intel.com>
* block: fiops read/write request scale (Shaohua Li, 2016-09-13, 1 file, -1/+70)
The read and write speeds of flash-based storage usually differ; for example, on my SSD the maximum read throughput is about 3 times the write throughput. Add a scale to differentiate reads from writes, and a tunable so the user can assign different scales for read and write. By default the scale is 1:1, making it a no-op. Change-Id: Ic223e96d1c72591ef535307755d78ff33dbc6939 Signed-off-by: Shaohua Li <shaohua.li@intel.com>
* block: fiops ioscheduler core (Shaohua Li, 2016-09-13, 1 file, -0/+556)
The FIOPS (Fair IOPS) ioscheduler is an IOPS-based ioscheduler, so it only targets drives without I/O seek. It is quite similar to CFQ, but the dispatch decision is made according to IOPS instead of time slices.

The algorithm is simple. The drive has a service tree, and each task lives in the tree. The key into the tree is called vios (virtual I/O). Every request has a vios, which is calculated from its ioprio, request size and so on. A task's vios is the sum of the vios of all requests it dispatches. FIOPS always selects the task with the minimum vios in the service tree and lets that task dispatch a request. The dispatched request's vios is then added to the task's vios and the task is repositioned in the service tree.

Unlike CFQ, FIOPS doesn't have separate sync/async queues, because with I/O-less writeback a task usually dispatches either sync or async requests only. Biasing read or write requests can still be done with the read/write scale.

One issue: if the workload iodepth is lower than the drive's queue_depth, the IOPS share of a task might not strictly follow its priority, request size and so on. In that case the drive is actually idle. Solving this would require idling the drive, which would hurt performance. I believe CFQ isn't completely fair between tasks in such cases either.

Change-Id: I1f86b964ada1e06ac979899ca05f1082d0d8228d Signed-off-by: Shaohua Li <shaohua.li@intel.com>