path: root/block/blk-core.c
author    Andrei F <luxneb@gmail.com>  2015-11-26 23:53:27 +0100
committer Mister Oyster <oysterized@gmail.com>  2016-12-11 13:59:30 +0100
commit    2d7ab8fde7cf5c8df844e37f68124f59f9503d6d (patch)
tree      3c54cfe3327f36a8d4de740381ac4d0db5e473b6 /block/blk-core.c
parent    3ee029b6fe82e31a112f13de419cee582df68e30 (diff)
block: Adding ROW scheduling algorithm
Squashed commit of the following:

commit f49e14ccdcb6694ed27754e020057d27a8fcca07
Author: Andrei F <luxneb@gmail.com>
Date:   Thu Nov 26 22:40:38 2015 +0100

    elevator: Fix a race in elevator switching

    commit d50235b7bc3ee0a0427984d763ea7534149531b4 upstream.

    There's a race between elevator switching and normal io operation,
    because the allocation of struct elevator_queue and struct
    elevator_data is not performed as one atomic operation, so there is
    a window in which a NULL ->elevator_data can be used. For example:

        Thread A:                        Thread B:
        blk_queue_bio                    elevator_switch
          spin_lock_irq(q->queue_lock)     elevator_alloc
          elv_merge                        elevator_init_fn

    Because elevator_alloc is called without holding queue_lock,
    ->elevator_data is still NULL while, at the same time, thread A
    calls elv_merge and needs some info from elevator_data, so the
    crash happens. Move elevator_alloc into elevator_init_fn, making
    the two operations atomic.

    The bug is easily reproduced with the following method:
      1: dd if=/dev/sdb of=/dev/null
      2: while true; do echo noop > scheduler; echo deadline > scheduler; done
    The test case also uses this method.

    Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Cc: Jonghwan Choi <jhbird.choi@samsung.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit daf22a727e64f1277b074442efb821366015ca72
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jul 25 13:45:21 2013 +0300

    block: row: Remove warning message from add_request

    A regular priority queue is marked as "starved" if it skipped a
    dispatch due to being empty. When a new request is added to a
    "starved" queue it will be marked as urgent. The removed WARN_ON was
    warning about a supposedly impossible case: a regular priority
    (read) queue marked as starved but not empty. This case is in fact
    possible, as follows: if the device driver fetched a read request
    that is pending for transmission and an URGENT request arrives, the
    fetched read will be reinserted back to the scheduler.
    It's possible that the queue it will be reinserted to was marked as
    "starved" in the meanwhile, due to being empty.

    CRs-fixed: 517800
    Change-Id: Iaae642ea0ed9c817c41745b0e8ae2217cc684f0c
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit dca47e75f1413d58e4f97ef638e5d4456c55bdce
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Jul 2 14:43:13 2013 +0300

    block: row: change hrtimer_cancel to hrtimer_try_to_cancel

    Calling hrtimer_cancel with interrupts disabled can result in a
    livelock. When flushing the plug list in the block layer, interrupts
    are disabled and an hrtimer is used when adding requests from that
    plug list to the scheduler. In this code flow, if the hrtimer (which
    is used for idling) is set, it is canceled by calling hrtimer_cancel.
    hrtimer_cancel performs the following in an endless loop:
      1. try to cancel the timer
      2. if that fails - rest the cpu
    The cancellation can fail if the timer function has already started.
    Since interrupts are disabled, it can never complete.

    This patch reduces the number of times the hrtimer lock is taken
    while interrupts are disabled by calling hrtimer_try_to_cancel
    instead. The latter tries to cancel the timer just once and returns
    an error code if it fails.

    CRs-fixed: 499887
    Change-Id: I25f79c357426d72ad67c261ce7cb503ae97dc7b9
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit a6047b9d808eaa787e4df3107bea7536334856cd
Author: Lee Susman <lsusman@codeaurora.org>
Date:   Sun Jun 23 16:27:40 2013 +0300

    block: row-iosched idling triggered by readahead pages

    In the current implementation idling is triggered only by request
    insertion frequency. This heuristic is not very accurate and may hit
    random requests that shouldn't trigger idling. This patch uses the
    PG_readahead flag in struct page's flags, which indicates that the
    page is part of a readahead window, to start idling upon dispatch of
    a request associated with a readahead page.
    The above readahead flag is used together with the existing
    insertion-frequency trigger. The frequency timer will catch read
    requests which are not part of a readahead window but are still part
    of a sequential stream (and are therefore dispatched in small time
    intervals).

    Change-Id: Icb7145199c007408de3f267645ccb842e051fd00
    Signed-off-by: Lee Susman <lsusman@codeaurora.org>

commit e70e4e8e1d1f111023dd2b2d0fc9237240cab9ab
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Wed May 1 14:35:20 2013 +0300

    block: urgent: Fix dispatching of URGENT mechanism

    There are cases when blk_peek_request is called not from
    blk_fetch_request; thus the URGENT request may be started but the
    flag q->dispatched_urgent is not updated.

    Change-Id: I4fb588823f1b2949160cbd3907f4729767932e12
    CRs-fixed: 471736
    CRs-fixed: 473036
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 0e36870f6a436840eed1782d0e85b4adb300b59f
Author: Maya Erez <merez@codeaurora.org>
Date:   Sun Apr 14 15:19:52 2013 +0300

    block: row: Fix starvation tolerance values

    The current starvation tolerance values increase the boot time,
    since high priority SW requests are delayed by regular priority
    requests. In order to overcome this, increase the starvation
    tolerance values.

    Change-Id: I9947fca9927cbd39a1d41d4bd87069df679d3103
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
    Signed-off-by: Maya Erez <merez@codeaurora.org>

commit 3cab8d28e735fdad300eda3bed703129ba05d70a
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Apr 11 14:57:15 2013 +0300

    block: urgent request: Update dispatch_urgent in case of requeue/reinsert

    The block layer implements a mechanism for verifying that the device
    driver won't be notified of an URGENT request if there is already an
    URGENT request in flight. This is due to the fact that interrupting
    an URGENT request isn't efficient.
    This patch fixes the above described mechanism in case the URGENT
    request was returned back to the block layer for some reason: by
    requeue or reinsert.

    CRs-fixed: 473376, 473036, 471736
    Change-Id: Ie8b8208230a302d4526068531616984825f1050d
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit e052e4574bb928b44e660b9679d23e14011b0b9d
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Mar 21 11:04:02 2013 +0200

    block: row: Update sysfs functions

    All ROW (time related) configurable parameters are stored in ms, so
    there is no need to convert from/to ms when reading/updating them
    via sysfs.

    Change-Id: Ib6a1de54140b5d25696743da944c076dd6fc02ae
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

    Conflicts:
        block/row-iosched.c

commit 2c3203650c2109c18abb3b17a5114d54bb22e683
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Mar 21 13:02:07 2013 +0200

    block: row: Prevent starvation of regular priority by high priority

    At the moment, all REGULAR and LOW priority requests are starved as
    long as there are HIGH priority requests to dispatch. This patch
    prevents the above starvation by setting a starvation limit that the
    REGULAR/LOW priority requests can tolerate.

    Change-Id: Ibe24207982c2c55d75c0b0230f67e013d1106017
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit a5434f618d395a03fe19ef430a8c5747bad069f9
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Mar 12 21:02:33 2013 +0200

    block: urgent request: remove unnecessary urgent marking

    An urgent request is marked by the scheduler in rq->cmd_flags with
    the REQ_URGENT flag. There is no need to add an additional marking
    by the block layer.
    Change-Id: I05d5e9539d2f6c1bfa80240b0671db197a5d3b3f
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 3928fb74c2f78578c57913938644acb704b77586
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Mar 12 21:17:18 2013 +0200

    block: row: Re-design urgent request notification mechanism

    When the ROW scheduler reports to the block layer that there is an
    urgent request pending, the device driver may decide to stop the
    transmission of the current request in order to handle the urgent
    one. This is done in order to reduce the latency of an urgent
    request; for example, a long WRITE may be stopped to handle an
    urgent READ.

    This patch updates the ROW URGENT notification policy to comply with
    the below:
    - Don't notify URGENT if there is an un-completed URGENT request in
      the driver.
    - After notifying that an URGENT request is present, the next
      request dispatched is the URGENT one.
    - At every given moment only 1 request can be marked as URGENT,
      independent of its location (driver or scheduler).

    Other changes to the URGENT policy:
    - Only READ queues are allowed to notify of an URGENT request
      pending.

    CR fix: if a pending urgent request (A) gets merged with another
    request (B), A is removed from the scheduler queue but is not
    removed from rd->pending_urgent_rq.

    CRs-Fixed: 453712
    Change-Id: I321e8cf58e12a05b82edd2a03f52fcce7bc9a900
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 8912aa92e3d919ceabc72b2eddc829fc5e4bd7eb
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jan 24 16:17:27 2013 +0200

    block: row: Update initial values of ROW data structures

    This patch sets the initial values of internal ROW parameters.
    Change-Id: I38132062a7fcbe2e58b9cc757e55caac64d013dc
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
    [smuckle@codeaurora.org: ported from msm-3.7]
    Signed-off-by: Steve Muckle <smuckle@codeaurora.org>

commit b709e1a8a56784cb83c2c31a4e7df574a6b29802
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jan 24 15:08:40 2013 +0200

    block: row: Don't notify URGENT if there are un-completed urgent req

    When the ROW scheduler reports to the block layer that there is an
    urgent request pending, the device driver may decide to stop the
    transmission of the current request in order to handle the urgent
    one. If the currently transmitted request is itself an urgent
    request - we don't want it to be stopped. Due to the above, the ROW
    scheduler won't notify of an urgent request if there are urgent
    requests in flight.

    Change-Id: I2fa186d911b908ec7611682b378b9cdc48637ac7
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit eba966603cc8e6f8fb418bf702f5a6eca5f56f34
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jan 24 04:01:59 2013 +0200

    block: add REQ_URGENT to request flags

    This patch adds a new flag to be used in the cmd_flags field of
    struct request for marking a request as urgent. An urgent request is
    one that should be given priority over the currently handled
    (regular) request by the device driver. The decision on a request's
    urgency is taken by the scheduler.

    Change-Id: Ic20470987ef23410f1d0324f96f00578f7df8717
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

    Conflicts:
        include/linux/blk_types.h

commit 7c865ab1a9ae626d023d0b03ed7fbe5c57bcbe7c
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Jan 17 20:56:07 2013 +0200

    block: row: Idling mechanism re-factoring

    At the moment, idling in ROW is implemented by delayed work that
    uses jiffies granularity, which is not very accurate. This patch
    replaces the current idling mechanism implementation with the
    hrtimer API, which gives nanosecond resolution (instead of jiffies).
    Change-Id: I86c7b1776d035e1d81571894b300228c8b8f2d92
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 72ea1d39c04734bf5eb52117968704148d2da42f
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Wed Jan 23 17:15:49 2013 +0200

    block: row: Dispatch requests according to their io-priority

    This patch implements "application-hints", a way for the issuing
    application to notify the scheduler of the priority of its request.
    This is done by setting the io-priority of the request. The patch
    reuses the existing io-priorities mechanism developed for CFQ.
    Please refer to kernel/Documentation/block/ioprio.txt for usage
    examples and explanations.

    Change-Id: I228ec8e52161b424242bb7bb133418dc8b73925a
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 9f8f3d2757788477656b1d25a3055ae11d97cee4
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Sat Jan 12 16:23:18 2013 +0200

    block: row: Aggregate row_queue parameters to one structure

    Each ROW queue has several parameters whose default values are
    defined in separate arrays. This patch aggregates all the default
    values into one array. The values in question are:
    - is idling enabled for the queue
    - queue quantum
    - can the queue notify on urgent request

    Change-Id: I3821b0a042542295069b340406a16b1000873ec6
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit d84ad45f3077661cab5984cd2fb7d5ef2ff06e39
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Sat Jan 12 16:21:47 2013 +0200

    block: row: fix sysfs functions - idle_time conversion

    idle_time was updated to be stored in msec instead of jiffies, so
    there is no need to convert the value when reading it from the user
    or displaying it.
    Change-Id: I58e074b204e90a90536d32199ac668112966e9cf
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 202b21e9daf7b8a097f97f764bb4ad4712c75fa7
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Sat Jan 12 16:21:12 2013 +0200

    block: row: Insert dispatch_quantum into struct row_queue

    There is really no point in keeping the dispatch quantum of a queue
    outside of it. By inserting it into the row_queue structure we spare
    an extra level in accessing it.

    Change-Id: Ic77571818b643e71f9aafbb2ca93d0a92158b199
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 58ca84f091faa6ff8c4f567b158be5d38f9a5c58
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Sun Jan 13 22:04:59 2013 +0200

    block: row: Add some debug information on ROW queues

    1. Add a counter for the number of requests on the queue.
    2. Add a function to print queue status (the number of requests
       currently on the queue and the number of already dispatched
       requests in the current dispatch cycle).

    Change-Id: I1e98b9ca33853e6e6a8ddc53240f6cd6981e6024
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 1bbb2c7ada5a647cab1f2306458d6cf9b821ddf7
Author: Subhash Jadavani <subhashj@codeaurora.org>
Date:   Thu Jan 10 02:15:13 2013 +0530

    block: blk-merge: don't merge the pages with non-contiguous descriptors

    blk_rq_map_sg() merges physically contiguous pages into the same
    scatter-gather node without checking whether their page descriptors
    are contiguous. When dma_map_sg() is later called on the scatter-
    gather list, it takes the base page pointer of each node (one by
    one) and iterates through all of the pages in that node by
    incrementing the base page pointer, on the assumption that
    physically contiguous pages have contiguous page descriptor
    addresses. This may not be true if the SPARSEMEM config is enabled,
    so we may end up referring to an invalid page descriptor.

    The following table shows an example of physically contiguous pages
    whose page descriptor addresses are non-contiguous.
        -------------------------------------------
        | Page Descriptor    | Physical Address   |
        -------------------------------------------
        | 0xc1e43fdc         | 0xdffff000         |
        | 0xc2052000         | 0xe0000000         |
        -------------------------------------------

    With this patch, the relevant blk-merge functions also check
    whether the physically contiguous pages have contiguous page
    descriptor addresses. If not, these pages are separated into
    different scatter-gather nodes.

    CRs-Fixed: 392141
    Change-Id: I3601565e5569a69f06fb3af99061c4d4c23af241
    Signed-off-by: Subhash Jadavani <subhashj@codeaurora.org>

    Conflicts:
        block/blk-merge.c

commit 9a9b428480c932ef8434d8b9bd3b7bafdcac3f84
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Dec 20 19:23:58 2012 +0200

    row: Add support for urgent request handling

    This patch adds support for handling urgent requests. A ROW queue
    can be marked as "urgent", so that if it was un-served in the last
    dispatch cycle and a request was added to it, it will trigger
    issuing an urgent-request notification to the block device driver.
    The block device driver may choose to stop the transmission of the
    currently ongoing request in order to handle the urgent one. For
    example, a long WRITE may be stopped to handle an urgent READ. This
    decreases READ latency.

    Change-Id: I84954c13f5e3b1b5caeadc9fe1f9aa21208cb35e
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 8d5ec526b7e70307d3c4ce587b714349f44c0be8
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Dec 6 13:17:19 2012 +0200

    block: row: fix idling mechanism in ROW

    This patch addresses the following issues found in the ROW idling
    mechanism:
    1. Fix the delay passed to queue_delayed_work (pass the actual
       delay and not the time when to start the work).
    2. Change the idle time and the idling-trigger frequency to be HZ
       dependent (instead of using msecs_to_jiffies()).
    3.
       Destroy idle_workqueue() in queue_exit.

    Change-Id: If86513ad6b4be44fb7a860f29bd2127197d8d5bf
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

    Conflicts:
        block/row-iosched.c

commit c26a95811462b9ba8eca23b4ba2150e7b660ca40
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Oct 30 08:33:06 2012 +0200

    row: Adding support for reinsert already dispatched req

    Add support for reinserting an already dispatched request back into
    the scheduler's internal data structures. The request will be
    reinserted back to the (head of the) queue it was dispatched from,
    as if it had never been dispatched.

    Change-Id: I70954df300774409c25b5821465fb3aa33d8feb5
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit a1a6f09cae0149d935bcea3f20d4acb6556d68f9
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Dec 4 16:04:15 2012 +0200

    block: Add API for urgent request handling

    This patch adds support in the block & elevator layers for handling
    urgent requests. The decision whether a request is urgent or not is
    taken by the scheduler. Urgent request notification is passed to the
    underlying block device driver (eMMC for example). The block device
    driver may decide to interrupt the currently running low priority
    request to serve the new urgent request. By doing so, READ latency
    is greatly reduced in read&write collision scenarios.

    Note that if the current scheduler doesn't implement the urgent
    request mechanism, this code path is never activated.

    Change-Id: I8aa74b9b45c0d3a2221bd4e82ea76eb4103e7cfa
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

    Conflicts:
        block/blk-core.c

commit 4e907d9d6079629d6ce61fbdfb1a629d3587e176
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Tue Dec 4 15:54:43 2012 +0200

    block: Add support for reinsert a dispatched req

    Add support for reinserting a dispatched request back into the
    scheduler's internal data structures.
    This capability is used by the device driver when it chooses to
    interrupt the current request transmission and execute another
    (more urgent) pending request. For example: interrupting a long
    write in order to handle a pending read. The device driver
    re-inserts the remaining write request back into the scheduler, to
    be rescheduled for transmission later on.

    Add an API for verifying whether the current scheduler supports the
    reinsert-requests mechanism. If the reinsert mechanism isn't
    supported by the scheduler, this code path will never be activated.

    Change-Id: I5c982a66b651ebf544aae60063ac8a340d79e67f
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit 0675c27faab797f7149893b84cc357aadb37c697
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Mon Oct 15 20:56:02 2012 +0200

    block: ROW: Fix forced dispatch

    This patch fixes forced dispatch in the ROW scheduling algorithm.
    When the dispatch function is called with the forced flag on, we
    can't delay the dispatch of the requests that are in the scheduler
    queues. Thus, when dispatch is called with forced turned on, we
    need to cancel idling, or not to idle at all.

    Change-Id: I3aa0da33ad7b59c0731c696f1392b48525b52ddc
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

commit ce6acf59662d1bbe5663a64aef9fe1695b8bbe1b
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date:   Thu Sep 20 10:46:10 2012 +0300

    block: Adding ROW scheduling algorithm

    This patch adds the implementation of a new scheduling algorithm -
    ROW. The policy of this algorithm is to prioritize READ requests
    over WRITE requests as much as possible without starving the WRITE
    requests.

    Change-Id: I4ed52ea21d43b0e7c0769b2599779a3d3869c519
    Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

Signed-off-by: Tkkg1994 <luca.grifo@outlook.com>
Diffstat (limited to 'block/blk-core.c')
-rw-r--r--  block/blk-core.c  36
1 file changed, 32 insertions(+), 4 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 14a419c50..40cb3916c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -307,16 +307,20 @@ inline void __blk_run_queue_uncond(struct request_queue *q)
* number of active request_fn invocations such that blk_drain_queue()
* can wait until all these request_fn calls have finished.
*/
- q->request_fn_active++;
+
if (!q->notified_urgent &&
q->elevator->type->ops.elevator_is_urgent_fn &&
q->urgent_request_fn &&
q->elevator->type->ops.elevator_is_urgent_fn(q)) {
q->notified_urgent = true;
+ q->request_fn_active++;
q->urgent_request_fn(q);
- } else
+ q->request_fn_active--;
+ } else {
+ q->request_fn_active++;
q->request_fn(q);
- q->request_fn_active--;
+ q->request_fn_active--;
+ }
}
/**
@@ -1226,6 +1230,16 @@ void blk_requeue_request(struct request_queue *q, struct request *rq)
BUG_ON(blk_queued_rq(rq));
+ if (rq->cmd_flags & REQ_URGENT) {
+ /*
+ * It's not compliant with the design to re-insert
+ * urgent requests. We want to be able to track this
+ * down.
+ */
+ pr_err("%s(): requeueing an URGENT request", __func__);
+ WARN_ON(!q->dispatched_urgent);
+ q->dispatched_urgent = false;
+ }
elv_requeue_request(q, rq);
}
EXPORT_SYMBOL(blk_requeue_request);
@@ -1249,10 +1263,20 @@ int blk_reinsert_request(struct request_queue *q, struct request *rq)
blk_clear_rq_complete(rq);
trace_block_rq_requeue(q, rq);
- if (blk_rq_tagged(rq))
+ if (rq->cmd_flags & REQ_QUEUED)
blk_queue_end_tag(q, rq);
BUG_ON(blk_queued_rq(rq));
+ if (rq->cmd_flags & REQ_URGENT) {
+ /*
+ * It's not compliant with the design to re-insert
+ * urgent requests. We want to be able to track this
+ * down.
+ */
+ pr_err("%s(): reinserting an URGENT request", __func__);
+ WARN_ON(!q->dispatched_urgent);
+ q->dispatched_urgent = false;
+ }
return elv_reinsert_request(q, rq);
}
@@ -2226,6 +2250,10 @@ struct request *blk_peek_request(struct request_queue *q)
* not be passed by new incoming requests
*/
rq->cmd_flags |= REQ_STARTED;
+ if (rq->cmd_flags & REQ_URGENT) {
+ WARN_ON(q->dispatched_urgent);
+ q->dispatched_urgent = true;
+ }
trace_block_rq_issue(q, rq);
}