| author | Tatyana Brokhman <tlinder@codeaurora.org> | 2012-12-04 16:04:15 +0200 |
|---|---|---|
| committer | Moyster <oysterized@gmail.com> | 2016-08-26 20:07:39 +0200 |
| commit | be8fae6f6fbf2ac2bfdf21b571333b3e98f7c5a9 (patch) | |
| tree | b09f8a26bb2223d9d990e75a39d5a9d5dd99d545 /block/blk-core.c | |
| parent | 82356c71fd5b6b4ace46e4f31c318cc68657a598 (diff) | |
block: Add API for urgent request handling
This patch adds support in the block and elevator layers for handling
urgent requests. The decision whether a request is urgent is made
by the scheduler. The urgent-request notification is passed to the underlying
block device driver (eMMC, for example), which may decide to
interrupt the currently running low-priority request to serve the new
urgent request. Doing so greatly reduces READ latency in read/write
collision scenarios.
Note that if the current scheduler doesn't implement the urgent request
mechanism, this code path is never activated.
Change-Id: I8aa74b9b45c0d3a2221bd4e82ea76eb4103e7cfa
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
Signed-off-by: Stefan Guendhoer <stefan@guendhoer.com>
Diffstat (limited to 'block/blk-core.c')
| -rw-r--r-- | block/blk-core.c | 26 |
1 file changed, 24 insertions(+), 2 deletions(-)
```diff
diff --git a/block/blk-core.c b/block/blk-core.c
index 570819b04..9f671cd6c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -312,7 +312,14 @@ inline void __blk_run_queue_uncond(struct request_queue *q)
 	 * can wait until all these request_fn calls have finished.
 	 */
 	q->request_fn_active++;
-	q->request_fn(q);
+	if (!q->notified_urgent &&
+		q->elevator->type->ops.elevator_is_urgent_fn &&
+		q->urgent_request_fn &&
+		q->elevator->type->ops.elevator_is_urgent_fn(q)) {
+		q->notified_urgent = true;
+		q->urgent_request_fn(q);
+	} else
+		q->request_fn(q);
 	q->request_fn_active--;
 }
 
@@ -323,6 +330,12 @@ inline void __blk_run_queue_uncond(struct request_queue *q)
  * Description:
  *    See @blk_run_queue. This variant must be called with the queue lock
  *    held and interrupts disabled.
+ *    Device driver will be notified of an urgent request
+ *    pending under the following conditions:
+ *    1. The driver and the current scheduler support urgent request handling
+ *    2. There is an urgent request pending in the scheduler
+ *    3. There isn't already an urgent request in flight, meaning previously
+ *       notified urgent request completed (!q->notified_urgent)
  */
 void __blk_run_queue(struct request_queue *q)
 {
@@ -2351,8 +2364,17 @@ struct request *blk_fetch_request(struct request_queue *q)
 	struct request *rq;
 
 	rq = blk_peek_request(q);
-	if (rq)
+	if (rq) {
+		/*
+		 * Assumption: the next request fetched from scheduler after we
+		 * notified "urgent request pending" - will be the urgent one
+		 */
+		if (q->notified_urgent && !q->dispatched_urgent) {
+			q->dispatched_urgent = true;
+			(void)blk_mark_rq_urgent(rq);
+		}
 		blk_start_request(rq);
+	}
 	return rq;
 }
 EXPORT_SYMBOL(blk_fetch_request);
```
