path: root/kernel/task_work.c
author    Will Deacon <will.deacon@arm.com> 2016-02-02 12:46:25 +0000
committer Mister Oyster <oysterized@gmail.com> 2017-04-11 10:59:40 +0200
commit    c5aea837100d3e6fcd213f832d8fb622086b68b0 (patch)
tree      a17bed9cbfa08f278e1cadd812522de671eeb568 /kernel/task_work.c
parent    84d0c597690e00e66f822b64ff33d3a7690485ca (diff)
arm64: lib: improve copy_page to deal with 128 bytes at a time
We want to avoid lots of different copy_page implementations, settling
for something that is "good enough" everywhere and hopefully easy to
understand and maintain whilst we're at it.

This patch reworks our copy_page implementation based on discussions
with Cavium on the list and benchmarking on Cortex-A processors so that:

- The loop is unrolled to copy 128 bytes per iteration

- The reads are offset so that we read from the next 128-byte block
  in the same iteration that we store the previous block

- Explicit prefetch instructions are removed for now, since they hurt
  performance on CPUs with hardware prefetching

- The loop exit condition is calculated at the start of the loop

Change-Id: I0d9f3bbe4efa2751f41432a3b4b299fbb0e494be
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: franciscofranco <franciscofranco.1990@gmail.com>
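The loop structure the patch describes can be illustrated with a small C
sketch. This is a hypothetical high-level analogue, not the actual arm64
assembly from the patch: it copies 128 bytes per iteration, issues the
reads for the next 128-byte block in the same iteration that stores the
previous one, and computes the loop exit bound once at the start. The
function name `copy_page_sketch` and the use of `memcpy` for the block
moves are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical C analogue of the unrolled copy_page loop. */
static void copy_page_sketch(void *dst, const void *src)
{
	const uint8_t *s = src;
	uint8_t *d = dst;
	/* Loop exit condition calculated once, at the start. */
	const uint8_t *end = s + PAGE_SIZE;
	uint8_t block[128];

	/* Prime the pipeline: read the first 128-byte block. */
	memcpy(block, s, sizeof(block));
	s += sizeof(block);

	while (s < end) {
		uint8_t next[128];

		/* Read from the next 128-byte block ... */
		memcpy(next, s, sizeof(next));
		/* ... in the same iteration that stores the previous one. */
		memcpy(d, block, sizeof(block));
		memcpy(block, next, sizeof(block));
		s += sizeof(block);
		d += sizeof(block);
	}
	/* Store the final block. */
	memcpy(d, block, sizeof(block));
}
```

In the real assembly the "blocks" live in general-purpose register pairs
loaded with `ldp`/stored with `stp` rather than stack buffers; the sketch
only shows the read-ahead structure of the loop.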
Diffstat (limited to 'kernel/task_work.c')
0 files changed, 0 insertions, 0 deletions