| author | Minchan Kim <minchan@kernel.org> | 2015-09-08 15:04:49 -0700 |
|---|---|---|
| committer | Mister Oyster <oysterized@gmail.com> | 2017-09-25 21:01:26 +0200 |
| commit | 1155bac9ff72583791f266ff57373ff26299fd23 (patch) | |
| tree | 1721da01afbdf4e7f521b3343aa979bd5057eeeb | |
| parent | 651584a0ba6d3fe7da3bd597d7ed95214d7d8a8c (diff) | |
zsmalloc: use class->pages_per_zspage
There is no need to recalculate pages_per_zspage at runtime. Just use
class->pages_per_zspage to avoid unnecessary runtime overhead.
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| -rw-r--r-- | mm/zsmalloc.c | 5 |
1 file changed, 2 insertions, 3 deletions
```diff
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 02f93ae14..75cf11e84 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1714,7 +1714,7 @@ static unsigned long zs_can_compact(struct size_class *class)
 	obj_wasted /= get_maxobj_per_zspage(class->size,
 			class->pages_per_zspage);
 
-	return obj_wasted * get_pages_per_zspage(class->size);
+	return obj_wasted * class->pages_per_zspage;
 }
 
 static void __zs_compact(struct zs_pool *pool, struct size_class *class)
@@ -1752,8 +1752,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		putback_zspage(pool, class, dst_page);
 		if (putback_zspage(pool, class, src_page) == ZS_EMPTY)
-			pool->stats.pages_compacted +=
-				get_pages_per_zspage(class->size);
+			pool->stats.pages_compacted += class->pages_per_zspage;
 
 		spin_unlock(&class->lock);
 		cond_resched();
 		spin_lock(&class->lock);
```
