| author | Vinayak Menon <vinmenon@codeaurora.org> | 2016-02-24 14:27:20 +0530 |
|---|---|---|
| committer | Moyster <oysterized@gmail.com> | 2019-05-03 12:57:32 +0200 |
| commit | 8e49cfea558c54bbc648314de117e3bd7f03cdb9 (patch) | |
| tree | 770f1ada40837f46d9fca56f8d48a4042148330d | |
| parent | 6899527530aee8f9add4b18c591286a59405e1ed (diff) | |
mm: fix cma accounting in zone_watermark_ok
Some cases were reported where atomic unmovable allocations of order 2
fail, but kswapd does not wake up. In those cases it was seen that,
when the zone_watermark_ok check is performed to decide whether to wake
kswapd, there were a lot of free CMA pages of order 2 and above. These
make the watermark check succeed, so kswapd is not woken. But since
atomic unmovable allocations cannot be satisfied from the CMA region,
further atomic allocations keep failing without kswapd attempting any
reclaim. Usually concurrent movable allocations trigger reclaim and
improve the situation, but the reported case was a network test in
which only atomic skb allocations were being attempted.
Change-Id: If953b8a8cfb0a5caa1fb63d3c032b194942f8091
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Prakash Gupta <guptap@codeaurora.org>
(cherry picked from commit ea934a2665d641ca879b2c374d06da64c832f00a)
| -rw-r--r-- | mm/page_alloc.c | 18 |
1 file changed, 11 insertions, 7 deletions
```diff
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 430956db0..ee0229558 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2005,11 +2005,7 @@ static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 {
 	/* free_pages my go negative - that's OK */
 	long min = mark;
-	long lowmem_reserve = z->lowmem_reserve[classzone_idx];
 	int o;
-#if !defined(CONFIG_CMA) || !defined(CONFIG_MTK_SVP) // SVP 15
-	long free_cma = 0;
-#endif
 
 	free_pages -= (1 << order) - 1;
 	if (alloc_flags & ALLOC_HIGH)
@@ -2021,12 +2017,12 @@ static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 #ifdef CONFIG_CMA
 	/* If allocation can't use CMA areas don't use free CMA pages */
 	if (!(alloc_flags & ALLOC_CMA))
-		free_cma = zone_page_state(z, NR_FREE_CMA_PAGES);
+		free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
 #endif
 #endif
 #if defined(CONFIG_CMA) && defined(CONFIG_MTK_SVP) // SVP 15
-	if (free_pages <= min + lowmem_reserve)
+	if (free_pages <= min + z->lowmem_reserve[classzone_idx])
 #else
 	if (free_pages - free_cma <= min + lowmem_reserve)
 #endif
@@ -2034,7 +2030,15 @@ static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 		return false;
 	for (o = 0; o < order; o++) {
 		/* At the next order, this order's pages become unavailable */
-		free_pages -= z->free_area[o].nr_free << o;
+		if (!(alloc_flags & ALLOC_CMA)) {
+			long free = z->free_area[o].nr_free -
+				z->free_area[o].nr_free_cma;
+			if (free < 0)
+				free = 0;
+			free_pages -= free << o;
+		} else {
+			free_pages -= z->free_area[o].nr_free << o;
+		}
 		/* Require fewer higher order pages to be free */
 		min >>= min_free_order_shift;
```
