mm: drain TLB batching against CMA pages
MM uses a TLB batching scheme for memory unmapping: the scheme batches
the page freeing together with the TLB flush.

The problem with CMA is that a process can be scheduled out while it
still holds refcounts on the batched pages.

	CPU 0                                   CPU 1
	do_madvise/munmap/exit_mmap
	  zap_pte_range
	    __tlb_remove_page

	..                                      cma_alloc starts
	sched out
	                                        migration keeps failing since
	after 1 sec                             the page refcount is elevated

	sched in
	..
	tlb_flush
	free_pages_and_swap_cache
	page refcount is zero, finally
	                                        page migration succeeds
	                                        cma_alloc returns

If the process on CPU 0 is a low-priority task, the CMA allocation
latency on CPU 1 is at the mercy of the scheduler; when the allocating
task has higher priority, this is a priority inversion.

This patch fixes it by draining the TLB batch immediately (to release
those pages right away) whenever the batch contains CMA pages, and by
disabling preemption across the zap so the task cannot be scheduled out
while still holding them.

Bug: 238728493
Signed-off-by: Minchan Kim <[email protected]>
Change-Id: Ifdcd1be670129d59adc4c0aff9c00d8e4ede7fe1
Minchan Kim authored and TreeHugger Robot committed Oct 27, 2022
1 parent b4eaf5b commit da334ed
Showing 4 changed files with 42 additions and 2 deletions.
3 changes: 3 additions & 0 deletions drivers/soc/google/vh/include/mm.h
@@ -4,5 +4,8 @@
 #include <trace/hooks/mm.h>

 void vh_pagevec_drain(void *data, struct page *page, bool *ret);
+void vh_zap_pte_range_tlb_start(void *data, void *unused);
+void vh_zap_pte_range_tlb_force_flush(void *data, struct page *page, bool *flush);
+void vh_zap_pte_range_tlb_end(void *data, void *unused);

 #endif
2 changes: 1 addition & 1 deletion drivers/soc/google/vh/kernel/mm/Makefile
@@ -2,4 +2,4 @@

 # vendor mm module
 obj-$(CONFIG_VH_MM) += vh_mm.o
-vh_mm-y += vh_mm_init.o cma.o gup.o swap.o
+vh_mm-y += vh_mm_init.o cma.o gup.o swap.o memory.o
25 changes: 25 additions & 0 deletions drivers/soc/google/vh/kernel/mm/memory.c
@@ -0,0 +1,25 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* memory.c
+ *
+ * Android Vendor Hook Support
+ *
+ * Copyright 2022 Google LLC
+ */
+
+#include <linux/mm.h>
+
+void vh_zap_pte_range_tlb_start(void *data, void *unused)
+{
+	preempt_disable();
+}
+
+void vh_zap_pte_range_tlb_force_flush(void *data, struct page *page, bool *flush)
+{
+	if (is_migrate_cma_page(page))
+		*flush = true;
+}
+
+void vh_zap_pte_range_tlb_end(void *data, void *unused)
+{
+	preempt_enable();
+}
14 changes: 13 additions & 1 deletion drivers/soc/google/vh/kernel/mm/vh_mm_init.c
@@ -58,7 +58,19 @@ static int vh_mm_init(void)
 		return ret;

 	ret = register_trace_android_vh_pagevec_drain(
-			vh_pagevec_drain, NULL);
+				vh_pagevec_drain, NULL);
+	if (ret)
+		return ret;
+	ret = register_trace_android_vh_zap_pte_range_tlb_start(
+				vh_zap_pte_range_tlb_start, NULL);
+	if (ret)
+		return ret;
+	ret = register_trace_android_vh_zap_pte_range_tlb_force_flush(
+				vh_zap_pte_range_tlb_force_flush, NULL);
+	if (ret)
+		return ret;
+	ret = register_trace_android_vh_zap_pte_range_tlb_end(
+				vh_zap_pte_range_tlb_end, NULL);
 	return ret;
 }
 module_init(vh_mm_init);
