mm: Don't hog the CPU and zone lock in rmqueue_bulk()
There is noticeable scheduling latency and heavy zone lock contention
stemming from rmqueue_bulk's single hold of the zone lock while doing
its work, as seen with the preemptoff tracer. There's no actual need for
rmqueue_bulk() to hold the zone lock the entire time; it only does so
for supposed efficiency. As such, we can relax the zone lock and even
reschedule when IRQs are enabled in order to keep the scheduling delays
and zone lock contention at bay. Forward progress is still guaranteed,
as the zone lock can only be relaxed after page removal.

With this change, rmqueue_bulk() no longer appears as a serious offender
in the preemptoff tracer, and system latency is noticeably improved.

Signed-off-by: Sultan Alsawaf <[email protected]>
Signed-off-by: celtare21 <[email protected]>
kerneltoast authored and celtare21 committed Dec 16, 2022
1 parent 22ddd5b commit 6626acb
Showing 1 changed file with 18 additions and 5 deletions.
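
Before the diff itself, here is a rough illustration of the lock-break pattern the commit message describes: do a bounded amount of work under the lock, then drop it (and optionally reschedule) before re-acquiring, so other contenders and the scheduler are not starved. This is a minimal userspace sketch, not the kernel code below; batch_take(), pool_lock, pool_remaining and RELAX_EVERY are invented for the example, and sched_yield() loosely stands in for cond_resched().

/*
 * Illustrative only -- a userspace analogue of the lock-break pattern in this
 * commit, not kernel code.
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define POOL_SIZE   4096
#define RELAX_EVERY 64          /* drop the lock every 64 items taken */

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static int pool_remaining = POOL_SIZE;

/* Take up to 'count' items from the shared pool; return how many were taken. */
static int batch_take(int count)
{
        int taken = 0;

        pthread_mutex_lock(&pool_lock);
        while (taken < count && pool_remaining > 0) {
                pool_remaining--;
                taken++;

                /* Ease contention: briefly release the lock mid-batch. */
                if (taken < count && (taken % RELAX_EVERY) == 0) {
                        pthread_mutex_unlock(&pool_lock);
                        sched_yield();
                        pthread_mutex_lock(&pool_lock);
                }
        }
        pthread_mutex_unlock(&pool_lock);
        return taken;
}

int main(void)
{
        printf("took %d items\n", batch_take(1000));
        return 0;
}

Forward progress holds for the same reason as in the patch: items are only taken while the lock is held, and the lock is only released after work has already been claimed.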
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3036,15 +3036,16 @@ static inline struct page *__rmqueue_cma(struct zone *zone, unsigned int order,
 #endif
 
 /*
- * Obtain a specified number of elements from the buddy allocator, all under
- * a single hold of the lock, for efficiency. Add them to the supplied list.
- * Returns the number of new pages which were placed at *list.
+ * Obtain a specified number of elements from the buddy allocator, and relax the
+ * zone lock when needed. Add them to the supplied list. Returns the number of
+ * new pages which were placed at *list.
  */
 static int rmqueue_bulk(struct zone *zone, unsigned int order,
                         unsigned long count, struct list_head *list,
                         int migratetype, unsigned int alloc_flags)
 {
-        int i, alloced = 0;
+        const bool can_resched = !preempt_count() && !irqs_disabled();
+        int i, alloced = 0, last_mod = 0;
 
         spin_lock(&zone->lock);
         for (i = 0; i < count; ++i) {
@@ -3063,6 +3064,18 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
                 if (unlikely(page == NULL))
                         break;
 
+                /* Reschedule and ease the contention on the lock if needed */
+                if (i + 1 < count && ((can_resched && need_resched()) ||
+                                      spin_needbreak(&zone->lock))) {
+                        __mod_zone_page_state(zone, NR_FREE_PAGES,
+                                              -((i + 1 - last_mod) << order));
+                        last_mod = i + 1;
+                        spin_unlock(&zone->lock);
+                        if (can_resched)
+                                cond_resched();
+                        spin_lock(&zone->lock);
+                }
+
                 if (unlikely(check_pcp_refill(page)))
                         continue;
 
@@ -3089,7 +3102,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
          * on i. Do not confuse with 'alloced' which is the number of
          * pages added to the pcp list.
          */
-        __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
+        __mod_zone_page_state(zone, NR_FREE_PAGES, -((i - last_mod) << order));
         spin_unlock(&zone->lock);
         return alloced;
 }
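
A subtle part of the diff is the NR_FREE_PAGES bookkeeping: each time the lock is relaxed, the pages removed since the previous update are flushed via __mod_zone_page_state(), last_mod records how far the accounting has progressed, and the final update after the loop covers only the tail. The plain-C sketch below (not kernel code; the "relax every 16 pages" trigger is made up for the example) checks that the split updates add up to the same i << order that the single pre-patch update subtracted.

#include <assert.h>
#include <stdio.h>

int main(void)
{
        const unsigned int order = 0;
        const int count = 100;
        long decremented = 0;   /* total pages subtracted from the counter */
        int last_mod = 0;
        int i;

        for (i = 0; i < count; ++i) {
                /* Pretend the lock has to be relaxed every 16 pages. */
                if (i + 1 < count && (i + 1) % 16 == 0) {
                        decremented += (long)(i + 1 - last_mod) << order;
                        last_mod = i + 1;
                }
        }

        /* Final update after the loop, mirroring the patched code. */
        decremented += (long)(i - last_mod) << order;

        /* Must equal what the single pre-patch update would have subtracted. */
        assert(decremented == (long)i << order);
        printf("accounting matches: %ld pages\n", decremented);
        return 0;
}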
