mm, slub: convert kmem_cpu_slab protection to local_lock

Authored by Vlastimil Babka (commit 458e0c17)

Embed a local_lock into struct kmem_cache_cpu and use the irq-safe versions of
local_lock instead of plain local_irq_save/restore. On !PREEMPT_RT that is
equivalent, with better lockdep visibility. On PREEMPT_RT, where the local_lock
is a per-cpu spinlock and interrupts stay enabled, it means the protected
sections become preemptible.
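
A sketch of the shape of the change (field layout abbreviated; the before/after
pair shows the conversion pattern rather than a literal hunk from the patch):

  #include <linux/local_lock.h>

  struct kmem_cache_cpu {
          void **freelist;        /* pointer to next available object */
          unsigned long tid;      /* globally unique transaction id */
          struct page *page;      /* the slab we are allocating from */
          local_lock_t lock;      /* protects the fields above */
          /* ... */
  };

  /* before: plain irq disabling, invisible to lockdep */
  local_irq_save(flags);
  /* ... manipulate c->freelist / c->page ... */
  local_irq_restore(flags);

  /* after: equivalent on !PREEMPT_RT, a per-cpu spinlock on PREEMPT_RT */
  local_lock_irqsave(&s->cpu_slab->lock, flags);
  /* ... manipulate c->freelist / c->page ... */
  local_unlock_irqrestore(&s->cpu_slab->lock, flags);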
    
However, the cost on PREEMPT_RT is the loss of the lockless fast paths, which
only work with the cpu freelist. Those are designed to detect and recover from
being preempted by other conflicting operations (both fast and slow path), but
the slow path operations assume they cannot be preempted by a fast path
operation, which is guaranteed naturally with disabled irqs. With local locks
on PREEMPT_RT, the fast paths now also need to take the local lock to avoid
races.
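
For reference, the preemption detection in the lockless fast path works via a
per-cpu transaction id: every slow path operation bumps the tid, so a
conflicting operation makes the cmpxchg fail and the fast path retries. A
condensed sketch, following the names used in mm/slub.c (error handling and
barriers omitted):

  redo:
  tid = this_cpu_read(s->cpu_slab->tid);
  c = raw_cpu_ptr(s->cpu_slab);
  object = c->freelist;

  /* fails if another op on this cpu changed the freelist or the tid */
  if (unlikely(!this_cpu_cmpxchg_double(
                  s->cpu_slab->freelist, s->cpu_slab->tid,
                  object, tid,
                  get_freepointer(s, object), next_tid(tid))))
          goto redo;      /* preempted by a conflicting op; retry */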
    
In the allocation fastpath slab_alloc_node() we can simply defer to the
slowpath __slab_alloc(), which also works with the cpu freelist, but under the
local lock. In the free fastpath do_slab_free() we have to add a new
local-lock-protected version of freeing to the cpu freelist, as the existing
slowpath only works with the page freelist.
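
The new PREEMPT_RT branch of do_slab_free() then looks roughly as follows
(condensed): push the freed object(s) onto the cpu freelist under the local
lock, rechecking c->page because the task may have been preempted or migrated
before the lock was taken. Per the note below, this RT-only path uses plain
local_lock(), as interrupts need not be disabled there:

  local_lock(&s->cpu_slab->lock);
  c = this_cpu_ptr(s->cpu_slab);
  if (unlikely(page != c->page)) {
          local_unlock(&s->cpu_slab->lock);
          goto redo;
  }
  tid = c->tid;
  freelist = c->freelist;

  /* link the freed objects in front of the cpu freelist */
  set_freepointer(s, tail_obj, freelist);
  this_cpu_write(s->cpu_slab->freelist, head);
  this_cpu_write(s->cpu_slab->tid, next_tid(tid));

  local_unlock(&s->cpu_slab->lock);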
    
Also update the comment describing the locking scheme in SLUB to reflect the
changes done by this series.
    
[ Mike Galbraith <efault@gmx.de>: use local_lock() without irq in PREEMPT_RT
  scope; debugging of RT crashes resulting in put_cpu_partial() locking changes ]
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>