This project is mirrored from https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git.
  1. 27 Aug, 2021 2 commits
  2. 25 Aug, 2021 2 commits
    • locking/rtmutex: Dequeue waiter on ww_mutex deadlock · 37e8abff
      Thomas Gleixner authored
      The rt_mutex-based ww_mutex variant queues the new waiter first in the
      lock's rbtree before evaluating the ww_mutex-specific conditions which
      might decide that the waiter should back out. This check and conditional
      exit happen before the waiter is enqueued into the PI chain.
      
      The failure handling at the call site assumes that the waiter, if it is
      the topmost waiter on the lock, is queued in the PI chain, and then
      proceeds to adjust the unmodified PI chain, which results in rbtree
      corruption.
      
      Dequeue the waiter from the lock's waiter tree in the ww_mutex error exit
      path to prevent this; a simplified sketch of that error path follows this
      entry.
      
      Fixes: add46132 ("locking/rtmutex: Extend the rtmutex core to support ww_mutex")
      Reported-by: Sebastian Siewior <bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20210825102454.042280541@linutronix.de
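      For illustration, the following is a minimal userspace sketch of the
      "enqueue first, back out on error" pattern this fix restores. It is not
      the kernel code: demo_lock, demo_waiter, demo_ww_check() and the plain
      linked list are hypothetical stand-ins for the rtmutex waiter rbtree,
      the PI chain handling and the ww_mutex kill check.

      #include <errno.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct demo_waiter {
              struct demo_waiter *next;
              int prio;
      };

      struct demo_lock {
              struct demo_waiter *waiters;    /* stand-in for lock->waiters */
      };

      static void demo_enqueue(struct demo_lock *lock, struct demo_waiter *w)
      {
              w->next = lock->waiters;
              lock->waiters = w;
      }

      static void demo_dequeue(struct demo_lock *lock, struct demo_waiter *w)
      {
              struct demo_waiter **p = &lock->waiters;

              while (*p && *p != w)
                      p = &(*p)->next;
              if (*p)
                      *p = w->next;
      }

      /* Stand-in for the ww_mutex check that may tell the waiter to back out. */
      static int demo_ww_check(const struct demo_waiter *w)
      {
              return w->prio < 0 ? -EDEADLK : 0;
      }

      static int demo_block_on_lock(struct demo_lock *lock, struct demo_waiter *w,
                                    bool ww_ctx)
      {
              demo_enqueue(lock, w);          /* waiter is queued before the check */

              if (ww_ctx) {
                      int res = demo_ww_check(w);

                      if (res) {
                              /*
                               * The point of the fix: back the waiter out of
                               * the lock's waiter store before returning, so
                               * the caller's failure handling never finds a
                               * stale entry to adjust.
                               */
                              demo_dequeue(lock, w);
                              return res;
                      }
              }
              return 0;
      }

      int main(void)
      {
              struct demo_lock lock = { .waiters = NULL };
              struct demo_waiter w = { .next = NULL, .prio = -1 };

              if (demo_block_on_lock(&lock, &w, true))
                      printf("waiter backed out, waiters empty: %s\n",
                             lock.waiters == NULL ? "yes" : "no");
              return 0;
      }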
    • locking/rtmutex: Dont dereference waiter lockless · c3123c43
      Thomas Gleixner authored
      The new rt_mutex_spin_on_owner() loop checks whether the spinning waiter
      is still the top waiter on the lock by utilizing rt_mutex_top_waiter(),
      which is broken because that function contains a sanity check that
      dereferences the top waiter pointer to check whether the waiter belongs
      to the lock. That's wrong in the lockless spinwait case:
      
       CPU 0                                            CPU 1
       rt_mutex_lock(lock)                              rt_mutex_lock(lock);
         queue(waiter0)
         waiter0 == rt_mutex_top_waiter(lock)
         rt_mutex_spin_on_owner(lock, waiter0) {        queue(waiter1)
                                                        waiter1 == rt_mutex_top_waiter(lock)
                                                        ...
           top_waiter = rt_mutex_top_waiter(lock)
             leftmost = rb_first_cached(&lock->waiters);
                                                        -> signal
                                                        dequeue(waiter1)
                                                        destroy(waiter1)
             w = rb_entry(leftmost, ....)
             BUG_ON(w->lock != lock)    <- UAF
      
      The BUG_ON() is correct for the case where the caller holds lock->wait_lock
      which guarantees that the leftmost waiter entry cannot vanish. For the
      lockless spinwait case it's broken.
      
      Create a new helper function which avoids the pointer dereference and
      just compares the leftmost entry pointer with current's waiter pointer to
      validate that current is still eligible for spinning; a sketch of that
      comparison follows this entry.
      
      Fixes: 992caf7f ("locking/rtmutex: Add adaptive spinwait mechanism")
      Reported-by: Sebastian Siewior <bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20210825102453.981720644@linutronix.de
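      For illustration, the following is a minimal userspace sketch of the
      dereference-free check described above. It is not the kernel helper: in
      the kernel the leftmost node comes from rb_first_cached(&lock->waiters)
      and the node is embedded in struct rt_mutex_waiter; here demo_node,
      demo_lock and demo_waiter are hypothetical stand-ins. The point is that
      comparing the cached leftmost node pointer against the address of the
      spinner's own node needs no dereference, so it stays safe even if the
      previous top waiter has been dequeued and freed concurrently.

      #include <stdbool.h>
      #include <stdio.h>

      struct demo_node {
              struct demo_node *left, *right;
      };

      struct demo_waiter {
              struct demo_node node;          /* embedded rbtree-node stand-in */
              int prio;
      };

      struct demo_lock {
              /* stand-in for the cached rb_first_cached(&lock->waiters) result */
              struct demo_node *leftmost;
      };

      /*
       * Pure pointer comparison: nothing behind 'leftmost' is dereferenced.
       * Answers "is this waiter still the top (leftmost) waiter?".
       */
      static bool demo_waiter_is_top_waiter(const struct demo_lock *lock,
                                            const struct demo_waiter *waiter)
      {
              return lock->leftmost == &waiter->node;
      }

      int main(void)
      {
              struct demo_waiter w0 = { .prio = 10 };
              struct demo_lock lock = { .leftmost = &w0.node };

              printf("w0 is top waiter: %s\n",
                     demo_waiter_is_top_waiter(&lock, &w0) ? "yes" : "no");

              /* Another CPU enqueues a higher-priority waiter; it becomes leftmost. */
              struct demo_waiter w1 = { .prio = 5 };
              lock.leftmost = &w1.node;

              printf("w0 is top waiter after that: %s\n",
                     demo_waiter_is_top_waiter(&lock, &w0) ? "yes" : "no");
              return 0;
      }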
  3. 17 Aug, 2021 16 commits
  4. 10 Aug, 2021 1 commit
  5. 18 Jun, 2021 1 commit
  6. 29 Mar, 2021 12 commits
  7. 22 Mar, 2021 1 commit
    • locking: Fix typos in comments · e2db7592
      Ingo Molnar authored
      Fix ~16 single-word typos in locking code comments.
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. 11 Mar, 2021 1 commit
    • kernel/futex: Kill rt_mutex_next_owner() · 9a4b99fc
      Davidlohr Bueso authored
      Update wake_futex_pi() and kill the call altogether. This is possible because:
      
      (i) The fixup_owner() case in which the pi_mutex was stolen from the
      signaled, enqueued top waiter, which fails to trylock and does not see a
      current owner of the rtmutex but needs to acknowledge a non-enqueued
      higher-priority waiter, which is the other alternative. This used to be
      handled by rt_mutex_next_owner(), which guaranteed that the 'newowner'
      passed to fixup_pi_state_owner() was never nil. Nowadays the logic is
      handled by an EAGAIN loop, without the need for rt_mutex_next_owner().
      Specifically:
      
          c1e2f0ea (futex: Avoid violating the 10th rule of futex)
          9f5d1c33 (futex: Handle transient "ownerless" rtmutex state correctly)
      
      (ii) rt_mutex_next_owner() and rt_mutex_top_waiter() are semantically
      equivalent, as of:
      
          c28d62cf ("locking/rtmutex: Handle non enqueued waiters gracefully in remove_waiter()")
      
      So instead of keeping the call around, just use the good old
      rt_mutex_top_waiter(). No change in semantics; a simplified sketch of the
      substitution follows this entry.
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20210226175029.50335-1-dave@stgolabs.net
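      For illustration, the following is a minimal userspace sketch of the
      substitution described above: the wake path derives the new owner
      directly from the top waiter instead of going through a separate "next
      owner" helper. The demo_* types and pick_new_owner() are hypothetical
      stand-ins, not the kernel's futex/rtmutex code.

      #include <stdio.h>

      struct demo_task {
              const char *comm;
      };

      struct demo_waiter {
              struct demo_task *task;
      };

      struct demo_pi_mutex {
              /* stand-in for what rt_mutex_top_waiter() would return */
              struct demo_waiter *top_waiter;
      };

      static struct demo_waiter *demo_top_waiter(struct demo_pi_mutex *lock)
      {
              return lock->top_waiter;
      }

      /*
       * The new owner is simply the task of the top waiter; no separate
       * "next owner" lookup is needed.
       */
      static struct demo_task *pick_new_owner(struct demo_pi_mutex *lock)
      {
              struct demo_waiter *top = demo_top_waiter(lock);

              return top ? top->task : NULL;
      }

      int main(void)
      {
              struct demo_task t = { .comm = "top-waiter-task" };
              struct demo_waiter w = { .task = &t };
              struct demo_pi_mutex pi_mutex = { .top_waiter = &w };

              struct demo_task *new_owner = pick_new_owner(&pi_mutex);

              printf("new owner: %s\n", new_owner ? new_owner->comm : "none");
              return 0;
      }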
  9. 26 Feb, 2021 1 commit
  10. 17 Feb, 2021 1 commit
  11. 28 Jan, 2021 1 commit
  12. 26 Jan, 2021 1 commit