This project is mirrored from https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git.
  1. 28 May, 2016 1 commit
  2. 13 May, 2016 10 commits
    • MIPS: mm: Panic if an XPA kernel is run without RIXI · e56c7e18
      Paul Burton authored
      
      
      XPA kernels hardcode for the presence of RIXI - the PTE format & its
      handling presume RI & XI bits. Make this dependence explicit by panicking
      if we run on a system that violates it.
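
      The check itself is tiny. Below is a minimal standalone sketch of the
      idea, not the kernel's actual code: cpu_has_rixi normally comes from
      probing CP0 Config3, and check_xpa_prerequisites() is an illustrative
      name for wherever such a check would live.

          #include <stdio.h>
          #include <stdlib.h>

          /* Illustrative capability flag; the kernel derives this from the
           * CP0 Config3 register during CPU probing. 0 models hardware
           * without RI/XI. */
          static int cpu_has_rixi = 0;

          static void check_xpa_prerequisites(void)
          {
                  /* The XPA PTE layout presumes the RI/XI bits exist, so
                   * refuse to continue rather than silently misinterpret
                   * PTEs. */
                  if (!cpu_has_rixi) {
                          fprintf(stderr, "panic: XPA kernel requires RIXI support\n");
                          exit(1);
                  }
          }

          int main(void)
          {
                  check_xpa_prerequisites();
                  puts("RIXI present, XPA PTE format is safe to use");
                  return 0;
          }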
      
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13125/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      e56c7e18
    • MIPS: mm: Don't do MTHC0 if XPA not present · 4b6f99d3
      James Hogan authored
      Performing an MTHC0 instruction without XPA being present will trigger a
      reserved instruction exception; therefore, conditionalise the use of this
      instruction when building TLB handlers (build_update_entries()), and in
      __update_tlb().
      
      This allows an XPA kernel to run on non XPA hardware without that
      instruction implemented, just like it can run on XPA capable hardware
      without XPA in use (with the noxpa kernel argument) or with XPA not
      configured in hardware.
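
      A hedged sketch of the idea behind conditionalising the emitted code
      follows; the emit() helper and the flag are stand-ins for the kernel's
      uasm-based handler builder, not its real API.

          #include <stdio.h>

          /* Illustrative flag: whether the CPU implements XPA (and thus MTHC0). */
          static int cpu_has_xpa = 0;

          /* Stand-in for the instruction emitter used while assembling a handler. */
          static void emit(const char *insn)
          {
                  printf("emit: %s\n", insn);
          }

          static void build_entrylo_update(void)
          {
                  emit("mtc0  k0, EntryLo0");          /* low 32 bits: always written */
                  if (cpu_has_xpa)
                          emit("mthc0 k0, EntryLo0");  /* high bits: only if XPA exists */
          }

          int main(void)
          {
                  build_entrylo_update();
                  return 0;
          }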
      
      [paul.burton@imgtec.com:
        - Rebase atop other TLB work.
        - Add "mm" to subject.
        - Handle the __kmap_pgprot case.]
      
      Fixes: c5b36783 ("MIPS: Add support for XPA.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13124/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      4b6f99d3
    • MIPS: mm: Simplify build_update_entries · 2caa89b4
      Paul Burton authored
      
      
      We can simplify build_update_entries by handling the 36-bit physical
      addressing on MIPS32 case together with the general case: use the
      pte_off_ variables in all cases & handle the trivial
      _PAGE_GLOBAL_SHIFT == 0 case in build_convert_pte_to_entrylo. This
      leaves XPA as the only special case.
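
      A standalone sketch of that trivial case; convert_pte_to_entrylo() here
      is an illustrative C model, whereas the real build_convert_pte_to_entrylo
      emits (or omits) a rotate instruction while assembling the handler.

          #include <stdint.h>
          #include <stdio.h>

          /* C model of the conversion: rotate the PTE right by the global-bit
           * shift. When the shift is 0 there is nothing to do, so no rotate
           * needs to be emitted at all -- the trivial case mentioned above. */
          static uint32_t convert_pte_to_entrylo(uint32_t pte, unsigned int global_shift)
          {
                  if (global_shift == 0)
                          return pte;                          /* trivial case */
                  return (pte >> global_shift) |
                         (pte << (32 - global_shift));         /* rotate right */
          }

          int main(void)
          {
                  printf("0x%08x\n", convert_pte_to_entrylo(0x12345678u, 0));
                  printf("0x%08x\n", convert_pte_to_entrylo(0x12345678u, 6));
                  return 0;
          }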
      
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13123/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      2caa89b4
    • MIPS: mm: Be more explicit about PTE mode bit handling · b4ebbb87
      Paul Burton authored
      
      
      The XPA case in iPTE_SW ORs software mode bits into the pte_low value
      (which is what actually ends up in the high 32 bits of EntryLo...). It
      does this presuming that only bits in the upper 16 bits of the 32 bit
      pte_low value will be set. Make this assumption explicit with a BUG_ON.
      
      A similar assumption is made for the hardware mode bits, which are ORed
      in with a single ori instruction. Make that assumption explicit with a
      BUG_ON too.
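
      A compile-and-run sketch of making such assumptions explicit; the mask
      values are placeholders, and assert() stands in for the kernel's
      BUG_ON().

          #include <assert.h>
          #include <stdio.h>

          /* Placeholder masks; the point is making the layout assumptions explicit. */
          #define SW_MODE_BITS 0xc0000000u   /* software bits: upper 16 bits only     */
          #define HW_MODE_BITS 0x00000ff0u   /* hardware bits: one 16-bit ori at most */

          int main(void)
          {
                  assert(!(SW_MODE_BITS & 0x0000ffffu));   /* upper-half assumption */
                  assert(!(HW_MODE_BITS & ~0x0000ffffu));  /* single-ori assumption */
                  puts("mode-bit layout assumptions hold");
                  return 0;
          }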
      
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13122/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      b4ebbb87
    • MIPS: mm: Pass scratch register through to iPTE_SW · bbeeffec
      Paul Burton authored
      
      
      Rather than hardcode a scratch register for the XPA case in iPTE_SW,
      pass one through from the work registers allocated by the caller. This
      allows for the XPA path to function correctly regardless of the work
      registers in use.
      
      Without doing this there are cases (where KScratch registers are
      unavailable) in which iPTE_SW will incorrectly clobber $1 despite it
      already being in use for the PTE or PTE pointer.
      
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13121/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      bbeeffec
    • MIPS: mm: Don't clobber $1 on XPA TLB refill · f3832196
      James Hogan authored
      For XPA kernels build_update_entries() uses $1 (at) as a scratch
      register, but doesn't arrange for it to be preserved, so it will always
      be clobbered by the TLB refill exception. Although this register
      normally has a very short lifetime that doesn't cross memory accesses,
      TLB refills due to instruction fetches (either on a page boundary or
      after preemption) could clobber live data, and it's easy to reproduce
      the clobber with a little bit of assembler code.
      
      Note that the use of a hardware page table walker will partly mask the
      problem, as the TLB refill handler will not always be invoked.
      
      This is fixed by avoiding the use of the extra scratch register. The
      pte_high parts (going into the lower half of the EntryLo registers) are
      loaded and manipulated separately so as to keep the PTE pointer around
      for the other halves (instead of storing in the scratch register), and
      the pte_low parts (going into the high half of the EntryLo registers)
      are masked with 0x00ffffff using an ext instruction (instead of loading
      0x00ffffff into the scratch register and AND'ing).
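
      The ext-based masking can be modelled in C as a plain bitfield extract;
      the sketch below is illustrative only and just contrasts the two
      approaches on a sample value.

          #include <stdint.h>
          #include <stdio.h>

          /* Extract "size" bits starting at "pos", which is what the MIPS ext
           * instruction does in one step, with no constant held in a register. */
          static uint32_t ext_bits(uint32_t val, unsigned int pos, unsigned int size)
          {
                  return (val >> pos) & ((1u << size) - 1);
          }

          int main(void)
          {
                  uint32_t pte_low = 0xabcdef12u;

                  /* Old approach: li scratch, 0x00ffffff ; and val, val, scratch */
                  uint32_t with_scratch = pte_low & 0x00ffffffu;

                  /* New approach: ext val, val, 0, 24 -- no scratch register needed */
                  uint32_t with_ext = ext_bits(pte_low, 0, 24);

                  printf("0x%08x 0x%08x\n", with_scratch, with_ext);
                  return 0;
          }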
      
      [paul.burton@imgtec.com:
        - Rebase atop other TLB work.
        - Use ext instead of an sll, srl sequence.
        - Use cpu_has_xpa instead of #ifdefs.
        - Modify commit subject to include "mm".]
      
      Fixes: c5b36783 ("MIPS: Add support for XPA.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13120/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      f3832196
    • MIPS: mm: Fix MIPS32 36b physical addressing (alchemy, netlogic) · 7b2cb64f
      Paul Burton authored
      There are 2 distinct cases in which a kernel for a MIPS32 CPU
      (CONFIG_CPU_MIPS32=y) may use 64 bit physical addresses
      (CONFIG_PHYS_ADDR_T_64BIT=y):
      
        - 36 bit physical addressing as used by RMI Alchemy & Netlogic XLP/XLR
          CPUs.
      
        - MIPS32r5 eXtended Physical Addressing (XPA).
      
      These 2 cases are distinct in that they require different behaviour from
      the kernel - the EntryLo registers have different formats. Until Linux
      v4.1 we only supported the first case, with code conditional upon the 2
      aforementioned Kconfig variables being set. Commit c5b36783 ("MIPS:
      Add support for XPA.") added support for the second case, but did so by
      modifying the code that existed for the first case rather than treating
      the 2 cases as distinct. Since the EntryLo registers have different
      formats this breaks the 36 bit Alchemy/XLP/XLR case. Fix this by
      splitting the 2 cases, with XPA cases now being conditional upon
      CONFIG_XPA and the non-XPA case matching the code as it existed prior to
      commit c5b36783 ("MIPS: Add support for XPA.").
      
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reported-by: Manuel Lauss <manuel.lauss@gmail.com>
      Tested-by: Manuel Lauss <manuel.lauss@gmail.com>
      Fixes: c5b36783 ("MIPS: Add support for XPA.")
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: stable@vger.kernel.org # v4.1+
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13119/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      7b2cb64f
    • MIPS: mm: Standardise on _PAGE_NO_READ, drop _PAGE_READ · 780602d7
      Paul Burton authored
      Ever since support for RI/XI was implemented by commit 6dd9344c
      ("MIPS: Implement Read Inhibit/eXecute Inhibit") we've had a mixture of
      _PAGE_READ & _PAGE_NO_READ bits. Rather than keep both around, switch
      away from using _PAGE_READ to determine page presence & instead invert
      the use to _PAGE_NO_READ. Wherever we formerly had no definition for
      _PAGE_NO_READ, change what was _PAGE_READ to _PAGE_NO_READ. The end
      result is that we consistently use _PAGE_NO_READ to determine whether a
      page is readable, regardless of whether RI/XI is implemented.
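
      A small sketch of the inverted convention; the bit position is a
      placeholder, the point being that readability is tested the same way
      whether or not the CPU implements RI/XI.

          #include <stdint.h>
          #include <stdio.h>

          #define PAGE_NO_READ (1u << 5)   /* placeholder bit position */

          /* A page is readable unless the no-read bit is set. */
          static int pte_is_readable(uint32_t pte)
          {
                  return !(pte & PAGE_NO_READ);
          }

          int main(void)
          {
                  printf("%d %d\n", pte_is_readable(0), pte_is_readable(PAGE_NO_READ));
                  return 0;
          }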
      
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13116/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      780602d7
    • MIPS: Fix HTW config on XPA kernel without LPA enabled · 14bc2414
      James Hogan authored
      The hardware page table walker (HTW) configuration is broken on XPA
      kernels where XPA couldn't be enabled (either nohtw or the hardware
      doesn't support it). This is because the PWSize.PTEW field (PTE width)
      was only set to 8 bytes (an extra shift of 1) in config_htw_params() if
      PageGrain.ELPA (enable large physical addressing) is set. On an XPA
      kernel though the size of PTEs is fixed at 8 bytes regardless of whether
      XPA could actually be enabled.
      
      Fix the initialisation of this field based on sizeof(pte_t) instead.
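
      A standalone sketch of deriving the PTEW field from the PTE size rather
      than from PageGrain.ELPA; the encoding here (an extra shift of
      log2(sizeof(pte_t)) - 2) matches the "8 bytes = extra shift of 1"
      description above but is otherwise illustrative.

          #include <stdio.h>

          typedef unsigned long long pte_t;   /* illustrative 8-byte PTE */

          static unsigned int ptew_from_pte_size(void)
          {
                  unsigned int shift = 0;
                  unsigned int size = (unsigned int)sizeof(pte_t);

                  /* 0 for 4-byte PTEs, 1 for 8-byte PTEs, i.e. log2(size) - 2 */
                  while ((1u << (shift + 2)) < size)
                          shift++;
                  return shift;
          }

          int main(void)
          {
                  printf("PWSize.PTEW = %u\n", ptew_from_pte_size());
                  return 0;
          }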
      
      Fixes: c5b36783 ("MIPS: Add support for XPA.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Steven J. Hill <sjhill@realitydiluted.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13113/
      
      
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      14bc2414
    • MIPS: Loongson-3: Fast TLB refill handler · 380cd582
      Huacai Chen authored
      
      
      Loongson-3A R2 has pwbase/pwfield/pwsize/pwctl registers in CP0 (this
      is very similar to HTW) and lwdir/lwpte/lddir/ldpte instructions which
      can be used for fast TLB refill.
      
      [ralf@linux-mips.org: Resolve conflict.]
      
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Steven J. Hill <sjhill@realitydiluted.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/12754/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      380cd582
  3. 03 Apr, 2016 1 commit
  4. 24 Jan, 2016 1 commit
  5. 16 Jan, 2016 1 commit
    • mips, thp: remove infrastructure for handling splitting PMDs · b2787370
      Kirill A. Shutemov authored
      
      
      With new refcounting we don't need to mark PMDs splitting.  Let's drop
      code to handle this.
      
      pmdp_splitting_flush() is not needed either: on splitting a PMD we will do
      pmdp_clear_flush() + set_pte_at().  pmdp_clear_flush() will do IPI as
      needed for fast_gup.
      
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b2787370
  6. 11 Nov, 2015 5 commits
  7. 21 Jun, 2015 3 commits
  8. 16 Jun, 2015 1 commit
  9. 10 Apr, 2015 1 commit
  10. 01 Apr, 2015 1 commit
  11. 19 Mar, 2015 1 commit
  12. 18 Mar, 2015 1 commit
    • MIPS: Rearrange PTE bits into fixed positions. · be0c37c9
      Steven J. Hill authored
      
      
      This patch rearranges the PTE bits into fixed positions for R2
      and later cores. In the past, the TLB handling code did runtime
      checking of RI/XI and adjusted the shifts and rotates in order
      to fit the largest PFN value into the PTE. The checking now
      occurs when building the TLB handler, thus eliminating those
      checks. These new arrangements also define the largest possible
      PFN value that can fit in the PTE. HUGE page support is only
      available for 64-bit cores. Layouts of the PTE bits are now:
      
         64-bit, R1 or earlier:     CCC D V G [S H] M A W R P
         32-bit, R1 or earlier:     CCC D V G M A W R P
         64-bit, R2 or later:       CCC D V G RI/R XI [S H] M A W P
         32-bit, R2 or later:       CCC D V G RI/R XI M A W P
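
      The effect of fixed positions can be sketched as follows: the shift and
      rotate amounts become constants chosen once while the handler is built,
      instead of being re-derived from RI/XI support at run time. The numbers
      below are placeholders, not the real layouts.

          #include <stdio.h>

          static int cpu_has_rixi = 1;   /* known when the handler is assembled */

          static unsigned int pick_pfn_shift_at_build_time(void)
          {
                  return cpu_has_rixi ? 8 : 6;   /* placeholder shift amounts */
          }

          int main(void)
          {
                  /* Chosen once, while building the TLB handler ... */
                  unsigned int pfn_shift = pick_pfn_shift_at_build_time();

                  /* ... so the refill fast path just uses a constant. */
                  printf("handler built with PFN shift %u\n", pfn_shift);
                  return 0;
          }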
      
      [ralf@linux-mips.org: Fix another build error *rant* *rant*]
      
      Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/9353/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      be0c37c9
  13. 17 Feb, 2015 1 commit
  14. 16 Feb, 2015 1 commit
  15. 27 Nov, 2014 1 commit
  16. 24 Nov, 2014 1 commit
  17. 22 Oct, 2014 1 commit
    • MIPS: tlbex: Properly fix HUGE TLB Refill exception handler · 9e0f162a
      David Daney authored
      In commit 8393c524 (MIPS: tlbex: Fix a missing statement for
      HUGETLB), the TLB Refill handler was fixed so that non-OCTEON targets
      would work properly with huge pages.  The change was incorrect in that
      it broke the OCTEON case.
      
      The problem is shown here:
      
          xxx0:	df7a0000 	ld	k0,0(k1)
          .
          .
          .
          xxxc0:	df610000 	ld	at,0(k1)
          xxxc4:	335a0ff0 	andi	k0,k0,0xff0
          xxxc8:	e825ffcd 	bbit1	at,0x5,0x0
          xxxcc:	003ad82d 	daddu	k1,at,k0
          .
          .
          .
      
      In the non-octeon case there is a destructive test for the huge PTE
      bit, and then at 0, $k0 is reloaded (that is what the 8393c524 patch
      added).
      
      In the octeon case, we modify k1 in the branch delay slot, but we
      never need k0 again, so the new load is not needed, but since k1 is
      modified, if we do the load, we load from a garbage location and then
      get a nested TLB Refill, which is seen in userspace as either SIGBUS
      or SIGSEGV (depending on the garbage).
      
      The real fix is to only do this reloading if it is needed, and never
      where it is harmful.
      
      Signed-off-by: David Daney <david.daney@cavium.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: stable@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/8151/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      9e0f162a
  18. 01 Aug, 2014 3 commits
    • MIPS: Use dedicated exception handler if CPU supports RI/XI exceptions · 5890f70f
      Leonid Yegoshin authored
      
      
      Use the regular tlb_do_page_fault_0 (no write) handler to handle
      the RI and XI exceptions. Also skip the RI/XI validation check
      on TLB load handler since it's redundant when the CPU has
      unique RI/XI exceptions.
      
      Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/7339/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      5890f70f
    • MIPS: mm: Use the Hardware Page Table Walker if the core supports it · f1014d1b
      Markos Chandras authored
      
      
      The Hardware Page Table Walker aims to speed up TLB refill exceptions
      by handling them at the hardware level instead of having a software
      TLB refill handler. However, a TLB refill exception can still be
      thrown in certain cases, such as synchronous exceptions, or address
      translation or memory errors during the HTW operation. As a result,
      the HTW must not be considered a complete replacement for the software
      TLB refill handler, but rather a fast path for it.
      For HTW to work, the PWBase register must contain the task's page
      global directory address so the HTW will kick in on TLB refill
      exceptions.
      
      Due to HTW being a separate engine embedded deep in the CPU pipeline,
      we need to restart the HTW every time a PTE changes, to avoid the HTW
      fetching an old entry from the page tables. It's also necessary to
      restart the HTW on context switches to prevent it from fetching a
      page from the previous process. Finally, since the HTW uses the
      EntryHi register to write translations into the TLB, it's necessary
      to stop the HTW whenever EntryHi changes (e.g. for TLB probe
      operations) and re-enable it afterwards.
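
      The EntryHi interaction can be sketched as a stop/start pairing around
      any code that borrows EntryHi; htw_stop()/htw_start() below are
      illustrative stubs rather than the kernel's implementation.

          #include <stdio.h>

          static void htw_stop(void)  { puts("HTW paused"); }
          static void htw_start(void) { puts("HTW restarted"); }

          static void tlb_probe_example(unsigned long vaddr)
          {
                  htw_stop();
                  /* EntryHi now temporarily holds the probed address rather than
                   * the current ASID/VPN, so the walker must not run meanwhile. */
                  printf("write EntryHi = 0x%lx, issue tlbp\n", vaddr & ~0x1fffUL);
                  htw_start();
          }

          int main(void)
          {
                  tlb_probe_example(0x00401000UL);
                  return 0;
          }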
      
      == Performance ==
      
      The following trivial test was used to measure the performance of the
      HTW. Using the same root filesystem, the following command was used
      to measure the number of tlb refill handler executions with and
      without (using 'nohtw' kernel parameter) HTW support.  The kernel was
      modified to use a scratch register as a counter for the TLB refill
      exceptions.
      
      find /usr -type f -exec ls -lh {} \;
      
      HTW Enabled:
      TLB refill exceptions: 12306
      
      HTW Disabled:
      TLB refill exceptions: 17805
      
      Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Patchwork: https://patchwork.linux-mips.org/patch/7336/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      f1014d1b
    • MIPS: bugfix: missed cache flush of TLB refill handler · 1062080a
      Leonid Yegoshin authored
      Commit 1d40cfcd ("Avoid SMP cacheflushes"), by Ralf Baechle, dated
      Fri Jul 15 15:23:23 2005 +0000, whose log reads:

          Avoid SMP cacheflushes.  This is a minor optimization of startup but
          will also avoid smp_call_function from doing stupid things when called
          from a CPU that is not yet marked online.

      missed an appropriate cache flush of the TLB refill handler, because at
      that time the handler lived at the fixed location CAC_BASE. Nowadays the
      refill handler placed at the EBASE vector is no longer at that location
      and can be allocated elsewhere in memory, so it needs an I-cache sync
      just like the other TLB exception vectors.
      
      Besides that, a new function, local_flush_icache_range(), was
      introduced to avoid SMP cacheflushes.
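
      A minimal sketch of the resulting rule: after copying handler code into
      place, synchronise the I-cache for exactly that range before it may be
      executed. The flush function below is a printing stand-in, not the real
      MIPS cache routine.

          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>

          static void local_flush_icache_range_sketch(uintptr_t start, uintptr_t end)
          {
                  printf("sync I-cache for [0x%lx, 0x%lx)\n",
                         (unsigned long)start, (unsigned long)end);
          }

          int main(void)
          {
                  static const unsigned char handler_template[128];
                  static unsigned char handler_copy[128];

                  /* Copy the handler to its run-time home, then make the new
                   * code visible to instruction fetch before executing it. */
                  memcpy(handler_copy, handler_template, sizeof(handler_copy));
                  local_flush_icache_range_sketch((uintptr_t)handler_copy,
                                                  (uintptr_t)handler_copy +
                                                  sizeof(handler_copy));
                  return 0;
          }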
      
      Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: paul.gortmaker@windriver.com
      Cc: jchandra@broadcom.com
      Cc: linux-kernel@vger.kernel.org
      Cc: david.daney@cavium.com
      Patchwork: https://patchwork.linux-mips.org/patch/7312/
      
      
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      1062080a
  19. 30 Jul, 2014 1 commit
  20. 02 Jun, 2014 1 commit
  21. 30 May, 2014 1 commit
  22. 14 May, 2014 1 commit
  23. 31 Mar, 2014 1 commit