This project is mirrored from https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git.
- May 25, 2011
-
-
Amerigo Wang authored
There is no CONFIG_WORKQUEUE_DEBUGFS any more, so this code is dead. Signed-off-by:
WANG Cong <amwang@redhat.com> Cc: David Howells <dhowells@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Stephen Wilson authored
In show_numa_map() we collect statistics into a numa_maps structure. Since the number of NUMA nodes can be very large, this structure is not a candidate for stack allocation. Instead of going through a kmalloc()+kfree() cycle each time show_numa_map() is invoked, perform the allocation just once when /proc/pid/numa_maps is opened. Performing the allocation when numa_maps is opened, and thus before a reference to the target task's mm is taken, eliminates a potential stalemate condition in the oom-killer as originally described by Hugh Dickins: ... imagine what happens if the system is out of memory, and the mm we're looking at is selected for killing by the OOM killer: while we wait in __get_free_page for more memory, no memory is freed from the selected mm because it cannot reach exit_mmap while we hold that reference. Signed-off-by:
Stephen Wilson <wilsons@start.ca> Reviewed-by:
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Hugh Dickins <hughd@google.com> Cc: David Rientjes <rientjes@google.com> Cc: Lee Schermerhorn <lee.schermerhorn@hp.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Christoph Lameter <cl@linux-foundation.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
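A minimal sketch of the allocate-once-at-open pattern described in the entry above, expressed as a generic seq_file open handler. The names (numa_maps_sketch, numa_maps_open_sketch) and the single_open() wiring are illustrative assumptions, not the actual fs/proc code; the matching release path, which would free the buffer, is omitted.

#include <linux/fs.h>
#include <linux/numa.h>
#include <linux/seq_file.h>
#include <linux/slab.h>

/* Per-open statistics buffer: one slot per NUMA node, too large for the stack. */
struct numa_maps_sketch {
	unsigned long node_pages[MAX_NUMNODES];
};

static int show_numa_map_sketch(struct seq_file *m, void *v)
{
	struct numa_maps_sketch *md = m->private;	/* reuse the per-open buffer, no kmalloc() here */

	seq_printf(m, "node0=%lu\n", md->node_pages[0]);
	return 0;
}

static int numa_maps_open_sketch(struct inode *inode, struct file *file)
{
	struct numa_maps_sketch *md;
	int ret;

	/* Allocate once, before any reference to the target task's mm is taken. */
	md = kzalloc(sizeof(*md), GFP_KERNEL);
	if (!md)
		return -ENOMEM;

	ret = single_open(file, show_numa_map_sketch, md);
	if (ret)
		kfree(md);
	return ret;
}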
-
Stephen Wilson authored
Now that mm/mempolicy.c is no longer implementing /proc/pid/numa_maps there is no need to export struct proc_maps_private to the world. Move it to fs/proc/internal.h instead. Signed-off-by:
Stephen Wilson <wilsons@start.ca> Reviewed-by:
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Hugh Dickins <hughd@google.com> Cc: David Rientjes <rientjes@google.com> Cc: Lee Schermerhorn <lee.schermerhorn@hp.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Christoph Lameter <cl@linux-foundation.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Stephen Wilson authored
Moving show_numa_map() from mempolicy.c to task_mmu.c solves several issues. - Having the show() operation "miles away" from the corresponding seq_file iteration operations is a maintenance burden. - The need to export ad hoc info like struct proc_maps_private is eliminated. - The implementation of show_numa_map() can be improved in a simple manner by cooperating with the other seq_file operations (start, stop, etc) -- something that would be messy to do without this change. Signed-off-by:
Stephen Wilson <wilsons@start.ca> Reviewed-by:
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Hugh Dickins <hughd@google.com> Cc: David Rientjes <rientjes@google.com> Cc: Lee Schermerhorn <lee.schermerhorn@hp.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Christoph Lameter <cl@linux-foundation.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Eric Paris authored
Implement generic xattrs for tmpfs filesystems. The Fedora project, while trying to replace suid apps with file capabilities, realized that tmpfs, which is used on the build systems, does not support file capabilities and thus cannot be used to build packages which use file capabilities. Xattrs are also needed for overlayfs. The xattr interface is a bit odd. If a filesystem does not implement any {get,set,list}xattr functions the VFS will call into some random LSM hooks and the running LSM can then implement some method for handling xattrs. SELinux for example provides a method to support security.selinux but no other security.* xattrs. As it stands today when one enables CONFIG_TMPFS_POSIX_ACL tmpfs will have xattr handler routines specifically to handle acls. Because of this tmpfs would lose the VFS/LSM helpers to support the running LSM. To make up for that tmpfs had stub functions that did nothing but call into the LSM hooks which implement the helpers. This new patch does not use the LSM fallback functions and instead just implements a native get/set/list xattr feature for the full security.* and trusted.* namespace like a normal filesystem. This means that tmpfs can now support both security.selinux and security.capability, which was not previously possible. The basic implementation is that I attach a struct shmem_xattr { struct list_head list; /* anchored by shmem_inode_info->xattr_list */ char *name; size_t size; char value[0]; } into the struct shmem_inode_info for each xattr that is set. This implementation could easily support the user.* namespace as well, except some care needs to be taken to prevent large amounts of unswappable memory being allocated for unprivileged users. [mszeredi@suse.cz: new config option, support trusted.*, support symlinks] Signed-off-by:
Eric Paris <eparis@redhat.com> Signed-off-by:
Miklos Szeredi <mszeredi@suse.cz> Acked-by:
Serge Hallyn <serge.hallyn@ubuntu.com> Tested-by:
Serge Hallyn <serge.hallyn@ubuntu.com> Cc: Kyle McMartin <kyle@mcmartin.ca> Acked-by:
Hugh Dickins <hughd@google.com> Tested-by:
Jordi Pujol <jordipujolp@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
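For readability, here is the structure quoted inline in the message above, laid out as it would appear in code; this is only a restatement of what the entry already shows, with comments added.

/* One node per xattr that is set, hung off shmem_inode_info->xattr_list. */
struct shmem_xattr {
	struct list_head list;	/* anchored by shmem_inode_info->xattr_list */
	char *name;		/* xattr name, e.g. "security.capability" */
	size_t size;		/* length of value[] */
	char value[0];		/* flexible array holding the xattr value */
};

Setting an xattr then amounts to allocating one of these with the value appended and linking it onto the owning inode's xattr_list, as the message describes.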
-
Ying Han authored
Change each shrinker's API by consolidating the existing parameters into a shrink_control struct. This will simplify adding further features without touching every shrinker. [akpm@linux-foundation.org: fix build] [akpm@linux-foundation.org: fix warning] [kosaki.motohiro@jp.fujitsu.com: fix up new shrinker API] [akpm@linux-foundation.org: fix xfs warning] [akpm@linux-foundation.org: update gfs2] Signed-off-by:
Ying Han <yinghan@google.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Minchan Kim <minchan.kim@gmail.com> Acked-by:
Pavel Emelyanov <xemul@openvz.org> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Mel Gorman <mel@csn.ul.ie> Acked-by:
Rik van Riel <riel@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave@linux.vnet.ibm.com> Cc: Steven Whitehouse <swhiteho@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
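A hedged sketch of what a shrinker looks like after this consolidation: the former per-call parameters arrive bundled in a shrink_control. The nr_to_scan and gfp_mask fields shown here are assumptions based on this series, and the cache being shrunk is entirely hypothetical.

#include <linux/kernel.h>
#include <linux/mm.h>

static unsigned long example_cache_objects;	/* size of a hypothetical cache */

static int example_cache_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
	if (sc->nr_to_scan == 0)
		return example_cache_objects;	/* query only: report the current size */

	if (!(sc->gfp_mask & __GFP_FS))
		return -1;			/* caller cannot recurse into filesystem code */

	/* pretend to evict up to nr_to_scan objects */
	example_cache_objects -= min(example_cache_objects,
				     (unsigned long)sc->nr_to_scan);
	return example_cache_objects;
}

static struct shrinker example_shrinker = {
	.shrink	= example_cache_shrink,
	.seeks	= DEFAULT_SEEKS,
};

/* register_shrinker(&example_shrinker) would hook this up at init time. */

The point of the struct is exactly what the message states: new fields can later be added to shrink_control without touching every shrinker again.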
-
Ying Han authored
Consolidate the existing parameters to shrink_slab() into a new shrink_control struct. This is needed later to pass the same struct to shrinkers. Signed-off-by:
Ying Han <yinghan@google.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Minchan Kim <minchan.kim@gmail.com> Acked-by:
Pavel Emelyanov <xemul@openvz.org> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Mel Gorman <mel@csn.ul.ie> Acked-by:
Rik van Riel <riel@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave@linux.vnet.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Peter Zijlstra authored
Straightforward conversion of i_mmap_lock to a mutex. Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by:
Hugh Dickins <hughd@google.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: David Miller <davem@davemloft.net> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Jeff Dike <jdike@addtoit.com> Cc: Richard Weinberger <richard@nod.at> Cc: Tony Luck <tony.luck@intel.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Namhyung Kim <namhyung@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Peter Zijlstra authored
Hugh says: "The only significant loser, I think, would be page reclaim (when concurrent with truncation): could spin for a long time waiting for the i_mmap_mutex it expects would soon be dropped?" Counterpoints: - cpu contention makes the spin stop (need_resched()) - zap pages should be freeing pages at a higher rate than reclaim ever can I think the simplification of the truncate code is definitely worth it. Effectively reverts: 2aa15890 ("mm: prevent concurrent unmap_mapping_range() on the same inode") and takes out the code that caused its problem. Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Reviewed-by:
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Hugh Dickins <hughd@google.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: David Miller <davem@davemloft.net> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Jeff Dike <jdike@addtoit.com> Cc: Richard Weinberger <richard@nod.at> Cc: Tony Luck <tony.luck@intel.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Namhyung Kim <namhyung@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Peter Zijlstra authored
Rework the existing mmu_gather infrastructure. The direct purpose of these patches was to allow preemptible mmu_gather, but even without that I think these patches provide an improvement to the status quo. The first 9 patches rework the mmu_gather infrastructure. For review purposes I've split them into generic and per-arch patches with the last of those a generic cleanup. The next patch provides generic RCU page-table freeing, and the followup is a patch converting s390 to use this. I've also got 4 patches from DaveM lined up (not included in this series) that use this to implement gup_fast() for sparc64. Then there is one patch that extends the generic mmu_gather batching. After that follow the mm preemptibility patches; these make part of the mm a lot more preemptible. They convert i_mmap_lock and anon_vma->lock to mutexes which, together with the mmu_gather rework, makes mmu_gather preemptible as well. Making i_mmap_lock a mutex also enables a clean-up of the truncate code. This also allows for preemptible mmu_notifiers, something that XPMEM I think wants. Furthermore, it removes the new and universally detested unmap_mutex. This patch: Remove the first obstacle towards a fully preemptible mmu_gather. The current scheme assumes mmu_gather is always done with preemption disabled and uses per-cpu storage for the page batches. Change this to try and allocate a page for batching and in case of failure, use a small on-stack array to make some progress. Preemptible mmu_gather is desired in general and usable once i_mmap_lock becomes a mutex. Doing it before the mutex conversion saves us from having to rework the code by moving the mmu_gather bits inside the pte_lock. Also avoid flushing the tlb batches from under the pte lock; this is useful even without the i_mmap_lock conversion as it significantly reduces pte lock hold times. [akpm@linux-foundation.org: fix comment typo] Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: David Miller <davem@davemloft.net> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Jeff Dike <jdike@addtoit.com> Cc: Richard Weinberger <richard@nod.at> Cc: Tony Luck <tony.luck@intel.com> Reviewed-by:
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Acked-by:
Hugh Dickins <hughd@google.com> Acked-by:
Mel Gorman <mel@csn.ul.ie> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Namhyung Kim <namhyung@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
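A generic sketch of the batching change this entry describes: try to allocate a page for the batch and, on failure, fall back to a small on-stack array so some progress is still made. The structure layout, sizes and GFP flags below are approximations, not the real mmu_gather.

#include <linux/gfp.h>
#include <linux/mm.h>

#define LOCAL_BATCH	8	/* on-stack fallback size; illustrative only */

struct batch_sketch {
	struct page	**pages;	/* either the allocated page or local[] */
	unsigned int	max;
	unsigned int	nr;
	struct page	*local[LOCAL_BATCH];
};

static void batch_init_sketch(struct batch_sketch *b)
{
	/* Try to grab a whole page for batching; don't sleep or warn if it fails. */
	unsigned long addr = __get_free_page(GFP_NOWAIT | __GFP_NOWARN);

	if (addr) {
		b->pages = (struct page **)addr;
		b->max = PAGE_SIZE / sizeof(struct page *);
	} else {
		/* Allocation failed: use the small on-stack array and flush more often. */
		b->pages = b->local;
		b->max = LOCAL_BATCH;
	}
	b->nr = 0;
}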
-
Michal Hocko authored
Currently we have expand_upwards exported while expand_downwards is accessible only via expand_stack or expand_stack_downwards. check_stack_guard_page is a nice example of the asymmetry. It uses expand_stack for VM_GROWSDOWN while expand_upwards is called for the VM_GROWSUP case. Let's clean this up by exporting both functions and making their names consistent. Let's use expand_{upwards,downwards} because expanding doesn't always involve stack manipulation (an example is ia64_do_page_fault which uses expand_upwards for registers backing store expansion). expand_downwards has to be defined for both CONFIG_STACK_GROWS{UP,DOWN} because get_arg_page calls the downwards version in the early process initialization phase for the growsup configuration. Signed-off-by:
Michal Hocko <mhocko@suse.cz> Acked-by:
Hugh Dickins <hughd@google.com> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- May 23, 2011
-
-
Eryu Guan authored
journal_start returns an ERR_PTR() value rather than NULL on failure. Cc: Jan Kara <jack@suse.cz> Signed-off-by:
Eryu Guan <guaneryu@gmail.com> Signed-off-by:
Jan Kara <jack@suse.cz>
-
Tejun Heo authored
02e35228 (block: rescan partitions on invalidated devices on -ENOMEDIA too) relocated partition rescan above explicit bd_set_size() to simplify the condition check. As rescan_partitions() does its own bdev size setting, this doesn't break anything; however, rescan_partitions() prints out the following messages when adjusting bdev size, which can be confusing. sda: detected capacity change from 0 to 146815737856 sdb: detected capacity change from 0 to 146815737856 This patch restores the original order and removes the warning messages. stable: Please apply together with 02e35228 (block: rescan partitions on invalidated devices on -ENOMEDIA too). Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Tony Luck <tony.luck@gmail.com> Tested-by:
Tony Luck <tony.luck@gmail.com> Cc: stable@kernel.org Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
David Teigland authored
Allow processes blocked on plock requests to be interrupted when they are killed. This leaves the problem of cleaning up the lock state in userspace. This has three parts: 1. Add a flag to unlock operations sent to userspace indicating the file is being closed. Userspace will then look for and clear any waiting plock operations that were abandoned by an interrupted process. 2. Queue an unlock-close operation (like in 1) to clean up userspace from an interrupted plock request. This is needed because the vfs will not send a cleanup-unlock if it sees no locks on the file, which it won't if the interrupted operation was the only one. 3. Do not use replies from userspace for unlock-close operations because they are unnecessary (they are just cleaning up for the process which did not make an unlock call). This also simplifies the new unlock-close generated from point 2. Signed-off-by:
David Teigland <teigland@redhat.com>
-
Thomas Gleixner authored
Peter is concerned about the extra scan of CLOCK_REALTIME_COS in the timer interrupt. Yes, I did not think about it, because the solution was so elegant. I didn't like the extra list in timerfd when it was proposed some time ago, but with an RCU-based list the list walk is less horrible than the original global lock, which was held over the list iteration. Requested-by:
Peter Zijlstra <peterz@infradead.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Peter Zijlstra <peterz@infradead.org>
-
Artem Bityutskiy authored
Switch to debugging using dynamic printk (pr_debug()). There is no good reason to carry custom debugging prints when there is such a cool and powerful generic dynamic printk infrastructure, see Documentation/dynamic-debug-howto.txt. With dynamic printks we can switch on/off individual prints, per-file, per-function and per-format messages. This means that instead of doing old-fashioned echo 1 > /sys/module/ubifs/parameters/debug_msgs to enable general messages, we can do: echo 'format "UBIFS DBG gen" +ptlf' > control to enable general messages and additionally ask the dynamic printk infrastructure to print process ID, line number and function name. So there is no reason to keep UBIFS-specific crud when there is a more powerful generic thing. Signed-off-by:
Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
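A hedged sketch of what the switch looks like at the macro level; dbg_gen_sketch() is an illustrative stand-in, not the actual UBIFS macro.

#include <linux/printk.h>

/*
 * Instead of gating messages on a private module parameter such as
 * debug_msgs, route them through pr_debug() so the dynamic debug
 * "control" file can enable them per file, per function or per format.
 */
#define dbg_gen_sketch(fmt, ...) \
	pr_debug("UBIFS DBG gen: " fmt "\n", ##__VA_ARGS__)

With CONFIG_DYNAMIC_DEBUG enabled, the echo command quoted in the message then matches these messages by their "UBIFS DBG gen" format prefix and can additionally print the process ID, line number and function name.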
-
- May 22, 2011
-
-
Heiko Carstens authored
Fixes this build error on s390 and probably other archs as well: fs/inode.c: In function 'new_inode': fs/inode.c:894:2: error: implicit declaration of function 'spin_lock_prefetch' Signed-off-by:
Heiko Carstens <heiko.carstens@de.ibm.com> [ Happens on architectures that don't define their own prefetch functions in <asm/processor.h>, and instead rely on the default ones in <linux/prefetch.h> - Linus] Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- May 21, 2011
-
-
Steven Whitehouse authored
The ail flush code has always relied upon log flushing to prevent it from spinning needlessly. This fixes it to wait on the last I/O request submitted (we don't need to wait for all of it) instead of either spinning with io_schedule or sleeping. As a result cpu usage of gfs2_logd is much reduced with certain workloads. Reported-by:
Abhijith Das <adas@redhat.com> Tested-by:
Abhijith Das <adas@redhat.com> Signed-off-by:
Steven Whitehouse <swhiteho@redhat.com>
-
Steven Whitehouse authored
The deallocation code for directories in GFS2 is largely divided into two parts. The first part deallocates any directory leaf blocks and marks the directory as being a regular file when that is complete. The second stage was identical to deallocating regular files. Regular files have their data blocks in a different address space to directories, and thus what would have been normal data blocks in a regular file (the hash table in a GFS2 directory) were deallocated correctly. However, a reference to these blocks was left in the journal (assuming of course that some previous activity had resulted in those blocks being in the journal or ail list). This patch uses the i_depth as a test of whether the inode is an exhash directory (we cannot test the inode type as that has already been changed to a regular file at this stage in deallocation). The original issue was reported by Chris Hertel as an issue he encountered while running bonnie++. Reported-by:
Christopher R. Hertel <crh@samba.org> Cc: Abhijith Das <adas@redhat.com> Signed-off-by:
Steven Whitehouse <swhiteho@redhat.com>
-
Erez Zadok authored
This solves a serious VFS-level bug in nested_symlink (which was rewritten from do_follow_link), and follows the order of depth tests that existed before. The bug triggers a BUG_ON in fs/namei.c:1381, when running racer with symlink and rename ops. Signed-off-by:
Erez Zadok <ezk@cs.sunysb.edu> Acked-by:
Miklos Szeredi <mszeredi@suse.cz> Cc: stable@kernel.org Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- May 20, 2011
-
-
Timo Warns authored
As Ben Hutchings discovered [1], the patch for CVE-2011-1017 (buffer overflow in ldm_frag_add) is not sufficient. The original patch in commit c340b1d6 ("fs/partitions/ldm.c: fix oops caused by corrupted partition table") does not consider that, for subsequent fragments, previously allocated memory is used. [1] http://lkml.org/lkml/2011/5/6/407 Reported-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Timo Warns <warns@pre-sense.de> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
Commit e66eed65 ("list: remove prefetching from regular list iterators") removed the include of prefetch.h from list.h, which uncovered several cases that had apparently relied on that rather obscure header file dependency. So this fixes things up a bit, using grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetchw*(' -- '*.[ch]') grep -L 'prefetchw*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]') to guide us in finding files that either need <linux/prefetch.h> inclusion, or have it despite not needing it. There are more of them around (mostly network drivers), but this gets many core ones. Reported-by:
Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Steve French authored
Fix to the earlier "Simplify invalidate part (try #6)" patch. That patch caused problems with connectathon test 5. Reviewed-by:
Jeff Layton <jlayton@samba.org> Signed-off-by:
Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by:
Steve French <sfrench@us.ibm.com>
-
Artem Bityutskiy authored
This is a minor fix for UBIFS kernel-doc comments - we forgot the "@" symbol for several 'struct ubifs_debug_info' members. Signed-off-by:
Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
-
- May 19, 2011
-
-
Dave Chinner authored
When allocating an extent that is long enough to consume the remaining free space in an AG, we need to ensure that the allocation leaves enough space in the AG for any subsequent bmap btree blocks that are needed to track the new extent. These have to be allocated in the same AG as we only reserve enough blocks in an allocation transaction for modification of the freespace trees in a single AG. xfs_alloc_fix_minleft() has been considering blocks on the AGFL as free blocks available for extent and bmbt block allocation, which is not correct - blocks on the AGFL are there exclusively for the use of the free space btrees. As a result, when minleft is less than the number of blocks on the AGFL, xfs_alloc_fix_minleft() does not trim the given extent to leave minleft blocks available for bmbt allocation, and hence we can fail allocation during bmbt record insertion. Signed-off-by:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Alex Elder <aelder@sgi.com>
-
Dave Chinner authored
When we free a vmapped buffer, we need to ensure the vmap address and length we free is the same as when it was allocated. In various places in the log code we change the memory the buffer is pointing to before issuing IO, but we never reset the buffer to point back to its original memory (or no memory, if that is the case for the buffer). As a result, when we free the buffer it points to memory that is owned by something else and attempts to unmap and free it. Because the range does not match any known mapped range, it can trigger BUG_ON() traps in the vmap code, and potentially corrupt the vmap area tracking. Fix this by always resetting these buffers to their original state before freeing them. Signed-off-by:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Alex Elder <aelder@sgi.com>
-
Dave Chinner authored
When the underlying inode buffer is locked and xfs_sync_inode_attr() is doing a non-blocking flush, xfs_iflush() can return EAGAIN. When this happens, clear the error rather than returning it to xfs_inode_ag_walk(), as returning EAGAIN will result in the AG walk delaying for a short while and trying again. This can result in background walks getting stuck on the one AG until inode buffer is unlocked by some other means. This behaviour was noticed when analysing event traces followed by code inspection and verification of the fix via further traces. Signed-off-by:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Alex Elder <aelder@sgi.com>
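A minimal sketch of the fix as the message describes it: a non-blocking flush may return EAGAIN when the inode buffer is held elsewhere, and that value is cleared rather than propagated so the AG walk is not delayed and retried needlessly. This is an approximation written in the positive-errno style XFS used at the time, not the exact xfs_sync_inode_attr() code.

static int sync_inode_attr_sketch(struct xfs_inode *ip, int flags)
{
	int error = xfs_iflush(ip, flags);	/* may return EAGAIN on a trylock flush */

	if (error == EAGAIN)
		error = 0;	/* locked inode buffer: let a later background pass retry */
	return error;
}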
-
Dave Chinner authored
Variables are ordered incorrectly in trace call. Signed-off-by:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Alex Elder <aelder@sgi.com>
-
Dave Chinner authored
The workqueue initialisation function is called twice when initialising the XFS subsystem. Remove the second initialisation call. Signed-off-by:
Dave Chinner <dchinner@redhat.com> Signed-off-by:
Alex Elder <aelder@sgi.com>
-
Joe Perches authored
xfs_alert_tag() can be defined using xfs_alert(), and thereby avoid using xfs_printk() altogether. This is the only remaining use of xfs_printk(), so changing it this way means xfs_printk() can simply be eliminated. Also add format checking to the non-debug inline function xfs_debug. Miscellaneous function prototype argument alignment. (Updated to delete the definition of xfs_printk(), which is no longer used or needed.) Signed-off-by:
Joe Perches <joe@perches.com> Signed-off-by:
Alex Elder <aelder@sgi.com>
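A hedged reconstruction of the idea: express the tag-aware alert in terms of xfs_alert() so the lower-level xfs_printk() helper is no longer needed. The panic-mask and BUG_ON() handling below is sketched from the description and may differ in detail from the actual macro.

#define xfs_alert_tag_sketch(mp, tag, fmt, ...)				\
({									\
	int do_panic = 0;						\
	if (xfs_panic_mask && (xfs_panic_mask & (tag))) {		\
		xfs_alert(mp, "Transforming an alert into a BUG.");	\
		do_panic = 1;						\
	}								\
	xfs_alert(mp, fmt, ##__VA_ARGS__);				\
	BUG_ON(do_panic);						\
})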
-
Steve French authored
Move extern for cifsConvertToUCS to different header to prevent following warning: CHECK fs/cifs/cifs_unicode.c fs/cifs/cifs_unicode.c:267:1: warning: symbol 'cifsConvertToUCS' was not declared. Should it be static? Signed-off-by:
Steve French <sfrench@us.ibm.com> Signed-off-by:
Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by:
Steve French <sfrench@us.ibm.com>
-
Steve French authored
Signed-off-by:
Steve French <sfrench@us.ibm.com>
-
Shirish Pargaonkar authored
Change idmap key name from cifs.cifs_idmap to cifs.idmap. Remove unused structure wksidarr and function match_sid(). Handle errors correctly in function init_cifs(). Signed-off-by:
Shirish Pargaonkar <shirishpargaonkar@gmail.com> Reviewed-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Steve French <sfrench@us.ibm.com>
-
Sean Finney authored
Previously mount options were copied and updated in the cifs_sb_info struct only when CONFIG_CIFS_DFS_UPCALL was enabled. Making this information generally available allows us to remove a number of ifdefs, extra function params, and temporary variables. Reviewed-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Sean Finney <seanius@seanius.net> Signed-off-by:
Steve French <sfrench@us.ibm.com>
-
Sean Finney authored
A relatively minor nit, but this also clarifies the "consensus" from the preceding comments: it is in fact better to try for the kstrdup early and clean up while cleaning up is still a simple thing to do. Reviewed-By:
Steve French <smfrench@gmail.com> Signed-off-by:
Sean Finney <seanius@seanius.net> Signed-off-by:
Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by:
Steve French <sfrench@us.ibm.com>
-
Sean Finney authored
With CONFIG_DFS_UPCALL enabled, maintain the submount options in cifs_sb->mountdata, simplifying the code just a bit as well as making corner-case allocation problems less likely. Reviewed-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Sean Finney <seanius@seanius.net> Signed-off-by:
Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by:
Steve French <sfrench@us.ibm.com>
-
Sean Finney authored
To keep strings passed to cifs_parse_mount_options re-usable (which is needed to clean up the DFS referral handling), tokenize a copy of the mount options instead. If values are needed from this tokenized string, they too must be duplicated (previously, some options were copied and others duplicated). Since we are not on the critical path and any cleanup is relatively easy, the extra memory usage shouldn't be a problem (and it is a bit simpler than trying to implement something smarter). Reviewed-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Sean Finney <seanius@seanius.net> Signed-off-by:
Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by:
Steve French <sfrench@us.ibm.com>
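A generic sketch of the technique this entry describes: duplicate the option string before tokenizing it, so the caller's copy stays intact and can be reused, for example when the mount is retried against a DFS referral. The names here are illustrative, not the cifs parser.

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>

static int parse_options_sketch(const char *options)
{
	char *opts, *cursor, *token;

	/* Work on a private copy so strsep() does not mangle the caller's string. */
	opts = kstrdup(options, GFP_KERNEL);
	if (!opts)
		return -ENOMEM;

	cursor = opts;
	while ((token = strsep(&cursor, ",")) != NULL) {
		if (!*token)
			continue;
		/* match "user=", "pass=", etc.; kstrdup() any value that must outlive parsing */
	}

	kfree(opts);
	return 0;
}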
-
Sean Finney authored
Windows 2008 CIFS servers do not always return PATH_NOT_COVERED when attempting to access a DFS share. Therefore, when checking for remote shares, unconditionally ask for a DFS referral for the UNC (w/out prepath) before continuing with previous behavior of attempting to access the UNC + prepath and checking for PATH_NOT_COVERED. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=31092 Reviewed-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Sean Finney <seanius@seanius.net> Signed-off-by:
Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by:
Steve French <sfrench@us.ibm.com>
-
Sean Finney authored
The logic behind the expansion of DFS referrals is now extracted from cifs_mount into a new static function, expand_dfs_referral. This will reduce duplicate code in upcoming commits. Reviewed-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Sean Finney <seanius@seanius.net> Signed-off-by:
Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by:
Steve French <sfrench@us.ibm.com>
-
Jeff Layton authored
It's a bad idea to have macro functions that reference variables more than once, as the arguments could have side effects. Turn BCC() into a static inlined function instead. While we're at it, make it return a void * to discourage anyone from dereferencing it as-is. Reported-and-acked-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Steve French <sfrench@us.ibm.com> Signed-off-by:
Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by:
Steve French <sfrench@us.ibm.com>
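A sketch of the macro-to-inline conversion this entry describes: the old BCC() macro expanded its argument more than once, while a static inline evaluates 'smb' exactly once and returns a void * so callers must cast deliberately. The header layout assumed here (byte count following the 2 * WordCount words of parameters) is a simplification.

static inline void *BCC_sketch(struct smb_hdr *smb)
{
	/* point just past the fixed header and the variable word block */
	return (void *)smb + sizeof(*smb) + 2 * smb->WordCount;
}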
-