/*
 * Provide support for fcntl()'s F_GETLK, F_SETLK, and F_SETLKW calls.
 * Doug Evans (dje@spiff.uucp), August 07, 1992
 *
 * Deadlock detection added.
 * FIXME: one thing isn't handled yet:
 *	- mandatory locks (requires lots of changes elsewhere)
 * Kelly Carmichael (kelly@[142.24.8.65]), September 17, 1994.
 *
 * Miscellaneous edits, and a total rewrite of posix_lock_file() code.
 * Kai Petzke (wpp@marie.physik.tu-berlin.de), 1994
 *
 * Converted file_lock_table to a linked list from an array, which eliminates
 * the limits on how many active file locks are open.
 * Chad Page (pageone@netcom.com), November 27, 1994
 *
 * Removed dependency on file descriptors. dup()'ed file descriptors now
 * get the same locks as the original file descriptors, and a close() on
 * any file descriptor removes ALL the locks on the file for the current
 * process. Since locks still depend on the process id, locks are inherited
 * after an exec() but not after a fork(). This agrees with POSIX, and both
 * BSD and SVR4 practice.
 * Andy Walker (andy@lysaker.kvaerner.no), February 14, 1995
 *
 * Scrapped free list which is redundant now that we allocate locks
 * dynamically with kmalloc()/kfree().
 * Andy Walker (andy@lysaker.kvaerner.no), February 21, 1995
 *
 * Implemented two lock personalities - FL_FLOCK and FL_POSIX.
 *
 * FL_POSIX locks are created with calls to fcntl() and lockf() through the
 * fcntl() system call. They have the semantics described above.
 *
 * FL_FLOCK locks are created with calls to flock(), through the flock()
 * system call, which is new. Old C libraries implement flock() via fcntl()
 * and will continue to use the old, broken implementation.
 *
 * FL_FLOCK locks follow the 4.4 BSD flock() semantics. They are associated
 * with a file pointer (filp). As a result they can be shared by a parent
 * process and its children after a fork(). They are removed when the last
 * file descriptor referring to the file pointer is closed (unless explicitly
 * unlocked).
 *
 * FL_FLOCK locks never deadlock: an existing lock is always removed before
 * upgrading from shared to exclusive (or vice versa). When this happens
 * any processes blocked by the current lock are woken up and allowed to
 * run before the new lock is applied.
 * Andy Walker (andy@lysaker.kvaerner.no), June 09, 1995
 *
 * Removed some race conditions in flock_lock_file(), marked other possible
 * races. Just grep for FIXME to see them.
 * Dmitry Gorodchanin (pgmdsg@ibi.com), February 09, 1996.
 *
 * Addressed Dmitry's concerns. Deadlock checking is no longer recursive.
 * Lock allocation changed to GFP_ATOMIC as we can't afford to sleep
 * once we've checked for blocking and deadlocking.
 * Andy Walker (andy@lysaker.kvaerner.no), April 03, 1996.
 *
 * Initial implementation of mandatory locks. SunOS turned out to be
 * a rotten model, so I implemented the "obvious" semantics.
 * See 'Documentation/filesystems/mandatory-locking.txt' for details.
 * Andy Walker (andy@lysaker.kvaerner.no), April 06, 1996.
 *
 * Don't allow mandatory locks on mmap()'ed files. Added simple functions to
 * check if a file has mandatory locks, used by mmap(), open() and creat() to
 * see if the system call should be rejected. Ref. HP-UX/SunOS/Solaris
 * Reference Manual pages.
 * Andy Walker (andy@lysaker.kvaerner.no), April 09, 1996.
 *
 * Tidied up block list handling. Added '/proc/locks' interface.
 * Andy Walker (andy@lysaker.kvaerner.no), April 24, 1996.
 *
 * Fixed deadlock condition for pathological code that mixes calls to
 * flock() and fcntl().
 * Andy Walker (andy@lysaker.kvaerner.no), April 29, 1996.
 *
 * Allow only one type of locking scheme (FL_POSIX or FL_FLOCK) to be in use
 * for a given file at a time. Changed the CONFIG_LOCK_MANDATORY scheme to
 * guarantee sensible behaviour in the case where file system modules might
 * be compiled with different options than the kernel itself.
 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
 *
 * Added a couple of missing wake_up() calls. Thanks to Thomas Meckel
 * (Thomas.Meckel@mni.fh-giessen.de) for spotting this.
 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
 *
 * Changed FL_POSIX locks to use the block list in the same way as FL_FLOCK
 * locks. Changed process synchronisation to avoid dereferencing locks that
 * have already been freed.
 * Andy Walker (andy@lysaker.kvaerner.no), Sep 21, 1996.
 *
 * Made the block list a circular list to minimise searching in the list.
 * Andy Walker (andy@lysaker.kvaerner.no), Sep 25, 1996.
 *
 * Made mandatory locking a mount option. Default is not to allow mandatory
 * locking.
 * Andy Walker (andy@lysaker.kvaerner.no), Oct 04, 1996.
 *
 * Some adaptations for NFS support.
 * Olaf Kirch (okir@monad.swb.de), Dec 1996,
 *
 * Fixed /proc/locks interface so that we can't overrun the buffer we are handed.
 * Andy Walker (andy@lysaker.kvaerner.no), May 12, 1997.
 *
 * Use slab allocator instead of kmalloc/kfree.
 * Use generic list implementation from <linux/list.h>.
 * Sped up posix_locks_deadlock by only considering blocked locks.
 * Matthew Wilcox <willy@debian.org>, March, 2000.
 *
 * Leases and LOCK_MAND
 * Matthew Wilcox <willy@debian.org>, June, 2000.
 * Stephen Rothwell <sfr@canb.auug.org.au>, June, 2000.
 *
 * Locking conflicts and dependencies:
 * If multiple threads attempt to lock the same byte (or flock the same file)
 * only one can be granted the lock, and the others must wait their turn.
 * The first lock has been "applied" or "granted", the others are "waiting"
 * and are "blocked" by the "applied" lock.
 *
 * Waiting and applied locks are all kept in trees whose properties are:
 *
 *	- the root of a tree may be an applied or waiting lock.
 *	- every other node in the tree is a waiting lock that
 *	  conflicts with every ancestor of that node.
 *
 * Every such tree begins life as a waiting singleton which obviously
 * satisfies the above properties.
 *
 * The only ways we modify trees preserve these properties:
 *
 *	1. We may add a new leaf node, but only after first verifying that it
 *	   conflicts with all of its ancestors.
 *	2. We may remove the root of a tree, creating a new singleton
 *	   tree from the root and N new trees rooted in the immediate
 *	   children.
 *	3. If the root of a tree is not currently an applied lock, we may
 *	   apply it (if possible).
 *	4. We may upgrade the root of the tree (either extend its range,
 *	   or upgrade its entire range from read to write).
 *
 * When an applied lock is modified in a way that reduces or downgrades any
 * part of its range, we remove all its children (2 above). This particularly
 * happens when a lock is unlocked.
 *
 * For each of those child trees we "wake up" the thread which is
 * waiting for the lock so it can continue handling as follows: if the
 * root of the tree applies, we do so (3). If it doesn't, it must
 * conflict with some applied lock. We remove (wake up) all of its children
 * (2), and add it as a new leaf to the tree rooted in the applied
 * lock (1). We then repeat the process recursively with those
 * children.
 */
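/*
 * An illustrative userspace sketch of the API this file implements (not part
 * of this file's build; a minimal example of the standard POSIX interface):
 *
 *	#include <fcntl.h>
 *
 *	int lock_first_page(int fd)
 *	{
 *		struct flock fl = {
 *			.l_type   = F_WRLCK,	// exclusive byte-range lock
 *			.l_whence = SEEK_SET,
 *			.l_start  = 0,
 *			.l_len    = 4096,	// bytes [0, 4095]
 *		};
 *		// Blocks (becomes a "waiting" lock in the trees above) until
 *		// the range can be "applied", or fails with EDEADLK if
 *		// posix_locks_deadlock() detects a cycle.
 *		return fcntl(fd, F_SETLKW, &fl);
 *	}
 */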
#include <linux/capability.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/security.h>
#include <linux/slab.h>
#include <linux/syscalls.h>
#include <linux/time.h>
#include <linux/rcupdate.h>
#include <linux/pid_namespace.h>
#include <linux/hashtable.h>
#include <linux/percpu.h>

#define CREATE_TRACE_POINTS
#include <trace/events/filelock.h>

#include <linux/uaccess.h>
#define IS_POSIX(fl)	(fl->fl_flags & FL_POSIX)
#define IS_FLOCK(fl)	(fl->fl_flags & FL_FLOCK)
#define IS_LEASE(fl)	(fl->fl_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT))
#define IS_OFDLCK(fl)	(fl->fl_flags & FL_OFDLCK)
#define IS_REMOTELCK(fl)	(fl->fl_pid <= 0)
static bool lease_breaking(struct file_lock *fl)
{
	return fl->fl_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
}

static int target_leasetype(struct file_lock *fl)
{
	if (fl->fl_flags & FL_UNLOCK_PENDING)
		return F_UNLCK;
	if (fl->fl_flags & FL_DOWNGRADE_PENDING)
		return F_RDLCK;
	return fl->fl_type;
}

int leases_enable = 1;
int lease_break_time = 45;
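/*
 * Both of the above are tunable at runtime: they are exported as the fs
 * sysctls /proc/sys/fs/leases-enable and /proc/sys/fs/lease-break-time
 * (the latter is the number of seconds a lease holder gets to surrender
 * a conflicting lease once it has been notified of the break).
 */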
/*
 * The global file_lock_list is only used for displaying /proc/locks, so we
 * keep a list on each CPU, with each list protected by its own spinlock.
 * Global serialization is done using file_rwsem.
 *
 * Note that alterations to the list also require that the relevant flc_lock is
 * held.
 */
struct file_lock_list_struct {
	spinlock_t		lock;
	struct hlist_head	hlist;
};
static DEFINE_PER_CPU(struct file_lock_list_struct, file_lock_list);
DEFINE_STATIC_PERCPU_RWSEM(file_rwsem);
/*
 * The blocked_hash is used to find POSIX lock loops for deadlock detection.
 * It is protected by blocked_lock_lock.
 *
 * We hash locks by lockowner in order to optimize searching for the lock a
 * particular lockowner is waiting on.
 *
 * FIXME: make this value scale via some heuristic? We generally will want more
 * buckets when we have more lockowners holding locks, but that's a little
 * difficult to determine without knowing what the workload will look like.
 */
#define BLOCKED_HASH_BITS	7
static DEFINE_HASHTABLE(blocked_hash, BLOCKED_HASH_BITS);

/*
 * This lock protects the blocked_hash. Generally, if you're accessing it, you
 * want to be holding this lock.
 *
 * In addition, it also protects the fl->fl_blocked_requests list, and the
 * fl->fl_blocker pointer for file_lock structures that are acting as lock
 * requests (in contrast to those that are acting as records of acquired locks).
 *
 * Note that when we acquire this lock in order to change the above fields,
 * we often hold the flc_lock as well. In certain cases, when reading the fields
 * protected by this lock, we can skip acquiring it iff we already hold the
 * flc_lock.
 */
static DEFINE_SPINLOCK(blocked_lock_lock);
static struct kmem_cache *flctx_cache __read_mostly;
static struct kmem_cache *filelock_cache __read_mostly;
static struct file_lock_context *
locks_get_lock_context(struct inode *inode, int type)
{
	struct file_lock_context *ctx;

	/* paired with cmpxchg() below */
	ctx = smp_load_acquire(&inode->i_flctx);
	if (likely(ctx) || type == F_UNLCK)
		goto out;

	ctx = kmem_cache_alloc(flctx_cache, GFP_KERNEL);
	if (!ctx)
		goto out;

	spin_lock_init(&ctx->flc_lock);
	INIT_LIST_HEAD(&ctx->flc_flock);
	INIT_LIST_HEAD(&ctx->flc_posix);
	INIT_LIST_HEAD(&ctx->flc_lease);

	/*
	 * Assign the pointer if it's not already assigned. If it is, then
	 * free the context we just allocated.
	 */
	if (cmpxchg(&inode->i_flctx, NULL, ctx)) {
		kmem_cache_free(flctx_cache, ctx);
		ctx = smp_load_acquire(&inode->i_flctx);
	}
out:
	trace_locks_get_lock_context(inode, type, ctx);
	return ctx;
}
static void
locks_dump_ctx_list(struct list_head *list, char *list_type)
{
	struct file_lock *fl;

	list_for_each_entry(fl, list, fl_list) {
		pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
			list_type, fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
	}
}
static void
locks_check_ctx_lists(struct inode *inode)
{
	struct file_lock_context *ctx = inode->i_flctx;

	if (unlikely(!list_empty(&ctx->flc_flock) ||
		     !list_empty(&ctx->flc_posix) ||
		     !list_empty(&ctx->flc_lease))) {
		pr_warn("Leaked locks on dev=0x%x:0x%x ino=0x%lx:\n",
			MAJOR(inode->i_sb->s_dev), MINOR(inode->i_sb->s_dev),
			inode->i_ino);
		locks_dump_ctx_list(&ctx->flc_flock, "FLOCK");
		locks_dump_ctx_list(&ctx->flc_posix, "POSIX");
		locks_dump_ctx_list(&ctx->flc_lease, "LEASE");
	}
}
static void
locks_check_ctx_file_list(struct file *filp, struct list_head *list,
			  char *list_type)
{
	struct file_lock *fl;
	struct inode *inode = locks_inode(filp);

	list_for_each_entry(fl, list, fl_list)
		if (fl->fl_file == filp)
			pr_warn("Leaked %s lock on dev=0x%x:0x%x ino=0x%lx "
				" fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
				list_type, MAJOR(inode->i_sb->s_dev),
				MINOR(inode->i_sb->s_dev), inode->i_ino,
				fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
}
void
locks_free_lock_context(struct inode *inode)
{
	struct file_lock_context *ctx = inode->i_flctx;

	if (unlikely(ctx)) {
		locks_check_ctx_lists(inode);
		kmem_cache_free(flctx_cache, ctx);
	}
}
static void locks_init_lock_heads(struct file_lock *fl)
{
	INIT_HLIST_NODE(&fl->fl_link);
	INIT_LIST_HEAD(&fl->fl_list);
	INIT_LIST_HEAD(&fl->fl_blocked_requests);
	INIT_LIST_HEAD(&fl->fl_blocked_member);
	init_waitqueue_head(&fl->fl_wait);
}
/* Allocate an empty lock structure. */
struct file_lock *locks_alloc_lock(void)
{
	struct file_lock *fl = kmem_cache_zalloc(filelock_cache, GFP_KERNEL);

	if (fl)
		locks_init_lock_heads(fl);

	return fl;
}
EXPORT_SYMBOL_GPL(locks_alloc_lock);
void locks_release_private(struct file_lock *fl)
{
	if (fl->fl_ops) {
		if (fl->fl_ops->fl_release_private)
			fl->fl_ops->fl_release_private(fl);
		fl->fl_ops = NULL;
	}

	if (fl->fl_lmops) {
		if (fl->fl_lmops->lm_put_owner) {
			fl->fl_lmops->lm_put_owner(fl->fl_owner);
			fl->fl_owner = NULL;
		}
		fl->fl_lmops = NULL;
	}
}
EXPORT_SYMBOL_GPL(locks_release_private);
/* Free a lock which is not in use. */
void locks_free_lock(struct file_lock *fl)
{
	BUG_ON(waitqueue_active(&fl->fl_wait));
	BUG_ON(!list_empty(&fl->fl_list));
	BUG_ON(!list_empty(&fl->fl_blocked_requests));
	BUG_ON(!list_empty(&fl->fl_blocked_member));
	BUG_ON(!hlist_unhashed(&fl->fl_link));

	locks_release_private(fl);
	kmem_cache_free(filelock_cache, fl);
}
EXPORT_SYMBOL(locks_free_lock);
static void
locks_dispose_list(struct list_head *dispose)
{
	struct file_lock *fl;

	while (!list_empty(dispose)) {
		fl = list_first_entry(dispose, struct file_lock, fl_list);
		list_del_init(&fl->fl_list);
		locks_free_lock(fl);
	}
}
void locks_init_lock(struct file_lock *fl)
{
	memset(fl, 0, sizeof(struct file_lock));
	locks_init_lock_heads(fl);
}
EXPORT_SYMBOL(locks_init_lock);
/*
 * Initialize a new lock from an existing file_lock structure.
 */
void locks_copy_conflock(struct file_lock *new, struct file_lock *fl)
{
	new->fl_owner = fl->fl_owner;
	new->fl_pid = fl->fl_pid;
	new->fl_file = NULL;
	new->fl_flags = fl->fl_flags;
	new->fl_type = fl->fl_type;
	new->fl_start = fl->fl_start;
	new->fl_end = fl->fl_end;
	new->fl_lmops = fl->fl_lmops;
	new->fl_ops = NULL;

	if (fl->fl_lmops) {
		if (fl->fl_lmops->lm_get_owner)
			fl->fl_lmops->lm_get_owner(fl->fl_owner);
	}
}
EXPORT_SYMBOL(locks_copy_conflock);
void locks_copy_lock(struct file_lock *new, struct file_lock *fl)
{
	/* "new" must be a freshly-initialized lock */
	WARN_ON_ONCE(new->fl_ops);

	locks_copy_conflock(new, fl);

	new->fl_file = fl->fl_file;
	new->fl_ops = fl->fl_ops;

	if (fl->fl_ops) {
		if (fl->fl_ops->fl_copy_lock)
			fl->fl_ops->fl_copy_lock(new, fl);
	}
}
EXPORT_SYMBOL(locks_copy_lock);
static void locks_move_blocks(struct file_lock *new, struct file_lock *fl)
{
	struct file_lock *f;

	/*
	 * As ctx->flc_lock is held, new requests cannot be added to
	 * ->fl_blocked_requests, so we don't need a lock to check if it
	 * is empty.
	 */
	if (list_empty(&fl->fl_blocked_requests))
		return;
	spin_lock(&blocked_lock_lock);
	list_splice_init(&fl->fl_blocked_requests, &new->fl_blocked_requests);
	list_for_each_entry(f, &new->fl_blocked_requests, fl_blocked_member)
		f->fl_blocker = new;
	spin_unlock(&blocked_lock_lock);
}
static inline int flock_translate_cmd(int cmd) {
	if (cmd & LOCK_MAND)
		return cmd & (LOCK_MAND | LOCK_RW);
	switch (cmd) {
	case LOCK_SH:
		return F_RDLCK;
	case LOCK_EX:
		return F_WRLCK;
	case LOCK_UN:
		return F_UNLCK;
	}
	return -EINVAL;
}
/* Fill in a file_lock structure with an appropriate FLOCK lock. */
static struct file_lock *
flock_make_lock(struct file *filp, unsigned int cmd, struct file_lock *fl)
{
	int type = flock_translate_cmd(cmd);

	if (type < 0)
		return ERR_PTR(type);

	if (fl == NULL) {
		fl = locks_alloc_lock();
		if (fl == NULL)
			return ERR_PTR(-ENOMEM);
	} else {
		locks_init_lock(fl);
	}

	fl->fl_file = filp;
	fl->fl_owner = filp;
	fl->fl_pid = current->tgid;
	fl->fl_flags = FL_FLOCK;
	fl->fl_type = type;
	fl->fl_end = OFFSET_MAX;

	return fl;
}
static int assign_type(struct file_lock *fl, long type)
{
	switch (type) {
	case F_RDLCK:
	case F_WRLCK:
	case F_UNLCK:
		fl->fl_type = type;
		break;
	default:
		return -EINVAL;
	}
	return 0;
}

static int flock64_to_posix_lock(struct file *filp, struct file_lock *fl,
				 struct flock64 *l)
{
	switch (l->l_whence) {
	case SEEK_SET:
		fl->fl_start = 0;
		break;
	case SEEK_CUR:
		fl->fl_start = filp->f_pos;
		break;
	case SEEK_END:
		fl->fl_start = i_size_read(file_inode(filp));
		break;
	default:
		return -EINVAL;
	}
	if (l->l_start > OFFSET_MAX - fl->fl_start)
		return -EOVERFLOW;
	fl->fl_start += l->l_start;
	if (fl->fl_start < 0)
		return -EINVAL;

	/* POSIX-1996 leaves the case l->l_len < 0 undefined;
	   POSIX-2001 defines it. */
	if (l->l_len > 0) {
		if (l->l_len - 1 > OFFSET_MAX - fl->fl_start)
			return -EOVERFLOW;
		fl->fl_end = fl->fl_start + l->l_len - 1;

	} else if (l->l_len < 0) {
		if (fl->fl_start + l->l_len < 0)
			return -EINVAL;
		fl->fl_end = fl->fl_start - 1;
		fl->fl_start += l->l_len;
	} else
		fl->fl_end = OFFSET_MAX;

	fl->fl_owner = current->files;
	fl->fl_pid = current->tgid;
	fl->fl_file = filp;
	fl->fl_flags = FL_POSIX;
	fl->fl_ops = NULL;
	fl->fl_lmops = NULL;

	return assign_type(fl, l->l_type);
}
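/*
 * Worked example of the mapping above (illustrative values): with
 * l_whence = SEEK_SET, l_start = 100 and l_len = 10, we get the inclusive
 * range fl_start = 100, fl_end = 109. l_len = 0 means "to end of file",
 * i.e. fl_end = OFFSET_MAX, and a negative l_len (the POSIX-2001 case)
 * locks the bytes before l_start: l_start = 100, l_len = -10 yields
 * fl_start = 90, fl_end = 99.
 */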
/* Verify a "struct flock" and copy it to a "struct file_lock" as a POSIX
 * style lock.
 */
static int flock_to_posix_lock(struct file *filp, struct file_lock *fl,
			       struct flock *l)
{
	struct flock64 ll = {
		.l_type = l->l_type,
		.l_whence = l->l_whence,
		.l_start = l->l_start,
		.l_len = l->l_len,
	};

	return flock64_to_posix_lock(filp, fl, &ll);
}
/* default lease lock manager operations */
static bool
lease_break_callback(struct file_lock *fl)
{
	kill_fasync(&fl->fl_fasync, SIGIO, POLL_MSG);
	return false;
}

static void
lease_setup(struct file_lock *fl, void **priv)
{
	struct file *filp = fl->fl_file;
	struct fasync_struct *fa = *priv;

	/*
	 * fasync_insert_entry() returns the old entry if any. If there was no
	 * old entry, then it used "priv" and inserted it into the fasync list.
	 * Clear the pointer to indicate that it shouldn't be freed.
	 */
	if (!fasync_insert_entry(fa->fa_fd, filp, &fl->fl_fasync, fa))
		*priv = NULL;

	__f_setown(filp, task_pid(current), PIDTYPE_TGID, 0);
}

static const struct lock_manager_operations lease_manager_ops = {
	.lm_break = lease_break_callback,
	.lm_change = lease_modify,
	.lm_setup = lease_setup,
};
/*
 * Initialize a lease, use the default lock manager operations
 */
static int lease_init(struct file *filp, long type, struct file_lock *fl)
{
	if (assign_type(fl, type) != 0)
		return -EINVAL;

	fl->fl_owner = filp;
	fl->fl_pid = current->tgid;

	fl->fl_file = filp;
	fl->fl_flags = FL_LEASE;
	fl->fl_start = 0;
	fl->fl_end = OFFSET_MAX;
	fl->fl_ops = NULL;
	fl->fl_lmops = &lease_manager_ops;
	return 0;
}

/* Allocate a file_lock initialised to this type of lease */
static struct file_lock *lease_alloc(struct file *filp, long type)
{
	struct file_lock *fl = locks_alloc_lock();
	int error = -ENOMEM;

	if (fl == NULL)
		return ERR_PTR(error);

	error = lease_init(filp, type, fl);
	if (error) {
		locks_free_lock(fl);
		return ERR_PTR(error);
	}
	return fl;
}
/* Check if two locks overlap each other.
 */
static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
{
	return ((fl1->fl_end >= fl2->fl_start) &&
		(fl2->fl_end >= fl1->fl_start));
}
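/*
 * Example (illustrative): because fl_start/fl_end are inclusive, the ranges
 * [0, 99] and [99, 199] overlap (they share byte 99), while [0, 99] and
 * [100, 199] do not, even though they are adjacent.
 */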
/*
 * Check whether two locks have the same owner.
 */
static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
{
	if (fl1->fl_lmops && fl1->fl_lmops->lm_compare_owner)
		return fl2->fl_lmops == fl1->fl_lmops &&
			fl1->fl_lmops->lm_compare_owner(fl1, fl2);
	return fl1->fl_owner == fl2->fl_owner;
}
/* Must be called with the flc_lock held! */
static void locks_insert_global_locks(struct file_lock *fl)
{
	struct file_lock_list_struct *fll = this_cpu_ptr(&file_lock_list);

	percpu_rwsem_assert_held(&file_rwsem);

	spin_lock(&fll->lock);
	fl->fl_link_cpu = smp_processor_id();
	hlist_add_head(&fl->fl_link, &fll->hlist);
	spin_unlock(&fll->lock);
}

/* Must be called with the flc_lock held! */
static void locks_delete_global_locks(struct file_lock *fl)
{
	struct file_lock_list_struct *fll;

	percpu_rwsem_assert_held(&file_rwsem);

	/*
	 * Avoid taking lock if already unhashed. This is safe since this check
	 * is done while holding the flc_lock, and new insertions into the list
	 * also require that it be held.
	 */
	if (hlist_unhashed(&fl->fl_link))
		return;

	fll = per_cpu_ptr(&file_lock_list, fl->fl_link_cpu);
	spin_lock(&fll->lock);
	hlist_del_init(&fl->fl_link);
	spin_unlock(&fll->lock);
}
static unsigned long
posix_owner_key(struct file_lock *fl)
{
	if (fl->fl_lmops && fl->fl_lmops->lm_owner_key)
		return fl->fl_lmops->lm_owner_key(fl);
	return (unsigned long)fl->fl_owner;
}

static void locks_insert_global_blocked(struct file_lock *waiter)
{
	lockdep_assert_held(&blocked_lock_lock);

	hash_add(blocked_hash, &waiter->fl_link, posix_owner_key(waiter));
}

static void locks_delete_global_blocked(struct file_lock *waiter)
{
	lockdep_assert_held(&blocked_lock_lock);

	hash_del(&waiter->fl_link);
}
/* Remove waiter from blocker's block list.
 * When blocker ends up pointing to itself then the list is empty.
 *
 * Must be called with blocked_lock_lock held.
 */
static void __locks_delete_block(struct file_lock *waiter)
{
	locks_delete_global_blocked(waiter);
	list_del_init(&waiter->fl_blocked_member);
	waiter->fl_blocker = NULL;
}
static void __locks_wake_up_blocks(struct file_lock *blocker)
{
	while (!list_empty(&blocker->fl_blocked_requests)) {
		struct file_lock *waiter;

		waiter = list_first_entry(&blocker->fl_blocked_requests,
					  struct file_lock, fl_blocked_member);
		__locks_delete_block(waiter);
		if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
			waiter->fl_lmops->lm_notify(waiter);
		else
			wake_up(&waiter->fl_wait);
	}
}
/**
 * locks_delete_block - stop waiting for a file lock
 * @waiter: the lock which was waiting
 *
 * lockd/nfsd need to disconnect the lock while working on it.
 */
int locks_delete_block(struct file_lock *waiter)
{
	int status = -ENOENT;

	/*
	 * If fl_blocker is NULL, it won't be set again as this thread
	 * "owns" the lock and is the only one that might try to claim
	 * the lock. So it is safe to test fl_blocker locklessly.
	 * Also if fl_blocker is NULL, this waiter is not listed on
	 * fl_blocked_requests for some lock, so no other request can
	 * be added to the list of fl_blocked_requests for this
	 * request. So if fl_blocker is NULL, it is safe to
	 * locklessly check if fl_blocked_requests is empty. If both
	 * of these checks succeed, there is no need to take the lock.
	 */
	if (waiter->fl_blocker == NULL &&
	    list_empty(&waiter->fl_blocked_requests))
		return status;
	spin_lock(&blocked_lock_lock);
	if (waiter->fl_blocker)
		status = 0;
	__locks_wake_up_blocks(waiter);
	__locks_delete_block(waiter);
	spin_unlock(&blocked_lock_lock);
	return status;
}
EXPORT_SYMBOL(locks_delete_block);
/* Insert waiter into blocker's block list.
 * We use a circular list so that processes can be easily woken up in
 * the order they blocked. The documentation doesn't require this but
 * it seems like the reasonable thing to do.
 *
 * Must be called with both the flc_lock and blocked_lock_lock held. The
 * fl_blocked_requests list itself is protected by the blocked_lock_lock,
 * but by ensuring that the flc_lock is also held on insertions we can avoid
 * taking the blocked_lock_lock in some cases when we see that the
 * fl_blocked_requests list is empty.
 *
 * Rather than just adding to the list, we check for conflicts with any existing
 * waiters, and add beneath any waiter that blocks the new waiter.
 * Thus wakeups don't happen until needed.
 */
static void __locks_insert_block(struct file_lock *blocker,
				 struct file_lock *waiter,
				 bool conflict(struct file_lock *,
					       struct file_lock *))
{
	struct file_lock *fl;
	BUG_ON(!list_empty(&waiter->fl_blocked_member));

new_blocker:
	list_for_each_entry(fl, &blocker->fl_blocked_requests, fl_blocked_member)
		if (conflict(fl, waiter)) {
			blocker = fl;
			goto new_blocker;
		}
	waiter->fl_blocker = blocker;
	list_add_tail(&waiter->fl_blocked_member, &blocker->fl_blocked_requests);
	if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
		locks_insert_global_blocked(waiter);

	/* The requests in waiter->fl_blocked are known to conflict with
	 * waiter, but might not conflict with blocker, or the requests
	 * and lock which block it. So they all need to be woken.
	 */
	__locks_wake_up_blocks(waiter);
}
/* Must be called with flc_lock held. */
static void locks_insert_block(struct file_lock *blocker,
			       struct file_lock *waiter,
			       bool conflict(struct file_lock *,
					     struct file_lock *))
{
	spin_lock(&blocked_lock_lock);
	__locks_insert_block(blocker, waiter, conflict);
	spin_unlock(&blocked_lock_lock);
}
/*
 * Wake up processes blocked waiting for blocker.
 *
 * Must be called with the inode->flc_lock held!
 */
static void locks_wake_up_blocks(struct file_lock *blocker)
{
	/*
	 * Avoid taking global lock if list is empty. This is safe since new
	 * blocked requests are only added to the list under the flc_lock, and
	 * the flc_lock is always held here. Note that removal from the
	 * fl_blocked_requests list does not require the flc_lock, so we must
	 * recheck list_empty() after acquiring the blocked_lock_lock.
	 */
	if (list_empty(&blocker->fl_blocked_requests))
		return;

	spin_lock(&blocked_lock_lock);
	__locks_wake_up_blocks(blocker);
	spin_unlock(&blocked_lock_lock);
}
static void
locks_insert_lock_ctx(struct file_lock *fl, struct list_head *before)
{
	list_add_tail(&fl->fl_list, before);
	locks_insert_global_locks(fl);
}

static void
locks_unlink_lock_ctx(struct file_lock *fl)
{
	locks_delete_global_locks(fl);
	list_del_init(&fl->fl_list);
	locks_wake_up_blocks(fl);
}

static void
locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
{
	locks_unlink_lock_ctx(fl);
	if (dispose)
		list_add(&fl->fl_list, dispose);
	else
		locks_free_lock(fl);
}
/* Determine if lock sys_fl blocks lock caller_fl. Common functionality
 * checks for shared/exclusive status of overlapping locks.
 */
static bool locks_conflict(struct file_lock *caller_fl,
			   struct file_lock *sys_fl)
{
	if (sys_fl->fl_type == F_WRLCK)
		return true;
	if (caller_fl->fl_type == F_WRLCK)
		return true;
	return false;
}

/* Determine if lock sys_fl blocks lock caller_fl. POSIX specific
 * checking before calling the locks_conflict().
 */
static bool posix_locks_conflict(struct file_lock *caller_fl,
				 struct file_lock *sys_fl)
{
	/* POSIX locks owned by the same process do not conflict with
	 * each other.
	 */
	if (posix_same_owner(caller_fl, sys_fl))
		return false;

	/* Check whether they overlap */
	if (!locks_overlap(caller_fl, sys_fl))
		return false;

	return locks_conflict(caller_fl, sys_fl);
}

/* Determine if lock sys_fl blocks lock caller_fl. FLOCK specific
 * checking before calling the locks_conflict().
 */
static bool flock_locks_conflict(struct file_lock *caller_fl,
				 struct file_lock *sys_fl)
{
	/* FLOCK locks referring to the same filp do not conflict with
	 * each other.
	 */
	if (caller_fl->fl_file == sys_fl->fl_file)
		return false;
	if ((caller_fl->fl_type & LOCK_MAND) || (sys_fl->fl_type & LOCK_MAND))
		return false;

	return locks_conflict(caller_fl, sys_fl);
}
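/*
 * Example of the same-filp rule above (illustrative): two descriptors that
 * share one struct file - e.g. one obtained from the other via dup(2), or
 * inherited across fork(2) - never conflict here, so re-flock()ing through
 * the duplicate converts the existing lock rather than blocking. Only
 * flock() calls made through independent open(2)s of the file contend.
 */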
void
posix_test_lock(struct file *filp, struct file_lock *fl)
{
	struct file_lock *cfl;
	struct file_lock_context *ctx;
	struct inode *inode = locks_inode(filp);

	ctx = smp_load_acquire(&inode->i_flctx);
	if (!ctx || list_empty_careful(&ctx->flc_posix)) {
		fl->fl_type = F_UNLCK;
		return;
	}

	spin_lock(&ctx->flc_lock);
	list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
		if (posix_locks_conflict(fl, cfl)) {
			locks_copy_conflock(fl, cfl);
			goto out;
		}
	}
	fl->fl_type = F_UNLCK;
out:
	spin_unlock(&ctx->flc_lock);
	return;
}
EXPORT_SYMBOL(posix_test_lock);
/*
 * Deadlock detection:
 *
 * We attempt to detect deadlocks that are due purely to posix file
 * locks.
 *
 * We assume that a task can be waiting for at most one lock at a time.
 * So for any acquired lock, the process holding that lock may be
 * waiting on at most one other lock. That lock in turn may be held by
 * someone waiting for at most one other lock. Given a requested lock
 * caller_fl which is about to wait for a conflicting lock block_fl, we
 * follow this chain of waiters to ensure we are not about to create a
 * cycle.
 *
 * Since we do this before we ever put a process to sleep on a lock, we
 * are ensured that there is never a cycle; that is what guarantees that
 * the while() loop in posix_locks_deadlock() eventually completes.
 *
 * Note: the above assumption may not be true when handling lock
 * requests from a broken NFS client. It may also fail in the presence
 * of tasks (such as posix threads) sharing the same open file table.
 * To handle those cases, we just bail out after a few iterations.
 *
 * For FL_OFDLCK locks, the owner is the filp, not the files_struct.
 * Because the owner is not even nominally tied to a thread of
 * execution, the deadlock detection below can't reasonably work well. Just
 * skip it for those.
 *
 * In principle, we could do a more limited deadlock detection on FL_OFDLCK
 * locks that just checks for the case where two tasks are attempting to
 * upgrade from read to write locks on the same inode.
 */

#define MAX_DEADLK_ITERATIONS 10
/* Find a lock that the owner of the given block_fl is blocking on. */
static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
{
	struct file_lock *fl;

	hash_for_each_possible(blocked_hash, fl, fl_link, posix_owner_key(block_fl)) {
		if (posix_same_owner(fl, block_fl)) {
			while (fl->fl_blocker)
				fl = fl->fl_blocker;
			return fl;
		}
	}
	return NULL;
}
/* Must be called with the blocked_lock_lock held! */
static int posix_locks_deadlock(struct file_lock *caller_fl,
				struct file_lock *block_fl)
{
	int i = 0;

	lockdep_assert_held(&blocked_lock_lock);

	/*
	 * This deadlock detector can't reasonably detect deadlocks with
	 * FL_OFDLCK locks, since they aren't owned by a process, per-se.
	 */
	if (IS_OFDLCK(caller_fl))
		return 0;

	while ((block_fl = what_owner_is_waiting_for(block_fl))) {
		if (i++ > MAX_DEADLK_ITERATIONS)
			return 0;
		if (posix_same_owner(caller_fl, block_fl))
			return 1;
	}
	return 0;
}
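/*
 * Example of the cycle this catches (illustrative): task A holds a lock on
 * byte 0 and blocks waiting for byte 1; task B holds byte 1 and then requests
 * byte 0. Walking B's conflicting blocker (A's applied lock) via
 * what_owner_is_waiting_for() finds that A is itself waiting on a lock owned
 * by B, so posix_locks_deadlock() returns 1 and B's F_SETLKW request fails
 * with -EDEADLK instead of sleeping forever.
 */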
/* Try to create a FLOCK lock on filp. We always insert new FLOCK locks
 * after any leases, but before any posix locks.
 *
 * Note that if called with an FL_EXISTS argument, the caller may determine
 * whether or not a lock was successfully freed by testing the return
 * value for -ENOENT.
 */
static int flock_lock_inode(struct inode *inode, struct file_lock *request)
{
	struct file_lock *new_fl = NULL;
	struct file_lock *fl;
	struct file_lock_context *ctx;
	int error = 0;
	bool found = false;
	LIST_HEAD(dispose);

	ctx = locks_get_lock_context(inode, request->fl_type);
	if (!ctx) {
		if (request->fl_type != F_UNLCK)
			return -ENOMEM;
		return (request->fl_flags & FL_EXISTS) ? -ENOENT : 0;
	}

	if (!(request->fl_flags & FL_ACCESS) && (request->fl_type != F_UNLCK)) {
		new_fl = locks_alloc_lock();
		if (!new_fl)
			return -ENOMEM;
	}

	percpu_down_read_preempt_disable(&file_rwsem);
	spin_lock(&ctx->flc_lock);
	if (request->fl_flags & FL_ACCESS)
		goto find_conflict;

	list_for_each_entry(fl, &ctx->flc_flock, fl_list) {
		if (request->fl_file != fl->fl_file)
			continue;
		if (request->fl_type == fl->fl_type)
			goto out;
		found = true;
		locks_delete_lock_ctx(fl, &dispose);
		break;
	}

	if (request->fl_type == F_UNLCK) {
		if ((request->fl_flags & FL_EXISTS) && !found)
			error = -ENOENT;
		goto out;
	}

find_conflict:
	list_for_each_entry(fl, &ctx->flc_flock, fl_list) {
		if (!flock_locks_conflict(request, fl))
			continue;
		error = -EAGAIN;
		if (!(request->fl_flags & FL_SLEEP))
			goto out;
		error = FILE_LOCK_DEFERRED;
		locks_insert_block(fl, request, flock_locks_conflict);
		goto out;
	}
	if (request->fl_flags & FL_ACCESS)
		goto out;
	locks_copy_lock(new_fl, request);
	locks_move_blocks(new_fl, request);
	locks_insert_lock_ctx(new_fl, &ctx->flc_flock);
	new_fl = NULL;
	error = 0;

out:
	spin_unlock(&ctx->flc_lock);
	percpu_up_read_preempt_enable(&file_rwsem);
	if (new_fl)
		locks_free_lock(new_fl);
	locks_dispose_list(&dispose);
	trace_flock_lock_inode(inode, request, error);
	return error;
}
static int posix_lock_inode(struct inode *inode, struct file_lock *request,
			    struct file_lock *conflock)
{
	struct file_lock *fl, *tmp;
	struct file_lock *new_fl = NULL;
	struct file_lock *new_fl2 = NULL;
	struct file_lock *left = NULL;
	struct file_lock *right = NULL;
	struct file_lock_context *ctx;
	int error;
	bool added = false;
	LIST_HEAD(dispose);

	ctx = locks_get_lock_context(inode, request->fl_type);
	if (!ctx)
		return (request->fl_type == F_UNLCK) ? 0 : -ENOMEM;

	/*
	 * We may need two file_lock structures for this operation,
	 * so we get them in advance to avoid races.
	 *
	 * In some cases we can be sure, that no new locks will be needed
	 */
	if (!(request->fl_flags & FL_ACCESS) &&
	    (request->fl_type != F_UNLCK ||
	     request->fl_start != 0 || request->fl_end != OFFSET_MAX)) {
		new_fl = locks_alloc_lock();
		new_fl2 = locks_alloc_lock();
	}

	percpu_down_read_preempt_disable(&file_rwsem);
	spin_lock(&ctx->flc_lock);
	/*
	 * New lock request. Walk all POSIX locks and look for conflicts. If
	 * there are any, either return error or put the request on the
	 * blocker's list of waiters and the global blocked_hash.
	 */
	if (request->fl_type != F_UNLCK) {
		list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
			if (!posix_locks_conflict(request, fl))
				continue;
			if (conflock)
				locks_copy_conflock(conflock, fl);
			error = -EAGAIN;
			if (!(request->fl_flags & FL_SLEEP))
				goto out;
			/*
			 * Deadlock detection and insertion into the blocked
			 * locks list must be done while holding the same lock!
			 */
			error = -EDEADLK;
			spin_lock(&blocked_lock_lock);
			if (likely(!posix_locks_deadlock(request, fl))) {
				error = FILE_LOCK_DEFERRED;
				__locks_insert_block(fl, request,
						     posix_locks_conflict);
			}
			spin_unlock(&blocked_lock_lock);
			goto out;
		}
	}

	/* If we're just looking for a conflict, we're done. */
	error = 0;
	if (request->fl_flags & FL_ACCESS)
		goto out;

	/* Find the first old lock with the same owner as the new lock */
	list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
		if (posix_same_owner(request, fl))
			break;
	}

	/* Process locks with this owner. */
	list_for_each_entry_safe_from(fl, tmp, &ctx->flc_posix, fl_list) {
		if (!posix_same_owner(request, fl))
			break;

		/* Detect adjacent or overlapping regions (if same lock type) */
		if (request->fl_type == fl->fl_type) {
			/* In all comparisons of start vs end, use
			 * "start - 1" rather than "end + 1". If end
			 * is OFFSET_MAX, end + 1 will become negative.
			 */
			if (fl->fl_end < request->fl_start - 1)
				continue;
			/* If the next lock in the list has entirely bigger
			 * addresses than the new one, insert the lock here.
			 */
			if (fl->fl_start - 1 > request->fl_end)
				break;

			/* If we come here, the new and old lock are of the
			 * same type and adjacent or overlapping. Make one
			 * lock yielding from the lower start address of both
			 * locks to the higher end address.
			 */
			if (fl->fl_start > request->fl_start)
				fl->fl_start = request->fl_start;
			else
				request->fl_start = fl->fl_start;
			if (fl->fl_end < request->fl_end)
				fl->fl_end = request->fl_end;
			else
				request->fl_end = fl->fl_end;
			if (added) {
				locks_delete_lock_ctx(fl, &dispose);
				continue;
			}
			request = fl;
			added = true;
		} else {
			/* Processing for different lock types is a bit
			 * more complex.
			 */
			if (fl->fl_end < request->fl_start)
				continue;
			if (fl->fl_start > request->fl_end)
				break;
			if (request->fl_type == F_UNLCK)
				added = true;
			if (fl->fl_start < request->fl_start)
				left = fl;
			/* If the next lock in the list has a higher end
			 * address than the new one, insert the new one here.
			 */
			if (fl->fl_end > request->fl_end) {
				right = fl;
				break;
			}
			if (fl->fl_start >= request->fl_start) {
				/* The new lock completely replaces an old
				 * one (This may happen several times).
				 */
				if (added) {
					locks_delete_lock_ctx(fl, &dispose);
					continue;
				}
				/*
				 * Replace the old lock with new_fl, and
				 * remove the old one. It's safe to do the
				 * insert here since we know that we won't be
				 * using new_fl later, and that the lock is
				 * just replacing an existing lock.
				 */
				error = -ENOLCK;
				if (!new_fl)
					goto out;
				locks_copy_lock(new_fl, request);
				request = new_fl;
				new_fl = NULL;
				locks_insert_lock_ctx(request, &fl->fl_list);
				locks_delete_lock_ctx(fl, &dispose);
				added = true;
			}
		}
	}

	/*
	 * The above code only modifies existing locks in case of merging or
	 * replacing. If new lock(s) need to be inserted all modifications are
	 * done below this, so it's safe yet to bail out.
	 */
	error = -ENOLCK; /* "no luck" */
	if (right && left == right && !new_fl2)
		goto out;

	error = 0;
	if (!added) {
		if (request->fl_type == F_UNLCK) {
			if (request->fl_flags & FL_EXISTS)
				error = -ENOENT;
			goto out;
		}

		if (!new_fl) {
			error = -ENOLCK;
			goto out;
		}
		locks_copy_lock(new_fl, request);
		locks_move_blocks(new_fl, request);
		locks_insert_lock_ctx(new_fl, &fl->fl_list);
		fl = new_fl;
		new_fl = NULL;
	}
	if (right) {
		if (left == right) {
			/* The new lock breaks the old one in two pieces,
			 * so we have to use the second new lock.
			 */
			left = new_fl2;
			new_fl2 = NULL;
			locks_copy_lock(left, right);
			locks_insert_lock_ctx(left, &fl->fl_list);
		}
		right->fl_start = request->fl_end + 1;
		locks_wake_up_blocks(right);
	}
	if (left) {
		left->fl_end = request->fl_start - 1;
		locks_wake_up_blocks(left);
	}
out:
	spin_unlock(&ctx->flc_lock);
	percpu_up_read_preempt_enable(&file_rwsem);
	/*
	 * Free any unused locks.
	 */
	if (new_fl)
		locks_free_lock(new_fl);
	if (new_fl2)
		locks_free_lock(new_fl2);
	locks_dispose_list(&dispose);
	trace_posix_lock_inode(inode, request, error);

	return error;
}
/**
 * posix_lock_file - Apply a POSIX-style lock to a file
 * @filp: The file to apply the lock to
 * @fl: The lock to be applied
 * @conflock: Place to return a copy of the conflicting lock, if found.
 *
 * Add a POSIX style lock to a file.
 * We merge adjacent & overlapping locks whenever possible.
 * POSIX locks are sorted by owner task, then by starting address.
 *
 * Note that if called with an FL_EXISTS argument, the caller may determine
 * whether or not a lock was successfully freed by testing the return
 * value for -ENOENT.
 */
int posix_lock_file(struct file *filp, struct file_lock *fl,
			struct file_lock *conflock)
{
	return posix_lock_inode(locks_inode(filp), fl, conflock);
}
EXPORT_SYMBOL(posix_lock_file);
/**
 * posix_lock_inode_wait - Apply a POSIX-style lock to a file
 * @inode: inode of file to which lock request should be applied
 * @fl: The lock to be applied
 *
 * Apply a POSIX style lock request to an inode.
 */
static int posix_lock_inode_wait(struct inode *inode, struct file_lock *fl)
{
	int error;
	might_sleep();
	for (;;) {
		error = posix_lock_inode(inode, fl, NULL);
		if (error != FILE_LOCK_DEFERRED)
			break;
		error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);
		if (error)
			break;
	}
	locks_delete_block(fl);
	return error;
}
#ifdef CONFIG_MANDATORY_FILE_LOCKING
/**
 * locks_mandatory_locked - Check for an active lock
 * @file: the file to check
 *
 * Searches the inode's list of locks to find any POSIX locks which conflict.
 * This function is called from locks_verify_locked() only.
 */
int locks_mandatory_locked(struct file *file)
{
	int ret;
	struct inode *inode = locks_inode(file);
	struct file_lock_context *ctx;
	struct file_lock *fl;

	ctx = smp_load_acquire(&inode->i_flctx);
	if (!ctx || list_empty_careful(&ctx->flc_posix))
		return 0;

	/*
	 * Search the lock list for this inode for any POSIX locks.
	 */
	spin_lock(&ctx->flc_lock);
	ret = 0;
	list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
		if (fl->fl_owner != current->files &&
		    fl->fl_owner != file) {
			ret = -EAGAIN;
			break;
		}
	}
	spin_unlock(&ctx->flc_lock);
	return ret;
}
/**
 * locks_mandatory_area - Check for a conflicting lock
 * @inode:	the file to check
 * @filp:	how the file was opened (if it was)
 * @start:	first byte in the file to check
 * @end:	last byte in the file to check
 * @type:	%F_WRLCK for a write lock, else %F_RDLCK
 *
 * Searches the inode's list of locks to find any POSIX locks which conflict.
 */
int locks_mandatory_area(struct inode *inode, struct file *filp, loff_t start,
			 loff_t end, unsigned char type)
{
	struct file_lock fl;
	int error;
	bool sleep = false;

	locks_init_lock(&fl);
	fl.fl_pid = current->tgid;
	fl.fl_file = filp;
	fl.fl_flags = FL_POSIX | FL_ACCESS;
	if (filp && !(filp->f_flags & O_NONBLOCK))
		sleep = true;
	fl.fl_type = type;
	fl.fl_start = start;
	fl.fl_end = end;

	for (;;) {
		if (filp) {
			fl.fl_owner = filp;
			fl.fl_flags &= ~FL_SLEEP;
			error = posix_lock_inode(inode, &fl, NULL);
			if (!error)
				break;
		}

		if (sleep)
			fl.fl_flags |= FL_SLEEP;
		fl.fl_owner = current->files;
		error = posix_lock_inode(inode, &fl, NULL);
		if (error != FILE_LOCK_DEFERRED)
			break;
		error = wait_event_interruptible(fl.fl_wait, !fl.fl_blocker);
		if (!error) {
			/*
			 * If we've been sleeping someone might have
			 * changed the permissions behind our back.
			 */
			if (__mandatory_lock(inode))
				continue;
		}

		break;
	}
	locks_delete_block(&fl);

	return error;
}
EXPORT_SYMBOL(locks_mandatory_area);
#endif /* CONFIG_MANDATORY_FILE_LOCKING */
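/*
 * For reference (illustrative userspace sketch, not part of this file's
 * build): mandatory locking only engages when the filesystem is mounted
 * with "-o mand" and the file has the set-group-ID bit set with group
 * execute cleared, e.g.:
 *
 *	chmod g+s,g-x /mnt/file		// mark the file for mandatory locking
 *
 * after which the locks_mandatory_area() checks above gate read()/write().
 */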
static void lease_clear_pending(struct file_lock *fl, int arg)
{
	switch (arg) {
	case F_UNLCK:
		fl->fl_flags &= ~FL_UNLOCK_PENDING;
		/* fall through */
	case F_RDLCK:
		fl->fl_flags &= ~FL_DOWNGRADE_PENDING;
	}
}
/* We already had a lease on this file; just change its type */
int lease_modify(struct file_lock *fl, int arg, struct list_head *dispose)
{
	int error = assign_type(fl, arg);

	if (error)
		return error;
	lease_clear_pending(fl, arg);
	locks_wake_up_blocks(fl);
	if (arg == F_UNLCK) {
		struct file *filp = fl->fl_file;

		f_delown(filp);
		filp->f_owner.signum = 0;
		fasync_helper(0, fl->fl_file, 0, &fl->fl_fasync);
		if (fl->fl_fasync != NULL) {
			printk(KERN_ERR "locks_delete_lock: fasync == %p\n", fl->fl_fasync);
			fl->fl_fasync = NULL;
		}
		locks_delete_lock_ctx(fl, dispose);
	}
	return 0;
}
EXPORT_SYMBOL(lease_modify);
static bool past_time(unsigned long then)
{
	if (!then)
		/* 0 is a special value meaning "this never expires": */
		return false;
	return time_after(jiffies, then);
}
static void time_out_leases(struct inode *inode, struct list_head *dispose)
{
	struct file_lock_context *ctx = inode->i_flctx;
	struct file_lock *fl, *tmp;

	lockdep_assert_held(&ctx->flc_lock);

	list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {
		trace_time_out_leases(inode, fl);
		if (past_time(fl->fl_downgrade_time))
			lease_modify(fl, F_RDLCK, dispose);
		if (past_time(fl->fl_break_time))
			lease_modify(fl, F_UNLCK, dispose);
	}
}
static bool leases_conflict(struct file_lock *lease, struct file_lock *breaker)
{
	if ((breaker->fl_flags & FL_LAYOUT) != (lease->fl_flags & FL_LAYOUT))
		return false;
	if ((breaker->fl_flags & FL_DELEG) && (lease->fl_flags & FL_LEASE))
		return false;
	return locks_conflict(breaker, lease);
}
static bool
any_leases_conflict(struct inode *inode, struct file_lock *breaker)
{
	struct file_lock_context *ctx = inode->i_flctx;
	struct file_lock *fl;

	lockdep_assert_held(&ctx->flc_lock);

	list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
		if (leases_conflict(fl, breaker))
			return true;
	}
	return false;
}
/**
 * __break_lease - revoke all outstanding leases on file
 * @inode: the inode of the file to return
 * @mode: O_RDONLY: break only write leases; O_WRONLY or O_RDWR:
 *	break all leases
 * @type: FL_LEASE: break leases and delegations; FL_DELEG: break
 *	only delegations
 *
 * break_lease (inlined for speed) has checked there already is at least
 * some kind of lock (maybe a lease) on this file. Leases are broken on
 * a call to open() or truncate(). This function can sleep unless you
 * specified %O_NONBLOCK to your open().
 */
int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
{
	int error = 0;
	struct file_lock_context *ctx;
	struct file_lock *new_fl, *fl, *tmp;
	unsigned long break_time;
	int want_write = (mode & O_ACCMODE) != O_RDONLY;
	LIST_HEAD(dispose);

	new_fl = lease_alloc(NULL, want_write ? F_WRLCK : F_RDLCK);
	if (IS_ERR(new_fl))
		return PTR_ERR(new_fl);
	new_fl->fl_flags = type;

	/* typically we will check that ctx is non-NULL before calling */
	ctx = smp_load_acquire(&inode->i_flctx);
	if (!ctx) {
		WARN_ON_ONCE(1);
		return error;
	}

	percpu_down_read_preempt_disable(&file_rwsem);
	spin_lock(&ctx->flc_lock);

	time_out_leases(inode, &dispose);

	if (!any_leases_conflict(inode, new_fl))
		goto out;

	break_time = 0;
	if (lease_break_time > 0) {
		break_time = jiffies + lease_break_time * HZ;
		if (break_time == 0)
			break_time++;	/* so that 0 means no break time */
	}

	list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {
		if (!leases_conflict(fl, new_fl))
			continue;
		if (want_write) {
			if (fl->fl_flags & FL_UNLOCK_PENDING)
				continue;
			fl->fl_flags |= FL_UNLOCK_PENDING;
			fl->fl_break_time = break_time;
		} else {
			if (lease_breaking(fl))
				continue;
			fl->fl_flags |= FL_DOWNGRADE_PENDING;
			fl->fl_downgrade_time = break_time;
		}
		if (fl->fl_lmops->lm_break(fl))
			locks_delete_lock_ctx(fl, &dispose);
	}

	if (list_empty(&ctx->flc_lease))
		goto out;

	if (mode & O_NONBLOCK) {
		trace_break_lease_noblock(inode, new_fl);
		error = -EWOULDBLOCK;
		goto out;
	}

restart:
	fl = list_first_entry(&ctx->flc_lease, struct file_lock, fl_list);
	break_time = fl->fl_break_time;
	if (break_time != 0)
		break_time -= jiffies;
	if (break_time == 0)
		break_time++;
	locks_insert_block(fl, new_fl, leases_conflict);
	trace_break_lease_block(inode, new_fl);
	spin_unlock(&ctx->flc_lock);
	percpu_up_read_preempt_enable(&file_rwsem);

	locks_dispose_list(&dispose);
	error = wait_event_interruptible_timeout(new_fl->fl_wait,
						!new_fl->fl_blocker, break_time);

	percpu_down_read_preempt_disable(&file_rwsem);
	spin_lock(&ctx->flc_lock);
	trace_break_lease_unblock(inode, new_fl);
	locks_delete_block(new_fl);
	if (error >= 0) {
		/*
		 * Wait for the next conflicting lease that has not been
		 * broken yet
		 */
		if (error == 0)
			time_out_leases(inode, &dispose);
		if (any_leases_conflict(inode, new_fl))
			goto restart;
		error = 0;
	}
out:
	spin_unlock(&ctx->flc_lock);
	percpu_up_read_preempt_enable(&file_rwsem);
	locks_dispose_list(&dispose);
	locks_free_lock(new_fl);
	return error;
}
EXPORT_SYMBOL(__break_lease);
/**
 * lease_get_mtime - update modified time of an inode with exclusive lease
 * @inode: the inode
 * @time: pointer to a timespec which contains the last modified time
 *
 * This is to force NFS clients to flush their caches for files with
 * exclusive leases. The justification is that if someone has an
 * exclusive lease, then they could be modifying it.
 */
void lease_get_mtime(struct inode *inode, struct timespec64 *time)
{
	bool has_lease = false;
	struct file_lock_context *ctx;
	struct file_lock *fl;

	ctx = smp_load_acquire(&inode->i_flctx);
	if (ctx && !list_empty_careful(&ctx->flc_lease)) {
		spin_lock(&ctx->flc_lock);
		fl = list_first_entry_or_null(&ctx->flc_lease,
					      struct file_lock, fl_list);
		if (fl && (fl->fl_type == F_WRLCK))
			has_lease = true;
		spin_unlock(&ctx->flc_lock);
	}

	if (has_lease)
		*time = current_time(inode);
}
EXPORT_SYMBOL(lease_get_mtime);
/**
 * fcntl_getlease - Enquire what lease is currently active
 * @filp: the file
 *
 * The value returned by this function will be one of
 * (if no lease break is pending):
 *
 * %F_RDLCK to indicate a shared lease is held.
 *
 * %F_WRLCK to indicate an exclusive lease is held.
 *
 * %F_UNLCK to indicate no lease is held.
 *
 * (if a lease break is pending):
 *
 * %F_RDLCK to indicate an exclusive lease needs to be
 *	changed to a shared lease (or removed).
 *
 * %F_UNLCK to indicate the lease needs to be removed.
 *
 * XXX: sfr & willy disagree over whether F_INPROGRESS
 * should be returned to userspace.
 */
int fcntl_getlease(struct file *filp)
{
	struct file_lock *fl;
	struct inode *inode = locks_inode(filp);
	struct file_lock_context *ctx;
	int type = F_UNLCK;
	LIST_HEAD(dispose);

	ctx = smp_load_acquire(&inode->i_flctx);
	if (ctx && !list_empty_careful(&ctx->flc_lease)) {
		percpu_down_read_preempt_disable(&file_rwsem);
		spin_lock(&ctx->flc_lock);
		time_out_leases(inode, &dispose);
		list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
			if (fl->fl_file != filp)
				continue;
			type = target_leasetype(fl);
			break;
		}
		spin_unlock(&ctx->flc_lock);
		percpu_up_read_preempt_enable(&file_rwsem);

		locks_dispose_list(&dispose);
	}
	return type;
}
/**
 * check_conflicting_open - see if the given dentry points to a file that has
 *			    an existing open that would conflict with the
 *			    desired lease.
 * @dentry:	dentry to check
 * @arg:	type of lease that we're trying to acquire
 * @flags:	current lock flags
 *
 * Check to see if there's an existing open fd on this file that would
 * conflict with the lease we're trying to set.
 */
static int
check_conflicting_open(const struct dentry *dentry, const long arg, int flags)
{
	int ret = 0;
	struct inode *inode = dentry->d_inode;

	if (flags & FL_LAYOUT)
		return 0;

	if ((arg == F_RDLCK) && inode_is_open_for_write(inode))
		return -EAGAIN;

	if ((arg == F_WRLCK) && ((d_count(dentry) > 1) ||
	    (atomic_read(&inode->i_count) > 1)))
		ret = -EAGAIN;

	return ret;
}
static int
generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **priv)
{
	struct file_lock *fl, *my_fl = NULL, *lease;
	struct dentry *dentry = filp->f_path.dentry;
	struct inode *inode = dentry->d_inode;
	struct file_lock_context *ctx;
	bool is_deleg = (*flp)->fl_flags & FL_DELEG;
	int error;
	LIST_HEAD(dispose);

	lease = *flp;
	trace_generic_add_lease(inode, lease);

	/* Note that arg is never F_UNLCK here */
	ctx = locks_get_lock_context(inode, arg);
	if (!ctx)
		return -ENOMEM;

	/*
	 * In the delegation case we need mutual exclusion with
	 * a number of operations that take the i_mutex. We trylock
	 * because delegations are an optional optimization, and if
	 * there's some chance of a conflict--we'd rather not
	 * bother, maybe that's a sign this just isn't a good file to
	 * hand out a delegation on.
	 */
	if (is_deleg && !inode_trylock(inode))
		return -EAGAIN;

	if (is_deleg && arg == F_WRLCK) {
		/* Write delegations are not currently supported: */
		inode_unlock(inode);
		WARN_ON_ONCE(1);
		return -EINVAL;
	}

	percpu_down_read_preempt_disable(&file_rwsem);
	spin_lock(&ctx->flc_lock);
	time_out_leases(inode, &dispose);
	error = check_conflicting_open(dentry, arg, lease->fl_flags);
	if (error)
		goto out;

	/*
	 * At this point, we know that if there is an exclusive
	 * lease on this file, then we hold it on this filp
	 * (otherwise our open of this file would have blocked).
	 * And if we are trying to acquire an exclusive lease,
	 * then the file is not open by anyone (including us)
	 * except for this filp.
	 */
	error = -EAGAIN;
	list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
		if (fl->fl_file == filp &&
		    fl->fl_owner == lease->fl_owner) {
			my_fl = fl;
			continue;
		}

		/*
		 * No exclusive leases if someone else has a lease on
		 * this file:
		 */
		if (arg == F_WRLCK)
			goto out;
		/*
		 * Modifying our existing lease is OK, but no getting a
		 * new lease if someone else is opening for write:
		 */
		if (fl->fl_flags & FL_UNLOCK_PENDING)
			goto out;
	}

	if (my_fl != NULL) {
		lease = my_fl;
		error = lease->fl_lmops->lm_change(lease, arg, &dispose);
		if (error)
			goto out;
		goto out_setup;
	}

	error = -EINVAL;
	if (!leases_enable)
		goto out;

	locks_insert_lock_ctx(lease, &ctx->flc_lease);
	/*
	 * The check in break_lease() is lockless. It's possible for another
	 * open to race in after we did the earlier check for a conflicting
	 * open but before the lease was inserted. Check again for a
	 * conflicting open and cancel the lease if there is one.
	 *
	 * We also add a barrier here to ensure that the insertion of the lock
	 * precedes these checks.
	 */
	smp_mb();
	error = check_conflicting_open(dentry, arg, lease->fl_flags);
	if (error) {
		locks_unlink_lock_ctx(lease);
		goto out;
	}

out_setup:
	if (lease->fl_lmops->lm_setup)
		lease->fl_lmops->lm_setup(lease, priv);
out:
	spin_unlock(&ctx->flc_lock);
	percpu_up_read_preempt_enable(&file_rwsem);
	locks_dispose_list(&dispose);
	if (is_deleg)
		inode_unlock(inode);
	if (!error && !my_fl)
		*flp = NULL;
	return error;
}
static int generic_delete_lease(struct file *filp, void *owner)
{
	int error = -EAGAIN;
	struct file_lock *fl, *victim = NULL;
	struct inode *inode = locks_inode(filp);
	struct file_lock_context *ctx;
	LIST_HEAD(dispose);

	ctx = smp_load_acquire(&inode->i_flctx);
	if (!ctx) {
		trace_generic_delete_lease(inode, NULL);
		return error;
	}

	percpu_down_read_preempt_disable(&file_rwsem);
	spin_lock(&ctx->flc_lock);
	list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
		if (fl->fl_file == filp &&
		    fl->fl_owner == owner) {
			victim = fl;
			break;
		}
	}
	trace_generic_delete_lease(inode, victim);
	if (victim)
		error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);
	spin_unlock(&ctx->flc_lock);
	percpu_up_read_preempt_enable(&file_rwsem);
	locks_dispose_list(&dispose);
	return error;
}
/**
 * generic_setlease - sets a lease on an open file
 * @filp:	file pointer
 * @arg:	type of lease to obtain
 * @flp:	input - file_lock to use, output - file_lock inserted
 * @priv:	private data for lm_setup (may be NULL if lm_setup
 *		doesn't require it)
 *
 * The (input) flp->fl_lmops->lm_break function is required
 * by break_lease().
 */
int generic_setlease(struct file *filp, long arg, struct file_lock **flp,
			void **priv)
{
	struct inode *inode = locks_inode(filp);
	int error;

	if ((!uid_eq(current_fsuid(), inode->i_uid)) && !capable(CAP_LEASE))
		return -EACCES;
	if (!S_ISREG(inode->i_mode))
		return -EINVAL;
	error = security_file_lock(filp, arg);
	if (error)
		return error;

	switch (arg) {
	case F_UNLCK:
		return generic_delete_lease(filp, *priv);
	case F_RDLCK:
	case F_WRLCK:
		if (!(*flp)->fl_lmops->lm_break) {
			WARN_ON_ONCE(1);
			return -ENOLCK;
		}

		return generic_add_lease(filp, arg, flp, priv);
	default:
		return -EINVAL;
	}
}
EXPORT_SYMBOL(generic_setlease);
/**
 * vfs_setlease - sets a lease on an open file
 * @filp:	file pointer
 * @arg:	type of lease to obtain
 * @lease:	file_lock to use when adding a lease
 * @priv:	private info for lm_setup when adding a lease (may be
 *		NULL if lm_setup doesn't require it)
 *
 * Call this to establish a lease on the file. The "lease" argument is not
 * used for F_UNLCK requests and may be NULL. For commands that set or alter
 * an existing lease, the ``(*lease)->fl_lmops->lm_break`` operation must be
 * set; if not, this function will return -ENOLCK (and generate a scary-looking
 * log message).
 *
 * The "priv" pointer is passed directly to the lm_setup function as-is. It
 * may be NULL if the lm_setup operation doesn't require it.
 */
int
vfs_setlease(struct file *filp, long arg, struct file_lock **lease, void **priv)
{
	if (filp->f_op->setlease)
		return filp->f_op->setlease(filp, arg, lease, priv);
	else
		return generic_setlease(filp, arg, lease, priv);
}
EXPORT_SYMBOL_GPL(vfs_setlease);
static int do_fcntl_add_lease(unsigned int fd, struct file *filp, long arg)
{
	struct file_lock *fl;
	struct fasync_struct *new;
	int error;

	fl = lease_alloc(filp, arg);
	if (IS_ERR(fl))
		return PTR_ERR(fl);

	new = fasync_alloc();
	if (!new) {
		locks_free_lock(fl);
		return -ENOMEM;
	}
	new->fa_fd = fd;

	error = vfs_setlease(filp, arg, &fl, (void **)&new);
	if (fl)
		locks_free_lock(fl);
	if (new)
		fasync_free(new);
	return error;
}
/**
 * fcntl_setlease - sets a lease on an open file
 * @fd: open file descriptor
 * @filp: file pointer
 * @arg: type of lease to obtain
 *
 * Call this fcntl to establish a lease on the file.
 * Note that you also need to call %F_SETSIG to
 * receive a signal when the lease is broken.
 */
int fcntl_setlease(unsigned int fd, struct file *filp, long arg)
{
	if (arg == F_UNLCK)
		return vfs_setlease(filp, F_UNLCK, NULL, (void **)&filp);
	return do_fcntl_add_lease(fd, filp, arg);
}
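/*
 * Illustrative userspace sketch (not part of this file's build) of the
 * sequence the comment above describes; error handling elided, and the
 * SIGRTMIN choice is arbitrary (the default break signal is SIGIO):
 *
 *	fcntl(fd, F_SETSIG, SIGRTMIN);		// pick the break signal
 *	fcntl(fd, F_SETLEASE, F_RDLCK);		// take a read lease
 *	// ... on a conflicting open() the kernel sends the signal; we then
 *	// have lease_break_time seconds to finish up and surrender it:
 *	fcntl(fd, F_SETLEASE, F_UNLCK);
 */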
/**
 * flock_lock_inode_wait - Apply a FLOCK-style lock to a file
 * @inode: inode of the file to apply to
 * @fl: The lock to be applied
 *
 * Apply a FLOCK style lock request to an inode.
 */
static int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl)
{
	int error;
	might_sleep();
	for (;;) {
		error = flock_lock_inode(inode, fl);
		if (error != FILE_LOCK_DEFERRED)
			break;
		error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);
		if (error)
			break;
	}
	locks_delete_block(fl);
	return error;
}
/**
 * locks_lock_inode_wait - Apply a lock to an inode
 * @inode: inode of the file to apply to
 * @fl: The lock to be applied
 *
 * Apply a POSIX or FLOCK style lock request to an inode.
 */
int locks_lock_inode_wait(struct inode *inode, struct file_lock *fl)
{
	int res = 0;
	switch (fl->fl_flags & (FL_POSIX|FL_FLOCK)) {
		case FL_POSIX:
			res = posix_lock_inode_wait(inode, fl);
			break;
		case FL_FLOCK:
			res = flock_lock_inode_wait(inode, fl);
			break;
		default:
			BUG();
	}
	return res;
}
EXPORT_SYMBOL(locks_lock_inode_wait);
/**
 * sys_flock: - flock() system call.
 * @fd: the file descriptor to lock.
 * @cmd: the type of lock to apply.
 *
 * Apply a %FL_FLOCK style lock to an open file descriptor.
 * The @cmd can be one of:
 *
 * - %LOCK_SH -- a shared lock.
 * - %LOCK_EX -- an exclusive lock.
 * - %LOCK_UN -- remove an existing lock.
 * - %LOCK_MAND -- a 'mandatory' flock.
 *   This exists to emulate Windows Share Modes.
 *
 * %LOCK_MAND can be combined with %LOCK_READ or %LOCK_WRITE to allow other
 * processes read and write access respectively.
 */
SYSCALL_DEFINE2(flock, unsigned int, fd, unsigned int, cmd)
{
	struct fd f = fdget(fd);
	struct file_lock *lock;
	int can_sleep, unlock;
	int error;

	error = -EBADF;
	if (!f.file)
		goto out;

	can_sleep = !(cmd & LOCK_NB);
	cmd &= ~LOCK_NB;
	unlock = (cmd == LOCK_UN);

	if (!unlock && !(cmd & LOCK_MAND) &&
	    !(f.file->f_mode & (FMODE_READ|FMODE_WRITE)))
		goto out_putf;

	lock = flock_make_lock(f.file, cmd, NULL);
	if (IS_ERR(lock)) {
		error = PTR_ERR(lock);
		goto out_putf;
	}

	if (can_sleep)
		lock->fl_flags |= FL_SLEEP;

	error = security_file_lock(f.file, lock->fl_type);
	if (error)
		goto out_free;

	if (f.file->f_op->flock)
		error = f.file->f_op->flock(f.file,
					  (can_sleep) ? F_SETLKW : F_SETLK,
					  lock);
	else
		error = locks_lock_file_wait(f.file, lock);

 out_free:
	locks_free_lock(lock);

 out_putf:
	fdput(f);
 out:
	return error;
}
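/*
 * Illustrative userspace sketch (not part of this file's build) of this
 * system call: a non-blocking exclusive request and its unlock.
 *
 *	if (flock(fd, LOCK_EX | LOCK_NB) == -1 && errno == EWOULDBLOCK)
 *		;	// someone else holds a conflicting flock lock
 *	// ... critical section ...
 *	flock(fd, LOCK_UN);
 */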
2152 * vfs_test_lock - test file byte range lock
2153 * @filp: The file to test lock for
2154 * @fl: The lock to test; also used to hold result
2156 * Returns -ERRNO on failure. Indicates presence of conflicting lock by
2157 * setting conf->fl_type to something other than F_UNLCK.
int vfs_test_lock(struct file *filp, struct file_lock *fl)
{
	if (filp->f_op->lock)
		return filp->f_op->lock(filp, F_GETLK, fl);
	posix_test_lock(filp, fl);
	return 0;
}
EXPORT_SYMBOL_GPL(vfs_test_lock);
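
/*
 * Minimal in-kernel caller sketch (hypothetical, not from this file):
 * probe whether a write lock over the whole file would conflict. On
 * return, fl_type is F_UNLCK when nothing conflicts; otherwise fl
 * describes the first conflicting lock.
 *
 *	struct file_lock fl;
 *
 *	locks_init_lock(&fl);
 *	fl.fl_flags = FL_POSIX;
 *	fl.fl_type  = F_WRLCK;
 *	fl.fl_start = 0;
 *	fl.fl_end   = OFFSET_MAX;
 *	if (!vfs_test_lock(filp, &fl) && fl.fl_type != F_UNLCK)
 *		;	// a conflicting lock is described by 'fl'
 */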
/**
 * locks_translate_pid - translate a file_lock's fl_pid number into a namespace
 * @fl: The file_lock whose fl_pid should be translated
 * @ns: The namespace into which the pid should be translated
 *
 * Used to translate a fl_pid into a namespace virtual pid number
 */
static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
{
	pid_t vnr;
	struct pid *pid;

	if (IS_OFDLCK(fl))
		return -1;
	if (IS_REMOTELCK(fl))
		return fl->fl_pid;
	/*
	 * If the flock owner process is dead and its pid has been already
	 * freed, the translation below won't work, but we still want to show
	 * flock owner pid number in init pidns.
	 */
	if (ns == &init_pid_ns)
		return (pid_t)fl->fl_pid;

	rcu_read_lock();
	pid = find_pid_ns(fl->fl_pid, &init_pid_ns);
	vnr = pid_nr_ns(pid, ns);
	rcu_read_unlock();
	return vnr;
}
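
/*
 * Worked example (illustrative): a lock taken by a task whose global
 * (init-namespace) pid is 1234, and which is pid 7 inside a container's
 * pid namespace, is reported as 7 to readers in that namespace, as 1234
 * in the init namespace, and as 0 where the owner is not visible at all.
 */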
static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
{
	flock->l_pid = locks_translate_pid(fl, task_active_pid_ns(current));
#if BITS_PER_LONG == 32
	/*
	 * Make sure we can represent the posix lock via
	 * legacy 32bit flock.
	 */
	if (fl->fl_start > OFFT_OFFSET_MAX)
		return -EOVERFLOW;
	if (fl->fl_end != OFFSET_MAX && fl->fl_end > OFFT_OFFSET_MAX)
		return -EOVERFLOW;
#endif
	flock->l_start = fl->fl_start;
	flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
		fl->fl_end - fl->fl_start + 1;
	flock->l_whence = 0;
	flock->l_type = fl->fl_type;
	return 0;
}
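
/*
 * Worked example (illustrative): a POSIX lock on bytes 100..199
 * (fl_start = 100, fl_end = 199) converts to l_start = 100 and
 * l_len = 199 - 100 + 1 = 100, while a whole-file lock
 * (fl_end == OFFSET_MAX) becomes the special l_len = 0, "to EOF".
 */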
#if BITS_PER_LONG == 32
static void posix_lock_to_flock64(struct flock64 *flock, struct file_lock *fl)
{
	flock->l_pid = locks_translate_pid(fl, task_active_pid_ns(current));
	flock->l_start = fl->fl_start;
	flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
		fl->fl_end - fl->fl_start + 1;
	flock->l_whence = 0;
	flock->l_type = fl->fl_type;
}
#endif
/* Report the first existing lock that would conflict with l.
 * This implements the F_GETLK command of fcntl().
 */
int fcntl_getlk(struct file *filp, unsigned int cmd, struct flock *flock)
{
	struct file_lock *fl;
	int error;

	fl = locks_alloc_lock();
	if (fl == NULL)
		return -ENOMEM;
	error = -EINVAL;
	if (flock->l_type != F_RDLCK && flock->l_type != F_WRLCK)
		goto out;

	error = flock_to_posix_lock(filp, fl, flock);
	if (error)
		goto out;

	if (cmd == F_OFD_GETLK) {
		error = -EINVAL;
		if (flock->l_pid != 0)
			goto out;

		cmd = F_GETLK;
		fl->fl_flags |= FL_OFDLCK;
		fl->fl_owner = filp;
	}

	error = vfs_test_lock(filp, fl);
	if (error)
		goto out;

	flock->l_type = fl->fl_type;
	if (fl->fl_type != F_UNLCK) {
		error = posix_lock_to_flock(flock, fl);
		if (error)
			goto out;
	}
out:
	locks_free_lock(fl);
	return error;
}
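
/*
 * Illustrative userspace sketch (not part of the original file):
 * querying for a conflicting lock with F_GETLK; l_type comes back as
 * F_UNLCK when the whole range could be locked as requested.
 *
 *	struct flock fl = {
 *		.l_type   = F_WRLCK,
 *		.l_whence = SEEK_SET,
 *		.l_start  = 0,
 *		.l_len    = 0,		// 0 means "to EOF"
 *	};
 *
 *	if (fcntl(fd, F_GETLK, &fl) == 0 && fl.l_type != F_UNLCK)
 *		printf("conflict held by pid %d\n", (int)fl.l_pid);
 */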
/**
 * vfs_lock_file - file byte range lock
 * @filp: The file to apply the lock to
 * @cmd: type of locking operation (F_SETLK, F_GETLK, etc.)
 * @fl: The lock to be applied
 * @conf: Place to return a copy of the conflicting lock, if found.
 *
 * A caller that doesn't care about the conflicting lock may pass NULL
 * as the final argument.
 *
 * If the filesystem defines a private ->lock() method, then @conf will
 * be left unchanged; so a caller that cares should initialize it to
 * some acceptable default.
 *
 * To avoid blocking kernel daemons, such as lockd, that need to acquire POSIX
 * locks, the ->lock() interface may return asynchronously, before the lock has
 * been granted or denied by the underlying filesystem, if (and only if)
 * lm_grant is set. Callers expecting ->lock() to return asynchronously
 * will only use F_SETLK, not F_SETLKW; they will set FL_SLEEP if (and only if)
 * the request is for a blocking lock. When ->lock() does return asynchronously,
 * it must return FILE_LOCK_DEFERRED, and call ->lm_grant() when the lock
 * request completes.
 * If the request is for a non-blocking lock, the file system should return
 * FILE_LOCK_DEFERRED, then try to get the lock and call the callback routine
 * with the result. If the request timed out, the callback routine will return
 * a nonzero return code and the file system should release the lock. The file
 * system is also responsible for keeping a corresponding posix lock when it
 * grants a lock, so the VFS can find out which locks are locally held and do
 * the correct lock cleanup when required.
 * The underlying filesystem must not drop the kernel lock or call
 * ->lm_grant() before returning to the caller with a FILE_LOCK_DEFERRED
 * return code.
 */
int vfs_lock_file(struct file *filp, unsigned int cmd, struct file_lock *fl,
		  struct file_lock *conf)
{
	if (filp->f_op->lock)
		return filp->f_op->lock(filp, cmd, fl);
	else
		return posix_lock_file(filp, fl, conf);
}
EXPORT_SYMBOL_GPL(vfs_lock_file);
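
/*
 * Illustrative caller sketch (hypothetical, not from this file): per the
 * comment above, a caller that cares about the conflicting lock should
 * pre-initialize @conf, since a filesystem ->lock() method may leave it
 * untouched:
 *
 *	struct file_lock conf;
 *
 *	locks_init_lock(&conf);
 *	conf.fl_type = F_UNLCK;
 *	error = vfs_lock_file(filp, F_SETLK, fl, &conf);
 *	if (error == -EAGAIN && conf.fl_type != F_UNLCK)
 *		;	// the request collided with the lock in 'conf'
 */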
static int do_lock_file_wait(struct file *filp, unsigned int cmd,
			     struct file_lock *fl)
{
	int error;

	error = security_file_lock(filp, fl->fl_type);
	if (error)
		return error;

	for (;;) {
		error = vfs_lock_file(filp, cmd, fl, NULL);
		if (error != FILE_LOCK_DEFERRED)
			break;
		error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);
		if (error)
			break;
	}
	locks_delete_block(fl);

	return error;
}
/* Ensure that fl->fl_file has compatible f_mode for F_SETLK calls */
static int
check_fmode_for_setlk(struct file_lock *fl)
{
	switch (fl->fl_type) {
	case F_RDLCK:
		if (!(fl->fl_file->f_mode & FMODE_READ))
			return -EBADF;
		break;
	case F_WRLCK:
		if (!(fl->fl_file->f_mode & FMODE_WRITE))
			return -EBADF;
	}
	return 0;
}
/* Apply the lock described by l to an open file descriptor.
 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
 */
int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
		struct flock *flock)
{
	struct file_lock *file_lock = locks_alloc_lock();
	struct inode *inode = locks_inode(filp);
	struct file *f;
	int error;

	if (file_lock == NULL)
		return -ENOLCK;

	/* Don't allow mandatory locks on files that may be memory mapped
	 * and shared.
	 */
	if (mandatory_lock(inode) && mapping_writably_mapped(filp->f_mapping)) {
		error = -EAGAIN;
		goto out;
	}

	error = flock_to_posix_lock(filp, file_lock, flock);
	if (error)
		goto out;

	error = check_fmode_for_setlk(file_lock);
	if (error)
		goto out;

	/*
	 * If the cmd is requesting file-private locks, then set the
	 * FL_OFDLCK flag and override the owner.
	 */
	switch (cmd) {
	case F_OFD_SETLK:
		error = -EINVAL;
		if (flock->l_pid != 0)
			goto out;

		cmd = F_SETLK;
		file_lock->fl_flags |= FL_OFDLCK;
		file_lock->fl_owner = filp;
		break;
	case F_OFD_SETLKW:
		error = -EINVAL;
		if (flock->l_pid != 0)
			goto out;

		cmd = F_SETLKW;
		file_lock->fl_flags |= FL_OFDLCK;
		file_lock->fl_owner = filp;
		/* Fallthrough */
	case F_SETLKW:
		file_lock->fl_flags |= FL_SLEEP;
	}

	error = do_lock_file_wait(filp, cmd, file_lock);

	/*
	 * Attempt to detect a close/fcntl race and recover by releasing the
	 * lock that was just acquired. There is no need to do that when we're
	 * unlocking though, or for OFD locks.
	 */
	if (!error && file_lock->fl_type != F_UNLCK &&
	    !(file_lock->fl_flags & FL_OFDLCK)) {
		/*
		 * We need that spin_lock here - it prevents reordering between
		 * update of i_flctx->flc_posix and check for it done in
		 * close(). rcu_read_lock() wouldn't do.
		 */
		spin_lock(&current->files->file_lock);
		f = fcheck(fd);
		spin_unlock(&current->files->file_lock);
		if (f != filp) {
			file_lock->fl_type = F_UNLCK;
			error = do_lock_file_wait(filp, cmd, file_lock);
			WARN_ON_ONCE(error);
			error = -EBADF;
		}
	}
out:
	trace_fcntl_setlk(inode, file_lock, error);
	locks_free_lock(file_lock);
	return error;
}
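
/*
 * Illustrative userspace sketch (not part of the original file): an OFD
 * ("open file description") lock, owned by the open file description
 * rather than the process. As enforced above, l_pid must be zero for
 * the F_OFD_* commands:
 *
 *	struct flock fl = {
 *		.l_type   = F_WRLCK,
 *		.l_whence = SEEK_SET,
 *		.l_start  = 0,
 *		.l_len    = 0,
 *		.l_pid    = 0,		// required for F_OFD_SETLK
 *	};
 *
 *	if (fcntl(fd, F_OFD_SETLK, &fl) == -1)
 *		perror("F_OFD_SETLK");
 */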
#if BITS_PER_LONG == 32
/* Report the first existing lock that would conflict with l.
 * This implements the F_GETLK command of fcntl().
 */
int fcntl_getlk64(struct file *filp, unsigned int cmd, struct flock64 *flock)
{
	struct file_lock *fl;
	int error;

	fl = locks_alloc_lock();
	if (fl == NULL)
		return -ENOMEM;

	error = -EINVAL;
	if (flock->l_type != F_RDLCK && flock->l_type != F_WRLCK)
		goto out;

	error = flock64_to_posix_lock(filp, fl, flock);
	if (error)
		goto out;

	if (cmd == F_OFD_GETLK) {
		error = -EINVAL;
		if (flock->l_pid != 0)
			goto out;

		cmd = F_GETLK64;
		fl->fl_flags |= FL_OFDLCK;
		fl->fl_owner = filp;
	}

	error = vfs_test_lock(filp, fl);
	if (error)
		goto out;

	flock->l_type = fl->fl_type;
	if (fl->fl_type != F_UNLCK)
		posix_lock_to_flock64(flock, fl);

out:
	locks_free_lock(fl);
	return error;
}
/* Apply the lock described by l to an open file descriptor.
 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
 */
int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
		struct flock64 *flock)
{
	struct file_lock *file_lock = locks_alloc_lock();
	struct inode *inode = locks_inode(filp);
	struct file *f;
	int error;

	if (file_lock == NULL)
		return -ENOLCK;

	/* Don't allow mandatory locks on files that may be memory mapped
	 * and shared.
	 */
	if (mandatory_lock(inode) && mapping_writably_mapped(filp->f_mapping)) {
		error = -EAGAIN;
		goto out;
	}

	error = flock64_to_posix_lock(filp, file_lock, flock);
	if (error)
		goto out;

	error = check_fmode_for_setlk(file_lock);
	if (error)
		goto out;

	/*
	 * If the cmd is requesting file-private locks, then set the
	 * FL_OFDLCK flag and override the owner.
	 */
	switch (cmd) {
	case F_OFD_SETLK:
		error = -EINVAL;
		if (flock->l_pid != 0)
			goto out;

		cmd = F_SETLK64;
		file_lock->fl_flags |= FL_OFDLCK;
		file_lock->fl_owner = filp;
		break;
	case F_OFD_SETLKW:
		error = -EINVAL;
		if (flock->l_pid != 0)
			goto out;

		cmd = F_SETLKW64;
		file_lock->fl_flags |= FL_OFDLCK;
		file_lock->fl_owner = filp;
		/* Fallthrough */
	case F_SETLKW64:
		file_lock->fl_flags |= FL_SLEEP;
	}

	error = do_lock_file_wait(filp, cmd, file_lock);

	/*
	 * Attempt to detect a close/fcntl race and recover by releasing the
	 * lock that was just acquired. There is no need to do that when we're
	 * unlocking though, or for OFD locks.
	 */
	if (!error && file_lock->fl_type != F_UNLCK &&
	    !(file_lock->fl_flags & FL_OFDLCK)) {
		/*
		 * We need that spin_lock here - it prevents reordering between
		 * update of i_flctx->flc_posix and check for it done in
		 * close(). rcu_read_lock() wouldn't do.
		 */
		spin_lock(&current->files->file_lock);
		f = fcheck(fd);
		spin_unlock(&current->files->file_lock);
		if (f != filp) {
			file_lock->fl_type = F_UNLCK;
			error = do_lock_file_wait(filp, cmd, file_lock);
			WARN_ON_ONCE(error);
			error = -EBADF;
		}
	}
out:
	locks_free_lock(file_lock);
	return error;
}
#endif /* BITS_PER_LONG == 32 */
/*
 * This function is called when the file is being removed
 * from the task's fd array. POSIX locks belonging to this task
 * are deleted at this time.
 */
void locks_remove_posix(struct file *filp, fl_owner_t owner)
{
	int error;
	struct inode *inode = locks_inode(filp);
	struct file_lock lock;
	struct file_lock_context *ctx;

	/*
	 * If there are no locks held on this file, we don't need to call
	 * posix_lock_file(). Another process could be setting a lock on this
	 * file at the same time, but we wouldn't remove that lock anyway.
	 */
	ctx = smp_load_acquire(&inode->i_flctx);
	if (!ctx || list_empty(&ctx->flc_posix))
		return;

	locks_init_lock(&lock);
	lock.fl_type = F_UNLCK;
	lock.fl_flags = FL_POSIX | FL_CLOSE;
	lock.fl_start = 0;
	lock.fl_end = OFFSET_MAX;
	lock.fl_owner = owner;
	lock.fl_pid = current->tgid;
	lock.fl_file = filp;
	lock.fl_ops = NULL;
	lock.fl_lmops = NULL;

	error = vfs_lock_file(filp, F_SETLK, &lock, NULL);

	if (lock.fl_ops && lock.fl_ops->fl_release_private)
		lock.fl_ops->fl_release_private(&lock);
	trace_locks_remove_posix(inode, &lock, error);
}
EXPORT_SYMBOL(locks_remove_posix);
/* The i_flctx must be valid when calling into here */
static void
locks_remove_flock(struct file *filp, struct file_lock_context *flctx)
{
	struct file_lock fl;
	struct inode *inode = locks_inode(filp);

	if (list_empty(&flctx->flc_flock))
		return;

	flock_make_lock(filp, LOCK_UN, &fl);
	fl.fl_flags |= FL_CLOSE;

	if (filp->f_op->flock)
		filp->f_op->flock(filp, F_SETLKW, &fl);
	else
		flock_lock_inode(inode, &fl);

	if (fl.fl_ops && fl.fl_ops->fl_release_private)
		fl.fl_ops->fl_release_private(&fl);
}
/* The i_flctx must be valid when calling into here */
static void
locks_remove_lease(struct file *filp, struct file_lock_context *ctx)
{
	struct file_lock *fl, *tmp;
	LIST_HEAD(dispose);

	if (list_empty(&ctx->flc_lease))
		return;

	percpu_down_read_preempt_disable(&file_rwsem);
	spin_lock(&ctx->flc_lock);
	list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list)
		if (filp == fl->fl_file)
			lease_modify(fl, F_UNLCK, &dispose);
	spin_unlock(&ctx->flc_lock);
	percpu_up_read_preempt_enable(&file_rwsem);

	locks_dispose_list(&dispose);
}
/*
 * This function is called on the last close of an open file.
 */
void locks_remove_file(struct file *filp)
{
	struct file_lock_context *ctx;

	ctx = smp_load_acquire(&locks_inode(filp)->i_flctx);
	if (!ctx)
		return;

	/* remove any OFD locks */
	locks_remove_posix(filp, filp);

	/* remove flock locks */
	locks_remove_flock(filp, ctx);

	/* remove any leases */
	locks_remove_lease(filp, ctx);

	spin_lock(&ctx->flc_lock);
	locks_check_ctx_file_list(filp, &ctx->flc_posix, "POSIX");
	locks_check_ctx_file_list(filp, &ctx->flc_flock, "FLOCK");
	locks_check_ctx_file_list(filp, &ctx->flc_lease, "LEASE");
	spin_unlock(&ctx->flc_lock);
}
/**
 * vfs_cancel_lock - file byte range unblock lock
 * @filp: The file to apply the unblock to
 * @fl: The lock to be unblocked
 *
 * Used by lock managers to cancel blocked requests
 */
int vfs_cancel_lock(struct file *filp, struct file_lock *fl)
{
	if (filp->f_op->lock)
		return filp->f_op->lock(filp, F_CANCELLK, fl);
	return 0;
}
EXPORT_SYMBOL_GPL(vfs_cancel_lock);
#ifdef CONFIG_PROC_FS
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

struct locks_iterator {
	int	li_cpu;
	loff_t	li_pos;
};
static void lock_get_status(struct seq_file *f, struct file_lock *fl,
			    loff_t id, char *pfx)
{
	struct inode *inode = NULL;
	unsigned int fl_pid;
	struct pid_namespace *proc_pidns = file_inode(f->file)->i_sb->s_fs_info;

	fl_pid = locks_translate_pid(fl, proc_pidns);
	/*
	 * If lock owner is dead (and pid is freed) or not visible in current
	 * pidns, zero is shown as a pid value. Check lock info from
	 * init_pid_ns to get saved lock pid value.
	 */

	if (fl->fl_file != NULL)
		inode = locks_inode(fl->fl_file);

	seq_printf(f, "%lld:%s ", id, pfx);
	if (IS_POSIX(fl)) {
		if (fl->fl_flags & FL_ACCESS)
			seq_puts(f, "ACCESS");
		else if (IS_OFDLCK(fl))
			seq_puts(f, "OFDLCK");
		else
			seq_puts(f, "POSIX ");

		seq_printf(f, " %s ",
			     (inode == NULL) ? "*NOINODE*" :
			     mandatory_lock(inode) ? "MANDATORY" : "ADVISORY ");
	} else if (IS_FLOCK(fl)) {
		if (fl->fl_type & LOCK_MAND) {
			seq_puts(f, "FLOCK  MSNFS     ");
		} else {
			seq_puts(f, "FLOCK  ADVISORY  ");
		}
	} else if (IS_LEASE(fl)) {
		if (fl->fl_flags & FL_DELEG)
			seq_puts(f, "DELEG  ");
		else
			seq_puts(f, "LEASE  ");

		if (lease_breaking(fl))
			seq_puts(f, "BREAKING  ");
		else if (fl->fl_file)
			seq_puts(f, "ACTIVE    ");
		else
			seq_puts(f, "BREAKER   ");
	} else {
		seq_puts(f, "UNKNOWN UNKNOWN  ");
	}
	if (fl->fl_type & LOCK_MAND) {
		seq_printf(f, "%s ",
			       (fl->fl_type & LOCK_READ)
			       ? (fl->fl_type & LOCK_WRITE) ? "RW   " : "READ "
			       : (fl->fl_type & LOCK_WRITE) ? "WRITE" : "NONE ");
	} else {
		seq_printf(f, "%s ",
			       (lease_breaking(fl))
			       ? (fl->fl_type == F_UNLCK) ? "UNLCK" : "READ "
			       : (fl->fl_type == F_WRLCK) ? "WRITE" : "READ ");
	}
	if (inode) {
		/* userspace relies on this representation of dev_t */
		seq_printf(f, "%d %02x:%02x:%ld ", fl_pid,
				MAJOR(inode->i_sb->s_dev),
				MINOR(inode->i_sb->s_dev), inode->i_ino);
	} else {
		seq_printf(f, "%d <none>:0 ", fl_pid);
	}
	if (IS_POSIX(fl)) {
		if (fl->fl_end == OFFSET_MAX)
			seq_printf(f, "%Ld EOF\n", fl->fl_start);
		else
			seq_printf(f, "%Ld %Ld\n", fl->fl_start, fl->fl_end);
	} else {
		seq_puts(f, "0 EOF\n");
	}
}
static int locks_show(struct seq_file *f, void *v)
{
	struct locks_iterator *iter = f->private;
	struct file_lock *fl, *bfl;
	struct pid_namespace *proc_pidns = file_inode(f->file)->i_sb->s_fs_info;

	fl = hlist_entry(v, struct file_lock, fl_link);

	if (locks_translate_pid(fl, proc_pidns) == 0)
		return 0;

	lock_get_status(f, fl, iter->li_pos, "");

	list_for_each_entry(bfl, &fl->fl_blocked_requests, fl_blocked_member)
		lock_get_status(f, bfl, iter->li_pos, " ->");

	return 0;
}
static void __show_fd_locks(struct seq_file *f,
			struct list_head *head, int *id,
			struct file *filp, struct files_struct *files)
{
	struct file_lock *fl;

	list_for_each_entry(fl, head, fl_list) {

		if (filp != fl->fl_file)
			continue;
		if (fl->fl_owner != files &&
		    fl->fl_owner != filp)
			continue;

		(*id)++;
		seq_puts(f, "lock:\t");
		lock_get_status(f, fl, *id, "");
	}
}
void show_fd_locks(struct seq_file *f,
		  struct file *filp, struct files_struct *files)
{
	struct inode *inode = locks_inode(filp);
	struct file_lock_context *ctx;
	int id = 0;

	ctx = smp_load_acquire(&inode->i_flctx);
	if (!ctx)
		return;

	spin_lock(&ctx->flc_lock);
	__show_fd_locks(f, &ctx->flc_flock, &id, filp, files);
	__show_fd_locks(f, &ctx->flc_posix, &id, filp, files);
	__show_fd_locks(f, &ctx->flc_lease, &id, filp, files);
	spin_unlock(&ctx->flc_lock);
}
static void *locks_start(struct seq_file *f, loff_t *pos)
	__acquires(&blocked_lock_lock)
{
	struct locks_iterator *iter = f->private;

	iter->li_pos = *pos + 1;
	percpu_down_write(&file_rwsem);
	spin_lock(&blocked_lock_lock);
	return seq_hlist_start_percpu(&file_lock_list.hlist, &iter->li_cpu, *pos);
}
static void *locks_next(struct seq_file *f, void *v, loff_t *pos)
{
	struct locks_iterator *iter = f->private;

	++iter->li_pos;
	return seq_hlist_next_percpu(v, &file_lock_list.hlist, &iter->li_cpu, pos);
}
static void locks_stop(struct seq_file *f, void *v)
	__releases(&blocked_lock_lock)
{
	spin_unlock(&blocked_lock_lock);
	percpu_up_write(&file_rwsem);
}
static const struct seq_operations locks_seq_operations = {
	.start	= locks_start,
	.next	= locks_next,
	.stop	= locks_stop,
	.show	= locks_show,
};
static int __init proc_locks_init(void)
{
	proc_create_seq_private("locks", 0, NULL, &locks_seq_operations,
			sizeof(struct locks_iterator), NULL);
	return 0;
}
fs_initcall(proc_locks_init);
#endif
static int __init filelock_init(void)
{
	int i;

	flctx_cache = kmem_cache_create("file_lock_ctx",
			sizeof(struct file_lock_context), 0, SLAB_PANIC, NULL);

	filelock_cache = kmem_cache_create("file_lock_cache",
			sizeof(struct file_lock), 0, SLAB_PANIC, NULL);

	for_each_possible_cpu(i) {
		struct file_lock_list_struct *fll = per_cpu_ptr(&file_lock_list, i);

		spin_lock_init(&fll->lock);
		INIT_HLIST_HEAD(&fll->hlist);
	}

	return 0;
}
core_initcall(filelock_init);