/*
 *  Provide support for fcntl()'s F_GETLK, F_SETLK, and F_SETLKW calls.
 *  Doug Evans (dje@spiff.uucp), August 07, 1992
 *
 *  Deadlock detection added.
 *  FIXME: one thing isn't handled yet:
 *	- mandatory locks (requires lots of changes elsewhere)
 *  Kelly Carmichael (kelly@[142.24.8.65]), September 17, 1994.
 *
 *  Miscellaneous edits, and a total rewrite of posix_lock_file() code.
 *  Kai Petzke (wpp@marie.physik.tu-berlin.de), 1994
 *
 *  Converted file_lock_table to a linked list from an array, which eliminates
 *  the limits on how many active file locks are open.
 *  Chad Page (pageone@netcom.com), November 27, 1994
 *
 *  Removed dependency on file descriptors. dup()'ed file descriptors now
 *  get the same locks as the original file descriptors, and a close() on
 *  any file descriptor removes ALL the locks on the file for the current
 *  process. Since locks still depend on the process id, locks are inherited
 *  after an exec() but not after a fork(). This agrees with POSIX, and with
 *  both BSD and SVR4 practice.
 *  Andy Walker (andy@lysaker.kvaerner.no), February 14, 1995
 *
 *  Scrapped free list which is redundant now that we allocate locks
 *  dynamically with kmalloc()/kfree().
 *  Andy Walker (andy@lysaker.kvaerner.no), February 21, 1995
 *
 *  Implemented two lock personalities - FL_FLOCK and FL_POSIX.
 *
 *  FL_POSIX locks are created with calls to fcntl() and lockf() through the
 *  fcntl() system call. They have the semantics described above.
 *
 *  FL_FLOCK locks are created with calls to flock(), through the flock()
 *  system call, which is new. Old C libraries implement flock() via fcntl()
 *  and will continue to use the old, broken implementation.
 *
 *  FL_FLOCK locks follow the 4.4 BSD flock() semantics. They are associated
 *  with a file pointer (filp). As a result they can be shared by a parent
 *  process and its children after a fork(). They are removed when the last
 *  file descriptor referring to the file pointer is closed (unless explicitly
 *  unlocked).
 *
 *  FL_FLOCK locks never deadlock; an existing lock is always removed before
 *  upgrading from shared to exclusive (or vice versa). When this happens
 *  any processes blocked by the current lock are woken up and allowed to
 *  run before the new lock is applied.
 *  Andy Walker (andy@lysaker.kvaerner.no), June 09, 1995
 *
 *  Removed some race conditions in flock_lock_file(), marked other possible
 *  races. Just grep for FIXME to see them.
 *  Dmitry Gorodchanin (pgmdsg@ibi.com), February 09, 1996.
 *
 *  Addressed Dmitry's concerns. Deadlock checking no longer recursive.
 *  Lock allocation changed to GFP_ATOMIC as we can't afford to sleep
 *  once we've checked for blocking and deadlocking.
 *  Andy Walker (andy@lysaker.kvaerner.no), April 03, 1996.
 *
 *  Initial implementation of mandatory locks. SunOS turned out to be
 *  a rotten model, so I implemented the "obvious" semantics.
 *  See 'Documentation/mandatory.txt' for details.
 *  Andy Walker (andy@lysaker.kvaerner.no), April 06, 1996.
 *
 *  Don't allow mandatory locks on mmap()'ed files. Added simple functions to
 *  check if a file has mandatory locks, used by mmap(), open() and creat() to
 *  see if system call should be rejected. Ref. HP-UX/SunOS/Solaris Reference
 *  Manual, Section 2.
 *  Andy Walker (andy@lysaker.kvaerner.no), April 09, 1996.
 *
 *  Tidied up block list handling. Added '/proc/locks' interface.
 *  Andy Walker (andy@lysaker.kvaerner.no), April 24, 1996.
 *
 *  Fixed deadlock condition for pathological code that mixes calls to
 *  flock() and fcntl().
 *  Andy Walker (andy@lysaker.kvaerner.no), April 29, 1996.
 *
 *  Allow only one type of locking scheme (FL_POSIX or FL_FLOCK) to be in use
 *  for a given file at a time. Changed the CONFIG_LOCK_MANDATORY scheme to
 *  guarantee sensible behaviour in the case where file system modules might
 *  be compiled with different options than the kernel itself.
 *  Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
 *
 *  Added a couple of missing wake_up() calls. Thanks to Thomas Meckel
 *  (Thomas.Meckel@mni.fh-giessen.de) for spotting this.
 *  Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
 *
 *  Changed FL_POSIX locks to use the block list in the same way as FL_FLOCK
 *  locks. Changed process synchronisation to avoid dereferencing locks that
 *  have already been freed.
 *  Andy Walker (andy@lysaker.kvaerner.no), Sep 21, 1996.
 *
 *  Made the block list a circular list to minimise searching in the list.
 *  Andy Walker (andy@lysaker.kvaerner.no), Sep 25, 1996.
 *
 *  Made mandatory locking a mount option. Default is not to allow mandatory
 *  locking.
 *  Andy Walker (andy@lysaker.kvaerner.no), Oct 04, 1996.
 *
 *  Some adaptations for NFS support.
 *  Olaf Kirch (okir@monad.swb.de), Dec 1996.
 *
 *  Fixed /proc/locks interface so that we can't overrun the buffer we are handed.
 *  Andy Walker (andy@lysaker.kvaerner.no), May 12, 1997.
 *
 *  Use slab allocator instead of kmalloc/kfree.
 *  Use generic list implementation from <linux/list.h>.
 *  Sped up posix_locks_deadlock by only considering blocked locks.
 *  Matthew Wilcox <willy@debian.org>, March, 2000.
 *
 *  Leases and LOCK_MAND
 *  Matthew Wilcox <willy@debian.org>, June, 2000.
 *  Stephen Rothwell <sfr@canb.auug.org.au>, June, 2000.
 */
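/*
 * Illustrative userspace comparison (not part of this file): the two lock
 * personalities above are reached through different system calls. A minimal
 * sketch, compiled separately against libc:
 *
 *	#include <fcntl.h>
 *	#include <sys/file.h>
 *
 *	int posix_write_lock(int fd)	// FL_POSIX path, owned by the process
 *	{
 *		struct flock fl;
 *		fl.l_type = F_WRLCK;
 *		fl.l_whence = SEEK_SET;
 *		fl.l_start = 0;
 *		fl.l_len = 0;		// 0 extends the lock to end-of-file
 *		return fcntl(fd, F_SETLKW, &fl);
 *	}
 *
 *	int bsd_exclusive_lock(int fd)	// FL_FLOCK path, tied to the open
 *	{				// file, shared across fork()
 *		return flock(fd, LOCK_EX);
 *	}
 */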
#include <linux/capability.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/security.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
#include <linux/syscalls.h>
#include <linux/time.h>
#include <linux/rcupdate.h>

#include <asm/semaphore.h>
#include <asm/uaccess.h>
#define IS_POSIX(fl)	(fl->fl_flags & FL_POSIX)
#define IS_FLOCK(fl)	(fl->fl_flags & FL_FLOCK)
#define IS_LEASE(fl)	(fl->fl_flags & FL_LEASE)

int leases_enable = 1;
int lease_break_time = 45;

#define for_each_lock(inode, lockp) \
	for (lockp = &inode->i_flock; *lockp != NULL; lockp = &(*lockp)->fl_next)
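/*
 * Illustrative sketch (not part of this file): for_each_lock() walks the
 * inode's singly-linked lock list by the *address of the link field*, so
 * the cursor can also be used to unlink the current entry in place:
 *
 *	struct file_lock **before;
 *	for_each_lock(inode, before) {
 *		struct file_lock *fl = *before;
 *		if (should_remove(fl)) {	// should_remove() is hypothetical
 *			locks_delete_lock(before);	// rewrites *before
 *			break;
 *		}
 *	}
 */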
static LIST_HEAD(file_lock_list);
static LIST_HEAD(blocked_list);

static kmem_cache_t *filelock_cache;

/* Allocate an empty lock structure. */
static struct file_lock *locks_alloc_lock(void)
{
	return kmem_cache_alloc(filelock_cache, SLAB_KERNEL);
}
static void locks_release_private(struct file_lock *fl)
{
	if (fl->fl_ops) {
		if (fl->fl_ops->fl_release_private)
			fl->fl_ops->fl_release_private(fl);
		fl->fl_ops = NULL;
	}
	if (fl->fl_lmops) {
		if (fl->fl_lmops->fl_release_private)
			fl->fl_lmops->fl_release_private(fl);
		fl->fl_lmops = NULL;
	}
}
/* Free a lock which is not in use. */
static void locks_free_lock(struct file_lock *fl)
{
	if (fl == NULL) {
		BUG();
		return;
	}
	if (waitqueue_active(&fl->fl_wait))
		panic("Attempting to free lock with active wait queue");

	if (!list_empty(&fl->fl_block))
		panic("Attempting to free lock with active block list");

	if (!list_empty(&fl->fl_link))
		panic("Attempting to free lock on active lock list");

	locks_release_private(fl);
	kmem_cache_free(filelock_cache, fl);
}
void locks_init_lock(struct file_lock *fl)
{
	INIT_LIST_HEAD(&fl->fl_link);
	INIT_LIST_HEAD(&fl->fl_block);
	init_waitqueue_head(&fl->fl_wait);
	fl->fl_next = NULL;
	fl->fl_fasync = NULL;
	fl->fl_owner = NULL;
	fl->fl_pid = 0;
	fl->fl_file = NULL;
	fl->fl_flags = 0;
	fl->fl_type = 0;
	fl->fl_start = fl->fl_end = 0;
	fl->fl_ops = NULL;
	fl->fl_lmops = NULL;
}

EXPORT_SYMBOL(locks_init_lock);
/*
 * Initialises the fields of the file lock which are invariant for
 * all file locks.
 */
static void init_once(void *foo, kmem_cache_t *cache, unsigned long flags)
{
	struct file_lock *lock = (struct file_lock *) foo;

	if ((flags & (SLAB_CTOR_VERIFY|SLAB_CTOR_CONSTRUCTOR)) !=
					SLAB_CTOR_CONSTRUCTOR)
		return;

	locks_init_lock(lock);
}
static void locks_copy_private(struct file_lock *new, struct file_lock *fl)
{
	if (fl->fl_ops) {
		if (fl->fl_ops->fl_copy_lock)
			fl->fl_ops->fl_copy_lock(new, fl);
		new->fl_ops = fl->fl_ops;
	}
	if (fl->fl_lmops) {
		if (fl->fl_lmops->fl_copy_lock)
			fl->fl_lmops->fl_copy_lock(new, fl);
		new->fl_lmops = fl->fl_lmops;
	}
}
/*
 * Initialize a new lock from an existing file_lock structure.
 */
static void __locks_copy_lock(struct file_lock *new, const struct file_lock *fl)
{
	new->fl_owner = fl->fl_owner;
	new->fl_pid = fl->fl_pid;
	new->fl_file = NULL;
	new->fl_flags = fl->fl_flags;
	new->fl_type = fl->fl_type;
	new->fl_start = fl->fl_start;
	new->fl_end = fl->fl_end;
	new->fl_ops = NULL;
	new->fl_lmops = NULL;
}
void locks_copy_lock(struct file_lock *new, struct file_lock *fl)
{
	locks_release_private(new);

	__locks_copy_lock(new, fl);
	new->fl_file = fl->fl_file;
	new->fl_ops = fl->fl_ops;
	new->fl_lmops = fl->fl_lmops;

	locks_copy_private(new, fl);
}

EXPORT_SYMBOL(locks_copy_lock);
static inline int flock_translate_cmd(int cmd) {
	if (cmd & LOCK_MAND)
		return cmd & (LOCK_MAND | LOCK_RW);
	switch (cmd) {
	case LOCK_SH:
		return F_RDLCK;
	case LOCK_EX:
		return F_WRLCK;
	case LOCK_UN:
		return F_UNLCK;
	}
	return -EINVAL;
}
/* Fill in a file_lock structure with an appropriate FLOCK lock. */
static int flock_make_lock(struct file *filp, struct file_lock **lock,
		unsigned int cmd)
{
	struct file_lock *fl;
	int type = flock_translate_cmd(cmd);
	if (type < 0)
		return type;

	fl = locks_alloc_lock();
	if (fl == NULL)
		return -ENOMEM;

	fl->fl_file = filp;
	fl->fl_pid = current->tgid;
	fl->fl_flags = FL_FLOCK;
	fl->fl_type = type;
	fl->fl_end = OFFSET_MAX;

	*lock = fl;
	return 0;
}
static int assign_type(struct file_lock *fl, int type)
{
	switch (type) {
	case F_RDLCK:
	case F_WRLCK:
	case F_UNLCK:
		fl->fl_type = type;
		break;
	default:
		return -EINVAL;
	}
	return 0;
}
/* Verify a "struct flock" and copy it to a "struct file_lock" as a POSIX
 * style lock.
 */
static int flock_to_posix_lock(struct file *filp, struct file_lock *fl,
			       struct flock *l)
{
	off_t start, end;

	switch (l->l_whence) {
	case 0: /*SEEK_SET*/
		start = 0;
		break;
	case 1: /*SEEK_CUR*/
		start = filp->f_pos;
		break;
	case 2: /*SEEK_END*/
		start = i_size_read(filp->f_dentry->d_inode);
		break;
	default:
		return -EINVAL;
	}

	/* POSIX-1996 leaves the case l->l_len < 0 undefined;
	   POSIX-2001 defines it. */
	start += l->l_start;
	if (start < 0)
		return -EINVAL;
	fl->fl_end = OFFSET_MAX;
	if (l->l_len > 0) {
		end = start + l->l_len - 1;
		fl->fl_end = end;
	} else if (l->l_len < 0) {
		end = start - 1;
		fl->fl_end = end;
		start += l->l_len;
		if (start < 0)
			return -EINVAL;
	}
	fl->fl_start = start;	/* we record the absolute position */
	if (fl->fl_end < fl->fl_start)
		return -EOVERFLOW;

	fl->fl_owner = current->files;
	fl->fl_pid = current->tgid;
	fl->fl_file = filp;
	fl->fl_flags = FL_POSIX;
	fl->fl_ops = NULL;
	fl->fl_lmops = NULL;

	return assign_type(fl, l->l_type);
}
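/*
 * Illustrative userspace example (not part of this file) of the l_whence /
 * l_len semantics handled above: both requests describe the same byte
 * range [100, 199].
 *
 *	struct flock fl;
 *
 *	fl.l_type = F_RDLCK;
 *	fl.l_whence = SEEK_SET;
 *	fl.l_start = 100;
 *	fl.l_len = 100;			// [start, start + len - 1]
 *	fcntl(fd, F_SETLK, &fl);
 *
 *	fl.l_start = 200;
 *	fl.l_len = -100;		// POSIX-2001: [start + len, start - 1]
 *	fcntl(fd, F_SETLK, &fl);
 */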
#if BITS_PER_LONG == 32
static int flock64_to_posix_lock(struct file *filp, struct file_lock *fl,
				 struct flock64 *l)
{
	loff_t start;

	switch (l->l_whence) {
	case 0: /*SEEK_SET*/
		start = 0;
		break;
	case 1: /*SEEK_CUR*/
		start = filp->f_pos;
		break;
	case 2: /*SEEK_END*/
		start = i_size_read(filp->f_dentry->d_inode);
		break;
	default:
		return -EINVAL;
	}

	start += l->l_start;
	if (start < 0)
		return -EINVAL;
	fl->fl_end = OFFSET_MAX;
	if (l->l_len > 0) {
		fl->fl_end = start + l->l_len - 1;
	} else if (l->l_len < 0) {
		fl->fl_end = start - 1;
		start += l->l_len;
		if (start < 0)
			return -EINVAL;
	}
	fl->fl_start = start;	/* we record the absolute position */
	if (fl->fl_end < fl->fl_start)
		return -EOVERFLOW;

	fl->fl_owner = current->files;
	fl->fl_pid = current->tgid;
	fl->fl_file = filp;
	fl->fl_flags = FL_POSIX;
	fl->fl_ops = NULL;
	fl->fl_lmops = NULL;

	switch (l->l_type) {
	case F_RDLCK:
	case F_WRLCK:
	case F_UNLCK:
		fl->fl_type = l->l_type;
		break;
	default:
		return -EINVAL;
	}

	return 0;
}
#endif
/* default lease lock manager operations */
static void lease_break_callback(struct file_lock *fl)
{
	kill_fasync(&fl->fl_fasync, SIGIO, POLL_MSG);
}

static void lease_release_private_callback(struct file_lock *fl)
{
	if (!fl->fl_file)
		return;

	f_delown(fl->fl_file);
	fl->fl_file->f_owner.signum = 0;
}

static int lease_mylease_callback(struct file_lock *fl, struct file_lock *try)
{
	return fl->fl_file == try->fl_file;
}

static struct lock_manager_operations lease_manager_ops = {
	.fl_break = lease_break_callback,
	.fl_release_private = lease_release_private_callback,
	.fl_mylease = lease_mylease_callback,
	.fl_change = lease_modify,
};
/*
 * Initialize a lease, use the default lock manager operations
 */
static int lease_init(struct file *filp, int type, struct file_lock *fl)
{
	fl->fl_owner = current->files;
	fl->fl_pid = current->tgid;

	fl->fl_file = filp;
	fl->fl_flags = FL_LEASE;
	if (assign_type(fl, type) != 0) {
		locks_free_lock(fl);
		return -EINVAL;
	}
	fl->fl_start = 0;
	fl->fl_end = OFFSET_MAX;
	fl->fl_ops = NULL;
	fl->fl_lmops = &lease_manager_ops;
	return 0;
}
/* Allocate a file_lock initialised to this type of lease */
static int lease_alloc(struct file *filp, int type, struct file_lock **flp)
{
	struct file_lock *fl = locks_alloc_lock();
	int error;

	if (fl == NULL)
		return -ENOMEM;

	error = lease_init(filp, type, fl);
	if (error)
		return error;
	*flp = fl;
	return 0;
}
/* Check if two locks overlap each other.
 */
static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
{
	return ((fl1->fl_end >= fl2->fl_start) &&
		(fl2->fl_end >= fl1->fl_start));
}
/*
 * Check whether two locks have the same owner.
 */
static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
{
	if (fl1->fl_lmops && fl1->fl_lmops->fl_compare_owner)
		return fl2->fl_lmops == fl1->fl_lmops &&
			fl1->fl_lmops->fl_compare_owner(fl1, fl2);
	return fl1->fl_owner == fl2->fl_owner;
}
/* Remove waiter from blocker's block list.
 * When blocker ends up pointing to itself then the list is empty.
 */
static void __locks_delete_block(struct file_lock *waiter)
{
	list_del_init(&waiter->fl_block);
	list_del_init(&waiter->fl_link);
	waiter->fl_next = NULL;
}

static void locks_delete_block(struct file_lock *waiter)
{
	lock_kernel();
	__locks_delete_block(waiter);
	unlock_kernel();
}
/* Insert waiter into blocker's block list.
 * We use a circular list so that processes can be easily woken up in
 * the order they blocked. The documentation doesn't require this but
 * it seems like the reasonable thing to do.
 */
static void locks_insert_block(struct file_lock *blocker,
					struct file_lock *waiter)
{
	if (!list_empty(&waiter->fl_block)) {
		printk(KERN_ERR "locks_insert_block: removing duplicated lock "
			"(pid=%d %Ld-%Ld type=%d)\n", waiter->fl_pid,
			waiter->fl_start, waiter->fl_end, waiter->fl_type);
		__locks_delete_block(waiter);
	}
	list_add_tail(&waiter->fl_block, &blocker->fl_block);
	waiter->fl_next = blocker;
	if (IS_POSIX(blocker))
		list_add(&waiter->fl_link, &blocked_list);
}
/* Wake up processes blocked waiting for blocker.
 * If told to wait then schedule the processes until the block list
 * is empty, otherwise empty the block list ourselves.
 */
static void locks_wake_up_blocks(struct file_lock *blocker)
{
	while (!list_empty(&blocker->fl_block)) {
		struct file_lock *waiter = list_entry(blocker->fl_block.next,
				struct file_lock, fl_block);
		__locks_delete_block(waiter);
		if (waiter->fl_lmops && waiter->fl_lmops->fl_notify)
			waiter->fl_lmops->fl_notify(waiter);
		else
			wake_up(&waiter->fl_wait);
	}
}
/* Insert file lock fl into an inode's lock list at the position indicated
 * by pos. At the same time add the lock to the global file lock list.
 */
static void locks_insert_lock(struct file_lock **pos, struct file_lock *fl)
{
	list_add(&fl->fl_link, &file_lock_list);

	/* insert into file's list */
	fl->fl_next = *pos;
	*pos = fl;

	if (fl->fl_ops && fl->fl_ops->fl_insert)
		fl->fl_ops->fl_insert(fl);
}
/*
 * Delete a lock and then free it.
 * Wake up processes that are blocked waiting for this lock,
 * notify the FS that the lock has been cleared and
 * finally free the lock.
 */
static void locks_delete_lock(struct file_lock **thisfl_p)
{
	struct file_lock *fl = *thisfl_p;

	*thisfl_p = fl->fl_next;
	fl->fl_next = NULL;
	list_del_init(&fl->fl_link);

	fasync_helper(0, fl->fl_file, 0, &fl->fl_fasync);
	if (fl->fl_fasync != NULL) {
		printk(KERN_ERR "locks_delete_lock: fasync == %p\n", fl->fl_fasync);
		fl->fl_fasync = NULL;
	}

	if (fl->fl_ops && fl->fl_ops->fl_remove)
		fl->fl_ops->fl_remove(fl);

	locks_wake_up_blocks(fl);
	locks_free_lock(fl);
}
/* Determine if lock sys_fl blocks lock caller_fl. Common functionality
 * checks for shared/exclusive status of overlapping locks.
 */
static int locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
{
	if (sys_fl->fl_type == F_WRLCK)
		return 1;
	if (caller_fl->fl_type == F_WRLCK)
		return 1;
	return 0;
}
/* Determine if lock sys_fl blocks lock caller_fl. POSIX specific
 * checking before calling the locks_conflict().
 */
static int posix_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
{
	/* POSIX locks owned by the same process do not conflict with
	 * each other.
	 */
	if (!IS_POSIX(sys_fl) || posix_same_owner(caller_fl, sys_fl))
		return 0;

	/* Check whether they overlap */
	if (!locks_overlap(caller_fl, sys_fl))
		return 0;

	return (locks_conflict(caller_fl, sys_fl));
}
/* Determine if lock sys_fl blocks lock caller_fl. FLOCK specific
 * checking before calling the locks_conflict().
 */
static int flock_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
{
	/* FLOCK locks referring to the same filp do not conflict with
	 * each other.
	 */
	if (!IS_FLOCK(sys_fl) || (caller_fl->fl_file == sys_fl->fl_file))
		return 0;
	if ((caller_fl->fl_type & LOCK_MAND) || (sys_fl->fl_type & LOCK_MAND))
		return 0;

	return (locks_conflict(caller_fl, sys_fl));
}
static int interruptible_sleep_on_locked(wait_queue_head_t *fl_wait, int timeout)
{
	int result = 0;
	DECLARE_WAITQUEUE(wait, current);

	__set_current_state(TASK_INTERRUPTIBLE);
	add_wait_queue(fl_wait, &wait);
	if (timeout == 0)
		schedule();
	else
		result = schedule_timeout(timeout);
	if (signal_pending(current))
		result = -ERESTARTSYS;
	remove_wait_queue(fl_wait, &wait);
	__set_current_state(TASK_RUNNING);
	return result;
}
static int locks_block_on_timeout(struct file_lock *blocker, struct file_lock *waiter, int time)
{
	int result;
	locks_insert_block(blocker, waiter);
	result = interruptible_sleep_on_locked(&waiter->fl_wait, time);
	__locks_delete_block(waiter);
	return result;
}
int
posix_test_lock(struct file *filp, struct file_lock *fl,
		struct file_lock *conflock)
{
	struct file_lock *cfl;

	lock_kernel();
	for (cfl = filp->f_dentry->d_inode->i_flock; cfl; cfl = cfl->fl_next) {
		if (!IS_POSIX(cfl))
			continue;
		if (posix_locks_conflict(cfl, fl))
			break;
	}
	if (cfl) {
		__locks_copy_lock(conflock, cfl);
		unlock_kernel();
		return 1;
	}
	unlock_kernel();
	return 0;
}

EXPORT_SYMBOL(posix_test_lock);
/* This function tests for deadlock condition before putting a process to
 * sleep. The detection scheme is no longer recursive. Recursive was neat,
 * but dangerous - we risked stack corruption if the lock data was bad, or
 * if the recursion was too deep for any other reason.
 *
 * We rely on the fact that a task can only be on one lock's wait queue
 * at a time. When we find blocked_task on a wait queue we can re-search
 * with blocked_task equal to that queue's owner, until either blocked_task
 * isn't found, or blocked_task is found on a queue owned by my_task.
 *
 * Note: the above assumption may not be true when handling lock requests
 * from a broken NFS client. But broken NFS clients have a lot more to
 * worry about than proper deadlock detection anyway... --okir
 */
int posix_locks_deadlock(struct file_lock *caller_fl,
				struct file_lock *block_fl)
{
	struct list_head *tmp;

next_task:
	if (posix_same_owner(caller_fl, block_fl))
		return 1;
	list_for_each(tmp, &blocked_list) {
		struct file_lock *fl = list_entry(tmp, struct file_lock, fl_link);
		if (posix_same_owner(fl, block_fl)) {
			fl = fl->fl_next;
			block_fl = fl;
			goto next_task;
		}
	}
	return 0;
}

EXPORT_SYMBOL(posix_locks_deadlock);
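/*
 * Illustrative scenario (not part of this file) of what the walk above
 * catches: two processes each hold one byte and block on the other's.
 *
 *	// process A				// process B
 *	lock_byte(fd, 0);			lock_byte(fd, 1);
 *	lock_byte(fd, 1);	// blocks	lock_byte(fd, 0);
 *						// -> fcntl() fails, EDEADLK
 *
 * where lock_byte() is a hypothetical helper that issues F_SETLKW for a
 * one-byte F_WRLCK at the given offset.
 */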
/* Try to create a FLOCK lock on filp. We always insert new FLOCK locks
 * at the head of the list, but that's secret knowledge known only to
 * flock_lock_file and posix_lock_file.
 */
static int flock_lock_file(struct file *filp, struct file_lock *new_fl)
{
	struct file_lock **before;
	struct inode * inode = filp->f_dentry->d_inode;
	int error = 0;
	int found = 0;

	lock_kernel();
	for_each_lock(inode, before) {
		struct file_lock *fl = *before;
		if (IS_POSIX(fl))
			break;
		if (IS_LEASE(fl))
			continue;
		if (filp != fl->fl_file)
			continue;
		if (new_fl->fl_type == fl->fl_type)
			goto out;
		found = 1;
		locks_delete_lock(before);
		break;
	}

	if (new_fl->fl_type == F_UNLCK)
		goto out;

	/*
	 * If a higher-priority process was blocked on the old file lock,
	 * give it the opportunity to lock the file.
	 */
	if (found)
		cond_resched();

	for_each_lock(inode, before) {
		struct file_lock *fl = *before;
		if (IS_POSIX(fl))
			break;
		if (IS_LEASE(fl))
			continue;
		if (!flock_locks_conflict(new_fl, fl))
			continue;
		error = -EAGAIN;
		if (new_fl->fl_flags & FL_SLEEP)
			locks_insert_block(fl, new_fl);
		goto out;
	}
	locks_insert_lock(&inode->i_flock, new_fl);
out:
	unlock_kernel();
	return error;
}

EXPORT_SYMBOL(posix_lock_file);
static int __posix_lock_file(struct inode *inode, struct file_lock *request)
{
	struct file_lock *fl;
	struct file_lock *new_fl, *new_fl2;
	struct file_lock *left = NULL;
	struct file_lock *right = NULL;
	struct file_lock **before;
	int error, added = 0;

	/*
	 * We may need two file_lock structures for this operation,
	 * so we get them in advance to avoid races.
	 */
	new_fl = locks_alloc_lock();
	new_fl2 = locks_alloc_lock();

	lock_kernel();
	if (request->fl_type != F_UNLCK) {
		for_each_lock(inode, before) {
			struct file_lock *fl = *before;
			if (!IS_POSIX(fl))
				continue;
			if (!posix_locks_conflict(request, fl))
				continue;
			error = -EAGAIN;
			if (!(request->fl_flags & FL_SLEEP))
				goto out;
			error = -EDEADLK;
			if (posix_locks_deadlock(request, fl))
				goto out;
			error = -EAGAIN;
			locks_insert_block(fl, request);
			goto out;
		}
	}

	/* If we're just looking for a conflict, we're done. */
	error = 0;
	if (request->fl_flags & FL_ACCESS)
		goto out;

	error = -ENOLCK; /* "no luck" */
	if (!(new_fl && new_fl2))
		goto out;

	/*
	 * We've allocated the new locks in advance, so there are no
	 * errors possible (and no blocking operations) from here on.
	 *
	 * Find the first old lock with the same owner as the new lock.
	 */

	before = &inode->i_flock;

	/* First skip locks owned by other processes.  */
	while ((fl = *before) && (!IS_POSIX(fl) ||
				  !posix_same_owner(request, fl))) {
		before = &fl->fl_next;
	}

	/* Process locks with this owner.  */
	while ((fl = *before) && posix_same_owner(request, fl)) {
		/* Detect adjacent or overlapping regions (if same lock type)
		 */
		if (request->fl_type == fl->fl_type) {
			/* In all comparisons of start vs end, use
			 * "start - 1" rather than "end + 1". If end
			 * is OFFSET_MAX, end + 1 will become negative.
			 */
			if (fl->fl_end < request->fl_start - 1)
				goto next_lock;
			/* If the next lock in the list has entirely bigger
			 * addresses than the new one, insert the lock here.
			 */
			if (fl->fl_start - 1 > request->fl_end)
				break;

			/* If we come here, the new and old lock are of the
			 * same type and adjacent or overlapping. Make one
			 * lock yielding from the lower start address of both
			 * locks to the higher end address.
			 */
			if (fl->fl_start > request->fl_start)
				fl->fl_start = request->fl_start;
			else
				request->fl_start = fl->fl_start;
			if (fl->fl_end < request->fl_end)
				fl->fl_end = request->fl_end;
			else
				request->fl_end = fl->fl_end;
			if (added) {
				locks_delete_lock(before);
				continue;
			}
			request = fl;
			added = 1;
		}
		else {
			/* Processing for different lock types is a bit
			 * more complex.
			 */
			if (fl->fl_end < request->fl_start)
				goto next_lock;
			if (fl->fl_start > request->fl_end)
				break;
			if (request->fl_type == F_UNLCK)
				added = 1;
			if (fl->fl_start < request->fl_start)
				left = fl;
			/* If the next lock in the list has a higher end
			 * address than the new one, insert the new one here.
			 */
			if (fl->fl_end > request->fl_end) {
				right = fl;
				break;
			}
			if (fl->fl_start >= request->fl_start) {
				/* The new lock completely replaces an old
				 * one (This may happen several times).
				 */
				if (added) {
					locks_delete_lock(before);
					continue;
				}
				/* Replace the old lock with the new one.
				 * Wake up anybody waiting for the old one,
				 * as the change in lock type might satisfy
				 * their needs.
				 */
				locks_wake_up_blocks(fl);
				fl->fl_start = request->fl_start;
				fl->fl_end = request->fl_end;
				fl->fl_type = request->fl_type;
				locks_release_private(fl);
				locks_copy_private(fl, request);
				request = fl;
				added = 1;
			}
		}
		/* Go on to next lock.
		 */
	next_lock:
		before = &fl->fl_next;
	}

	error = 0;
	if (!added) {
		if (request->fl_type == F_UNLCK)
			goto out;
		locks_copy_lock(new_fl, request);
		locks_insert_lock(before, new_fl);
		new_fl = NULL;
	}
	if (right) {
		if (left == right) {
			/* The new lock breaks the old one in two pieces,
			 * so we have to use the second new lock.
			 */
			left = new_fl2;
			new_fl2 = NULL;
			locks_copy_lock(left, right);
			locks_insert_lock(before, left);
		}
		right->fl_start = request->fl_end + 1;
		locks_wake_up_blocks(right);
	}
	if (left) {
		left->fl_end = request->fl_start - 1;
		locks_wake_up_blocks(left);
	}
out:
	unlock_kernel();
	/*
	 * Free any unused locks.
	 */
	if (new_fl)
		locks_free_lock(new_fl);
	if (new_fl2)
		locks_free_lock(new_fl2);
	return error;
}
/**
 * posix_lock_file - Apply a POSIX-style lock to a file
 * @filp: The file to apply the lock to
 * @fl: The lock to be applied
 *
 * Add a POSIX style lock to a file.
 * We merge adjacent & overlapping locks whenever possible.
 * POSIX locks are sorted by owner task, then by starting address
 */
int posix_lock_file(struct file *filp, struct file_lock *fl)
{
	return __posix_lock_file(filp->f_dentry->d_inode, fl);
}
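/*
 * Illustrative example (not part of this file) of the merging done above:
 * two adjacent same-type locks from one owner collapse into one region,
 * so /proc/locks ends up showing a single entry covering 0-199.
 *
 *	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
 *
 *	fl.l_start = 0;   fl.l_len = 100;	// bytes 0-99
 *	fcntl(fd, F_SETLK, &fl);
 *	fl.l_start = 100; fl.l_len = 100;	// bytes 100-199, adjacent
 *	fcntl(fd, F_SETLK, &fl);		// merged with the first lock
 */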
/**
 * posix_lock_file_wait - Apply a POSIX-style lock to a file
 * @filp: The file to apply the lock to
 * @fl: The lock to be applied
 *
 * Add a POSIX style lock to a file.
 * We merge adjacent & overlapping locks whenever possible.
 * POSIX locks are sorted by owner task, then by starting address
 */
int posix_lock_file_wait(struct file *filp, struct file_lock *fl)
{
	int error;
	might_sleep();
	for (;;) {
		error = __posix_lock_file(filp->f_dentry->d_inode, fl);
		if ((error != -EAGAIN) || !(fl->fl_flags & FL_SLEEP))
			break;
		error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
		if (!error)
			continue;

		locks_delete_block(fl);
		break;
	}
	return error;
}

EXPORT_SYMBOL(posix_lock_file_wait);
/**
 * locks_mandatory_locked - Check for an active lock
 * @inode: the file to check
 *
 * Searches the inode's list of locks to find any POSIX locks which conflict.
 * This function is called from locks_verify_locked() only.
 */
int locks_mandatory_locked(struct inode *inode)
{
	fl_owner_t owner = current->files;
	struct file_lock *fl;

	/*
	 * Search the lock list for this inode for any POSIX locks.
	 */
	lock_kernel();
	for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
		if (!IS_POSIX(fl))
			continue;
		if (fl->fl_owner != owner)
			break;
	}
	unlock_kernel();
	return fl ? -EAGAIN : 0;
}
/**
 * locks_mandatory_area - Check for a conflicting lock
 * @read_write: %FLOCK_VERIFY_WRITE for exclusive access, %FLOCK_VERIFY_READ
 *		for shared
 * @inode:      the file to check
 * @filp:       how the file was opened (if it was)
 * @offset:     start of area to check
 * @count:      length of area to check
 *
 * Searches the inode's list of locks to find any POSIX locks which conflict.
 * This function is called from rw_verify_area() and
 * locks_verify_truncate().
 */
int locks_mandatory_area(int read_write, struct inode *inode,
			 struct file *filp, loff_t offset,
			 size_t count)
{
	struct file_lock fl;
	int error;

	locks_init_lock(&fl);
	fl.fl_owner = current->files;
	fl.fl_pid = current->tgid;
	fl.fl_file = filp;
	fl.fl_flags = FL_POSIX | FL_ACCESS;
	if (filp && !(filp->f_flags & O_NONBLOCK))
		fl.fl_flags |= FL_SLEEP;
	fl.fl_type = (read_write == FLOCK_VERIFY_WRITE) ? F_WRLCK : F_RDLCK;
	fl.fl_start = offset;
	fl.fl_end = offset + count - 1;

	for (;;) {
		error = __posix_lock_file(inode, &fl);
		if (error != -EAGAIN)
			break;
		if (!(fl.fl_flags & FL_SLEEP))
			break;
		error = wait_event_interruptible(fl.fl_wait, !fl.fl_next);
		if (!error) {
			/*
			 * If we've been sleeping someone might have
			 * changed the permissions behind our back.
			 */
			if ((inode->i_mode & (S_ISGID | S_IXGRP)) == S_ISGID)
				continue;
		}

		locks_delete_block(&fl);
		break;
	}

	return error;
}

EXPORT_SYMBOL(locks_mandatory_area);
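/*
 * Illustrative recipe (not part of this file): mandatory locking is only
 * consulted when the filesystem is mounted with the "mand" option and the
 * file has the setgid bit set with group execute cleared, matching the
 * (S_ISGID | S_IXGRP) == S_ISGID test above. From userspace:
 *
 *	struct stat st;
 *	fstat(fd, &st);
 *	// S_ISGID on, S_IXGRP off => locks on this file become mandatory
 *	fchmod(fd, (st.st_mode & ~S_IXGRP) | S_ISGID);
 */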
/* We already had a lease on this file; just change its type */
int lease_modify(struct file_lock **before, int arg)
{
	struct file_lock *fl = *before;
	int error = assign_type(fl, arg);

	if (error)
		return error;
	locks_wake_up_blocks(fl);
	if (arg == F_UNLCK)
		locks_delete_lock(before);
	return 0;
}

EXPORT_SYMBOL(lease_modify);
static void time_out_leases(struct inode *inode)
{
	struct file_lock **before;
	struct file_lock *fl;

	before = &inode->i_flock;
	while ((fl = *before) && IS_LEASE(fl) && (fl->fl_type & F_INPROGRESS)) {
		if ((fl->fl_break_time == 0)
				|| time_before(jiffies, fl->fl_break_time)) {
			before = &fl->fl_next;
			continue;
		}
		lease_modify(before, fl->fl_type & ~F_INPROGRESS);
		if (fl == *before)	/* lease_modify may have freed fl */
			before = &fl->fl_next;
	}
}
/**
 *	__break_lease	-	revoke all outstanding leases on file
 *	@inode: the inode of the file to return
 *	@mode: the open mode (read or write)
 *
 *	break_lease (inlined for speed) has checked there already
 *	is a lease on this file.  Leases are broken on a call to open()
 *	or truncate().  This function can sleep unless you
 *	specified %O_NONBLOCK to your open().
 */
int __break_lease(struct inode *inode, unsigned int mode)
{
	int error = 0, future;
	struct file_lock *new_fl, *flock;
	struct file_lock *fl;
	int alloc_err;
	unsigned long break_time;
	int i_have_this_lease = 0;

	alloc_err = lease_alloc(NULL, mode & FMODE_WRITE ? F_WRLCK : F_RDLCK,
			&new_fl);

	lock_kernel();

	time_out_leases(inode);

	flock = inode->i_flock;
	if ((flock == NULL) || !IS_LEASE(flock))
		goto out;

	for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next)
		if (fl->fl_owner == current->files)
			i_have_this_lease = 1;

	if (mode & FMODE_WRITE) {
		/* If we want write access, we have to revoke any lease. */
		future = F_UNLCK | F_INPROGRESS;
	} else if (flock->fl_type & F_INPROGRESS) {
		/* If the lease is already being broken, we just leave it */
		future = flock->fl_type;
	} else if (flock->fl_type & F_WRLCK) {
		/* Downgrade the exclusive lease to a read-only lease. */
		future = F_RDLCK | F_INPROGRESS;
	} else {
		/* the existing lease was read-only, so we can read too. */
		goto out;
	}

	if (alloc_err && !i_have_this_lease && ((mode & O_NONBLOCK) == 0)) {
		error = alloc_err;
		goto out;
	}

	break_time = 0;
	if (lease_break_time > 0) {
		break_time = jiffies + lease_break_time * HZ;
		if (break_time == 0)
			break_time++;	/* so that 0 means no break time */
	}

	for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next) {
		if (fl->fl_type != future) {
			fl->fl_type = future;
			fl->fl_break_time = break_time;
			/* lease must have lmops break callback */
			fl->fl_lmops->fl_break(fl);
		}
	}

	if (i_have_this_lease || (mode & O_NONBLOCK)) {
		error = -EWOULDBLOCK;
		goto out;
	}

restart:
	break_time = flock->fl_break_time;
	if (break_time != 0) {
		break_time -= jiffies;
		if (break_time == 0)
			break_time++;
	}
	error = locks_block_on_timeout(flock, new_fl, break_time);
	if (error >= 0) {
		if (error == 0)
			time_out_leases(inode);
		/* Wait for the next lease that has not been broken yet */
		for (flock = inode->i_flock; flock && IS_LEASE(flock);
				flock = flock->fl_next) {
			if (flock->fl_type & F_INPROGRESS)
				goto restart;
		}
		error = 0;
	}

out:
	unlock_kernel();
	if (!alloc_err)
		locks_free_lock(new_fl);
	return error;
}

EXPORT_SYMBOL(__break_lease);
/**
 *	lease_get_mtime
 *	@inode: the inode
 *	@time:  pointer to a timespec which will contain the last modified time
 *
 * This is to force NFS clients to flush their caches for files with
 * exclusive leases.  The justification is that if someone has an
 * exclusive lease, then they could be modifying it.
 */
void lease_get_mtime(struct inode *inode, struct timespec *time)
{
	struct file_lock *flock = inode->i_flock;
	if (flock && IS_LEASE(flock) && (flock->fl_type & F_WRLCK))
		*time = current_fs_time(inode->i_sb);
	else
		*time = inode->i_mtime;
}

EXPORT_SYMBOL(lease_get_mtime);
/**
 *	fcntl_getlease - Enquire what lease is currently active
 *	@filp: the file
 *
 *	The value returned by this function will be one of
 *	(if no lease break is pending):
 *
 *	%F_RDLCK to indicate a shared lease is held.
 *
 *	%F_WRLCK to indicate an exclusive lease is held.
 *
 *	%F_UNLCK to indicate no lease is held.
 *
 *	(if a lease break is pending):
 *
 *	%F_RDLCK to indicate an exclusive lease needs to be
 *		changed to a shared lease (or removed).
 *
 *	%F_UNLCK to indicate the lease needs to be removed.
 *
 *	XXX: sfr & willy disagree over whether F_INPROGRESS
 *	should be returned to userspace.
 */
int fcntl_getlease(struct file *filp)
{
	struct file_lock *fl;
	int type = F_UNLCK;

	lock_kernel();
	time_out_leases(filp->f_dentry->d_inode);
	for (fl = filp->f_dentry->d_inode->i_flock; fl && IS_LEASE(fl);
			fl = fl->fl_next) {
		if (fl->fl_file == filp) {
			type = fl->fl_type & ~F_INPROGRESS;
			break;
		}
	}
	unlock_kernel();
	return type;
}
/**
 *	__setlease	-	sets a lease on an open file
 *	@filp: file pointer
 *	@arg: type of lease to obtain
 *	@flp: input - file_lock to use, output - file_lock inserted
 *
 *	The (input) flp->fl_lmops->fl_break function is required
 *	by break_lease().
 *
 *	Called with kernel lock held.
 */
static int __setlease(struct file *filp, long arg, struct file_lock **flp)
{
	struct file_lock *fl, **before, **my_before = NULL, *lease;
	struct dentry *dentry = filp->f_dentry;
	struct inode *inode = dentry->d_inode;
	int error, rdlease_count = 0, wrlease_count = 0;

	time_out_leases(inode);

	error = -EINVAL;
	if (!flp || !(*flp) || !(*flp)->fl_lmops || !(*flp)->fl_lmops->fl_break)
		goto out;

	lease = *flp;

	error = -EAGAIN;
	if ((arg == F_RDLCK) && (atomic_read(&inode->i_writecount) > 0))
		goto out;
	if ((arg == F_WRLCK)
	    && ((atomic_read(&dentry->d_count) > 1)
		|| (atomic_read(&inode->i_count) > 1)))
		goto out;

	/*
	 * At this point, we know that if there is an exclusive
	 * lease on this file, then we hold it on this filp
	 * (otherwise our open of this file would have blocked).
	 * And if we are trying to acquire an exclusive lease,
	 * then the file is not open by anyone (including us)
	 * except for this filp.
	 */
	for (before = &inode->i_flock;
			((fl = *before) != NULL) && IS_LEASE(fl);
			before = &fl->fl_next) {
		if (lease->fl_lmops->fl_mylease(fl, lease))
			my_before = before;
		else if (fl->fl_type == (F_INPROGRESS | F_UNLCK))
			/*
			 * Someone is in the process of opening this
			 * file for writing so we may not take an
			 * exclusive lease on it.
			 */
			wrlease_count++;
		else
			rdlease_count++;
	}

	if ((arg == F_RDLCK && (wrlease_count > 0)) ||
	    (arg == F_WRLCK && ((rdlease_count + wrlease_count) > 0)))
		goto out;

	if (my_before != NULL) {
		error = lease->fl_lmops->fl_change(my_before, arg);
		goto out;
	}

	error = 0;
	if (arg == F_UNLCK)
		goto out;

	error = -EINVAL;
	if (!leases_enable)
		goto out;

	error = lease_alloc(filp, arg, &fl);
	if (error)
		goto out;

	locks_copy_lock(fl, lease);

	locks_insert_lock(before, fl);

	*flp = fl;
out:
	return error;
}
/**
 *	setlease	-	sets a lease on an open file
 *	@filp: file pointer
 *	@arg: type of lease to obtain
 *	@lease: file_lock to use
 *
 *	Call this to establish a lease on the file.
 *	The fl_lmops fl_break function is required by break_lease
 */
int setlease(struct file *filp, long arg, struct file_lock **lease)
{
	struct dentry *dentry = filp->f_dentry;
	struct inode *inode = dentry->d_inode;
	int error;

	if ((current->fsuid != inode->i_uid) && !capable(CAP_LEASE))
		return -EACCES;
	if (!S_ISREG(inode->i_mode))
		return -EINVAL;
	error = security_file_lock(filp, arg);
	if (error)
		return error;

	lock_kernel();
	error = __setlease(filp, arg, lease);
	unlock_kernel();

	return error;
}

EXPORT_SYMBOL(setlease);
/**
 *	fcntl_setlease	-	sets a lease on an open file
 *	@fd: open file descriptor
 *	@filp: file pointer
 *	@arg: type of lease to obtain
 *
 *	Call this fcntl to establish a lease on the file.
 *	Note that you also need to call %F_SETSIG to
 *	receive a signal when the lease is broken.
 */
int fcntl_setlease(unsigned int fd, struct file *filp, long arg)
{
	struct file_lock fl, *flp = &fl;
	struct dentry *dentry = filp->f_dentry;
	struct inode *inode = dentry->d_inode;
	int error;

	if ((current->fsuid != inode->i_uid) && !capable(CAP_LEASE))
		return -EACCES;
	if (!S_ISREG(inode->i_mode))
		return -EINVAL;
	error = security_file_lock(filp, arg);
	if (error)
		return error;

	locks_init_lock(&fl);
	error = lease_init(filp, arg, &fl);
	if (error)
		return error;

	lock_kernel();

	error = __setlease(filp, arg, &flp);
	if (error || arg == F_UNLCK)
		goto out_unlock;

	error = fasync_helper(fd, filp, 1, &flp->fl_fasync);
	if (error < 0) {
		/* remove lease just inserted by __setlease */
		flp->fl_type = F_UNLCK | F_INPROGRESS;
		flp->fl_break_time = jiffies - 10;
		time_out_leases(inode);
		goto out_unlock;
	}

	error = f_setown(filp, current->pid, 0);
out_unlock:
	unlock_kernel();
	return error;
}
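/*
 * Illustrative userspace sequence (not part of this file) for taking a
 * lease and getting notified when it is broken:
 *
 *	fcntl(fd, F_SETSIG, SIGRTMIN);		// signal delivered on break
 *	fcntl(fd, F_SETLEASE, F_RDLCK);		// take a read lease
 *	// ... a conflicting open() delivers SIGRTMIN; the holder then
 *	fcntl(fd, F_SETLEASE, F_UNLCK);		// must release within
 *						// /proc/sys/fs/lease-break-time
 */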
/**
 * flock_lock_file_wait - Apply a FLOCK-style lock to a file
 * @filp: The file to apply the lock to
 * @fl: The lock to be applied
 *
 * Add a FLOCK style lock to a file.
 */
int flock_lock_file_wait(struct file *filp, struct file_lock *fl)
{
	int error;
	might_sleep();
	for (;;) {
		error = flock_lock_file(filp, fl);
		if ((error != -EAGAIN) || !(fl->fl_flags & FL_SLEEP))
			break;
		error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
		if (!error)
			continue;

		locks_delete_block(fl);
		break;
	}
	return error;
}

EXPORT_SYMBOL(flock_lock_file_wait);
/**
 *	sys_flock: - flock() system call.
 *	@fd: the file descriptor to lock.
 *	@cmd: the type of lock to apply.
 *
 *	Apply a %FL_FLOCK style lock to an open file descriptor.
 *	The @cmd can be one of
 *
 *	%LOCK_SH -- a shared lock.
 *
 *	%LOCK_EX -- an exclusive lock.
 *
 *	%LOCK_UN -- remove an existing lock.
 *
 *	%LOCK_MAND -- a `mandatory' flock.  This exists to emulate Windows Share Modes.
 *
 *	%LOCK_MAND can be combined with %LOCK_READ or %LOCK_WRITE to allow other
 *	processes read and write access respectively.
 */
asmlinkage long sys_flock(unsigned int fd, unsigned int cmd)
{
	struct file *filp;
	struct file_lock *lock;
	int can_sleep, unlock;
	int error;

	error = -EBADF;
	filp = fget(fd);
	if (!filp)
		goto out;

	can_sleep = !(cmd & LOCK_NB);
	cmd &= ~LOCK_NB;
	unlock = (cmd == LOCK_UN);

	if (!unlock && !(cmd & LOCK_MAND) && !(filp->f_mode & 3))
		goto out_putf;

	error = flock_make_lock(filp, &lock, cmd);
	if (error)
		goto out_putf;
	if (can_sleep)
		lock->fl_flags |= FL_SLEEP;

	error = security_file_lock(filp, cmd);
	if (error)
		goto out_free;

	if (filp->f_op && filp->f_op->flock)
		error = filp->f_op->flock(filp,
					  (can_sleep) ? F_SETLKW : F_SETLK,
					  lock);
	else
		error = flock_lock_file_wait(filp, lock);

out_free:
	if (list_empty(&lock->fl_link)) {
		locks_free_lock(lock);
	}

out_putf:
	fput(filp);
out:
	return error;
}
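/*
 * Illustrative userspace example (not part of this file):
 *
 *	int fd = open("/var/lock/mydaemon", O_RDWR | O_CREAT, 0644);
 *	if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
 *		// EWOULDBLOCK: another process holds the lock
 *	}
 *	// ... critical section ...
 *	flock(fd, LOCK_UN);
 */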
/* Report the first existing lock that would conflict with l.
 * This implements the F_GETLK command of fcntl().
 */
int fcntl_getlk(struct file *filp, struct flock __user *l)
{
	struct file_lock *fl, cfl, file_lock;
	struct flock flock;
	int error;

	error = -EFAULT;
	if (copy_from_user(&flock, l, sizeof(flock)))
		goto out;
	error = -EINVAL;
	if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
		goto out;

	error = flock_to_posix_lock(filp, &file_lock, &flock);
	if (error)
		goto out;

	if (filp->f_op && filp->f_op->lock) {
		error = filp->f_op->lock(filp, F_GETLK, &file_lock);
		if (file_lock.fl_ops && file_lock.fl_ops->fl_release_private)
			file_lock.fl_ops->fl_release_private(&file_lock);
		if (error < 0)
			goto out;
		else
			fl = (file_lock.fl_type == F_UNLCK ? NULL : &file_lock);
	} else {
		fl = (posix_test_lock(filp, &file_lock, &cfl) ? &cfl : NULL);
	}

	flock.l_type = F_UNLCK;
	if (fl != NULL) {
		flock.l_pid = fl->fl_pid;
#if BITS_PER_LONG == 32
		/*
		 * Make sure we can represent the posix lock via
		 * legacy 32bit flock.
		 */
		error = -EOVERFLOW;
		if (fl->fl_start > OFFT_OFFSET_MAX)
			goto out;
		if ((fl->fl_end != OFFSET_MAX)
		    && (fl->fl_end > OFFT_OFFSET_MAX))
			goto out;
#endif
		flock.l_start = fl->fl_start;
		flock.l_len = fl->fl_end == OFFSET_MAX ? 0 :
			fl->fl_end - fl->fl_start + 1;
		flock.l_whence = 0;
		flock.l_type = fl->fl_type;
	}
	error = -EFAULT;
	if (!copy_to_user(l, &flock, sizeof(flock)))
		error = 0;
out:
	return error;
}
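/*
 * Illustrative userspace use of F_GETLK (not part of this file): probe
 * for a conflicting lock without taking one.
 *
 *	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
 *			    .l_start = 0, .l_len = 0 };
 *	fcntl(fd, F_GETLK, &fl);
 *	if (fl.l_type == F_UNLCK)
 *		;	// no conflict: the probe lock could be applied
 *	else
 *		;	// fl.l_pid is the PID of one conflicting holder
 */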
/* Apply the lock described by l to an open file descriptor.
 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
 */
int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
		struct flock __user *l)
{
	struct file_lock *file_lock = locks_alloc_lock();
	struct flock flock;
	struct inode *inode;
	int error;

	if (file_lock == NULL)
		return -ENOLCK;

	/*
	 * This might block, so we do it before checking the inode.
	 */
	error = -EFAULT;
	if (copy_from_user(&flock, l, sizeof(flock)))
		goto out;

	inode = filp->f_dentry->d_inode;

	/* Don't allow mandatory locks on files that may be memory mapped
	 * and shared.
	 */
	if (IS_MANDLOCK(inode) &&
	    (inode->i_mode & (S_ISGID | S_IXGRP)) == S_ISGID &&
	    mapping_writably_mapped(filp->f_mapping)) {
		error = -EAGAIN;
		goto out;
	}

again:
	error = flock_to_posix_lock(filp, file_lock, &flock);
	if (error)
		goto out;
	if (cmd == F_SETLKW) {
		file_lock->fl_flags |= FL_SLEEP;
	}

	error = -EBADF;
	switch (flock.l_type) {
	case F_RDLCK:
		if (!(filp->f_mode & FMODE_READ))
			goto out;
		break;
	case F_WRLCK:
		if (!(filp->f_mode & FMODE_WRITE))
			goto out;
		break;
	case F_UNLCK:
		break;
	default:
		error = -EINVAL;
		goto out;
	}

	error = security_file_lock(filp, file_lock->fl_type);
	if (error)
		goto out;

	if (filp->f_op && filp->f_op->lock != NULL)
		error = filp->f_op->lock(filp, cmd, file_lock);
	else {
		for (;;) {
			error = __posix_lock_file(inode, file_lock);
			if ((error != -EAGAIN) || (cmd == F_SETLK))
				break;
			error = wait_event_interruptible(file_lock->fl_wait,
					!file_lock->fl_next);
			if (!error)
				continue;

			locks_delete_block(file_lock);
			break;
		}
	}

	/*
	 * Attempt to detect a close/fcntl race and recover by
	 * releasing the lock that was just acquired.
	 */
	if (!error && fcheck(fd) != filp && flock.l_type != F_UNLCK) {
		flock.l_type = F_UNLCK;
		goto again;
	}

out:
	locks_free_lock(file_lock);
	return error;
}
#if BITS_PER_LONG == 32
/* Report the first existing lock that would conflict with l.
 * This implements the F_GETLK command of fcntl().
 */
int fcntl_getlk64(struct file *filp, struct flock64 __user *l)
{
	struct file_lock *fl, cfl, file_lock;
	struct flock64 flock;
	int error;

	error = -EFAULT;
	if (copy_from_user(&flock, l, sizeof(flock)))
		goto out;
	error = -EINVAL;
	if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
		goto out;

	error = flock64_to_posix_lock(filp, &file_lock, &flock);
	if (error)
		goto out;

	if (filp->f_op && filp->f_op->lock) {
		error = filp->f_op->lock(filp, F_GETLK, &file_lock);
		if (file_lock.fl_ops && file_lock.fl_ops->fl_release_private)
			file_lock.fl_ops->fl_release_private(&file_lock);
		if (error < 0)
			goto out;
		else
			fl = (file_lock.fl_type == F_UNLCK ? NULL : &file_lock);
	} else {
		fl = (posix_test_lock(filp, &file_lock, &cfl) ? &cfl : NULL);
	}

	flock.l_type = F_UNLCK;
	if (fl != NULL) {
		flock.l_pid = fl->fl_pid;
		flock.l_start = fl->fl_start;
		flock.l_len = fl->fl_end == OFFSET_MAX ? 0 :
			fl->fl_end - fl->fl_start + 1;
		flock.l_whence = 0;
		flock.l_type = fl->fl_type;
	}
	error = -EFAULT;
	if (!copy_to_user(l, &flock, sizeof(flock)))
		error = 0;
out:
	return error;
}
/* Apply the lock described by l to an open file descriptor.
 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
 */
int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
		struct flock64 __user *l)
{
	struct file_lock *file_lock = locks_alloc_lock();
	struct flock64 flock;
	struct inode *inode;
	int error;

	if (file_lock == NULL)
		return -ENOLCK;

	/*
	 * This might block, so we do it before checking the inode.
	 */
	error = -EFAULT;
	if (copy_from_user(&flock, l, sizeof(flock)))
		goto out;

	inode = filp->f_dentry->d_inode;

	/* Don't allow mandatory locks on files that may be memory mapped
	 * and shared.
	 */
	if (IS_MANDLOCK(inode) &&
	    (inode->i_mode & (S_ISGID | S_IXGRP)) == S_ISGID &&
	    mapping_writably_mapped(filp->f_mapping)) {
		error = -EAGAIN;
		goto out;
	}

again:
	error = flock64_to_posix_lock(filp, file_lock, &flock);
	if (error)
		goto out;
	if (cmd == F_SETLKW64) {
		file_lock->fl_flags |= FL_SLEEP;
	}

	error = -EBADF;
	switch (flock.l_type) {
	case F_RDLCK:
		if (!(filp->f_mode & FMODE_READ))
			goto out;
		break;
	case F_WRLCK:
		if (!(filp->f_mode & FMODE_WRITE))
			goto out;
		break;
	case F_UNLCK:
		break;
	default:
		error = -EINVAL;
		goto out;
	}

	error = security_file_lock(filp, file_lock->fl_type);
	if (error)
		goto out;

	if (filp->f_op && filp->f_op->lock != NULL)
		error = filp->f_op->lock(filp, cmd, file_lock);
	else {
		for (;;) {
			error = __posix_lock_file(inode, file_lock);
			if ((error != -EAGAIN) || (cmd == F_SETLK64))
				break;
			error = wait_event_interruptible(file_lock->fl_wait,
					!file_lock->fl_next);
			if (!error)
				continue;

			locks_delete_block(file_lock);
			break;
		}
	}

	/*
	 * Attempt to detect a close/fcntl race and recover by
	 * releasing the lock that was just acquired.
	 */
	if (!error && fcheck(fd) != filp && flock.l_type != F_UNLCK) {
		flock.l_type = F_UNLCK;
		goto again;
	}

out:
	locks_free_lock(file_lock);
	return error;
}
#endif /* BITS_PER_LONG == 32 */
/*
 * This function is called when the file is being removed
 * from the task's fd array.  POSIX locks belonging to this task
 * are deleted at this time.
 */
void locks_remove_posix(struct file *filp, fl_owner_t owner)
{
	struct file_lock lock, **before;

	/*
	 * If there are no locks held on this file, we don't need to call
	 * posix_lock_file().  Another process could be setting a lock on this
	 * file at the same time, but we wouldn't remove that lock anyway.
	 */
	before = &filp->f_dentry->d_inode->i_flock;
	if (*before == NULL)
		return;

	lock.fl_type = F_UNLCK;
	lock.fl_flags = FL_POSIX;
	lock.fl_start = 0;
	lock.fl_end = OFFSET_MAX;
	lock.fl_owner = owner;
	lock.fl_pid = current->tgid;
	lock.fl_file = filp;
	lock.fl_ops = NULL;
	lock.fl_lmops = NULL;

	if (filp->f_op && filp->f_op->lock != NULL) {
		filp->f_op->lock(filp, F_SETLK, &lock);
		goto out;
	}

	/* Can't use posix_lock_file here; we need to remove it no matter
	 * which pid we have.
	 */
	lock_kernel();
	while (*before != NULL) {
		struct file_lock *fl = *before;
		if (IS_POSIX(fl) && posix_same_owner(fl, &lock)) {
			locks_delete_lock(before);
			continue;
		}
		before = &fl->fl_next;
	}
	unlock_kernel();
out:
	if (lock.fl_ops && lock.fl_ops->fl_release_private)
		lock.fl_ops->fl_release_private(&lock);
}

EXPORT_SYMBOL(locks_remove_posix);
/*
 * This function is called on the last close of an open file.
 */
void locks_remove_flock(struct file *filp)
{
	struct inode * inode = filp->f_dentry->d_inode;
	struct file_lock *fl;
	struct file_lock **before;

	if (!inode->i_flock)
		return;

	if (filp->f_op && filp->f_op->flock) {
		struct file_lock fl = {
			.fl_pid = current->tgid,
			.fl_file = filp,
			.fl_flags = FL_FLOCK,
			.fl_type = F_UNLCK,
			.fl_end = OFFSET_MAX,
		};
		filp->f_op->flock(filp, F_SETLKW, &fl);
		if (fl.fl_ops && fl.fl_ops->fl_release_private)
			fl.fl_ops->fl_release_private(&fl);
	}

	lock_kernel();
	before = &inode->i_flock;

	while ((fl = *before) != NULL) {
		if (fl->fl_file == filp) {
			if (IS_FLOCK(fl)) {
				locks_delete_lock(before);
				continue;
			}
			if (IS_LEASE(fl)) {
				lease_modify(before, F_UNLCK);
				continue;
			}
			/* What? */
			BUG();
		}
		before = &fl->fl_next;
	}
	unlock_kernel();
}
/**
 *	posix_unblock_lock - stop waiting for a file lock
 *	@filp: how the file was opened
 *	@waiter: the lock which was waiting
 *
 *	lockd needs to block waiting for locks.
 */
int
posix_unblock_lock(struct file *filp, struct file_lock *waiter)
{
	int status = 0;

	lock_kernel();
	if (waiter->fl_next)
		__locks_delete_block(waiter);
	else
		status = -ENOENT;
	unlock_kernel();
	return status;
}

EXPORT_SYMBOL(posix_unblock_lock);
static void lock_get_status(char *out, struct file_lock *fl, int id, char *pfx)
{
	struct inode *inode = NULL;

	if (fl->fl_file != NULL)
		inode = fl->fl_file->f_dentry->d_inode;

	out += sprintf(out, "%d:%s ", id, pfx);
	if (IS_POSIX(fl)) {
		out += sprintf(out, "%6s %s ",
			     (fl->fl_flags & FL_ACCESS) ? "ACCESS" : "POSIX ",
			     (inode == NULL) ? "*NOINODE*" :
			     (IS_MANDLOCK(inode) &&
			      (inode->i_mode & (S_IXGRP | S_ISGID)) == S_ISGID) ?
			     "MANDATORY" : "ADVISORY ");
	} else if (IS_FLOCK(fl)) {
		if (fl->fl_type & LOCK_MAND) {
			out += sprintf(out, "FLOCK  MSNFS     ");
		} else {
			out += sprintf(out, "FLOCK  ADVISORY  ");
		}
	} else if (IS_LEASE(fl)) {
		out += sprintf(out, "LEASE  ");
		if (fl->fl_type & F_INPROGRESS)
			out += sprintf(out, "BREAKING  ");
		else if (fl->fl_file)
			out += sprintf(out, "ACTIVE    ");
		else
			out += sprintf(out, "BREAKER   ");
	} else {
		out += sprintf(out, "UNKNOWN UNKNOWN  ");
	}
	if (fl->fl_type & LOCK_MAND) {
		out += sprintf(out, "%s ",
			       (fl->fl_type & LOCK_READ)
			       ? (fl->fl_type & LOCK_WRITE) ? "RW   " : "READ "
			       : (fl->fl_type & LOCK_WRITE) ? "WRITE" : "NONE ");
	} else {
		out += sprintf(out, "%s ",
			       (fl->fl_type & F_INPROGRESS)
			       ? (fl->fl_type & F_UNLCK) ? "UNLCK" : "READ "
			       : (fl->fl_type & F_WRLCK) ? "WRITE" : "READ ");
	}
	if (inode) {
#ifdef WE_CAN_BREAK_LSLK_NOW
		out += sprintf(out, "%d %s:%ld ", fl->fl_pid,
				inode->i_sb->s_id, inode->i_ino);
#else
		/* userspace relies on this representation of dev_t ;-( */
		out += sprintf(out, "%d %02x:%02x:%ld ", fl->fl_pid,
				MAJOR(inode->i_sb->s_dev),
				MINOR(inode->i_sb->s_dev), inode->i_ino);
#endif
	} else {
		out += sprintf(out, "%d <none>:0 ", fl->fl_pid);
	}
	if (IS_POSIX(fl)) {
		if (fl->fl_end == OFFSET_MAX)
			out += sprintf(out, "%Ld EOF\n", fl->fl_start);
		else
			out += sprintf(out, "%Ld %Ld\n", fl->fl_start,
					fl->fl_end);
	} else {
		out += sprintf(out, "0 EOF\n");
	}
}
static void move_lock_status(char **p, off_t *pos, off_t offset)
{
	int len;
	len = strlen(*p);
	if (*pos >= offset) {
		/* the complete line is valid */
		*p += len;
		*pos += len;
		return;
	}
	if (*pos + len > offset) {
		/* use the second part of the line */
		int i = offset - *pos;
		memmove(*p, *p + i, len - i);
		*p += len - i;
		*pos += len;
		return;
	}
	/* discard the complete line */
	*pos += len;
}
/**
 *	get_locks_status	-	reports lock usage in /proc/locks
 *	@buffer: address in userspace to write into
 *	@start: ?
 *	@offset: how far we are through the buffer
 *	@length: how much to read
 */

int get_locks_status(char *buffer, char **start, off_t offset, int length)
{
	struct list_head *tmp;
	char *q = buffer;
	off_t pos = 0;
	int i = 0;

	lock_kernel();
	list_for_each(tmp, &file_lock_list) {
		struct list_head *btmp;
		struct file_lock *fl = list_entry(tmp, struct file_lock, fl_link);
		lock_get_status(q, fl, ++i, "");
		move_lock_status(&q, &pos, offset);

		if (pos >= offset + length)
			goto done;

		list_for_each(btmp, &fl->fl_block) {
			struct file_lock *bfl = list_entry(btmp,
					struct file_lock, fl_block);
			lock_get_status(q, bfl, i, " ->");
			move_lock_status(&q, &pos, offset);

			if (pos >= offset + length)
				goto done;
		}
	}
done:
	unlock_kernel();
	*start = buffer;
	if (q - buffer < length)
		return q - buffer;
	return length;
}
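/*
 * A representative /proc/locks line as produced by lock_get_status()
 * (values are illustrative):
 *
 *	1: POSIX  ADVISORY  WRITE 1234 08:01:73905 0 EOF
 *
 * i.e. entry id, lock class, mandatory/advisory, access, pid,
 * major:minor:inode, and the byte range.
 */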
/**
 *	lock_may_read - checks that the region is free of locks
 *	@inode: the inode that is being read
 *	@start: the first byte to read
 *	@len: the number of bytes to read
 *
 *	Emulates Windows locking requirements.  Whole-file
 *	mandatory locks (share modes) can prohibit a read and
 *	byte-range POSIX locks can prohibit a read if they overlap.
 *
 *	N.B. this function is only ever called
 *	from knfsd and ownership of locks is never checked.
 */
int lock_may_read(struct inode *inode, loff_t start, unsigned long len)
{
	struct file_lock *fl;
	int result = 1;
	lock_kernel();
	for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
		if (IS_POSIX(fl)) {
			if (fl->fl_type == F_RDLCK)
				continue;
			if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
				continue;
		} else if (IS_FLOCK(fl)) {
			if (!(fl->fl_type & LOCK_MAND))
				continue;
			if (fl->fl_type & LOCK_READ)
				continue;
		} else
			continue;
		result = 0;
		break;
	}
	unlock_kernel();
	return result;
}

EXPORT_SYMBOL(lock_may_read);
/**
 *	lock_may_write - checks that the region is free of locks
 *	@inode: the inode that is being written
 *	@start: the first byte to write
 *	@len: the number of bytes to write
 *
 *	Emulates Windows locking requirements.  Whole-file
 *	mandatory locks (share modes) can prohibit a write and
 *	byte-range POSIX locks can prohibit a write if they overlap.
 *
 *	N.B. this function is only ever called
 *	from knfsd and ownership of locks is never checked.
 */
int lock_may_write(struct inode *inode, loff_t start, unsigned long len)
{
	struct file_lock *fl;
	int result = 1;
	lock_kernel();
	for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
		if (IS_POSIX(fl)) {
			if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
				continue;
		} else if (IS_FLOCK(fl)) {
			if (!(fl->fl_type & LOCK_MAND))
				continue;
			if (fl->fl_type & LOCK_WRITE)
				continue;
		} else
			continue;
		result = 0;
		break;
	}
	unlock_kernel();
	return result;
}

EXPORT_SYMBOL(lock_may_write);
static inline void __steal_locks(struct file *file, fl_owner_t from)
{
	struct inode *inode = file->f_dentry->d_inode;
	struct file_lock *fl = inode->i_flock;

	while (fl) {
		if (fl->fl_file == file && fl->fl_owner == from)
			fl->fl_owner = current->files;
		fl = fl->fl_next;
	}
}
/* When getting ready for executing a binary, we make sure that current
 * has a files_struct on its own. Before dropping the old files_struct,
 * we take over ownership of all locks for all file descriptors we own.
 * Note that we may accidentally steal a lock for a file that a sibling
 * has created since the unshare_files() call.
 */
void steal_locks(fl_owner_t from)
{
	struct files_struct *files = current->files;
	int i, j;
	struct fdtable *fdt;

	if (from == files)
		return;

	lock_kernel();
	j = 0;
	rcu_read_lock();
	fdt = files_fdtable(files);
	for (;;) {
		unsigned long set;
		i = j * __NFDBITS;
		if (i >= fdt->max_fdset || i >= fdt->max_fds)
			break;
		set = fdt->open_fds->fds_bits[j++];
		while (set) {
			if (set & 1) {
				struct file *file = fdt->fd[i];
				if (file)
					__steal_locks(file, from);
			}
			i++;
			set >>= 1;
		}
	}
	rcu_read_unlock();
	unlock_kernel();
}
EXPORT_SYMBOL(steal_locks);
static int __init filelock_init(void)
{
	filelock_cache = kmem_cache_create("file_lock_cache",
			sizeof(struct file_lock), 0, SLAB_PANIC,
			init_once, NULL);
	return 0;
}

core_initcall(filelock_init);