As the name suggests, lock statistics provide statistics on locks. They are
useful because things like lock contention can severely impact performance.

Lockdep already has hooks in the lock functions and maps lock instances to
lock classes. We build on that. The graph below shows the relation between
the lock functions and the various hooks therein.

        __acquire
            |
           lock _____
            |        \
            |    __contended
            |         |
            |       <wait>
            | _______/
            |/
            |
       __acquired
            |
            .
          <hold>
            .
            |
       __release
            |
         unlock

lock, unlock  - the regular lock functions
__*           - the hooks
<>            - states

With these hooks we provide the following statistics:

con-bounces     - number of lock contentions that involved x-cpu data
contentions     - number of lock acquisitions that had to wait
wait time min   - shortest (non-0) time we ever had to wait for a lock
          max   - longest time we ever had to wait for a lock
          total - total time we spent waiting on this lock
acq-bounces     - number of lock acquisitions that involved x-cpu data
acquisitions    - number of times we took the lock
hold time min   - shortest (non-0) time we ever held the lock
          max   - longest time we ever held the lock
          total - total time this lock was held

From these numbers various other statistics can be derived, such as:

hold time average = hold time total / acquisitions

These numbers are gathered per lock class, per read/write state (when
applicable).

It also tracks 4 contention points per class. A contention point is a call
site that had to wait on lock acquisition.

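To give a concrete example of such a derived statistic: the hold time average
of a class can be computed from /proc/lock_stat with standard tools. This is a
purely illustrative sketch, not part of the lockstat interface; it assumes the
version 0.2 column order shown in the usage example below, where holdtime-total
is the last field and acquisitions the fourth field from the end (it will
report a division by zero for a class that was never acquired):

# grep "dcache_lock:" /proc/lock_stat | awk '{ print $NF / $(NF-3) }'
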
Look at the current lock statistics:

( line numbers not part of actual output, done for clarity in the explanation
  below )

# less /proc/lock_stat

01 lock_stat version 0.2
02 -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
03 class name con-bounces contentions waittime-min waittime-max waittime-total acq-bounces acquisitions holdtime-min holdtime-max holdtime-total
04 -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
05
06 &inode->i_data.tree_lock-W: 15 21657 0.18 1093295.30 11547131054.85 58 10415 0.16 87.51 6387.60
07 &inode->i_data.tree_lock-R: 0 0 0.00 0.00 0.00 23302 231198 0.25 8.45 98023.38
08 --------------------------
09 &inode->i_data.tree_lock 0 [<ffffffff8027c08f>] add_to_page_cache+0x5f/0x190
10
11 ...............................................................................................................................................................................................
12
13 dcache_lock: 1037 1161 0.38 45.32 774.51 6611 243371 0.15 306.48 77387.24
14 -----------
15 dcache_lock 180 [<ffffffff802c0d7e>] sys_getcwd+0x11e/0x230
16 dcache_lock 165 [<ffffffff802c002a>] d_alloc+0x15a/0x210
17 dcache_lock 33 [<ffffffff8035818d>] _atomic_dec_and_lock+0x4d/0x70
18 dcache_lock 1 [<ffffffff802beef8>] shrink_dcache_parent+0x18/0x130

This excerpt shows the first two lock class statistics. Line 01 shows the
output version - each time the format changes this will be updated. Lines 02-04
show the header with column descriptions. Lines 05-10 and 13-18 show the actual
statistics. These statistics come in two parts: the actual stats and the
contention points, separated by a short separator (lines 08 and 14).

The first lock (lines 05-10) is a read/write lock, and shows two lines above
the short separator. The contention points don't match the column descriptors;
they have only two columns: contentions and the [<IP>] symbol.

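To look at a single class together with its contention points, something like
the following (GNU grep) one-liner can be used. The context count is only a
guess that covers the stat line, the separator and up to four contention points
for a plain lock; a read/write lock needs one more line:

# grep -A 6 "dcache_lock:" /proc/lock_stat
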
View the top contending locks:

# grep : /proc/lock_stat | head
&inode->i_data.tree_lock-W: 15 21657 0.18 1093295.30 11547131054.85 58 10415 0.16 87.51 6387.60
&inode->i_data.tree_lock-R: 0 0 0.00 0.00 0.00 23302 231198 0.25 8.45 98023.38
dcache_lock: 1037 1161 0.38 45.32 774.51 6611 243371 0.15 306.48 77387.24
&inode->i_mutex: 161 286 18446744073709 62882.54 1244614.55 3653 20598 18446744073709 62318.60 1693822.74
&zone->lru_lock: 94 94 0.53 7.33 92.10 4366 32690 0.29 59.81 16350.06
&inode->i_data.i_mmap_lock: 79 79 0.40 3.77 53.03 11779 87755 0.28 116.93 29898.44
&q->__queue_lock: 48 50 0.52 31.62 86.31 774 13131 0.17 113.08 12277.52
&rq->rq_lock_key: 43 47 0.74 68.50 170.63 3706 33929 0.22 107.99 17460.62
&rq->rq_lock_key#2: 39 46 0.75 6.68 49.03 2979 32292 0.17 125.17 17137.63
tasklist_lock-W: 15 15 1.45 10.87 32.70 1201 7390 0.58 62.55 13648.47

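If the file order is not the ranking you are after, the classes can also be
sorted explicitly by their contention count. This is an illustrative sketch,
assuming the column layout described above, where contentions is the ninth
field from the end of each stat line:

# grep : /proc/lock_stat | awk '{ print $(NF-8), $1 }' | sort -rn | head
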
Clear the statistics:

# echo 0 > /proc/lock_stat

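A typical measurement cycle is therefore to clear the counters, run the
workload of interest, and then inspect the results; <run your workload> below
is just a placeholder for whatever you want to measure:

# echo 0 > /proc/lock_stat
# <run your workload>
# less /proc/lock_stat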