/*****************************************************************************

Copyright (c) 1995, 2011, Oracle and/or its affiliates. All Rights Reserved.
Copyright (c) 2008, Google Inc.

Portions of this file contain modifications contributed and copyrighted by
Google, Inc. Those modifications are gratefully acknowledged and are described
briefly in the InnoDB documentation. The contributions by Google are
incorporated with their permission, and subject to the conditions contained in
the file COPYING.Google.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA

*****************************************************************************/
/**************************************************//**
@file sync/sync0sync.c
Mutex, the basic synchronization primitive

Created 9/5/1995 Heikki Tuuri
*******************************************************/
#include "sync0sync.h"
#ifdef UNIV_NONINL
#include "sync0sync.ic"
#endif

#include "sync0rw.h"
#include "buf0buf.h"
#include "srv0srv.h"
#include "buf0types.h"
#include "os0sync.h" /* for HAVE_ATOMIC_BUILTINS */
/*
        REASONS FOR IMPLEMENTING THE SPIN LOCK MUTEX
        ============================================

Semaphore operations in operating systems are slow: Solaris on a 1993 Sparc
takes 3 microseconds (us) for a lock-unlock pair and Windows NT on a 1995
Pentium takes 20 microseconds for a lock-unlock pair. Therefore, we have to
implement our own efficient spin lock mutex. Future operating systems may
provide efficient spin locks, but we cannot count on that.

Another reason for implementing a spin lock is that on multiprocessor systems
it can be more efficient for a processor to run a loop waiting for the
semaphore to be released than to switch to a different thread. A thread switch
takes 25 us on both platforms mentioned above. See Gray and Reuter's book
Transaction Processing for background.

How long should the spin loop last before suspending the thread? On a
uniprocessor, spinning does not help at all, because if the thread owning the
mutex is not executing, the mutex cannot be released. Spinning actually wastes
resources.

On a multiprocessor, we do not know whether the thread owning the mutex is
executing or not. Thus it would make sense to spin as long as the operation
guarded by the mutex would typically last, assuming that the thread is
executing. If the mutex is not released by that time, we may assume that the
thread owning the mutex is not executing and suspend the waiting thread.

A typical operation (where no i/o is involved) guarded by a mutex or a
read-write lock may last 1 - 20 us on the current Pentium platform. The
longest operations are the binary searches on an index node.

We conclude that the best choice is to set the spin time at 20 us. Then the
system should work well on a multiprocessor. On a uniprocessor we have to
make sure that thread switches due to mutex collisions are not frequent,
i.e., that they do not happen every 100 us or so, because that wastes too
many resources. If the thread switches are not frequent, the 20 us wasted in
the spin loop is not too much.

Empirical studies on the effect of spin time should be done for different
platforms.

        IMPLEMENTATION OF THE MUTEX
        ===========================

For background, see Curt Schimmel's book on Unix implementation on modern
architectures. The key points in the implementation are atomicity and
serialization of memory accesses. The test-and-set instruction (XCHG on the
Pentium) must be atomic. As new processors may have weak memory models,
serialization of memory references may also be necessary. The successor of
the Pentium, the P6, has at least one mode where the memory model is weak. As
far as we know, on the Pentium all memory accesses are serialized in program
order and we do not have to worry about the memory model. On other processors
there are special machine instructions, called a fence, memory barrier, or
storage barrier (STBAR on Sparc), which can be used to serialize memory
accesses so that they happen in program order relative to the fence
instruction.

Leslie Lamport has devised a "bakery algorithm" to implement a mutex without
the atomic test-and-set, but his algorithm would have to be modified for weak
memory models. We do not use Lamport's algorithm, because we expect it to be
slower than the atomic test-and-set.

Our mutex implementation works as follows: first we perform the atomic
test-and-set instruction on the memory word. If the test returns zero, we
know we got the lock first. If the test returns not zero, some other thread
was quicker and got the lock: then we spin in a loop reading the memory word,
waiting for it to become zero. It is wise to just read the word in the loop,
not perform numerous test-and-set instructions, because they generate memory
traffic between the cache and the main memory. The read loop can just access
the cache, saving bus bandwidth.

If we cannot acquire the mutex lock within the specified time, we reserve a
cell in the wait array and set the waiters byte in the mutex to 1. To avoid a
race condition, after setting the waiters byte and before suspending the
waiting thread, we still have to check that the mutex is reserved, because it
may have happened that the thread which was holding the mutex has just
released it and did not see the waiters byte set to 1, a case which would
lead the other thread to an infinite wait.
        LEMMA 1: After a thread resets the event of a mutex (or rw_lock), some
        =======
        thread will eventually call os_event_set() on that particular event.
        Thus no infinite wait is possible in this case.

Proof: After making the reservation the thread sets the waiters field in the
mutex to 1. Then it checks that the mutex is still reserved by some thread,
or it reserves the mutex for itself. In any case, some thread (which may also
be some earlier thread, not necessarily the one currently holding the mutex)
will set the waiters field to 0 in mutex_exit, and then call
os_event_set() with the mutex as an argument.
Q.E.D.

        LEMMA 2: If an os_event_set() call is made after some thread has called
        =======
        os_event_reset() and before it starts to wait on that event, the call
        will not be lost to the second thread. This is true even if there is an
        intervening call to os_event_reset() by another thread.
        Thus no infinite wait is possible in this case.

Proof (non-Windows platforms): os_event_reset() returns a monotonically
increasing value of signal_count. This value is increased at every
call of os_event_set(). If thread A has called os_event_reset() followed
by thread B calling os_event_set() and then some other thread C calling
os_event_reset(), the is_set flag of the event will be set to FALSE;
but now if thread A calls os_event_wait_low() with the signal_count
value returned from the earlier call of os_event_reset(), it will
return immediately without waiting.
Q.E.D.

Proof (Windows): If there is a writer thread which is forced to wait for
the lock, it may be able to set the state of the rw_lock to RW_LOCK_WAIT_EX.
The design of the rw_lock ensures that there is one and only one thread
that is able to change the state to RW_LOCK_WAIT_EX, and this thread is
guaranteed to acquire the lock after it is released by the current
holders and before any other waiter gets the lock.
On Windows this thread waits on a separate event, i.e., wait_ex_event.
Since only one thread can wait on this event, there is no chance
of this event getting reset before the writer starts to wait on it.
Therefore, this thread is guaranteed to catch the os_event_set()
signalled unconditionally at the release of the lock.
Q.E.D. */
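The scheme described above, an atomic test-and-set followed by a read-only spin and then a yield or suspension, can be sketched in a few lines. The toy below is illustrative only: the names (`toy_mutex`, `SPIN_ROUNDS`) are invented, GCC's legacy `__sync` builtins stand in for `mutex_test_and_set()`, and `sched_yield()` stands in for suspending the thread on an event in the wait array, which is what the real code does.

```c
#include <sched.h>

/* Illustrative toy, NOT InnoDB's API: a bounded-spin mutex in the
spirit of the scheme above. */

#define SPIN_ROUNDS 30

typedef struct {
        volatile int lock_word;         /* 0 = free, 1 = held */
} toy_mutex;

static void toy_mutex_init(toy_mutex* m)
{
        m->lock_word = 0;
}

/* Returns 0 on success, 1 if already held (cf. mutex_enter_nowait). */
static int toy_mutex_trylock(toy_mutex* m)
{
        /* atomic test-and-set: returns the previous value */
        return __sync_lock_test_and_set(&m->lock_word, 1) ? 1 : 0;
}

static void toy_mutex_lock(toy_mutex* m)
{
        for (;;) {
                int i = 0;

                /* Read-only spin: stays in the cache, generates no
                bus traffic, as the comment above recommends. */
                while (m->lock_word != 0 && i < SPIN_ROUNDS) {
                        i++;
                }

                if (toy_mutex_trylock(m) == 0) {
                        return;         /* got the lock */
                }

                /* The real code reserves a wait-array cell and blocks
                on an event here; yielding is the simplest stand-in. */
                sched_yield();
        }
}

static void toy_mutex_unlock(toy_mutex* m)
{
        __sync_lock_release(&m->lock_word);     /* store 0 with release */
}
```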
/** The number of iterations in the mutex_spin_wait() spin loop.
Intended for performance monitoring. */
static ib_int64_t	mutex_spin_round_count	= 0;

/** The number of mutex_spin_wait() calls. Intended for
performance monitoring. */
static ib_int64_t	mutex_spin_wait_count	= 0;

/** The number of OS waits in mutex_spin_wait(). Intended for
performance monitoring. */
static ib_int64_t	mutex_os_wait_count	= 0;

/** The number of mutex_exit() calls. Intended for performance
monitoring. */
UNIV_INTERN ib_int64_t	mutex_exit_count	= 0;
/** The global array of wait cells for implementation of the database's own
mutexes and read-write locks */
UNIV_INTERN sync_array_t*	sync_primary_wait_array;

/** This variable is set to TRUE when sync_init is called */
UNIV_INTERN ibool	sync_initialized	= FALSE;

/** An acquired mutex or rw-lock and its level in the latching order */
typedef struct sync_level_struct	sync_level_t;
/** Mutexes or rw-locks held by a thread */
typedef struct sync_thread_struct	sync_thread_t;

#ifdef UNIV_SYNC_DEBUG
/** The latch levels currently owned by threads are stored in this data
structure; the size of this array is OS_THREAD_MAX_N */
UNIV_INTERN sync_thread_t*	sync_thread_level_arrays;

/** Mutex protecting sync_thread_level_arrays */
UNIV_INTERN mutex_t	sync_thread_mutex;
#endif /* UNIV_SYNC_DEBUG */

/** Global list of database mutexes (not OS mutexes) created. */
UNIV_INTERN ut_list_base_node_t	mutex_list;

/** Mutex protecting the mutex_list variable */
UNIV_INTERN mutex_t	mutex_list_mutex;

#ifdef UNIV_SYNC_DEBUG
/** Latching order checks start when this is set TRUE */
UNIV_INTERN ibool	sync_order_checks_on	= FALSE;
#endif /* UNIV_SYNC_DEBUG */

/** Mutexes or rw-locks held by a thread */
struct sync_thread_struct{
        os_thread_id_t	id;	/*!< OS thread id */
        sync_level_t*	levels;	/*!< level array for this thread; if
                                this is NULL this slot is unused */
};

/** Number of slots reserved for each OS thread in the sync level array */
#define SYNC_THREAD_N_LEVELS	10000

/** An acquired mutex or rw-lock and its level in the latching order */
struct sync_level_struct{
        void*	latch;	/*!< pointer to a mutex or an rw-lock; NULL means that
                        the slot is empty */
        ulint	level;	/*!< level of the latch in the latching order */
};
/******************************************************************//**
Creates, or rather, initializes a mutex object in a specified memory
location (which must be appropriately aligned). The mutex is initialized
in the reset state. Explicit freeing of the mutex with mutex_free is
necessary only if the memory block containing it is freed. */
UNIV_INTERN
void
mutex_create_func(
/*==============*/
        mutex_t*	mutex,		/*!< in: pointer to memory */
#ifdef UNIV_DEBUG
        const char*	cmutex_name,	/*!< in: mutex name */
# ifdef UNIV_SYNC_DEBUG
        ulint		level,		/*!< in: level */
# endif /* UNIV_SYNC_DEBUG */
#endif /* UNIV_DEBUG */
        const char*	cfile_name,	/*!< in: file name where created */
        ulint		cline)		/*!< in: file line where created */
{
#if defined(HAVE_ATOMIC_BUILTINS)
        mutex_reset_lock_word(mutex);
#else
        os_fast_mutex_init(&(mutex->os_fast_mutex));
        mutex->lock_word = 0;
#endif
        mutex->event = os_event_create(NULL);
        mutex_set_waiters(mutex, 0);
#ifdef UNIV_DEBUG
        mutex->magic_n = MUTEX_MAGIC_N;
#endif /* UNIV_DEBUG */
#ifdef UNIV_SYNC_DEBUG
        mutex->line = 0;
        mutex->file_name = "not yet reserved";
        mutex->level = level;
#endif /* UNIV_SYNC_DEBUG */
        mutex->cfile_name = cfile_name;
        mutex->cline = cline;
        mutex->count_os_wait = 0;
#ifdef UNIV_DEBUG
        mutex->cmutex_name = cmutex_name;
        mutex->count_using = 0;
        mutex->mutex_type = 0;
        mutex->lspent_time = 0;
        mutex->lmax_spent_time = 0;
        mutex->count_spin_loop = 0;
        mutex->count_spin_rounds = 0;
        mutex->count_os_yield = 0;
#endif /* UNIV_DEBUG */

        /* Check that lock_word is aligned; this is important on Intel */
        ut_ad(((ulint)(&(mutex->lock_word))) % 4 == 0);

        /* NOTE! The very first mutexes are not put to the mutex list */

        if ((mutex == &mutex_list_mutex)
#ifdef UNIV_SYNC_DEBUG
            || (mutex == &sync_thread_mutex)
#endif /* UNIV_SYNC_DEBUG */
            ) {

                return;
        }

        mutex_enter(&mutex_list_mutex);

        ut_ad(UT_LIST_GET_LEN(mutex_list) == 0
              || UT_LIST_GET_FIRST(mutex_list)->magic_n == MUTEX_MAGIC_N);

        UT_LIST_ADD_FIRST(list, mutex_list, mutex);

        mutex_exit(&mutex_list_mutex);
}
/******************************************************************//**
Calling this function is obligatory only if the memory buffer containing
the mutex is freed. Removes a mutex object from the mutex list. The mutex
is checked to be in the reset state. */
UNIV_INTERN
void
mutex_free(
/*=======*/
        mutex_t*	mutex)	/*!< in: mutex */
{
        ut_ad(mutex_validate(mutex));
        ut_a(mutex_get_lock_word(mutex) == 0);
        ut_a(mutex_get_waiters(mutex) == 0);

#ifdef UNIV_MEM_DEBUG
        if (mutex == &mem_hash_mutex) {
                ut_ad(UT_LIST_GET_LEN(mutex_list) == 1);
                ut_ad(UT_LIST_GET_FIRST(mutex_list) == &mem_hash_mutex);
                UT_LIST_REMOVE(list, mutex_list, mutex);
                goto func_exit;
        }
#endif /* UNIV_MEM_DEBUG */

        if (mutex != &mutex_list_mutex
#ifdef UNIV_SYNC_DEBUG
            && mutex != &sync_thread_mutex
#endif /* UNIV_SYNC_DEBUG */
            ) {

                mutex_enter(&mutex_list_mutex);

                ut_ad(!UT_LIST_GET_PREV(list, mutex)
                      || UT_LIST_GET_PREV(list, mutex)->magic_n
                      == MUTEX_MAGIC_N);
                ut_ad(!UT_LIST_GET_NEXT(list, mutex)
                      || UT_LIST_GET_NEXT(list, mutex)->magic_n
                      == MUTEX_MAGIC_N);

                UT_LIST_REMOVE(list, mutex_list, mutex);

                mutex_exit(&mutex_list_mutex);
        }

        os_event_free(mutex->event);
#ifdef UNIV_MEM_DEBUG
func_exit:
#endif /* UNIV_MEM_DEBUG */
#if !defined(HAVE_ATOMIC_BUILTINS)
        os_fast_mutex_free(&(mutex->os_fast_mutex));
#endif
        /* If we free the mutex protecting the mutex list (freeing is
        not necessary), we have to reset the magic number AFTER removing
        it from the list. */
#ifdef UNIV_DEBUG
        mutex->magic_n = 0;
#endif /* UNIV_DEBUG */
}
/********************************************************************//**
NOTE! Use the corresponding macro in the header file, not this function
directly. Tries to lock the mutex for the current thread. If the lock is not
acquired immediately, returns with return value 1.
@return 0 if succeed, 1 if not */
UNIV_INTERN
ulint
mutex_enter_nowait_func(
/*====================*/
        mutex_t*	mutex,		/*!< in: pointer to mutex */
        const char*	file_name __attribute__((unused)),
                                        /*!< in: file name where mutex
                                        requested */
        ulint		line __attribute__((unused)))
                                        /*!< in: line where requested */
{
        ut_ad(mutex_validate(mutex));

        if (!mutex_test_and_set(mutex)) {

                ut_d(mutex->thread_id = os_thread_get_curr_id());
#ifdef UNIV_SYNC_DEBUG
                mutex_set_debug_info(mutex, file_name, line);
#endif

                return(0);	/* Succeeded! */
        }

        return(1);
}
#ifdef UNIV_DEBUG
/******************************************************************//**
Checks that the mutex has been initialized.
@return TRUE */
UNIV_INTERN
ibool
mutex_validate(
/*===========*/
        const mutex_t*	mutex)	/*!< in: mutex */
{
        ut_a(mutex);
        ut_a(mutex->magic_n == MUTEX_MAGIC_N);

        return(TRUE);
}

/******************************************************************//**
Checks that the current thread owns the mutex. Works only in the debug
version.
@return TRUE if owns */
UNIV_INTERN
ibool
mutex_own(
/*======*/
        const mutex_t*	mutex)	/*!< in: mutex */
{
        ut_ad(mutex_validate(mutex));

        return(mutex_get_lock_word(mutex) == 1
               && os_thread_eq(mutex->thread_id, os_thread_get_curr_id()));
}
#endif /* UNIV_DEBUG */
/******************************************************************//**
Sets the waiters field in a mutex. */
UNIV_INTERN
void
mutex_set_waiters(
/*==============*/
        mutex_t*	mutex,	/*!< in: mutex */
        ulint		n)	/*!< in: value to set */
{
        volatile ulint*	ptr;	/* declared volatile to ensure that
                                the value is stored to memory */
        ut_ad(mutex);

        ptr = &(mutex->waiters);

        *ptr = n;		/* Here we assume that the write of a single
                                word in memory is atomic */
}
/******************************************************************//**
Reserves a mutex for the current thread. If the mutex is reserved, the
function spins a preset time (controlled by SYNC_SPIN_ROUNDS), waiting
for the mutex before suspending the thread. */
UNIV_INTERN
void
mutex_spin_wait(
/*============*/
        mutex_t*	mutex,		/*!< in: pointer to mutex */
        const char*	file_name,	/*!< in: file name where mutex
                                        requested */
        ulint		line)		/*!< in: line where requested */
{
        ulint	index;	/* index of the reserved wait cell */
        ulint	i;	/* spin round count */
#ifdef UNIV_DEBUG
        ib_int64_t	lstart_time = 0, lfinish_time; /* for timing os_wait */
        ulint		ltime_diff;
        ulint		sec;
        ulint		ms;
        uint		timer_started = 0;
#endif /* UNIV_DEBUG */
        ut_ad(mutex);

        /* This update is not thread safe, but we don't mind if the count
        isn't exact. Moved out of the ifdef that follows because we are
        willing to pay the cost of counting, as the data is valuable.
        Count the number of calls to mutex_spin_wait. */
        mutex_spin_wait_count++;

mutex_loop:

        i = 0;

        /* Spin waiting for the lock word to become zero. Note that we do
        not have to assume that the read access to the lock word is atomic,
        as the actual locking is always committed with atomic test-and-set.
        In reality, however, all processors probably have an atomic read of
        a memory word. */

spin_loop:
        ut_d(mutex->count_spin_loop++);

        while (mutex_get_lock_word(mutex) != 0 && i < SYNC_SPIN_ROUNDS) {
                if (srv_spin_wait_delay) {
                        ut_delay(ut_rnd_interval(0, srv_spin_wait_delay));
                }

                i++;
        }

        if (i == SYNC_SPIN_ROUNDS) {
#ifdef UNIV_DEBUG
                mutex->count_os_yield++;
#ifndef UNIV_HOTBACKUP
                if (timed_mutexes && timer_started == 0) {
                        ut_usectime(&sec, &ms);
                        lstart_time = (ib_int64_t) sec * 1000000 + ms;
                        timer_started = 1;
                }
#endif /* UNIV_HOTBACKUP */
#endif /* UNIV_DEBUG */
                os_thread_yield();
        }

#ifdef UNIV_SRV_PRINT_LATCH_WAITS
        fprintf(stderr,
                "Thread %lu spin wait mutex at %p"
                " cfile %s cline %lu rnds %lu\n",
                (ulong) os_thread_pf(os_thread_get_curr_id()), (void*) mutex,
                mutex->cfile_name, (ulong) mutex->cline, (ulong) i);
#endif

        mutex_spin_round_count += i;

        ut_d(mutex->count_spin_rounds += i);

        if (mutex_test_and_set(mutex) == 0) {
                /* Succeeded! */

                ut_d(mutex->thread_id = os_thread_get_curr_id());
#ifdef UNIV_SYNC_DEBUG
                mutex_set_debug_info(mutex, file_name, line);
#endif

                goto finish_timing;
        }

        /* We may end up with a situation where lock_word is 0 but the OS
        fast mutex is still reserved. On FreeBSD the OS does not seem to
        schedule a thread which is constantly calling pthread_mutex_trylock
        (in the mutex_test_and_set implementation). Then we could end up
        spinning here indefinitely. The following 'i++' stops this infinite
        spin. */

        i++;

        if (i < SYNC_SPIN_ROUNDS) {
                goto spin_loop;
        }

        sync_array_reserve_cell(sync_primary_wait_array, mutex,
                                SYNC_MUTEX, file_name, line, &index);

        /* The memory order of the array reservation and the change in the
        waiters field is important: when we suspend a thread, we first
        reserve the cell and then set the waiters field to 1. When threads
        are released in mutex_exit, the waiters field is first set to zero
        and then the event is set to the signaled state. */

        mutex_set_waiters(mutex, 1);

        /* Try to reserve still a few times */
        for (i = 0; i < 4; i++) {
                if (mutex_test_and_set(mutex) == 0) {
                        /* Succeeded! Free the reserved wait cell */

                        sync_array_free_cell(sync_primary_wait_array, index);

                        ut_d(mutex->thread_id = os_thread_get_curr_id());
#ifdef UNIV_SYNC_DEBUG
                        mutex_set_debug_info(mutex, file_name, line);
#endif

#ifdef UNIV_SRV_PRINT_LATCH_WAITS
                        fprintf(stderr, "Thread %lu spin wait succeeds at 2:"
                                " mutex at %p\n",
                                (ulong) os_thread_pf(os_thread_get_curr_id()),
                                (void*) mutex);
#endif

                        goto finish_timing;

                        /* Note that in this case we leave the waiters field
                        set to 1. We cannot reset it to zero, as we do not
                        know if there are other waiters. */
                }
        }

        /* Now we know that there has been some thread holding the mutex
        after the change in the wait array and the waiters field was made.
        Now there is no risk of infinite wait on the event. */

#ifdef UNIV_SRV_PRINT_LATCH_WAITS
        fprintf(stderr,
                "Thread %lu OS wait mutex at %p cfile %s cline %lu rnds %lu\n",
                (ulong) os_thread_pf(os_thread_get_curr_id()), (void*) mutex,
                mutex->cfile_name, (ulong) mutex->cline, (ulong) i);
#endif

        mutex_os_wait_count++;

        mutex->count_os_wait++;
#ifdef UNIV_DEBUG
        /* !!!!! Sometimes os_wait can be called without os_thread_yield */
#ifndef UNIV_HOTBACKUP
        if (timed_mutexes == 1 && timer_started == 0) {
                ut_usectime(&sec, &ms);
                lstart_time = (ib_int64_t) sec * 1000000 + ms;
                timer_started = 1;
        }
#endif /* UNIV_HOTBACKUP */
#endif /* UNIV_DEBUG */

        sync_array_wait_event(sync_primary_wait_array, index);
        goto mutex_loop;

finish_timing:
#ifdef UNIV_DEBUG
        if (timed_mutexes == 1 && timer_started == 1) {
                ut_usectime(&sec, &ms);
                lfinish_time = (ib_int64_t) sec * 1000000 + ms;

                ltime_diff = (ulint) (lfinish_time - lstart_time);
                mutex->lspent_time += ltime_diff;

                if (mutex->lmax_spent_time < ltime_diff) {
                        mutex->lmax_spent_time = ltime_diff;
                }
        }
#endif /* UNIV_DEBUG */
        return;
}
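The waiters handshake that makes the blocking path above race-free (announce yourself by setting waiters, then re-check the lock word before sleeping; on release, clear waiters before signalling, per LEMMA 1) can be condensed into a small toy. Everything here is illustrative: `toy_wait_mutex` and its fields are invented stand-ins for the mutex, the waiters byte, and the os_event, and the real logic is of course split across mutex_spin_wait() and mutex_exit() with atomic operations.

```c
/* Illustrative toy, NOT InnoDB's API: the waiters/re-check handshake. */

typedef struct {
        int	lock_word;	/* 0 = free, 1 = held */
        int	waiters;	/* stand-in for the waiters byte */
        int	event_set;	/* stand-in for the os_event state */
} toy_wait_mutex;

/* Waiter side, after the spin rounds are exhausted: returns 1 if it is
safe to block, 0 if the mutex was released meanwhile (retry instead). */
static int toy_must_block(toy_wait_mutex* m)
{
        m->waiters = 1;			/* announce ourselves first... */

        if (m->lock_word == 0) {	/* ...then re-check the lock word */

                return(0);		/* released meanwhile: do not block */
        }

        return(1);			/* holder will see waiters == 1 */
}

/* Holder side, in mutex_exit order: release, clear waiters, THEN signal. */
static void toy_release(toy_wait_mutex* m)
{
        m->lock_word = 0;

        if (m->waiters) {
                m->waiters = 0;
                m->event_set = 1;	/* cf. os_event_set() */
        }
}
```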
/******************************************************************//**
Releases the threads waiting in the primary wait array for this mutex. */
UNIV_INTERN
void
mutex_signal_object(
/*================*/
        mutex_t*	mutex)	/*!< in: mutex */
{
        mutex_set_waiters(mutex, 0);

        /* The memory order of resetting the waiters field and
        signalling the object is important. See LEMMA 1 above. */
        os_event_set(mutex->event);
        sync_array_object_signalled(sync_primary_wait_array);
}
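The reason a signal sent here can never be lost is the signal_count device from LEMMA 2 above: os_event_reset() returns a count, and a waiter passing that count to the wait call returns immediately if any set happened since. A sketch of just that device, with invented `toy_event` names (the real os_event lives in os0sync.c and pairs this counter with a condition variable):

```c
/* Illustrative toy, NOT InnoDB's API: the monotonic signal_count
device of LEMMA 2. */

typedef struct {
        int		is_set;		/* the event state */
        long long	signal_count;	/* incremented by every set() */
} toy_event;

static void toy_event_init(toy_event* e)
{
        e->is_set = 1;
        e->signal_count = 0;
}

/* cf. os_event_reset(): returns the count to pass to the wait call */
static long long toy_event_reset(toy_event* e)
{
        e->is_set = 0;
        return(e->signal_count);
}

/* cf. os_event_set() */
static void toy_event_set(toy_event* e)
{
        e->is_set = 1;
        e->signal_count++;
}

/* cf. the entry check of os_event_wait_low(): the caller may proceed
without blocking if the event is set, or if any set() happened after
the reset() that produced reset_count. */
static int toy_event_may_proceed(const toy_event* e, long long reset_count)
{
        return(e->is_set || e->signal_count != reset_count);
}
```

This reproduces the LEMMA 2 scenario directly: thread A resets, thread B sets, thread C resets again; A still proceeds because the count moved on.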
#ifdef UNIV_SYNC_DEBUG
/******************************************************************//**
Sets the debug information for a reserved mutex. */
UNIV_INTERN
void
mutex_set_debug_info(
/*=================*/
        mutex_t*	mutex,		/*!< in: mutex */
        const char*	file_name,	/*!< in: file where requested */
        ulint		line)		/*!< in: line where requested */
{
        ut_ad(mutex);
        ut_ad(file_name);

        sync_thread_add_level(mutex, mutex->level, FALSE);

        mutex->file_name = file_name;
        mutex->line = line;
}
/******************************************************************//**
Gets the debug information for a reserved mutex. */
UNIV_INTERN
void
mutex_get_debug_info(
/*=================*/
        mutex_t*	mutex,		/*!< in: mutex */
        const char**	file_name,	/*!< out: file where requested */
        ulint*		line,		/*!< out: line where requested */
        os_thread_id_t* thread_id)	/*!< out: id of the thread which owns
                                        the mutex */
{
        ut_ad(mutex);

        *file_name = mutex->file_name;
        *line	   = mutex->line;
        *thread_id = mutex->thread_id;
}
/******************************************************************//**
Prints debug info of currently reserved mutexes. */
static
void
mutex_list_print_info(
/*==================*/
        FILE*	file)	/*!< in: file where to print */
{
        mutex_t*	mutex;
        const char*	file_name;
        ulint		line;
        os_thread_id_t	thread_id;
        ulint		count	= 0;

        fputs("----------\n"
              "MUTEX INFO\n"
              "----------\n", file);

        mutex_enter(&mutex_list_mutex);

        mutex = UT_LIST_GET_FIRST(mutex_list);

        while (mutex != NULL) {
                count++;

                if (mutex_get_lock_word(mutex) != 0) {
                        mutex_get_debug_info(mutex, &file_name, &line,
                                             &thread_id);
                        fprintf(file,
                                "Locked mutex: addr %p thread %ld"
                                " file %s line %ld\n",
                                (void*) mutex, os_thread_pf(thread_id),
                                file_name, line);
                }

                mutex = UT_LIST_GET_NEXT(list, mutex);
        }

        fprintf(file, "Total number of mutexes %ld\n", count);

        mutex_exit(&mutex_list_mutex);
}
/******************************************************************//**
Counts currently reserved mutexes. Works only in the debug version.
@return number of reserved mutexes */
UNIV_INTERN
ulint
mutex_n_reserved(void)
/*==================*/
{
        mutex_t*	mutex;
        ulint		count	= 0;

        mutex_enter(&mutex_list_mutex);

        mutex = UT_LIST_GET_FIRST(mutex_list);

        while (mutex != NULL) {
                if (mutex_get_lock_word(mutex) != 0) {

                        count++;
                }

                mutex = UT_LIST_GET_NEXT(list, mutex);
        }

        mutex_exit(&mutex_list_mutex);

        ut_a(count >= 1);

        return(count - 1);	/* Subtract one, because this function itself
                                was holding one mutex (mutex_list_mutex) */
}
/******************************************************************//**
Returns TRUE if no mutex or rw-lock is currently locked. Works only in
the debug version.
@return TRUE if no mutexes and rw-locks reserved */
UNIV_INTERN
ibool
sync_all_freed(void)
/*================*/
{
        return(mutex_n_reserved() + rw_lock_n_locked() == 0);
}
/******************************************************************//**
Gets the value in the nth slot in the thread level arrays.
@return pointer to thread slot */
static
sync_thread_t*
sync_thread_level_arrays_get_nth(
/*=============================*/
        ulint	n)	/*!< in: slot number */
{
        ut_ad(n < OS_THREAD_MAX_N);

        return(sync_thread_level_arrays + n);
}
/******************************************************************//**
Looks for the thread slot for the calling thread.
@return pointer to thread slot, NULL if not found */
static
sync_thread_t*
sync_thread_level_arrays_find_slot(void)
/*====================================*/
{
        sync_thread_t*	slot;
        os_thread_id_t	id;
        ulint		i;

        id = os_thread_get_curr_id();

        for (i = 0; i < OS_THREAD_MAX_N; i++) {

                slot = sync_thread_level_arrays_get_nth(i);

                if (slot->levels && os_thread_eq(slot->id, id)) {

                        return(slot);
                }
        }

        return(NULL);
}
/******************************************************************//**
Looks for an unused thread slot.
@return pointer to thread slot */
static
sync_thread_t*
sync_thread_level_arrays_find_free(void)
/*====================================*/
{
        sync_thread_t*	slot;
        ulint		i;

        for (i = 0; i < OS_THREAD_MAX_N; i++) {

                slot = sync_thread_level_arrays_get_nth(i);

                if (slot->levels == NULL) {

                        return(slot);
                }
        }

        return(NULL);
}
/******************************************************************//**
Gets the value in the nth slot in the thread level array.
@return pointer to level slot */
static
sync_level_t*
sync_thread_levels_get_nth(
/*=======================*/
        sync_level_t*	arr,	/*!< in: pointer to level array for an OS
                                thread */
        ulint		n)	/*!< in: slot number */
{
        ut_ad(n < SYNC_THREAD_N_LEVELS);

        return(arr + n);
}
/******************************************************************//**
Checks if all the level values stored in the level array are greater than
the given limit.
@return TRUE if all greater */
static
ibool
sync_thread_levels_g(
/*=================*/
        sync_level_t*	arr,	/*!< in: pointer to level array for an OS
                                thread */
        ulint		limit,	/*!< in: level limit */
        ulint		warn)	/*!< in: TRUE=display a diagnostic message */
{
        sync_level_t*	slot;
        rw_lock_t*	lock;
        mutex_t*	mutex;
        ulint		i;

        for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

                slot = sync_thread_levels_get_nth(arr, i);

                if (slot->latch != NULL) {
                        if (slot->level <= limit) {

                                if (!warn) {

                                        return(FALSE);
                                }

                                lock = slot->latch;
                                mutex = slot->latch;

                                fprintf(stderr,
                                        "InnoDB: sync levels should be"
                                        " > %lu but a level is %lu\n",
                                        (ulong) limit, (ulong) slot->level);

                                if (mutex->magic_n == MUTEX_MAGIC_N) {
                                        fprintf(stderr,
                                                "Mutex created at %s %lu\n",
                                                mutex->cfile_name,
                                                (ulong) mutex->cline);

                                        if (mutex_get_lock_word(mutex) != 0) {
                                                const char*	file_name;
                                                ulint		line;
                                                os_thread_id_t	thread_id;

                                                mutex_get_debug_info(
                                                        mutex, &file_name,
                                                        &line, &thread_id);

                                                fprintf(stderr,
                                                        "InnoDB: Locked mutex:"
                                                        " addr %p thread %ld"
                                                        " file %s line %ld\n",
                                                        (void*) mutex,
                                                        os_thread_pf(
                                                                thread_id),
                                                        file_name,
                                                        (ulong) line);
                                        } else {
                                                fputs("Not locked\n", stderr);
                                        }
                                } else {
                                        rw_lock_print(lock);
                                }

                                return(FALSE);
                        }
                }
        }

        return(TRUE);
}
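The invariant this check enforces is the latching-order rule: before a thread takes a latch at some level, every latch it already holds must sit at a strictly greater level, which rules out lock-order cycles between threads. A stripped-down sketch of that rule, with invented names (`toy_levels`) and a flat array instead of the slot machinery above:

```c
/* Illustrative toy, NOT InnoDB's API: the latching-order rule that
sync_thread_levels_g() enforces over the per-thread level array. */

#define TOY_N_LEVELS 8

typedef struct {
        unsigned long	level[TOY_N_LEVELS];	/* levels currently held */
        int		n_held;
} toy_levels;

/* Returns 1 if acquiring a latch at `level` would violate the order,
i.e. if some latch already held has a level <= the new one. */
static int toy_levels_order_violated(const toy_levels* t, unsigned long level)
{
        int	i;

        for (i = 0; i < t->n_held; i++) {
                if (t->level[i] <= level) {

                        return(1);	/* order violation */
                }
        }

        return(0);	/* all held levels are strictly greater: OK */
}
```

A thread holding latches at levels 10 and 7 may still take a level-5 latch, but taking another level-7 latch would be flagged.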
/******************************************************************//**
Checks if the level value is stored in the level array.
@return TRUE if stored */
static
ibool
sync_thread_levels_contain(
/*=======================*/
        sync_level_t*	arr,	/*!< in: pointer to level array for an OS
                                thread */
        ulint		level)	/*!< in: level */
{
        sync_level_t*	slot;
        ulint		i;

        for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

                slot = sync_thread_levels_get_nth(arr, i);

                if (slot->latch != NULL) {
                        if (slot->level == level) {

                                return(TRUE);
                        }
                }
        }

        return(FALSE);
}
/******************************************************************//**
Checks if the level array for the current thread contains a
mutex or rw-latch at the specified level.
@return a matching latch, or NULL if not found */
UNIV_INTERN
void*
sync_thread_levels_contains(
/*========================*/
        ulint	level)	/*!< in: latching order level
                        (SYNC_DICT, ...)*/
{
        sync_level_t*	arr;
        sync_thread_t*	thread_slot;
        sync_level_t*	slot;
        ulint		i;

        if (!sync_order_checks_on) {

                return(NULL);
        }

        mutex_enter(&sync_thread_mutex);

        thread_slot = sync_thread_level_arrays_find_slot();

        if (thread_slot == NULL) {

                mutex_exit(&sync_thread_mutex);

                return(NULL);
        }

        arr = thread_slot->levels;

        for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

                slot = sync_thread_levels_get_nth(arr, i);

                if (slot->latch != NULL && slot->level == level) {

                        mutex_exit(&sync_thread_mutex);
                        return(slot->latch);
                }
        }

        mutex_exit(&sync_thread_mutex);

        return(NULL);
}
/******************************************************************//**
Checks if the level array for the current thread is empty.
@return a latch, or NULL if empty except the exceptions specified below */
UNIV_INTERN
void*
sync_thread_levels_nonempty_gen(
/*============================*/
        ibool	dict_mutex_allowed)	/*!< in: TRUE if dictionary mutex is
                                        allowed to be owned by the thread,
                                        also purge_is_running mutex is
                                        allowed */
{
        sync_level_t*	arr;
        sync_thread_t*	thread_slot;
        sync_level_t*	slot;
        ulint		i;

        if (!sync_order_checks_on) {

                return(NULL);
        }

        mutex_enter(&sync_thread_mutex);

        thread_slot = sync_thread_level_arrays_find_slot();

        if (thread_slot == NULL) {

                mutex_exit(&sync_thread_mutex);

                return(NULL);
        }

        arr = thread_slot->levels;

        for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

                slot = sync_thread_levels_get_nth(arr, i);

                if (slot->latch != NULL
                    && (!dict_mutex_allowed
                        || (slot->level != SYNC_DICT
                            && slot->level != SYNC_DICT_OPERATION))) {

                        mutex_exit(&sync_thread_mutex);
                        ut_error;

                        return(slot->latch);
                }
        }

        mutex_exit(&sync_thread_mutex);

        return(NULL);
}
/******************************************************************//**
Checks that the level array for the current thread is empty.
@return	TRUE if empty */
UNIV_INTERN
ibool
sync_thread_levels_empty(void)
/*==========================*/
{
	return(sync_thread_levels_empty_gen(FALSE));
}
/******************************************************************//**
Adds a latch and its level in the thread level array. Allocates the memory
for the array if called first time for this OS thread. Makes the checks
against other latch levels stored in the array for this thread. */
UNIV_INTERN
void
sync_thread_add_level(
/*==================*/
	void*	latch,	/*!< in: pointer to a mutex or an rw-lock */
	ulint	level,	/*!< in: level in the latching order; if
			SYNC_LEVEL_VARYING, nothing is done */
	ibool	relock)	/*!< in: TRUE if re-entering an x-lock */
{
	sync_level_t*	array;
	sync_level_t*	slot;
	sync_thread_t*	thread_slot;
	ulint		i;

	if (!sync_order_checks_on) {

		return;
	}

	if ((latch == (void*)&sync_thread_mutex)
	    || (latch == (void*)&mutex_list_mutex)
	    || (latch == (void*)&rw_lock_debug_mutex)
	    || (latch == (void*)&rw_lock_list_mutex)) {

		return;
	}

	if (level == SYNC_LEVEL_VARYING) {

		return;
	}

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {
		/* We have to allocate the level array for a new thread */
		array = ut_malloc(sizeof(sync_level_t) * SYNC_THREAD_N_LEVELS);

		thread_slot = sync_thread_level_arrays_find_free();

		thread_slot->id = os_thread_get_curr_id();
		thread_slot->levels = array;

		for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

			slot = sync_thread_levels_get_nth(array, i);

			slot->latch = NULL;
		}
	}

	array = thread_slot->levels;

	if (relock) {
		goto levels_ok;
	}

	/* NOTE that there is a problem with _NODE and _LEAF levels: if the
	B-tree height changes, then a leaf can change to an internal node
	or the other way around. We do not know at present if this can cause
	unnecessary assertion failures below. */

	switch (level) {
	case SYNC_NO_ORDER_CHECK:
	case SYNC_EXTERN_STORAGE:
	case SYNC_TREE_NODE_FROM_HASH:
		/* Do no order checking */
		break;
	case SYNC_MEM_POOL:
	case SYNC_MEM_HASH:
	case SYNC_RECV:
	case SYNC_WORK_QUEUE:
	case SYNC_LOG:
	case SYNC_THR_LOCAL:
	case SYNC_ANY_LATCH:
	case SYNC_TRX_SYS_HEADER:
	case SYNC_FILE_FORMAT_TAG:
	case SYNC_DOUBLEWRITE:
	case SYNC_BUF_POOL:
	case SYNC_SEARCH_SYS:
	case SYNC_TRX_LOCK_HEAP:
	case SYNC_KERNEL:
	case SYNC_IBUF_BITMAP_MUTEX:
	case SYNC_RSEG:
	case SYNC_TRX_UNDO:
	case SYNC_PURGE_LATCH:
	case SYNC_PURGE_SYS:
	case SYNC_DICT_AUTOINC_MUTEX:
	case SYNC_DICT_OPERATION:
	case SYNC_DICT_HEADER:
	case SYNC_TRX_I_S_RWLOCK:
	case SYNC_TRX_I_S_LAST_READ:
	case SYNC_IBUF_MUTEX:
		if (!sync_thread_levels_g(array, level, TRUE)) {
			fprintf(stderr,
				"InnoDB: sync_thread_levels_g(array, %lu)"
				" does not hold!\n", level);
			ut_error;
		}
		break;
	case SYNC_BUF_BLOCK:
		/* Either the thread must own the buffer pool mutex
		(buf_pool_mutex), or it is allowed to latch only ONE
		buffer block (block->mutex or buf_pool_zip_mutex). */
		if (!sync_thread_levels_g(array, level, FALSE)) {
			ut_a(sync_thread_levels_g(array, level - 1, TRUE));
			ut_a(sync_thread_levels_contain(array, SYNC_BUF_POOL));
		}
		break;
	case SYNC_REC_LOCK:
		if (sync_thread_levels_contain(array, SYNC_KERNEL)) {
			ut_a(sync_thread_levels_g(array, SYNC_REC_LOCK - 1,
						  TRUE));
		} else {
			ut_a(sync_thread_levels_g(array, SYNC_REC_LOCK, TRUE));
		}
		break;
	case SYNC_IBUF_BITMAP:
		/* Either the thread must own the master mutex to all
		the bitmap pages, or it is allowed to latch only ONE
		bitmap page. */
		if (sync_thread_levels_contain(array,
					       SYNC_IBUF_BITMAP_MUTEX)) {
			ut_a(sync_thread_levels_g(array, SYNC_IBUF_BITMAP - 1,
						  TRUE));
		} else {
			ut_a(sync_thread_levels_g(array, SYNC_IBUF_BITMAP,
						  TRUE));
		}
		break;
	case SYNC_FSP_PAGE:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP));
		break;
	case SYNC_FSP:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP)
		     || sync_thread_levels_g(array, SYNC_FSP, TRUE));
		break;
	case SYNC_TRX_UNDO_PAGE:
		ut_a(sync_thread_levels_contain(array, SYNC_TRX_UNDO)
		     || sync_thread_levels_contain(array, SYNC_RSEG)
		     || sync_thread_levels_contain(array, SYNC_PURGE_SYS)
		     || sync_thread_levels_g(array, SYNC_TRX_UNDO_PAGE, TRUE));
		break;
	case SYNC_RSEG_HEADER:
		ut_a(sync_thread_levels_contain(array, SYNC_RSEG));
		break;
	case SYNC_RSEG_HEADER_NEW:
		ut_a(sync_thread_levels_contain(array, SYNC_KERNEL)
		     && sync_thread_levels_contain(array, SYNC_FSP_PAGE));
		break;
	case SYNC_TREE_NODE:
		ut_a(sync_thread_levels_contain(array, SYNC_INDEX_TREE)
		     || sync_thread_levels_contain(array, SYNC_DICT_OPERATION)
		     || sync_thread_levels_g(array, SYNC_TREE_NODE - 1, TRUE));
		break;
	case SYNC_TREE_NODE_NEW:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP_PAGE));
		break;
	case SYNC_INDEX_TREE:
		ut_a(sync_thread_levels_g(array, SYNC_TREE_NODE - 1, TRUE));
		break;
	case SYNC_IBUF_TREE_NODE:
		ut_a(sync_thread_levels_contain(array, SYNC_IBUF_INDEX_TREE)
		     || sync_thread_levels_g(array, SYNC_IBUF_TREE_NODE - 1,
					     TRUE));
		break;
	case SYNC_IBUF_TREE_NODE_NEW:
		/* ibuf_add_free_page() allocates new pages for the
		change buffer while only holding the tablespace
		x-latch. These pre-allocated new pages may only be
		taken in use while holding ibuf_mutex, in
		btr_page_alloc_for_ibuf(). */
		ut_a(sync_thread_levels_contain(array, SYNC_IBUF_MUTEX)
		     || sync_thread_levels_contain(array, SYNC_FSP));
		break;
	case SYNC_IBUF_INDEX_TREE:
		if (sync_thread_levels_contain(array, SYNC_FSP)) {
			ut_a(sync_thread_levels_g(array, level - 1, TRUE));
		} else {
			ut_a(sync_thread_levels_g(
				     array, SYNC_IBUF_TREE_NODE - 1, TRUE));
		}
		break;
	case SYNC_IBUF_PESS_INSERT_MUTEX:
		ut_a(sync_thread_levels_g(array, SYNC_FSP - 1, TRUE));
		ut_a(!sync_thread_levels_contain(array, SYNC_IBUF_MUTEX));
		break;
	case SYNC_IBUF_HEADER:
		ut_a(sync_thread_levels_g(array, SYNC_FSP - 1, TRUE));
		ut_a(!sync_thread_levels_contain(array, SYNC_IBUF_MUTEX));
		ut_a(!sync_thread_levels_contain(array,
						 SYNC_IBUF_PESS_INSERT_MUTEX));
		break;
	case SYNC_DICT:
#ifdef UNIV_DEBUG
		ut_a(buf_debug_prints
		     || sync_thread_levels_g(array, SYNC_DICT, TRUE));
#else /* UNIV_DEBUG */
		ut_a(sync_thread_levels_g(array, SYNC_DICT, TRUE));
#endif /* UNIV_DEBUG */
		break;
	default:
		ut_error;
	}

levels_ok:
	for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

		slot = sync_thread_levels_get_nth(array, i);

		if (slot->latch == NULL) {
			slot->latch = latch;
			slot->level = level;

			break;
		}
	}

	ut_a(i < SYNC_THREAD_N_LEVELS);

	mutex_exit(&sync_thread_mutex);
}
/******************************************************************//**
Removes a latch from the thread level array if it is found there.
@return	TRUE if found in the array; it is no error if the latch is
not found, as we presently are not able to determine the level for
every latch reservation the program does */
UNIV_INTERN
ibool
sync_thread_reset_level(
/*====================*/
	void*	latch)	/*!< in: pointer to a mutex or an rw-lock */
{
	sync_level_t*	array;
	sync_level_t*	slot;
	sync_thread_t*	thread_slot;
	ulint		i;

	if (!sync_order_checks_on) {

		return(FALSE);
	}

	if ((latch == (void*)&sync_thread_mutex)
	    || (latch == (void*)&mutex_list_mutex)
	    || (latch == (void*)&rw_lock_debug_mutex)
	    || (latch == (void*)&rw_lock_list_mutex)) {

		return(FALSE);
	}

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {

		ut_error;

		mutex_exit(&sync_thread_mutex);
		return(FALSE);
	}

	array = thread_slot->levels;

	for (i = 0; i < SYNC_THREAD_N_LEVELS; i++) {

		slot = sync_thread_levels_get_nth(array, i);

		if (slot->latch == latch) {
			slot->latch = NULL;

			mutex_exit(&sync_thread_mutex);

			return(TRUE);
		}
	}

	if (((mutex_t*) latch)->magic_n != MUTEX_MAGIC_N) {
		rw_lock_t*	rw_lock;

		rw_lock = (rw_lock_t*) latch;

		if (rw_lock->level == SYNC_LEVEL_VARYING) {
			mutex_exit(&sync_thread_mutex);

			return(TRUE);
		}
	}

	ut_error;

	mutex_exit(&sync_thread_mutex);

	return(FALSE);
}
#endif /* UNIV_SYNC_DEBUG */
/******************************************************************//**
Initializes the synchronization data structures. */
UNIV_INTERN
void
sync_init(void)
/*===========*/
{
#ifdef UNIV_SYNC_DEBUG
	sync_thread_t*	thread_slot;
	ulint		i;
#endif /* UNIV_SYNC_DEBUG */

	ut_a(sync_initialized == FALSE);

	sync_initialized = TRUE;

	/* Create the primary system wait array which is protected by an OS
	mutex */

	sync_primary_wait_array = sync_array_create(OS_THREAD_MAX_N,
						    SYNC_ARRAY_OS_MUTEX);
#ifdef UNIV_SYNC_DEBUG
	/* Create the thread latch level array where the latch levels
	are stored for each OS thread */

	sync_thread_level_arrays = ut_malloc(OS_THREAD_MAX_N
					     * sizeof(sync_thread_t));
	for (i = 0; i < OS_THREAD_MAX_N; i++) {

		thread_slot = sync_thread_level_arrays_get_nth(i);
		thread_slot->levels = NULL;
	}
#endif /* UNIV_SYNC_DEBUG */
	/* Init the mutex list and create the mutex to protect it. */

	UT_LIST_INIT(mutex_list);
	mutex_create(&mutex_list_mutex, SYNC_NO_ORDER_CHECK);
#ifdef UNIV_SYNC_DEBUG
	mutex_create(&sync_thread_mutex, SYNC_NO_ORDER_CHECK);
#endif /* UNIV_SYNC_DEBUG */

	/* Init the rw-lock list and create the mutex to protect it. */

	UT_LIST_INIT(rw_lock_list);
	mutex_create(&rw_lock_list_mutex, SYNC_NO_ORDER_CHECK);

#ifdef UNIV_SYNC_DEBUG
	mutex_create(&rw_lock_debug_mutex, SYNC_NO_ORDER_CHECK);

	rw_lock_debug_event = os_event_create(NULL);
	rw_lock_debug_waiters = FALSE;
#endif /* UNIV_SYNC_DEBUG */
}
/******************************************************************//**
Frees the resources in InnoDB's own synchronization data structures. Use
os_sync_free() after calling this. */
UNIV_INTERN
void
sync_close(void)
/*===========*/
{
	mutex_t*	mutex;

	sync_array_free(sync_primary_wait_array);

	mutex = UT_LIST_GET_FIRST(mutex_list);

	while (mutex) {
#ifdef UNIV_MEM_DEBUG
		if (mutex == &mem_hash_mutex) {
			mutex = UT_LIST_GET_NEXT(list, mutex);
			continue;
		}
#endif /* UNIV_MEM_DEBUG */
		mutex_free(mutex);
		mutex = UT_LIST_GET_FIRST(mutex_list);
	}

	mutex_free(&mutex_list_mutex);
#ifdef UNIV_SYNC_DEBUG
	mutex_free(&sync_thread_mutex);

	/* Switch latching order checks off in sync0sync.c */
	sync_order_checks_on = FALSE;
#endif /* UNIV_SYNC_DEBUG */

	sync_initialized = FALSE;
}
/*******************************************************************//**
Prints wait info of the sync system. */
UNIV_INTERN
void
sync_print_wait_info(
/*=================*/
	FILE*	file)	/*!< in: file where to print */
{
#ifdef UNIV_SYNC_DEBUG
	fprintf(file, "Mutex exits %llu, rws exits %llu, rwx exits %llu\n",
		mutex_exit_count, rw_s_exit_count, rw_x_exit_count);
#endif

	fprintf(file,
		"Mutex spin waits %llu, rounds %llu, OS waits %llu\n"
		"RW-shared spins %llu, OS waits %llu;"
		" RW-excl spins %llu, OS waits %llu\n",
		mutex_spin_wait_count,
		mutex_spin_round_count,
		mutex_os_wait_count,
		rw_s_spin_wait_count,
		rw_s_os_wait_count,
		rw_x_spin_wait_count,
		rw_x_os_wait_count);

	fprintf(file,
		"Spin rounds per wait: %.2f mutex, %.2f RW-shared, "
		"%.2f RW-excl\n",
		(double) mutex_spin_round_count /
		(mutex_spin_wait_count ? mutex_spin_wait_count : 1),
		(double) rw_s_spin_round_count /
		(rw_s_spin_wait_count ? rw_s_spin_wait_count : 1),
		(double) rw_x_spin_round_count /
		(rw_x_spin_wait_count ? rw_x_spin_wait_count : 1));
}
/*******************************************************************//**
Prints info of the sync system. */
UNIV_INTERN
void
sync_print(
/*=======*/
	FILE*	file)	/*!< in: file where to print */
{
#ifdef UNIV_SYNC_DEBUG
	mutex_list_print_info(file);

	rw_lock_list_print_info(file);
#endif /* UNIV_SYNC_DEBUG */

	sync_array_print_info(file, sync_primary_wait_array);

	sync_print_wait_info(file);
}