1 /* Malloc implementation for multiple threads without lock contention.
2 Copyright (C) 1996, 1997 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
22 /* V2.6.4-pt3 Thu Feb 20 1997
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
27 ftp://g.oswego.edu/pub/misc/malloc.c
29 Most of the original comments are reproduced in the code below.
31 * Why use this malloc?
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
48 * Synopsis of public routines
50 (Much fuller descriptions are contained in the program documentation below.)
  ptmalloc_init();
     Initialize global configuration.  When compiled for multiple threads,
     this function must be called once before any other function in the
     package.  It is not required otherwise.  It is called automatically
     in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available.  The returned pointer may or may not be
     the same as p.  If p is null, equivalent to malloc.  Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system.  Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p.  This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below.  Returns
     1 if successful in changing the parameter, else 0.
* Vital statistics:

  Alignment: 8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.
106 Assumed pointer representation: 4 or 8 bytes
107 Code for 8-byte pointers is untested by me but has worked
108 reliably by Wolfram Gloger, who contributed most of the
109 changes supporting this.
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)
121 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
122 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
       Requests for sizes with a negative sign bit will return a
       minimum-sized chunk.
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
143 Alignment demands, plus the minimum allocatable size restriction
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
155 Here are some features that are NOT currently supported
157 * No automated mechanism for fully checking that all accesses
158 to malloced memory stay within their bounds.
159 * No support for compaction.
161 * Synopsis of compile-time options:
163 People have reported using previous versions of this malloc on all
164 versions of Unix, sometimes by tweaking some of the defines
165 below. It has been tested most extensively on Solaris and
166 Linux. People have also reported adapting this malloc for use in
167 stand-alone embedded systems.
169 The implementation is in straight, hand-tuned ANSI C. Among other
170 consequences, it uses a lot of macros. Because of this, to be at
171 all usable, this code should be compiled using an optimizing compiler
  (for example gcc -O2) that can simplify expressions and control paths.
175 __STD_C (default: derived from C compiler defines)
176 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
177 a C compiler sufficiently close to ANSI to get away with it.
178 MALLOC_DEBUG (default: NOT defined)
179 Define to enable debugging. Adds fairly extensive assertion-based
    checking to help track down memory errors, but noticeably slows down
    execution.
182 MALLOC_HOOKS (default: NOT defined)
    Define to enable support for run-time replacement of the allocation
184 functions through user-defined `hooks'.
185 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
186 Define this if you think that realloc(p, 0) should be equivalent
187 to free(p). Otherwise, since malloc returns a unique pointer for
188 malloc(0), so does realloc(p, 0).
189 HAVE_MEMCPY (default: defined)
190 Define if you are not otherwise using ANSI STD C, but still
191 have memcpy and memset in your C library and want to use them.
192 Otherwise, simple internal versions are supplied.
193 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
194 Define as 1 if you want the C library versions of memset and
195 memcpy called in realloc and calloc (otherwise macro versions are used).
196 At least on some platforms, the simple macro versions usually
197 outperform libc versions.
198 HAVE_MMAP (default: defined as 1)
199 Define to non-zero to optionally make malloc() use mmap() to
200 allocate very large blocks.
201 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
202 Define to non-zero to optionally make realloc() use mremap() to
203 reallocate very large blocks.
204 malloc_getpagesize (default: derived from system #includes)
205 Either a constant or routine call returning the system page size.
206 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
207 Optionally define if you are on a system with a /usr/include/malloc.h
208 that declares struct mallinfo. It is not at all necessary to
209 define this even if you do, but will ensure consistency.
210 INTERNAL_SIZE_T (default: size_t)
211 Define to a 32-bit type (probably `unsigned int') if you are on a
212 64-bit machine, yet do not want or need to allow malloc requests of
    greater than 2^31 to be handled. This saves space, especially for
    very large chunks.
215 _LIBC (default: NOT defined)
216 Defined only when compiled as part of the Linux libc/glibc.
217 Also note that there is some odd internal name-mangling via defines
218 (for example, internally, `malloc' is named `mALLOc') needed
    when compiling in this case. These look funny but don't otherwise matter.
221 LACKS_UNISTD_H (default: undefined)
222 Define this if your system does not have a <unistd.h>.
223 MORECORE (default: sbrk)
224 The name of the routine to call to obtain more memory from the system.
225 MORECORE_FAILURE (default: -1)
226 The value returned upon failure of MORECORE.
227 MORECORE_CLEARS (default 1)
    True (1) if the routine mapped to MORECORE zeroes out memory (which
    holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
234 Default values of tunable parameters (described in detail below)
235 controlling interaction with host system routines (sbrk, mmap, etc).
236 These values may also be changed dynamically via mallopt(). The
    preset defaults are those that give best performance for typical
    programs/systems.
  DEFAULT_CHECK_ACTION
    When the standard debugging hooks are in place, and a pointer is
    detected as corrupt, do nothing (0), print an error message (1),
    or call abort() (2).
249 * Compile-time options for multiple threads:
251 USE_PTHREADS, USE_THR, USE_SPROC
252 Define one of these as 1 to select the thread interface:
253 POSIX threads, Solaris threads or SGI sproc's, respectively.
254 If none of these is defined as non-zero, you get a `normal'
255 malloc implementation which is not thread-safe. Support for
256 multiple threads requires HAVE_MMAP=1. As an exception, when
257 compiling for GNU libc, i.e. when _LIBC is defined, then none of
258 the USE_... symbols have to be defined.
  HEAP_MIN_SIZE, HEAP_MAX_SIZE
    When thread support is enabled, additional `heaps' are created
263 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
264 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
265 of two for alignment reasons. HEAP_MAX_SIZE should be at least
266 twice as large as the mmap threshold.
  THREAD_STATS
    When this is defined as non-zero, some statistics on mutex locking
    are computed.
279 #if defined (__STDC__)
286 #endif /*__cplusplus*/
299 # include <stddef.h> /* for size_t */
300 # if defined _LIBC || defined MALLOC_HOOKS
301 # include <stdlib.h> /* for getenv(), abort() */
304 # include <sys/types.h>
307 /* Macros for handling mutexes and thread-specific data. This is
308 included early, because some thread-related header files (such as
309 pthread.h) should be included before any others. */
310 #include "thread-m.h"
316 #include <stdio.h> /* needed for malloc_stats */
327 Because freed chunks may be overwritten with link fields, this
328 malloc will often die when freed memory is overwritten by user
329 programs. This can be very effective (albeit in an annoying way)
330 in helping track down dangling pointers.
332 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
333 enabled that will catch more memory errors. You probably won't be
334 able to make much sense of the actual assertion errors, but they
335 should help you locate incorrectly overwritten memory. The
336 checking is fairly extensive, and will slow down execution
337 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
338 attempt to check every non-mmapped allocated and free chunk in the
339 course of computing the summaries. (By nature, mmapped regions
340 cannot be checked very much automatically.)
342 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
343 this code. The assertions in the check routines spell out in more
344 detail the assumptions and invariants underlying the algorithms.
351 #define assert(x) ((void)0)
356 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
357 of chunk sizes. On a 64-bit machine, you can reduce malloc
358 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
359 at the expense of not being able to handle requests greater than
360 2^31. This limitation is hardly ever a concern; you are encouraged
361 to set this. However, the default version is the same as size_t.
364 #ifndef INTERNAL_SIZE_T
365 #define INTERNAL_SIZE_T size_t
369 REALLOC_ZERO_BYTES_FREES should be set if a call to
370 realloc with zero bytes should be the same as a call to free.
371 Some people think it should. Otherwise, since this malloc
372 returns a unique pointer for malloc(0), so does realloc(p, 0).
376 /* #define REALLOC_ZERO_BYTES_FREES */
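/* Illustrative example (not part of the allocator): what the setting above
   changes for a caller.  Assumes only the behaviour documented here.

     void* p = malloc(10);
     void* q = realloc(p, 0);

   Without REALLOC_ZERO_BYTES_FREES, q is a valid pointer to a minimum-sized
   chunk that must eventually be passed to free().  With it defined,
   realloc(p, 0) behaves like free(p) and q is a null pointer. */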
380 HAVE_MEMCPY should be defined if you are not otherwise using
381 ANSI STD C, but still have memcpy and memset in your C library
382 and want to use them in calloc and realloc. Otherwise simple
383 macro versions are defined here.
385 USE_MEMCPY should be defined as 1 if you actually want to
386 have memset and memcpy called. People report that the macro
387 versions are often enough faster than libc versions on many
388 systems that it is better to use them.
392 #define HAVE_MEMCPY 1
402 #if (__STD_C || defined(HAVE_MEMCPY))
405 void* memset(void*, int, size_t);
406 void* memcpy(void*, const void*, size_t);
415 /* The following macros are only invoked with (2n+1)-multiples of
416 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
417 for fast inline execution when n is small. */
#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
    *mz++ = 0;                                                                \
    *mz++ = 0;                                                                \
    *mz   = 0;                                                                \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
    *mcdst++ = *mcsrc++;                                                      \
    *mcdst++ = *mcsrc++;                                                      \
    *mcdst   = *mcsrc  ;                                                      \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)
454 #else /* !USE_MEMCPY */
456 /* Use Duff's device for good zeroing/copying performance. */
458 #define MALLOC_ZERO(charp, nbytes) \
460 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
461 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
462 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
464 case 0: for(;;) { *mzp++ = 0; \
465 case 7: *mzp++ = 0; \
466 case 6: *mzp++ = 0; \
467 case 5: *mzp++ = 0; \
468 case 4: *mzp++ = 0; \
469 case 3: *mzp++ = 0; \
470 case 2: *mzp++ = 0; \
471 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
475 #define MALLOC_COPY(dest,src,nbytes) \
477 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
478 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
479 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
480 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
482 case 0: for(;;) { *mcdst++ = *mcsrc++; \
483 case 7: *mcdst++ = *mcsrc++; \
484 case 6: *mcdst++ = *mcsrc++; \
485 case 5: *mcdst++ = *mcsrc++; \
486 case 4: *mcdst++ = *mcsrc++; \
487 case 3: *mcdst++ = *mcsrc++; \
488 case 2: *mcdst++ = *mcsrc++; \
489 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
497 Define HAVE_MMAP to optionally make malloc() use mmap() to
498 allocate very large blocks. These will be returned to the
499 operating system immediately after a free().
507 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
508 large blocks. This is currently only possible on Linux with
509 kernel versions newer than 1.3.77.
513 #define HAVE_MREMAP defined(__linux__)
520 #include <sys/mman.h>
522 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
523 #define MAP_ANONYMOUS MAP_ANON
526 #endif /* HAVE_MMAP */
529 Access to system page size. To the extent possible, this malloc
530 manages memory from the system in page-size units.
532 The following mechanics for getpagesize were adapted from
533 bsd/gnu getpagesize.h
536 #ifndef LACKS_UNISTD_H
540 #ifndef malloc_getpagesize
541 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
542 # ifndef _SC_PAGE_SIZE
543 # define _SC_PAGE_SIZE _SC_PAGESIZE
546 # ifdef _SC_PAGE_SIZE
547 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
549 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
550 extern size_t getpagesize();
551 # define malloc_getpagesize getpagesize()
553 # include <sys/param.h>
554 # ifdef EXEC_PAGESIZE
555 # define malloc_getpagesize EXEC_PAGESIZE
559 # define malloc_getpagesize NBPG
561 # define malloc_getpagesize (NBPG * CLSIZE)
565 # define malloc_getpagesize NBPC
568 # define malloc_getpagesize PAGESIZE
570 # define malloc_getpagesize (4096) /* just guess */
583 This version of malloc supports the standard SVID/XPG mallinfo
584 routine that returns a struct containing the same kind of
585 information you can get from malloc_stats. It should work on
586 any SVID/XPG compliant system that has a /usr/include/malloc.h
587 defining struct mallinfo. (If you'd like to install such a thing
588 yourself, cut out the preliminary declarations as described above
589 and below and save them in a malloc.h file. But there's no
590 compelling reason to bother to do this.)
592 The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc.  Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.
598 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
599 /usr/include/malloc.h file that includes a declaration of struct
600 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.
606 /* #define HAVE_USR_INCLUDE_MALLOC_H */
608 #if HAVE_USR_INCLUDE_MALLOC_H
609 # include "/usr/include/malloc.h"
614 # include "ptmalloc.h"
620 #ifndef DEFAULT_TRIM_THRESHOLD
621 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
625 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
626 to keep before releasing via malloc_trim in free().
628 Automatic trimming is mainly useful in long-lived programs.
629 Because trimming via sbrk can be slow on some systems, and can
630 sometimes be wasteful (in cases where programs immediately
631 afterward allocate more large chunks) the value should be high
      enough so that your overall system performance would improve by
      releasing this much memory.
635 The trim threshold and the mmap control parameters (see below)
636 can be traded off with one another. Trimming and mmapping are
637 two different ways of releasing unused memory back to the
638 system. Between these two, it is often possible to keep
639 system-level demands of a long-lived program down to a bare
640 minimum. For example, in one test suite of sessions measuring
641 the XF86 X server on Linux, using a trim threshold of 128K and a
      mmap threshold of 192K led to near-minimal long term resource
      consumption.
645 If you are using this malloc in a long-lived program, it should
646 pay to experiment with these values. As a rough guide, you
647 might set to a value close to the average size of a process
648 (program) running on your system. Releasing this much memory
649 would allow such a process to run in memory. Generally, it's
650 worth it to tune for trimming rather than memory mapping when a
651 program undergoes phases where several large chunks are
652 allocated and released in ways that can reuse each other's
653 storage, perhaps mixed with phases where there are no such
654 chunks at all. And in well-behaved long-lived programs,
      controlling release of large blocks via trimming versus mapping
      is usually faster.
658 However, in most programs, these parameters serve mainly as
659 protection against the system-level effects of carrying around
660 massive amounts of unneeded memory. Since frequent calls to
661 sbrk, mmap, and munmap otherwise degrade performance, the default
      parameters are set to relatively high values that serve only as
      safeguards.
665 The default trim value is high enough to cause trimming only in
666 fairly extreme (by current memory consumption standards) cases.
667 It must be greater than page size to have any useful effect. To
668 disable trimming completely, you can set to (unsigned long)(-1);
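      As an example (illustrative only), a long-lived program could raise
      the threshold to 256k, or disable trimming altogether, at run time:

          mallopt(M_TRIM_THRESHOLD, 256*1024);
          mallopt(M_TRIM_THRESHOLD, -1);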
674 #ifndef DEFAULT_TOP_PAD
675 #define DEFAULT_TOP_PAD (0)
679 M_TOP_PAD is the amount of extra `padding' space to allocate or
680 retain whenever sbrk is called. It is used in two ways internally:
682 * When sbrk is called to extend the top of the arena to satisfy
      a new malloc request, this much padding is added to the sbrk
      request.
686 * When malloc_trim is called automatically from free(),
687 it is used as the `pad' argument.
689 In both cases, the actual amount of padding is rounded
690 so that the end of the arena is always a system page boundary.
692 The main reason for using padding is to avoid calling sbrk so
693 often. Having even a small pad greatly reduces the likelihood
694 that nearly every malloc request during program start-up (or
      after trimming) will invoke sbrk, which needlessly wastes time.
698 Automatic rounding-up to page-size units is normally sufficient
699 to avoid measurable overhead, so the default is 0. However, in
700 systems where sbrk is relatively slow, it can pay to increase
      this value, at the expense of carrying around more memory than
      is actually needed.
707 #ifndef DEFAULT_MMAP_THRESHOLD
708 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
713 M_MMAP_THRESHOLD is the request size threshold for using mmap()
714 to service a request. Requests of at least this size that cannot
715 be allocated using already-existing space will be serviced via mmap.
716 (If enough normal freed space already exists it is used instead.)
718 Using mmap segregates relatively large chunks of memory so that
719 they can be individually obtained and released from the host
720 system. A request serviced through mmap is never reused by any
721 other request (at least not directly; the system may just so
722 happen to remap successive requests to the same locations).
724 Segregating space in this way has the benefit that mmapped space
725 can ALWAYS be individually released back to the system, which
726 helps keep the system level memory demands of a long-lived
727 program low. Mapped memory can never become `locked' between
728 other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.
731 However, it has the disadvantages that:
733 1. The space cannot be reclaimed, consolidated, and then
734 used to service later requests, as happens with normal chunks.
       2. It can lead to more wastage because of mmap page alignment
          requirements.
737 3. It causes malloc performance to be more dependent on host
738 system memory management support routines which may vary in
739 implementation quality and may impose arbitrary
740 limitations. Generally, servicing a request via normal
741 malloc steps is faster than going through a system's mmap.
743 All together, these considerations should lead you to use mmap
744 only for relatively large requests.
751 #ifndef DEFAULT_MMAP_MAX
753 #define DEFAULT_MMAP_MAX (1024)
755 #define DEFAULT_MMAP_MAX (0)
760 M_MMAP_MAX is the maximum number of requests to simultaneously
761 service using mmap. This parameter exists because:
       1. Some systems have a limited number of internal tables for
          use by mmap.
       2. In most systems, overreliance on mmap can degrade overall
          performance.
767 3. If a program allocates many large regions, it is probably
768 better off using normal sbrk-based allocation routines that
769 can reclaim and reallocate normal heap memory. Using a
770 small value allows transition into this mode after the
771 first few allocations.
773 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
774 the default value is 0, and attempts to set it to non-zero values
775 in mallopt will fail.
780 #ifndef DEFAULT_CHECK_ACTION
781 #define DEFAULT_CHECK_ACTION 1
784 /* What to do if the standard debugging hooks are in place and a
785 corrupt pointer is detected: do nothing (0), print an error message
786 (1), or call abort() (2). */
790 #define HEAP_MIN_SIZE (32*1024)
791 #define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
793 /* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
794 that are dynamically created for multi-threaded programs. The
795 maximum size must be a power of two, for fast determination of
796 which heap belongs to a chunk. It should be much larger than
797 the mmap threshold, so that requests with a size just below that
798 threshold can be fulfilled without creating too many heaps.
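   As an illustration of why the power-of-two alignment matters: the heap
   containing any chunk from a dynamically created heap can then be found
   with a single mask operation, roughly

       heap_info *h = (heap_info *)((unsigned long)ptr & ~(HEAP_MAX_SIZE-1));

   (a sketch of the idea only; the lookup actually used by this malloc
   lives further down in the code and may differ in detail).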
804 #define THREAD_STATS 0
/* If THREAD_STATS is non-zero, some statistics on mutex locking are
   computed. */
813 Special defines for the Linux/GNU C library.
#if __STD_C
Void_t * __default_morecore (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
#else
Void_t * __default_morecore ();
Void_t *(*__morecore)() = __default_morecore;
#endif
832 #define MORECORE (*__morecore)
833 #define MORECORE_FAILURE 0
834 #define MORECORE_CLEARS 1
836 #define munmap __munmap
837 #define mremap __mremap
838 #define mprotect __mprotect
839 #undef malloc_getpagesize
840 #define malloc_getpagesize __getpagesize()
#if __STD_C
extern Void_t* sbrk(ptrdiff_t);
#else
extern Void_t* sbrk();
#endif
851 #define MORECORE sbrk
854 #ifndef MORECORE_FAILURE
855 #define MORECORE_FAILURE -1
858 #ifndef MORECORE_CLEARS
859 #define MORECORE_CLEARS 1
866 #define cALLOc __libc_calloc
867 #define fREe __libc_free
868 #define mALLOc __libc_malloc
869 #define mEMALIGn __libc_memalign
870 #define rEALLOc __libc_realloc
871 #define vALLOc __libc_valloc
872 #define pvALLOc __libc_pvalloc
873 #define mALLINFo __libc_mallinfo
874 #define mALLOPt __libc_mallopt
875 #define mALLOC_STATs __malloc_stats
876 #define mALLOC_USABLE_SIZe __malloc_usable_size
877 #define mALLOC_TRIm __malloc_trim
878 #define mALLOC_GET_STATe __malloc_get_state
879 #define mALLOC_SET_STATe __malloc_set_state
#define cALLOc calloc
#define fREe free
#define mALLOc malloc
886 #define mEMALIGn memalign
887 #define rEALLOc realloc
888 #define vALLOc valloc
889 #define pvALLOc pvalloc
890 #define mALLINFo mallinfo
891 #define mALLOPt mallopt
892 #define mALLOC_STATs malloc_stats
893 #define mALLOC_USABLE_SIZe malloc_usable_size
894 #define mALLOC_TRIm malloc_trim
895 #define mALLOC_GET_STATe malloc_get_state
896 #define mALLOC_SET_STATe malloc_set_state
/* Public routines */

#if __STD_C

void     ptmalloc_init(void);
Void_t*  mALLOc(size_t);
void     fREe(Void_t*);
Void_t*  rEALLOc(Void_t*, size_t);
Void_t*  mEMALIGn(size_t, size_t);
Void_t*  vALLOc(size_t);
Void_t*  pvALLOc(size_t);
Void_t*  cALLOc(size_t, size_t);
int      mALLOC_TRIm(size_t);
size_t   mALLOC_USABLE_SIZe(Void_t*);
void     mALLOC_STATs(void);
int      mALLOPt(int, int);
struct mallinfo mALLINFo(void);
Void_t*  mALLOC_GET_STATe(void);
int      mALLOC_SET_STATe(Void_t*);
#else

void     ptmalloc_init();
size_t   mALLOC_USABLE_SIZe();
struct mallinfo mALLINFo();
Void_t*  mALLOC_GET_STATe();
int      mALLOC_SET_STATe();

#endif
948 }; /* end of extern "C" */
951 #if !defined(NO_THREADS) && !HAVE_MMAP
952 "Can't have threads support without mmap"
struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size;  /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;       /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;    /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;
973 malloc_chunk details:
975 (The following includes lightly edited explanations by Colin Plumb.)
977 Chunks of memory are maintained using a `boundary tag' method as
978 described in e.g., Knuth or Standish. (See the paper by Paul
979 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
980 survey of such techniques.) Sizes of free chunks are stored both
981 in the front of each chunk and at the end. This makes
982 consolidating fragmented chunks into bigger chunks very fast. The
983 size fields also hold bits representing whether chunks are free or
986 An allocated chunk looks like this:
989 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
990 | Size of previous chunk, if allocated | |
991 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
992 | Size of chunk, in bytes |P|
993 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
994 | User data starts here... .
996 . (malloc_usable_space() bytes) .
998 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1000 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1003 Where "chunk" is the front of the chunk for the purpose of most of
1004 the malloc code, but "mem" is the pointer that is returned to the
1005 user. "Nextchunk" is the beginning of the next contiguous chunk.
1007 Chunks always begin on even word boundaries, so the mem portion
1008 (which is returned to the user) is also on an even word boundary, and
1009 thus double-word aligned.
1011 Free chunks are stored in circular doubly-linked lists, and look like this:
1013 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1014 | Size of previous chunk |
1015 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1016 `head:' | Size of chunk, in bytes |P|
1017 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1018 | Forward pointer to next chunk in list |
1019 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1020 | Back pointer to previous chunk in list |
1021 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1022 | Unused space (may be 0 bytes long) .
1025 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1026 `foot:' | Size of chunk, in bytes |
1027 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1029 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1030 chunk size (which is always a multiple of two words), is an in-use
1031 bit for the *previous* chunk. If that bit is *clear*, then the
1032 word before the current chunk size contains the previous chunk
1033 size, and can be used to find the front of the previous chunk.
1034 (The very first chunk allocated always has this bit set,
1035 preventing access to non-existent (or non-owned) memory.)
1037 Note that the `foot' of the current chunk is actually represented
1038 as the prev_size of the NEXT chunk. (This makes it easier to
1039 deal with alignments etc).
1041 The two exceptions to all this are
1043 1. The special chunk `top', which doesn't bother using the
1044 trailing size field since there is no
1045 next contiguous chunk that would have to index off it. (After
1046 initialization, `top' is forced to always exist. If it would
1047 become less than MINSIZE bytes long, it is replenished via
1050 2. Chunks allocated via mmap, which have the second-lowest-order
1051 bit (IS_MMAPPED) set in their size fields. Because they are
1052 never merged or traversed from any other chunk, they have no
1053 foot size or inuse information.
1055 Available chunks are kept in any of several places (all declared below):
1057 * `av': An array of chunks serving as bin headers for consolidated
1058 chunks. Each bin is doubly linked. The bins are approximately
1059 proportionally (log) spaced. There are a lot of these bins
1060 (128). This may look excessive, but works very well in
1061 practice. All procedures maintain the invariant that no
1062 consolidated chunk physically borders another one. Chunks in
1063 bins are kept in size order, with ties going to the
1064 approximately least recently used chunk.
1066 The chunks in each bin are maintained in decreasing sorted order by
1067 size. This is irrelevant for the small bins, which all contain
1068 the same-sized chunks, but facilitates best-fit allocation for
1069 larger chunks. (These lists are just sequential. Keeping them in
1070 order almost never requires enough traversal to warrant using
1071 fancier ordered data structures.) Chunks of the same size are
1072 linked with the most recently freed at the front, and allocations
1073 are taken from the back. This results in LRU or FIFO allocation
1074 order, which tends to give each chunk an equal opportunity to be
1075 consolidated with adjacent freed chunks, resulting in larger free
1076 chunks and less fragmentation.
1078 * `top': The top-most available chunk (i.e., the one bordering the
1079 end of available memory) is treated specially. It is never
1080 included in any bin, is used only if no other chunk is
1081 available, and is released back to the system if it is very
1082 large (see M_TRIM_THRESHOLD).
1084 * `last_remainder': A bin holding only the remainder of the
1085 most recently split (non-top) chunk. This bin is checked
1086 before other non-fitting chunks, so as to provide better
1087 locality for runs of sequentially allocated chunks.
1089 * Implicitly, through the host system's memory mapping tables.
1090 If supported, requests greater than a threshold are usually
1091 serviced via calls to mmap, and then later released via munmap.
1098 The bins are an array of pairs of pointers serving as the
1099 heads of (initially empty) doubly-linked lists of chunks, laid out
1100 in a way so that each pair can be treated as if it were in a
1101 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1102 and chunks are the same).
1104 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1105 8 bytes apart. Larger bins are approximately logarithmically
1106 spaced. (See the table below.)
    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left
1118 There is actually a little bit of slop in the numbers in bin_index
1119 for the sake of speed. This makes no difference elsewhere.
1121 The special chunks `top' and `last_remainder' get their own bins,
1122 (this is implemented via yet more trickery with the av array),
1123 although `top' is never properly linked to its bin since it is
1124 always handled specially.
1128 #define NAV 128 /* number of bins */
typedef struct malloc_chunk* mbinptr;
1132 /* An arena is a configuration of malloc_chunks together with an array
1133 of bins. With multiple threads, it must be locked via a mutex
1134 before changing its data structures. One or more `heaps' are
1135 associated with each arena, except for the main_arena, which is
1136 associated only with the `main heap', i.e. the conventional free
1137 store obtained with calls to MORECORE() (usually sbrk). The `av'
1138 array is never mentioned directly in the code, but instead used via
1139 bin access macros. */
typedef struct _arena {
  mbinptr av[2*NAV + 2];
  struct _arena *next;
  size_t size;
#if THREAD_STATS
  long stat_lock_direct, stat_lock_loop, stat_lock_wait;
#endif
  mutex_t mutex;
} arena;
1152 /* A heap is a single contiguous memory region holding (coalesceable)
1153 malloc_chunks. It is allocated with mmap() and always starts at an
1154 address aligned to HEAP_MAX_SIZE. Not used unless compiling for
1155 multiple threads. */
typedef struct _heap_info {
  arena *ar_ptr;            /* Arena for this heap. */
  struct _heap_info *prev;  /* Previous heap. */
  size_t size;              /* Current size in bytes. */
  size_t pad;               /* Make sure the following data is properly aligned. */
} heap_info;
1166 Static functions (forward declarations)
#if __STD_C

static void      chunk_free(arena *ar_ptr, mchunkptr p) internal_function;
static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size)
                                                         internal_function;
static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
                               INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb)
                                                         internal_function;
static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
                             size_t alignment) internal_function;
static int       main_trim(size_t pad) internal_function;
static int       heap_trim(heap_info *heap, size_t pad) internal_function;
#if defined _LIBC || defined MALLOC_HOOKS
static Void_t* malloc_check(size_t sz, const Void_t *caller);
static void    free_check(Void_t* mem, const Void_t *caller);
static Void_t* realloc_check(Void_t* oldmem, size_t bytes,
                             const Void_t *caller);
static Void_t* memalign_check(size_t alignment, size_t bytes,
                              const Void_t *caller);
static Void_t* malloc_starter(size_t sz, const Void_t *caller);
static void    free_starter(Void_t* mem, const Void_t *caller);
static Void_t* malloc_atfork(size_t sz, const Void_t *caller);
static void    free_atfork(Void_t* mem, const Void_t *caller);
#endif
#else

static void      chunk_free();
static mchunkptr chunk_alloc();
static mchunkptr chunk_realloc();
static mchunkptr chunk_align();
static int       main_trim();
static int       heap_trim();

#if defined _LIBC || defined MALLOC_HOOKS
static Void_t* malloc_check();
static void    free_check();
static Void_t* realloc_check();
static Void_t* memalign_check();
static Void_t* malloc_starter();
static void    free_starter();
static Void_t* malloc_atfork();
static void    free_atfork();
#endif

#endif /* __STD_C */
1219 /* On some platforms we can compile internal, not exported functions better.
1220 Let the environment provide a macro and define it to be empty if it
1221 is not available. */
1222 #ifndef internal_function
1223 # define internal_function
1228 /* sizes, alignments */
1230 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1231 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1232 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1233 #define MINSIZE (sizeof(struct malloc_chunk))
1235 /* conversion from malloc headers to user pointers, and back */
1237 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1238 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1240 /* pad request bytes into a usable size */
1242 #define request2size(req) \
1243 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1244 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1245 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
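/* A worked example of request2size() (illustrative, assuming 4-byte pointers
   and 4-byte INTERNAL_SIZE_T, so SIZE_SZ == 4, MALLOC_ALIGNMENT == 8 and
   MINSIZE == 16):

     request2size(0)  == 16    too small; padded up to MINSIZE
     request2size(20) == 24    (20 + 4 + 7) & ~7
     request2size(24) == 32    (24 + 4 + 7) & ~7

   i.e. SIZE_SZ bytes of size/status overhead are added and the result is
   rounded up to a multiple of MALLOC_ALIGNMENT. */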
1247 /* Check if m has acceptable alignment */
1249 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1255 Physical chunk operations
1259 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1261 #define PREV_INUSE 0x1
1263 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1265 #define IS_MMAPPED 0x2
1267 /* Bits to mask off when extracting size */
1269 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1272 /* Ptr to next physical malloc_chunk. */
1274 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1276 /* Ptr to previous physical malloc_chunk */
1278 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1281 /* Treat space at ptr + offset as a chunk */
1283 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1289 Dealing with use bits
1292 /* extract p's inuse bit */
#define inuse(p) \
 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1297 /* extract inuse bit of previous chunk */
1299 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1301 /* check for mmap()'ed chunk */
1303 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1305 /* set/clear chunk as in use without otherwise disturbing */
1307 #define set_inuse(p) \
1308 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1310 #define clear_inuse(p) \
1311 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1313 /* check/set/clear inuse bits in known places */
1315 #define inuse_bit_at_offset(p, s)\
1316 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1318 #define set_inuse_bit_at_offset(p, s)\
1319 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1321 #define clear_inuse_bit_at_offset(p, s)\
1322 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
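/* Illustrative use of the macros above (not allocator code), given some
   non-top chunk p inside a contiguous region.  Stepping forward is always
   possible via the chunk's own size field; stepping backward is only
   meaningful when the preceding chunk is free, because only then is its
   size replicated in p->prev_size:

     mchunkptr q = next_chunk(p);
     if (!prev_inuse(p)) {
       mchunkptr r = prev_chunk(p);   /* previous chunk is free */
     }
*/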
1328 Dealing with size fields
1331 /* Get size, ignoring use bits */
1333 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1335 /* Set size at head, without disturbing its use bit */
1337 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1339 /* Set size/use ignoring previous bits in header */
1341 #define set_head(p, s) ((p)->size = (s))
1343 /* Set size at footer (only when chunk is not in use) */
1345 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
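/* Illustrative sketch (not the actual free path): marking a chunk of size
   nb as free in terms of the macros above.  chunk_free() additionally
   consolidates neighbours and links the chunk into a bin.

     set_head_size(p, nb);              keep our own PREV_INUSE bit
     set_foot(p, nb);                   replicate the size at the boundary
     clear_inuse_bit_at_offset(p, nb);  tell the next chunk we are free

   Afterwards the foot (i.e. the next chunk's prev_size field) again allows
   prev_chunk(next_chunk(p)) to find p. */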
1353 #define bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ))
1354 #define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i))
1355 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1356 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
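/* Illustrative consequence of the layout trick above: a bin header overlays
   the av[] array so that its fd/bk fields coincide with av[2*i+2] and
   av[2*i+3].  This assumes fd sits 2*SIZE_SZ bytes into struct malloc_chunk,
   as in the configurations described near the top of this file:

     mbinptr b = bin_at(&main_arena, 10);
     assert((char*)&b->fd == (char*)&main_arena.av[2*10 + 2]);
     assert((char*)&b->bk == (char*)&main_arena.av[2*10 + 3]);
*/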
1359 The first 2 bins are never indexed. The corresponding av cells are instead
1360 used for bookkeeping. This is not to save space, but to simplify
1361 indexing, maintain locality, and avoid some initialization tests.
1364 #define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1365 #define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1366 #define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1369 Because top initially points to its own bin with initial
1370 zero size, thus forcing extension on the first malloc request,
1371 we avoid having any special code in malloc to check whether
1372 it even exists yet. But we still need to in malloc_extend_top.
1375 #define initial_top(a) ((mchunkptr)bin_at(a, 0))
1379 /* field-extraction macros */
1381 #define first(b) ((b)->fd)
1382 #define last(b) ((b)->bk)
1388 #define bin_index(sz) \
1389 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3):\
1390 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6):\
1391 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9):\
1392 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12):\
1393 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15):\
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
                                          126)
1397 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1398 identically sized chunks. This is exploited in malloc.
1401 #define MAX_SMALLBIN 63
1402 #define MAX_SMALLBIN_SIZE 512
1403 #define SMALLBIN_WIDTH 8
1405 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1408 Requests are `small' if both the corresponding and the next bin are small
1411 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
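/* Worked examples of the index macros (illustrative):

     bin_index(16)   ==  2        small:  16 >> 3
     bin_index(64)   ==  8        small:  64 >> 3
     bin_index(512)  == 64        56 + (512 >> 6)
     bin_index(4096) == 99        91 + (4096 >> 9)
     smallbin_index(504) == 63    the largest small bin

   so every size below 512 maps to its own small bin, spaced 8 bytes apart,
   while larger sizes fall into progressively wider bins. */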
1416 To help compensate for the large number of bins, a one-level index
1417 structure is used for bin-by-bin searching. `binblocks' is a
1418 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1419 have any (possibly) non-empty bins, so they can be skipped over
    all at once during traversals.  The bits are NOT always
1421 cleared as soon as all bins in a block are empty, but instead only
1422 when all are noticed to be empty during traversal in malloc.
1425 #define BINBLOCKWIDTH 4 /* bins per block */
1427 /* bin<->block macros */
1429 #define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1430 #define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1431 #define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
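/* Example (illustrative): bin index 10 belongs to binblock 10/BINBLOCKWIDTH
   == 2, so mark_binblock(a, 10) sets bit (1 << 2) in binblocks(a).  While
   that bit is clear, the search loop in malloc can skip bins 8..11 without
   inspecting them individually. */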
1436 /* Static bookkeeping data */
1438 /* Helper macro to initialize bins */
1439 #define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i)
static arena main_arena = {
1444 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1445 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1446 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1447 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1448 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1449 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1450 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1451 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1452 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1453 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1454 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1455 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1456 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1457 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1458 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1459 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
    &main_arena, /* next */
1464 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
    MUTEX_INITIALIZER /* mutex */
1471 /* Thread specific data */
static tsd_key_t arena_key;
static mutex_t   list_lock = MUTEX_INITIALIZER;
static int stat_n_heaps = 0;
1480 #define THREAD_STAT(x) x
1482 #define THREAD_STAT(x) do ; while(0)
1485 /* variables holding tunable values */
static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
static unsigned long top_pad          = DEFAULT_TOP_PAD;
static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
static int           check_action     = DEFAULT_CHECK_ACTION;
1493 /* The first value returned from sbrk */
static char* sbrk_base = (char*)(-1);
1496 /* The maximum memory obtained from system via sbrk */
static unsigned long max_sbrked_mem = 0;
1499 /* The maximum via either sbrk or mmap (too difficult to track with threads) */
static unsigned long max_total_mem = 0;
1504 /* The total memory obtained from system via sbrk */
1505 #define sbrked_mem (main_arena.size)
1507 /* Tracking mmaps */
static unsigned int  n_mmaps         = 0;
static unsigned int  max_n_mmaps     = 0;
static unsigned long mmapped_mem     = 0;
static unsigned long max_mmapped_mem = 0;
1517 #define weak_variable
1519 /* In GNU libc we want the hook variables to be weak definitions to
1520 avoid a problem with Emacs. */
1521 #define weak_variable weak_function
1524 /* Already initialized? */
int __malloc_initialized = 0;
1528 /* The following two functions are registered via thread_atfork() to
1529 make sure that the mutexes remain in a consistent state in the
1530 fork()ed version of a thread. Also adapt the malloc and free hooks
1531 temporarily, because the `atfork' handler mechanism may use
1532 malloc/free internally (e.g. in LinuxThreads). */
1534 #if defined _LIBC || defined MALLOC_HOOKS
static __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size,
                                                       const __malloc_ptr_t));
static void           (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
                                                     const __malloc_ptr_t));
static Void_t*        save_arena;
static void
ptmalloc_lock_all __MALLOC_P((void))
{
  arena *ar_ptr;

  (void)mutex_lock(&list_lock);
  for(ar_ptr = &main_arena;;) {
    (void)mutex_lock(&ar_ptr->mutex);
    ar_ptr = ar_ptr->next;
    if(ar_ptr == &main_arena) break;
  }
#if defined _LIBC || defined MALLOC_HOOKS
  save_malloc_hook = __malloc_hook;
  save_free_hook = __free_hook;
  __malloc_hook = malloc_atfork;
  __free_hook = free_atfork;
  /* Only the current thread may perform malloc/free calls now. */
  tsd_getspecific(arena_key, save_arena);
  tsd_setspecific(arena_key, (Void_t*)0);
#endif
}
static void
ptmalloc_unlock_all __MALLOC_P((void))
{
  arena *ar_ptr;

#if defined _LIBC || defined MALLOC_HOOKS
  tsd_setspecific(arena_key, save_arena);
  __malloc_hook = save_malloc_hook;
  __free_hook = save_free_hook;
#endif
  for(ar_ptr = &main_arena;;) {
    (void)mutex_unlock(&ar_ptr->mutex);
    ar_ptr = ar_ptr->next;
    if(ar_ptr == &main_arena) break;
  }
  (void)mutex_unlock(&list_lock);
}
1582 /* Initialization routine. */
static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));

static void
ptmalloc_init __MALLOC_P((void))
{
#if defined _LIBC || defined MALLOC_HOOKS
  const char* s;
#endif

  if(__malloc_initialized) return;
  __malloc_initialized = 1;
#if defined _LIBC || defined MALLOC_HOOKS
  /* With some threads implementations, creating thread-specific data
     or initializing a mutex may call malloc() itself.  Provide a
     simple starter version (realloc() won't work). */
  save_malloc_hook = __malloc_hook;
  save_free_hook = __free_hook;
  __malloc_hook = malloc_starter;
  __free_hook = free_starter;
#endif
#if defined _LIBC && !defined NO_THREADS
  /* Initialize the pthreads interface. */
  if (__pthread_initialize != NULL)
    __pthread_initialize();
#endif
  mutex_init(&main_arena.mutex);
  mutex_init(&list_lock);
  tsd_key_create(&arena_key, NULL);
  tsd_setspecific(arena_key, (Void_t *)&main_arena);
  thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_unlock_all);
#if defined _LIBC || defined MALLOC_HOOKS
  if((s = getenv("MALLOC_TRIM_THRESHOLD_")))
    mALLOPt(M_TRIM_THRESHOLD, atoi(s));
  if((s = getenv("MALLOC_TOP_PAD_")))
    mALLOPt(M_TOP_PAD, atoi(s));
  if((s = getenv("MALLOC_MMAP_THRESHOLD_")))
    mALLOPt(M_MMAP_THRESHOLD, atoi(s));
  if((s = getenv("MALLOC_MMAP_MAX_")))
    mALLOPt(M_MMAP_MAX, atoi(s));
  s = getenv("MALLOC_CHECK_");
  __malloc_hook = save_malloc_hook;
  __free_hook = save_free_hook;
  if(s) {
    if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
    __malloc_check_init();
  }
  if(__malloc_initialize_hook != NULL)
    (*__malloc_initialize_hook)();
#endif
}
1643 /* There are platforms (e.g. Hurd) with a link-time hook mechanism. */
1644 #ifdef thread_atfork_static
thread_atfork_static(ptmalloc_lock_all, ptmalloc_unlock_all, \
                     ptmalloc_unlock_all)
1649 #if defined _LIBC || defined MALLOC_HOOKS
1651 /* Hooks for debugging versions. The initial hooks just call the
1652 initialization routine, then do the normal work. */
1656 malloc_hook_ini(size_t sz
, const __malloc_ptr_t caller
)
1659 malloc_hook_ini(size_t sz
)
1661 malloc_hook_ini(sz
) size_t sz
;
1665 __malloc_hook
= NULL
;
1666 __realloc_hook
= NULL
;
1667 __memalign_hook
= NULL
;
1674 realloc_hook_ini(Void_t
* ptr
, size_t sz
, const __malloc_ptr_t caller
)
1676 realloc_hook_ini(ptr
, sz
, caller
)
1677 Void_t
* ptr
; size_t sz
; const __malloc_ptr_t caller
;
1680 __malloc_hook
= NULL
;
1681 __realloc_hook
= NULL
;
1682 __memalign_hook
= NULL
;
1684 return rEALLOc(ptr
, sz
);
1689 memalign_hook_ini(size_t sz
, size_t alignment
, const __malloc_ptr_t caller
)
1691 memalign_hook_ini(sz
, alignment
, caller
)
1692 size_t sz
; size_t alignment
; const __malloc_ptr_t caller
;
1695 __malloc_hook
= NULL
;
1696 __realloc_hook
= NULL
;
1697 __memalign_hook
= NULL
;
1699 return mEMALIGn(sz
, alignment
);
void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
                                               const __malloc_ptr_t)) = NULL;
__malloc_ptr_t weak_variable (*__malloc_hook)
 __MALLOC_P ((size_t __size, const __malloc_ptr_t)) = malloc_hook_ini;
__malloc_ptr_t weak_variable (*__realloc_hook)
 __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size, const __malloc_ptr_t))
 = realloc_hook_ini;
__malloc_ptr_t weak_variable (*__memalign_hook)
 __MALLOC_P ((size_t __size, size_t __alignment, const __malloc_ptr_t))
 = memalign_hook_ini;
void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
1715 /* Activate a standard set of debugging hooks. */
void
__malloc_check_init()
{
  __malloc_hook = malloc_check;
  __free_hook = free_check;
  __realloc_hook = realloc_check;
  __memalign_hook = memalign_check;
  if(check_action == 1)
    fprintf(stderr, "malloc: using debugging hooks\n");
}
1733 /* Routines dealing with mmap(). */
1737 #ifndef MAP_ANONYMOUS
#ifndef MAP_ANONYMOUS

static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1741 #define MMAP(size, prot) ((dev_zero_fd < 0) ? \
1742 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1743 mmap(0, (size), (prot), MAP_PRIVATE, dev_zero_fd, 0)) : \
1744 mmap(0, (size), (prot), MAP_PRIVATE, dev_zero_fd, 0))
1748 #define MMAP(size, prot) \
1749 (mmap(0, (size), (prot), MAP_PRIVATE|MAP_ANONYMOUS, -1, 0))
1754 #if defined __GNUC__ && __GNUC__ >= 2
1755 /* This function is only called from one place, inline it. */
mmap_chunk(size_t size)
mmap_chunk(size) size_t size;
  size_t page_mask = malloc_getpagesize - 1;

  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */

  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
   * there is no following chunk whose prev_size field could be used.
   */
  size = (size + SIZE_SZ + page_mask) & ~page_mask;

  p = (mchunkptr)MMAP(size, PROT_READ|PROT_WRITE);
  if(p == (mchunkptr) MAP_FAILED) return 0;

  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;

  /* We demand that eight bytes into a page must be 8-byte aligned. */
  assert(aligned_OK(chunk2mem(p)));

  /* The offset to the start of the mmapped region is stored
   * in the prev_size field of the chunk; normally it is zero,
   * but that can be changed in memalign().
   */
  set_head(p, size|IS_MMAPPED);

  mmapped_mem += size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
static void munmap_chunk(mchunkptr p)
static void munmap_chunk(p) mchunkptr p;
  INTERNAL_SIZE_T size = chunksize(p);

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);

  mmapped_mem -= (size + p->prev_size);

  ret = munmap((char *)p - p->prev_size, size + p->prev_size);

  /* munmap returns non-zero on failure */
static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
  size_t page_mask = malloc_getpagesize - 1;
  INTERNAL_SIZE_T offset = p->prev_size;
  INTERNAL_SIZE_T size = chunksize(p);

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((size + offset) & (malloc_getpagesize-1)) == 0);

  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;

  cp = (char *)mremap((char *)p - offset, size + offset, new_size,
  if (cp == (char *)-1) return 0;

  p = (mchunkptr)(cp + offset);

  assert(aligned_OK(chunk2mem(p)));

  assert((p->prev_size == offset));
  set_head(p, (new_size - offset)|IS_MMAPPED);

  mmapped_mem -= size + offset;
  mmapped_mem += new_size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
1868 #endif /* HAVE_MREMAP */
1870 #endif /* HAVE_MMAP */
/* Managing heaps and arenas (for concurrent threads) */

#ifndef NO_THREADS

/* Create a new heap.  size is automatically rounded up to a multiple
   of the page size. */

static heap_info *
#if __STD_C
new_heap(size_t size)
#else
new_heap(size) size_t size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  char *p1, *p2;
  unsigned long ul;
  heap_info *h;

  if(size+top_pad < HEAP_MIN_SIZE)
    size = HEAP_MIN_SIZE;
  else if(size+top_pad <= HEAP_MAX_SIZE)
    size += top_pad;
  else if(size > HEAP_MAX_SIZE)
    return 0;
  else
    size = HEAP_MAX_SIZE;
  size = (size + page_mask) & ~page_mask;

  p1 = (char *)MMAP(HEAP_MAX_SIZE<<1, PROT_NONE);
  if(p1 == MAP_FAILED)
    return 0;
  p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
  ul = p2 - p1;
  munmap(p1, ul);
  munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
  if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
    munmap(p2, HEAP_MAX_SIZE);
    return 0;
  }
  h = (heap_info *)p2;
  h->size = size;
  THREAD_STAT(stat_n_heaps++);
  return h;
}
/* Grow or shrink a heap.  size is automatically rounded up to a
   multiple of the page size if it is positive. */

static int
#if __STD_C
grow_heap(heap_info *h, long diff)
#else
grow_heap(h, diff) heap_info *h; long diff;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  long new_size;

  if(diff >= 0) {
    diff = (diff + page_mask) & ~page_mask;
    new_size = (long)h->size + diff;
    if(new_size > HEAP_MAX_SIZE)
      return -1;
    if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
      return -2;
  } else {
    new_size = (long)h->size + diff;
    if(new_size < (long)sizeof(*h))
      return -1;
    if(mprotect((char *)h + new_size, -diff, PROT_NONE) != 0)
      return -2;
  }
  h->size = new_size;
  return 0;
}
/* Delete a heap. */

#define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
/* arena_get() acquires an arena and locks the corresponding mutex.
   First, try the one last locked successfully by this thread.  (This
   is the common case and handled with a macro for speed.)  Then, loop
   once over the circularly linked list of arenas.  If no arena is
   readily available, create a new one. */

#define arena_get(ptr, size) do { \
  Void_t *vptr = NULL; \
  ptr = (arena *)tsd_getspecific(arena_key, vptr); \
  if(ptr && !mutex_trylock(&ptr->mutex)) { \
    THREAD_STAT(++(ptr->stat_lock_direct)); \
  } else \
    ptr = arena_get2(ptr, (size)); \
} while(0)
static arena *
#if __STD_C
arena_get2(arena *a_tsd, size_t size)
#else
arena_get2(a_tsd, size) arena *a_tsd; size_t size;
#endif
{
  arena *a;
  heap_info *h;
  char *ptr;
  int i;
  unsigned long misalign;

  if(!a_tsd)
    a = a_tsd = &main_arena;
  else {
    a = a_tsd->next;
    if(!a) {
      /* This can only happen while initializing the new arena. */
      (void)mutex_lock(&main_arena.mutex);
      THREAD_STAT(++(main_arena.stat_lock_wait));
      return &main_arena;
    }
  }

  /* Check the global, circularly linked list for available arenas. */
  do {
    if(!mutex_trylock(&a->mutex)) {
      THREAD_STAT(++(a->stat_lock_loop));
      tsd_setspecific(arena_key, (Void_t *)a);
      return a;
    }
    a = a->next;
  } while(a != a_tsd);

  /* Nothing immediately available, so generate a new arena. */
  h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
  if(!h)
    return 0;
  a = h->ar_ptr = (arena *)(h+1);
  for(i=0; i<NAV; i++)
    init_bin(a, i);
  a->next = NULL;
  a->size = h->size;
  tsd_setspecific(arena_key, (Void_t *)a);
  mutex_init(&a->mutex);
  i = mutex_lock(&a->mutex); /* remember result */

  /* Set up the top chunk, with proper alignment. */
  ptr = (char *)(a + 1);
  misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
  if (misalign > 0)
    ptr += MALLOC_ALIGNMENT - misalign;
  top(a) = (mchunkptr)ptr;
  set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);

  /* Add the new arena to the list. */
  (void)mutex_lock(&list_lock);
  a->next = main_arena.next;
  main_arena.next = a;
  (void)mutex_unlock(&list_lock);

  if(i) /* locking failed; keep arena for further attempts later */
    return 0;

  THREAD_STAT(++(a->stat_lock_loop));
  return a;
}

/* find the heap and corresponding arena for a given ptr */

#define heap_for_ptr(ptr) \
 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
#define arena_for_ptr(ptr) \
 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
  &main_arena : heap_for_ptr(ptr)->ar_ptr)
#else /* defined(NO_THREADS) */

/* Without concurrent threads, there is only one arena. */

#define arena_get(ptr, sz) (ptr = &main_arena)
#define arena_for_ptr(ptr) (&main_arena)

#endif /* !defined(NO_THREADS) */
#if MALLOC_DEBUG

/*
  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
*/
#if __STD_C
static void do_check_chunk(arena *ar_ptr, mchunkptr p)
#else
static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;

  /* No checkable chunk is mmapped */
  assert(!chunk_is_mmapped(p));

#ifndef NO_THREADS
  if(ar_ptr != &main_arena) {
    heap_info *heap = heap_for_ptr(p);
    assert(heap->ar_ptr == ar_ptr);
    assert((char *)p + sz <= (char *)heap + heap->size);
    return;
  }
#endif

  /* Check for legal address ... */
  assert((char*)p >= sbrk_base);
  if (p != top(ar_ptr))
    assert((char*)p + sz <= (char*)top(ar_ptr));
  else
    assert((char*)p + sz <= sbrk_base + sbrked_mem);
}
#if __STD_C
static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
#else
static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  mchunkptr next = chunk_at_offset(p, sz);

  do_check_chunk(ar_ptr, p);

  /* Check whether it claims to be free ... */
  assert(!inuse(p));

  /* Must have OK size and fields */
  assert((long)sz >= (long)MINSIZE);
  assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert(aligned_OK(chunk2mem(p)));
  /* ... matching footer field */
  assert(next->prev_size == sz);
  /* ... and is fully consolidated */
  assert(prev_inuse(p));
  assert (next == top(ar_ptr) || inuse(next));

  /* ... and has minimally sane links */
  assert(p->fd->bk == p);
  assert(p->bk->fd == p);
}
#if __STD_C
static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
#else
static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
#endif
{
  mchunkptr next = next_chunk(p);
  do_check_chunk(ar_ptr, p);

  /* Check whether it claims to be in use ... */
  assert(inuse(p));

  /* ... whether its size is OK (it might be a fencepost) ... */
  assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));

  /* ... and is surrounded by OK chunks.
    Since more things can be checked with free chunks than inuse ones,
    if an inuse chunk borders them and debug is on, it's worth doing them.
  */
  if (!prev_inuse(p))
  {
    mchunkptr prv = prev_chunk(p);
    assert(next_chunk(prv) == p);
    do_check_free_chunk(ar_ptr, prv);
  }
  if (next == top(ar_ptr))
  {
    assert(prev_inuse(next));
    assert(chunksize(next) >= MINSIZE);
  }
  else if (!inuse(next))
    do_check_free_chunk(ar_ptr, next);
}
#if __STD_C
static void do_check_malloced_chunk(arena *ar_ptr,
                                    mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_malloced_chunk(ar_ptr, p, s)
arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  long room = sz - s;

  do_check_inuse_chunk(ar_ptr, p);

  /* Legal size ... */
  assert((long)sz >= (long)MINSIZE);
  assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert(room >= 0);
  assert(room < (long)MINSIZE);

  /* ... and alignment */
  assert(aligned_OK(chunk2mem(p)));

  /* ... and was allocated at front of an available chunk */
  assert(prev_inuse(p));
}
#define check_free_chunk(A,P) do_check_free_chunk(A,P)
#define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
#define check_chunk(A,P) do_check_chunk(A,P)
#define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
#else
#define check_free_chunk(A,P)
#define check_inuse_chunk(A,P)
#define check_chunk(A,P)
#define check_malloced_chunk(A,P,N)
#endif
/*
  Macro-based internal utilities
*/

/*
  Linking chunks in bin lists.
  Call these only with variables, not arbitrary expressions, as arguments.
*/

/*
  Place chunk p of size s in its bin, in size order,
  putting it ahead of others of same size.
*/

#define frontlink(A, P, S, IDX, BK, FD) \
{ \
  if (S < MAX_SMALLBIN_SIZE) \
  { \
    IDX = smallbin_index(S); \
    mark_binblock(A, IDX); \
    BK = bin_at(A, IDX); \
    FD = BK->fd; \
    P->bk = BK;  P->fd = FD; \
    FD->bk = BK->fd = P; \
  } \
  else \
  { \
    IDX = bin_index(S); \
    BK = bin_at(A, IDX); \
    FD = BK->fd; \
    if (FD == BK) mark_binblock(A, IDX); \
    else \
    { \
      while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
      BK = FD->bk; \
    } \
    P->bk = BK;  P->fd = FD; \
    FD->bk = BK->fd = P; \
  } \
}

/* take a chunk off a list */

#define unlink(P, BK, FD) \
{ \
  BK = P->bk; FD = P->fd; \
  FD->bk = BK; BK->fd = FD; \
}

/* Place p as the last remainder */

#define link_last_remainder(A, P) \
{ \
  last_remainder(A)->fd = last_remainder(A)->bk = P; \
  P->fd = P->bk = last_remainder(A); \
}

/* Clear the last_remainder bin */

#define clear_last_remainder(A) \
  (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
/*
  Extend the top-most chunk by obtaining memory from system.
  Main interface to sbrk (but see also malloc_trim).
*/

#if defined __GNUC__ && __GNUC__ >= 2
/* This function is called only from one place, inline it. */
__inline__
#endif
static void
#if __STD_C
malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
#else
malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
#endif
{
  unsigned long pagesz = malloc_getpagesize;
  mchunkptr old_top = top(ar_ptr);            /* Record state of old top */
  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
  INTERNAL_SIZE_T top_size;                   /* new size of top chunk */

#ifndef NO_THREADS
  if(ar_ptr == &main_arena) {
#endif

    char*     brk;                  /* return value from sbrk */
    INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
    INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
    char*     new_brk;              /* return of 2nd sbrk call */
    char*     old_end = (char*)(chunk_at_offset(old_top, old_top_size));

    /* Pad request with top_pad plus minimal overhead */
    INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;

    /* If not the first time through, round to preserve page boundary */
    /* Otherwise, we need to correct to a page size below anyway. */
    /* (We also correct below if an intervening foreign sbrk call.) */

    if (sbrk_base != (char*)(-1))
      sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);

    brk = (char*)(MORECORE (sbrk_size));

    /* Fail if sbrk failed or if a foreign sbrk call killed our space */
    if (brk == (char*)(MORECORE_FAILURE) ||
        (brk < old_end && old_top != initial_top(&main_arena)))
      return;

#if defined _LIBC || defined MALLOC_HOOKS
    /* Call the `morecore' hook if necessary. */
    if (__after_morecore_hook)
      (*__after_morecore_hook) ();
#endif

    sbrked_mem += sbrk_size;

    if (brk == old_end) { /* can just add bytes to current top */
      top_size = sbrk_size + old_top_size;
      set_head(old_top, top_size | PREV_INUSE);
      old_top = 0; /* don't free below */
    } else {
      if (sbrk_base == (char*)(-1)) /* First time through. Record base */
        sbrk_base = brk;
      else
        /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
        sbrked_mem += brk - (char*)old_end;

      /* Guarantee alignment of first new chunk made from this space */
      front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
      if (front_misalign > 0) {
        correction = (MALLOC_ALIGNMENT) - front_misalign;
        brk += correction;
      } else
        correction = 0;

      /* Guarantee the next brk will be at a page boundary */
      correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));

      /* Allocate correction */
      new_brk = (char*)(MORECORE (correction));
      if (new_brk == (char*)(MORECORE_FAILURE)) return;

#if defined _LIBC || defined MALLOC_HOOKS
      /* Call the `morecore' hook if necessary. */
      if (__after_morecore_hook)
        (*__after_morecore_hook) ();
#endif

      sbrked_mem += correction;

      top(&main_arena) = (mchunkptr)brk;
      top_size = new_brk - brk + correction;
      set_head(top(&main_arena), top_size | PREV_INUSE);

      if (old_top == initial_top(&main_arena))
        old_top = 0; /* don't free below */
    }

    if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
      max_sbrked_mem = sbrked_mem;
    if ((unsigned long)(mmapped_mem + sbrked_mem) >
        (unsigned long)max_total_mem)
      max_total_mem = mmapped_mem + sbrked_mem;

#ifndef NO_THREADS
  } else { /* ar_ptr != &main_arena */
    heap_info *old_heap, *heap;
    size_t old_heap_size;

    if(old_top_size < MINSIZE) /* this should never happen */
      return;

    /* First try to extend the current heap. */
    if(MINSIZE + nb <= old_top_size)
      return;
    old_heap = heap_for_ptr(old_top);
    old_heap_size = old_heap->size;
    if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
      ar_ptr->size += old_heap->size - old_heap_size;
      top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
      set_head(old_top, top_size | PREV_INUSE);
      return;
    }

    /* A new heap must be created. */
    heap = new_heap(nb + (MINSIZE + sizeof(*heap)));
    if(!heap)
      return;
    heap->ar_ptr = ar_ptr;
    heap->prev = old_heap;
    ar_ptr->size += heap->size;

    /* Set up the new top, so we can safely use chunk_free() below. */
    top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
    top_size = heap->size - sizeof(*heap);
    set_head(top(ar_ptr), top_size | PREV_INUSE);
  }
#endif /* !defined(NO_THREADS) */

  /* We always land on a page boundary */
  assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);

  /* Setup fencepost and free the old top chunk. */
  if(old_top) {
    /* The fencepost takes at least MINSIZE bytes, because it might
       become the top chunk again later.  Note that a footer is set
       up, too, although the chunk is marked in use. */
    old_top_size -= MINSIZE;
    set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
    if(old_top_size >= MINSIZE) {
      set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
      set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
      set_head_size(old_top, old_top_size);
      chunk_free(ar_ptr, old_top);
    } else {
      set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
      set_foot(old_top, (old_top_size + 2*SIZE_SZ));
    }
  }
}
/* Main public routines */

/*
  Malloc Algorithm:

    The requested size is first converted into a usable form, `nb'.
    This currently means to add 4 bytes overhead plus possibly more to
    obtain 8-byte alignment and/or to obtain a size of at least
    MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
    size. (All fits are considered `exact' if they are within MINSIZE
    bytes.)

    From there, the first successful of the following steps is taken:

      1. The bin corresponding to the request size is scanned, and if
         a chunk of exactly the right size is found, it is taken.

      2. The most recently remaindered chunk is used if it is big
         enough. This is a form of (roving) first fit, used only in
         the absence of exact fits. Runs of consecutive requests use
         the remainder of the chunk used for the previous such request
         whenever possible. This limited use of a first-fit style
         allocation strategy tends to give contiguous chunks
         coextensive lifetimes, which improves locality and can reduce
         fragmentation in the long run.

      3. Other bins are scanned in increasing size order, using a
         chunk big enough to fulfill the request, and splitting off
         any remainder. This search is strictly by best-fit; i.e.,
         the smallest (with ties going to approximately the least
         recently used) chunk that fits is selected.

      4. If large enough, the chunk bordering the end of memory
         (`top') is split off. (This use of `top' is in accord with
         the best-fit search rule. In effect, `top' is treated as
         larger (and thus less well fitting) than any other available
         chunk since it can be extended to be as large as necessary
         (up to system limitations).

      5. If the request size meets the mmap threshold and the
         system supports mmap, and there are few enough currently
         allocated mmapped regions, and a call to mmap succeeds,
         the request is allocated via direct memory mapping.

      6. Otherwise, the top of memory is extended by
         obtaining more space from the system (normally using sbrk,
         but definable to anything else via the MORECORE macro).
         Memory is gathered from the system (in system page-sized
         units) in a way that allows chunks obtained across different
         sbrk calls to be consolidated, but does not require
         contiguous memory. Thus, it should be safe to intersperse
         mallocs with other sbrk calls.

    All allocations are made from the `lowest' part of any found
    chunk. (The implementation invariant is that prev_inuse is
    always true of any allocated chunk; i.e., that each allocated
    chunk borders either a previously allocated and still in-use chunk,
    or the base of its memory arena.)
*/
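/* Illustrative only (not part of the original source): a minimal sketch of
   what the algorithm above guarantees to a caller -- allocations may fail,
   the returned pointer is 8-byte aligned, and the usable size is at least
   the requested size.  Guarded by #if 0 so it never affects the build. */
#if 0
#include <stdlib.h>
#include <assert.h>

static void malloc_usage_sketch(void)
{
  char *p = (char *) malloc(100);       /* served by one of steps 1-6 above */
  if (p == 0)                           /* malloc returns 0 on failure */
    return;
  assert(((unsigned long) p & 7) == 0); /* 8-byte alignment guarantee */
  p[99] = 'x';                          /* at least 100 usable bytes */
  free(p);                              /* give the chunk back */
}
#endif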
#if __STD_C
Void_t* mALLOc(size_t bytes)
#else
Void_t* mALLOc(bytes) size_t bytes;
#endif
{
  arena *ar_ptr;
  INTERNAL_SIZE_T nb;  /* padded request size */
  mchunkptr victim;

#if defined _LIBC || defined MALLOC_HOOKS
  if (__malloc_hook != NULL) {
    Void_t* result;

#if defined __GNUC__ && __GNUC__ >= 2
    result = (*__malloc_hook)(bytes, __builtin_return_address (0));
#else
    result = (*__malloc_hook)(bytes, NULL);
#endif
    return result;
  }
#endif

  nb = request2size(bytes);
  arena_get(ar_ptr, nb);
  if(!ar_ptr)
    return 0;
  victim = chunk_alloc(ar_ptr, nb);
  (void)mutex_unlock(&ar_ptr->mutex);
  if(!victim) {
    /* Maybe the failure is due to running out of mmapped areas. */
    if(ar_ptr != &main_arena) {
      (void)mutex_lock(&main_arena.mutex);
      victim = chunk_alloc(&main_arena, nb);
      (void)mutex_unlock(&main_arena.mutex);
    }
    if(!victim) return 0;
  }
  return chunk2mem(victim);
}
static mchunkptr
#if __STD_C
chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
#else
chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
#endif
{
  mchunkptr victim;                  /* inspected/selected chunk */
  INTERNAL_SIZE_T victim_size;       /* its size */
  int       idx;                     /* index for bin traversal */
  mbinptr   bin;                     /* associated bin */
  mchunkptr remainder;               /* remainder from a split */
  long      remainder_size;          /* its size */
  int       remainder_index;         /* its bin index */
  unsigned long block;               /* block traverser bit */
  int       startidx;                /* first bin of a traversed block */
  mchunkptr fwd;                     /* misc temp for linking */
  mchunkptr bck;                     /* misc temp for linking */
  mbinptr q;                         /* misc temp */

  /* Check for exact match in a bin */

  if (is_small_request(nb))  /* Faster version for small requests */
  {
    idx = smallbin_index(nb);

    /* No traversal or size check necessary for small bins. */

    q = bin_at(ar_ptr, idx);
    victim = last(q);

    /* Also scan the next one, since it would have a remainder < MINSIZE */
    if (victim == q)
    {
      q = next_bin(q);
      victim = last(q);
    }
    if (victim != q)
    {
      victim_size = chunksize(victim);
      unlink(victim, bck, fwd);
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(ar_ptr, victim, nb);
      return victim;
    }

    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
  }
  else
  {
    idx = bin_index(nb);
    bin = bin_at(ar_ptr, idx);

    for (victim = last(bin); victim != bin; victim = victim->bk)
    {
      victim_size = chunksize(victim);
      remainder_size = victim_size - nb;

      if (remainder_size >= (long)MINSIZE) /* too big */
      {
        --idx; /* adjust to rescan below after checking last remainder */
        break;
      }
      else if (remainder_size >= 0) /* exact fit */
      {
        unlink(victim, bck, fwd);
        set_inuse_bit_at_offset(victim, victim_size);
        check_malloced_chunk(ar_ptr, victim, nb);
        return victim;
      }
    }

    ++idx;
  }

  /* Try to use the last split-off remainder */

  if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
  {
    victim_size = chunksize(victim);
    remainder_size = victim_size - nb;

    if (remainder_size >= (long)MINSIZE) /* re-split */
    {
      remainder = chunk_at_offset(victim, nb);
      set_head(victim, nb | PREV_INUSE);
      link_last_remainder(ar_ptr, remainder);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_foot(remainder, remainder_size);
      check_malloced_chunk(ar_ptr, victim, nb);
      return victim;
    }

    clear_last_remainder(ar_ptr);

    if (remainder_size >= 0)  /* exhaust */
    {
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(ar_ptr, victim, nb);
      return victim;
    }

    /* Else place in bin */

    frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
  }

  /*
     If there are any possibly nonempty big-enough blocks,
     search for best fitting chunk by scanning bins in blockwidth units.
  */

  if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
  {
    /* Get to the first marked block */

    if ( (block & binblocks(ar_ptr)) == 0)
    {
      /* force to an even block boundary */
      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
      block <<= 1;
      while ((block & binblocks(ar_ptr)) == 0)
      {
        idx += BINBLOCKWIDTH;
        block <<= 1;
      }
    }

    /* For each possibly nonempty block ... */
    for (;;)
    {
      startidx = idx;          /* (track incomplete blocks) */
      q = bin = bin_at(ar_ptr, idx);

      /* For each bin in this block ... */
      do
      {
        /* Find and use first big enough chunk ... */

        for (victim = last(bin); victim != bin; victim = victim->bk)
        {
          victim_size = chunksize(victim);
          remainder_size = victim_size - nb;

          if (remainder_size >= (long)MINSIZE) /* split */
          {
            remainder = chunk_at_offset(victim, nb);
            set_head(victim, nb | PREV_INUSE);
            unlink(victim, bck, fwd);
            link_last_remainder(ar_ptr, remainder);
            set_head(remainder, remainder_size | PREV_INUSE);
            set_foot(remainder, remainder_size);
            check_malloced_chunk(ar_ptr, victim, nb);
            return victim;
          }
          else if (remainder_size >= 0)  /* take */
          {
            set_inuse_bit_at_offset(victim, victim_size);
            unlink(victim, bck, fwd);
            check_malloced_chunk(ar_ptr, victim, nb);
            return victim;
          }
        }

        bin = next_bin(bin);

      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);

      /* Clear out the block bit. */

      do   /* Possibly backtrack to try to clear a partial block */
      {
        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
        {
          binblocks(ar_ptr) &= ~block;
          break;
        }
        --startidx;
        q = prev_bin(q);
      } while (first(q) == q);

      /* Get to the next possibly nonempty block */

      if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
      {
        while ((block & binblocks(ar_ptr)) == 0)
        {
          idx += BINBLOCKWIDTH;
          block <<= 1;
        }
      }
      else
        break;
    }
  }

  /* Try to use top chunk */

  /* Require that there be a remainder, ensuring top always exists */
  if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
  {
#if HAVE_MMAP
    /* If big and would otherwise need to extend, try to use mmap instead */
    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
        (victim = mmap_chunk(nb)) != 0)
      return victim;
#endif

    /* Try to extend */
    malloc_extend_top(ar_ptr, nb);
    if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
      return 0; /* propagate failure */
  }

  victim = top(ar_ptr);
  set_head(victim, nb | PREV_INUSE);
  top(ar_ptr) = chunk_at_offset(victim, nb);
  set_head(top(ar_ptr), remainder_size | PREV_INUSE);
  check_malloced_chunk(ar_ptr, victim, nb);
  return victim;
}
/*
  free() algorithm:

    cases:

       1. free(0) has no effect.

       2. If the chunk was allocated via mmap, it is released via munmap().

       3. If a returned chunk borders the current high end of memory,
          it is consolidated into the top, and if the total unused
          topmost memory exceeds the trim threshold, malloc_trim is
          called.

       4. Other chunks are consolidated as they arrive, and
          placed in corresponding bins. (This includes the case of
          consolidating with the current `last_remainder').
*/
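/* Illustrative only (not part of the original source): free(0) is a no-op
   (case 1 above), and freeing adjacent blocks lets them consolidate as in
   case 4.  Guarded by #if 0 so it is never compiled. */
#if 0
#include <stdlib.h>

static void free_usage_sketch(void)
{
  void *a = malloc(64);
  void *b = malloc(64);
  free(0);        /* case 1: no effect */
  free(a);        /* placed in a bin (case 4) ... */
  free(b);        /* ... and consolidated with its free neighbour */
}
#endif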
#if __STD_C
void fREe(Void_t* mem)
#else
void fREe(mem) Void_t* mem;
#endif
{
  arena *ar_ptr;
  mchunkptr p;                          /* chunk corresponding to mem */

#if defined _LIBC || defined MALLOC_HOOKS
  if (__free_hook != NULL) {
#if defined __GNUC__ && __GNUC__ >= 2
    (*__free_hook)(mem, __builtin_return_address (0));
#else
    (*__free_hook)(mem, NULL);
#endif
    return;
  }
#endif

  if (mem == 0)                              /* free(0) has no effect */
    return;

  p = mem2chunk(mem);

#if HAVE_MMAP
  if (chunk_is_mmapped(p))                   /* release mmapped memory. */
  {
    munmap_chunk(p);
    return;
  }
#endif

  ar_ptr = arena_for_ptr(p);
#if THREAD_STATS
  if(!mutex_trylock(&ar_ptr->mutex))
    ++(ar_ptr->stat_lock_direct);
  else {
    (void)mutex_lock(&ar_ptr->mutex);
    ++(ar_ptr->stat_lock_wait);
  }
#else
  (void)mutex_lock(&ar_ptr->mutex);
#endif
  chunk_free(ar_ptr, p);
  (void)mutex_unlock(&ar_ptr->mutex);
}
static void
#if __STD_C
chunk_free(arena *ar_ptr, mchunkptr p)
#else
chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
#endif
{
  INTERNAL_SIZE_T hd = p->size;    /* its head field */
  INTERNAL_SIZE_T sz;              /* its size */
  int       idx;                   /* its bin index */
  mchunkptr next;                  /* next contiguous chunk */
  INTERNAL_SIZE_T nextsz;          /* its size */
  INTERNAL_SIZE_T prevsz;          /* size of previous contiguous chunk */
  mchunkptr bck;                   /* misc temp for linking */
  mchunkptr fwd;                   /* misc temp for linking */
  int       islr;                  /* track whether merging with last_remainder */

  check_inuse_chunk(ar_ptr, p);

  sz = hd & ~PREV_INUSE;
  next = chunk_at_offset(p, sz);
  nextsz = chunksize(next);

  if (next == top(ar_ptr))                   /* merge with top */
  {
    sz += nextsz;

    if (!(hd & PREV_INUSE))                  /* consolidate backward */
    {
      prevsz = p->prev_size;
      p = chunk_at_offset(p, -prevsz);
      sz += prevsz;
      unlink(p, bck, fwd);
    }

    set_head(p, sz | PREV_INUSE);
    top(ar_ptr) = p;

#ifndef NO_THREADS
    if(ar_ptr == &main_arena) {
#endif
      if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
        main_trim(top_pad);
#ifndef NO_THREADS
    } else {
      heap_info *heap = heap_for_ptr(p);

      assert(heap->ar_ptr == ar_ptr);

      /* Try to get rid of completely empty heaps, if possible. */
      if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
         p == chunk_at_offset(heap, sizeof(*heap)))
        heap_trim(heap, top_pad);
    }
#endif
    return;
  }

  islr = 0;

  if (!(hd & PREV_INUSE))                    /* consolidate backward */
  {
    prevsz = p->prev_size;
    p = chunk_at_offset(p, -prevsz);
    sz += prevsz;

    if (p->fd == last_remainder(ar_ptr))     /* keep as last_remainder */
      islr = 1;
    else
      unlink(p, bck, fwd);
  }

  if (!(inuse_bit_at_offset(next, nextsz)))  /* consolidate forward */
  {
    sz += nextsz;

    if (!islr && next->fd == last_remainder(ar_ptr))
                                             /* re-insert last_remainder */
    {
      islr = 1;
      link_last_remainder(ar_ptr, p);
    }
    else
      unlink(next, bck, fwd);

    next = chunk_at_offset(p, sz);
  }
  else
    set_head(next, nextsz);                  /* clear inuse bit */

  set_head(p, sz | PREV_INUSE);
  next->prev_size = sz;
  if (!islr)
    frontlink(ar_ptr, p, sz, idx, bck, fwd);

#ifndef NO_THREADS
  /* Check whether the heap containing top can go away now. */
  if(next->size < MINSIZE &&
     (unsigned long)sz > trim_threshold &&
     ar_ptr != &main_arena) {                /* fencepost */
    heap_info* heap = heap_for_ptr(top(ar_ptr));

    if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) &&
       heap->prev == heap_for_ptr(p))
      heap_trim(heap, top_pad);
  }
#endif
}
/*
  Realloc algorithm:

    Chunks that were obtained via mmap cannot be extended or shrunk
    unless HAVE_MREMAP is defined, in which case mremap is used.
    Otherwise, if their reallocation is for additional space, they are
    copied. If for less, they are just left alone.

    Otherwise, if the reallocation is for additional space, and the
    chunk can be extended, it is, else a malloc-copy-free sequence is
    taken. There are several different ways that a chunk could be
    extended. All are tried:

       * Extending forward into following adjacent free chunk.
       * Shifting backwards, joining preceding adjacent space
       * Both shifting backwards and extending forward.
       * Extending into newly sbrked space

    Unless the #define REALLOC_ZERO_BYTES_FREES below is set, realloc
    with a size argument of zero (re)allocates a minimum-sized chunk.

    If the reallocation is for less space, and the new request is for
    a `small' (<512 bytes) size, then the newly unused space is lopped
    off and freed.

    The old unix realloc convention of allowing the last-free'd chunk
    to be used as an argument to realloc is no longer supported.
    I don't know of any programs still relying on this feature,
    and allowing it would also allow too many other incorrect
    usages of realloc to be sensible.
*/
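/* Illustrative only (not part of the original source): the usual safe
   calling pattern for realloc -- keep the old pointer until the call is
   known to have succeeded, since a failed realloc leaves the old block
   intact.  Guarded by #if 0. */
#if 0
#include <stdlib.h>

static int grow_buffer(char **buf, size_t new_size)
{
  char *tmp = (char *) realloc(*buf, new_size); /* may move the block */
  if (tmp == 0)
    return -1;          /* *buf is still valid and unchanged */
  *buf = tmp;
  return 0;
}
#endif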
#if __STD_C
Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
#else
Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
#endif
{
  arena *ar_ptr;
  INTERNAL_SIZE_T nb;         /* padded request size */

  mchunkptr oldp;             /* chunk corresponding to oldmem */
  INTERNAL_SIZE_T oldsize;    /* its size */

  mchunkptr newp;             /* chunk to return */

#if defined _LIBC || defined MALLOC_HOOKS
  if (__realloc_hook != NULL) {
    Void_t* result;

#if defined __GNUC__ && __GNUC__ >= 2
    result = (*__realloc_hook)(oldmem, bytes, __builtin_return_address (0));
#else
    result = (*__realloc_hook)(oldmem, bytes, NULL);
#endif
    return result;
  }
#endif

#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) { fREe(oldmem); return 0; }
#endif

  /* realloc of null is supposed to be same as malloc */
  if (oldmem == 0) return mALLOc(bytes);

  oldp    = mem2chunk(oldmem);
  oldsize = chunksize(oldp);

  nb = request2size(bytes);

#if HAVE_MMAP
  if (chunk_is_mmapped(oldp))
  {
    Void_t* newmem;

#if HAVE_MREMAP
    newp = mremap_chunk(oldp, nb);
    if(newp) return chunk2mem(newp);
#endif
    /* Note the extra SIZE_SZ overhead. */
    if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
    /* Must alloc, copy, free. */
    newmem = mALLOc(bytes);
    if (newmem == 0) return 0; /* propagate failure */
    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
    munmap_chunk(oldp);
    return newmem;
  }
#endif

  ar_ptr = arena_for_ptr(oldp);
#if THREAD_STATS
  if(!mutex_trylock(&ar_ptr->mutex))
    ++(ar_ptr->stat_lock_direct);
  else {
    (void)mutex_lock(&ar_ptr->mutex);
    ++(ar_ptr->stat_lock_wait);
  }
#else
  (void)mutex_lock(&ar_ptr->mutex);
#endif

#ifndef NO_THREADS
  /* As in malloc(), remember this arena for the next allocation. */
  tsd_setspecific(arena_key, (Void_t *)ar_ptr);
#endif

  newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);

  (void)mutex_unlock(&ar_ptr->mutex);
  return newp ? chunk2mem(newp) : NULL;
}
static mchunkptr
#if __STD_C
chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
              INTERNAL_SIZE_T nb)
#else
chunk_realloc(ar_ptr, oldp, oldsize, nb)
arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
#endif
{
  mchunkptr newp = oldp;             /* chunk to return */
  INTERNAL_SIZE_T newsize = oldsize; /* its size */

  mchunkptr next;                    /* next contiguous chunk after oldp */
  INTERNAL_SIZE_T nextsize;          /* its size */

  mchunkptr prev;                    /* previous contiguous chunk before oldp */
  INTERNAL_SIZE_T prevsize;          /* its size */

  mchunkptr remainder;               /* holds split off extra space from newp */
  INTERNAL_SIZE_T remainder_size;    /* its size */

  mchunkptr bck;                     /* misc temp for linking */
  mchunkptr fwd;                     /* misc temp for linking */

  check_inuse_chunk(ar_ptr, oldp);

  if ((long)(oldsize) < (long)(nb))
  {

    /* Try expanding forward */

    next = chunk_at_offset(oldp, oldsize);
    if (next == top(ar_ptr) || !inuse(next))
    {
      nextsize = chunksize(next);

      /* Forward into top only if a remainder */
      if (next == top(ar_ptr))
      {
        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
        {
          newsize += nextsize;
          top(ar_ptr) = chunk_at_offset(oldp, nb);
          set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
          set_head_size(oldp, nb);
          return oldp;
        }
      }

      /* Forward into next chunk */
      else if (((long)(nextsize + newsize) >= (long)(nb)))
      {
        unlink(next, bck, fwd);
        newsize += nextsize;
        goto split;
      }
    }
    else
    {
      next = 0;
      nextsize = 0;
    }

    /* Try shifting backwards. */

    if (!prev_inuse(oldp))
    {
      prev = prev_chunk(oldp);
      prevsize = chunksize(prev);

      /* try forward + backward first to save a later consolidation */

      if (next != 0)
      {
        /* into top */
        if (next == top(ar_ptr))
        {
          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
          {
            unlink(prev, bck, fwd);
            newp = prev;
            newsize += prevsize + nextsize;
            MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
            top(ar_ptr) = chunk_at_offset(newp, nb);
            set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
            set_head_size(newp, nb);
            return newp;
          }
        }

        /* into next chunk */
        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
        {
          unlink(next, bck, fwd);
          unlink(prev, bck, fwd);
          newp = prev;
          newsize += nextsize + prevsize;
          MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
          goto split;
        }
      }

      /* backward only */
      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
      {
        unlink(prev, bck, fwd);
        newp = prev;
        newsize += prevsize;
        MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
        goto split;
      }
    }

    /* Must allocate */

    newp = chunk_alloc (ar_ptr, nb);

    if (newp == 0) {
      /* Maybe the failure is due to running out of mmapped areas. */
      if (ar_ptr != &main_arena) {
        (void)mutex_lock(&main_arena.mutex);
        newp = chunk_alloc(&main_arena, nb);
        (void)mutex_unlock(&main_arena.mutex);
      }
      if (newp == 0) /* propagate failure */
        return 0;
    }

    /* Avoid copy if newp is next chunk after oldp. */
    /* (This can only happen when new chunk is sbrk'ed.) */

    if ( newp == next_chunk(oldp))
    {
      newsize += chunksize(newp);
      newp = oldp;
      goto split;
    }

    /* Otherwise copy, free, and exit */
    MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
    chunk_free(ar_ptr, oldp);
    return newp;
  }


 split:  /* split off extra room in old or expanded chunk */

  if (newsize - nb >= MINSIZE) /* split off remainder */
  {
    remainder = chunk_at_offset(newp, nb);
    remainder_size = newsize - nb;
    set_head_size(newp, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_inuse_bit_at_offset(remainder, remainder_size);
    chunk_free(ar_ptr, remainder);
  }
  else
  {
    set_head_size(newp, newsize);
    set_inuse_bit_at_offset(newp, newsize);
  }

  check_inuse_chunk(ar_ptr, newp);
  return newp;
}
/*
  memalign algorithm:

    memalign requests more than enough space from malloc, finds a spot
    within that chunk that meets the alignment request, and then
    possibly frees the leading and trailing space.

    The alignment argument must be a power of two. This property is not
    checked by memalign, so misuse may result in random runtime errors.

    8-byte alignment is guaranteed by normal malloc calls, so don't
    bother calling memalign with an argument of 8 or less.

    Overreliance on memalign is a sure way to fragment space.
*/
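/* Illustrative only (not part of the original source): memalign with a
   power-of-two alignment larger than the default 8 bytes.  Guarded by
   #if 0; the memalign prototype is normally found in <malloc.h> (on some
   systems <stdlib.h>). */
#if 0
#include <stdlib.h>
#include <malloc.h>
#include <assert.h>

static void memalign_usage_sketch(void)
{
  /* 4096 is a power of two; a value of 8 or less would just be
     relayed to malloc as described above. */
  void *p = memalign(4096, 1000);
  if (p) {
    assert(((unsigned long) p & 4095) == 0);
    free(p);
  }
}
#endif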
#if __STD_C
Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
{
  arena *ar_ptr;
  INTERNAL_SIZE_T nb;  /* padded request size */
  mchunkptr p;

#if defined _LIBC || defined MALLOC_HOOKS
  if (__memalign_hook != NULL) {
    Void_t* result;

#if defined __GNUC__ && __GNUC__ >= 2
    result = (*__memalign_hook)(alignment, bytes,
                                __builtin_return_address (0));
#else
    result = (*__memalign_hook)(alignment, bytes, NULL);
#endif
    return result;
  }
#endif

  /* If need less alignment than we give anyway, just relay to malloc */

  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */

  if (alignment < MINSIZE) alignment = MINSIZE;

  nb = request2size(bytes);
  arena_get(ar_ptr, nb + alignment + MINSIZE);
  if(!ar_ptr)
    return 0;
  p = chunk_align(ar_ptr, nb, alignment);
  (void)mutex_unlock(&ar_ptr->mutex);
  if(!p) {
    /* Maybe the failure is due to running out of mmapped areas. */
    if(ar_ptr != &main_arena) {
      (void)mutex_lock(&main_arena.mutex);
      p = chunk_align(&main_arena, nb, alignment);
      (void)mutex_unlock(&main_arena.mutex);
    }
    if(!p) return 0;
  }
  return chunk2mem(p);
}
static mchunkptr
#if __STD_C
chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
#else
chunk_align(ar_ptr, nb, alignment)
arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
#endif
{
  char*     m;                /* memory returned by malloc call */
  mchunkptr p;                /* corresponding chunk */
  char*     brk;              /* alignment point within p */
  mchunkptr newp;             /* chunk to return */
  INTERNAL_SIZE_T newsize;    /* its size */
  INTERNAL_SIZE_T leadsize;   /* leading space before alignment point */
  mchunkptr remainder;        /* spare room at end to split off */
  long      remainder_size;   /* its size */

  /* Call chunk_alloc with worst case padding to hit alignment. */
  p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
  if (p == 0)
    return 0; /* propagate failure */

  m = chunk2mem(p);

  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
  {
#if HAVE_MMAP
    if(chunk_is_mmapped(p)) {
      return p; /* nothing more to do */
    }
#endif
  }
  else /* misaligned */
  {
    /*
      Find an aligned spot inside chunk.
      Since we need to give back leading space in a chunk of at
      least MINSIZE, if the first calculation places us at
      a spot with less than MINSIZE leader, we can move to the
      next aligned spot -- we've allocated enough total room so that
      this is always possible.
    */

    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
    if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;

    newp = (mchunkptr)brk;
    leadsize = brk - (char*)(p);
    newsize = chunksize(p) - leadsize;

#if HAVE_MMAP
    if(chunk_is_mmapped(p))
    {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      return newp;
    }
#endif

    /* give back leader, use the rest */

    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    chunk_free(ar_ptr, p);
    p = newp;

    assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */

  remainder_size = chunksize(p) - nb;

  if (remainder_size >= (long)MINSIZE)
  {
    remainder = chunk_at_offset(p, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_head_size(p, nb);
    chunk_free(ar_ptr, remainder);
  }

  check_inuse_chunk(ar_ptr, p);
  return p;
}
/*
    valloc just invokes memalign with alignment argument equal
    to the page size of the system (or as near to this as can
    be figured out from all the includes/defines above.)
*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  return mEMALIGn (malloc_getpagesize, bytes);
}
/*
  pvalloc just invokes valloc for the nearest pagesize
  that will accommodate request
*/

#if __STD_C
Void_t* pvALLOc(size_t bytes)
#else
Void_t* pvALLOc(bytes) size_t bytes;
#endif
{
  size_t pagesize = malloc_getpagesize;
  return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
}
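/* Illustrative only (not part of the original source): the difference
   between valloc and pvalloc is solely in how the size is treated --
   pvalloc also rounds the request up to a whole number of pages using
   (bytes + pagesize - 1) & ~(pagesize - 1).  Guarded by #if 0; the
   prototypes are normally found in <malloc.h>, and 4096 stands in for
   the real page size. */
#if 0
#include <stdlib.h>
#include <malloc.h>

static void page_alloc_sketch(void)
{
  void *a = valloc(100);    /* page-aligned, at least 100 usable bytes   */
  void *b = pvalloc(100);   /* page-aligned, rounded up to a whole page  */
  free(a);
  free(b);
}
#endif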
/*
  calloc calls chunk_alloc, then zeroes out the allocated chunk.
*/

#if __STD_C
Void_t* cALLOc(size_t n, size_t elem_size)
#else
Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
#endif
{
  arena *ar_ptr;
  mchunkptr p, oldtop;
  INTERNAL_SIZE_T sz, csz, oldtopsize;
  Void_t* mem;

#if defined _LIBC || defined MALLOC_HOOKS
  if (__malloc_hook != NULL) {
    sz = n * elem_size;
#if defined __GNUC__ && __GNUC__ >= 2
    mem = (*__malloc_hook)(sz, __builtin_return_address (0));
#else
    mem = (*__malloc_hook)(sz, NULL);
#endif
    if(mem == 0)
      return 0;
#ifdef HAVE_MEMSET
    return memset(mem, 0, sz);
#else
    while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
    return mem;
#endif
  }
#endif

  sz = request2size(n * elem_size);
  arena_get(ar_ptr, sz);
  if(!ar_ptr)
    return 0;

  /* check if expand_top called, in which case don't need to clear */
#if MORECORE_CLEARS
  oldtop = top(ar_ptr);
  oldtopsize = chunksize(top(ar_ptr));
#endif
  p = chunk_alloc (ar_ptr, sz);

  /* Only clearing follows, so we can unlock early. */
  (void)mutex_unlock(&ar_ptr->mutex);

  if (p == 0) {
    /* Maybe the failure is due to running out of mmapped areas. */
    if(ar_ptr != &main_arena) {
      (void)mutex_lock(&main_arena.mutex);
      p = chunk_alloc(&main_arena, sz);
      (void)mutex_unlock(&main_arena.mutex);
    }
    if (p == 0) return 0;
  }
  mem = chunk2mem(p);

  /* Two optional cases in which clearing not necessary */

#if HAVE_MMAP
  if (chunk_is_mmapped(p)) return mem;
#endif

  csz = chunksize(p);

#if MORECORE_CLEARS
  if (p == oldtop && csz > oldtopsize) {
    /* clear only the bytes from non-freshly-sbrked memory */
    csz = oldtopsize;
  }
#endif

  MALLOC_ZERO(mem, csz - SIZE_SZ);
  return mem;
}
/*
  cfree just calls free. It is needed/defined on some systems
  that pair it with calloc, presumably for odd historical reasons.
*/

#if __STD_C
void cfree(Void_t *mem)
#else
void cfree(mem) Void_t *mem;
#endif
{
  free(mem);
}
/*
  Malloc_trim gives memory back to the system (via negative
  arguments to sbrk) if there is unused memory at the `high' end of
  the malloc pool. You can call this after freeing large blocks of
  memory to potentially reduce the system-level memory requirements
  of a program. However, it cannot guarantee to reduce memory. Under
  some allocation patterns, some large free blocks of memory will be
  locked between two used chunks, so they cannot be given back to
  the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero,
  only the minimum amount of memory to maintain internal data
  structures will be left (one page or less). Non-zero arguments
  can be supplied to maintain enough trailing space to service
  future expected allocations without having to re-obtain memory
  from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
*/
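/* Illustrative only (not part of the original source): a typical use of
   malloc_trim after releasing a large amount of memory.  Guarded by #if 0;
   the prototype normally comes from <malloc.h>. */
#if 0
#include <stdlib.h>
#include <malloc.h>

static void trim_sketch(void)
{
  char *big = (char *) malloc(8 * 1024 * 1024);
  /* ... use the block ... */
  free(big);
  /* Ask the allocator to return trailing free space to the system,
     keeping no extra pad; 1 is returned only if something was released. */
  (void) malloc_trim(0);
}
#endif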
#if __STD_C
int mALLOC_TRIm(size_t pad)
#else
int mALLOC_TRIm(pad) size_t pad;
#endif
{
  int res;

  (void)mutex_lock(&main_arena.mutex);
  res = main_trim(pad);
  (void)mutex_unlock(&main_arena.mutex);
  return res;
}
/* Trim the main arena. */

static int
#if __STD_C
main_trim(size_t pad)
#else
main_trim(pad) size_t pad;
#endif
{
  mchunkptr top_chunk;   /* The current top chunk */
  long  top_size;        /* Amount of top-most memory */
  long  extra;           /* Amount to release */
  char* current_brk;     /* address returned by pre-check sbrk call */
  char* new_brk;         /* address returned by negative sbrk call */

  unsigned long pagesz = malloc_getpagesize;

  top_chunk = top(&main_arena);
  top_size = chunksize(top_chunk);
  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;

  if (extra < (long)pagesz) /* Not enough memory to release */
    return 0;

  /* Test to make sure no one else called sbrk */
  current_brk = (char*)(MORECORE (0));
  if (current_brk != (char*)(top_chunk) + top_size)
    return 0;     /* Apparently we don't own memory; must fail */

  new_brk = (char*)(MORECORE (-extra));

#if defined _LIBC || defined MALLOC_HOOKS
  /* Call the `morecore' hook if necessary. */
  if (__after_morecore_hook)
    (*__after_morecore_hook) ();
#endif

  if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
    /* Try to figure out what we have */
    current_brk = (char*)(MORECORE (0));
    top_size = current_brk - (char*)top_chunk;
    if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
    {
      sbrked_mem = current_brk - sbrk_base;
      set_head(top_chunk, top_size | PREV_INUSE);
    }
    check_chunk(&main_arena, top_chunk);
    return 0;
  }
  sbrked_mem -= extra;

  /* Success. Adjust top accordingly. */
  set_head(top_chunk, (top_size - extra) | PREV_INUSE);
  check_chunk(&main_arena, top_chunk);
  return 1;
}
#ifndef NO_THREADS

static int
#if __STD_C
heap_trim(heap_info *heap, size_t pad)
#else
heap_trim(heap, pad) heap_info *heap; size_t pad;
#endif
{
  unsigned long pagesz = malloc_getpagesize;
  arena *ar_ptr = heap->ar_ptr;
  mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
  heap_info *prev_heap;
  long new_size, top_size, extra;

  /* Can this heap go away completely ? */
  while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
    prev_heap = heap->prev;
    p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
    assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
    p = prev_chunk(p);
    new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
    assert(new_size>0 && new_size<(long)(2*MINSIZE));
    if(!prev_inuse(p))
      new_size += p->prev_size;
    assert(new_size>0 && new_size<HEAP_MAX_SIZE);
    if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
      break;
    ar_ptr->size -= heap->size;
    delete_heap(heap);
    heap = prev_heap;
    if(!prev_inuse(p)) { /* consolidate backward */
      p = prev_chunk(p);
      unlink(p, bck, fwd);
    }
    assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
    assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
    top(ar_ptr) = top_chunk = p;
    set_head(top_chunk, new_size | PREV_INUSE);
    check_chunk(ar_ptr, top_chunk);
  }
  top_size = chunksize(top_chunk);
  extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
  if(extra < (long)pagesz)
    return 0;
  /* Try to shrink. */
  if(grow_heap(heap, -extra) != 0)
    return 0;
  ar_ptr->size -= extra;

  /* Success. Adjust top accordingly. */
  set_head(top_chunk, (top_size - extra) | PREV_INUSE);
  check_chunk(ar_ptr, top_chunk);
  return 1;
}

#endif /* !defined(NO_THREADS) */
/*
  malloc_usable_size:

    This routine tells you how many bytes you can actually use in an
    allocated chunk, which may be more than you requested (although
    often not). You can use this many bytes without worrying about
    overwriting other allocated objects. Not a particularly great
    programming practice, but still sometimes useful.
*/
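/* Illustrative only (not part of the original source): the value reported
   by malloc_usable_size() is at least the requested size, and the extra
   bytes may be used without overwriting other objects.  Guarded by #if 0;
   the prototype normally comes from <malloc.h>. */
#if 0
#include <stdlib.h>
#include <malloc.h>
#include <assert.h>

static void usable_size_sketch(void)
{
  char *p = (char *) malloc(10);
  if (p) {
    size_t n = malloc_usable_size(p);
    assert(n >= 10);      /* often larger, up to the underlying chunk size */
    p[n - 1] = 0;         /* still within the allocated chunk */
    free(p);
  }
}
#endif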
#if __STD_C
size_t mALLOC_USABLE_SIZe(Void_t* mem)
#else
size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
#endif
{
  mchunkptr p;

  if (mem == 0)
    return 0;
  else
  {
    p = mem2chunk(mem);
    if(!chunk_is_mmapped(p))
    {
      if (!inuse(p)) return 0;
      check_inuse_chunk(arena_for_ptr(mem), p);
      return chunksize(p) - SIZE_SZ;
    }
    return chunksize(p) - 2*SIZE_SZ;
  }
}
/* Utility to update mallinfo for malloc_stats() and mallinfo() */

static void
#if __STD_C
malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
#else
malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
#endif
{
  int i, navail;
  mbinptr b;
  mchunkptr p;
#if MALLOC_DEBUG
  mchunkptr q;
#endif
  INTERNAL_SIZE_T avail;

  (void)mutex_lock(&ar_ptr->mutex);
  avail = chunksize(top(ar_ptr));
  navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;

  for (i = 1; i < NAV; ++i)
  {
    b = bin_at(ar_ptr, i);
    for (p = last(b); p != b; p = p->bk)
    {
#if MALLOC_DEBUG
      check_free_chunk(ar_ptr, p);
      for (q = next_chunk(p);
           q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
           q = next_chunk(q))
        check_inuse_chunk(ar_ptr, q);
#endif
      avail += chunksize(p);
      navail++;
    }
  }

  mi->arena = ar_ptr->size;
  mi->ordblks = navail;
  mi->uordblks = ar_ptr->size - avail;
  mi->fordblks = avail;
  mi->hblks = n_mmaps;
  mi->hblkhd = mmapped_mem;
  mi->keepcost = chunksize(top(ar_ptr));

  (void)mutex_unlock(&ar_ptr->mutex);
}
#if !defined(NO_THREADS) && MALLOC_DEBUG > 1

/* Print the complete contents of a single heap to stderr. */

static void
#if __STD_C
dump_heap(heap_info *heap)
#else
dump_heap(heap) heap_info *heap;
#endif
{
  char *ptr;
  mchunkptr p;

  fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
  ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
    (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
  p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
                  ~MALLOC_ALIGN_MASK);
  for(;;) {
    fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
    if(p == top(heap->ar_ptr)) {
      fprintf(stderr, " (top)\n");
      break;
    } else if(p->size == (0|PREV_INUSE)) {
      fprintf(stderr, " (fence)\n");
      break;
    }
    fprintf(stderr, "\n");
    p = next_chunk(p);
  }
}

#endif
/*
  malloc_stats:

    For all arenas separately and in total, prints on stderr the
    amount of space obtained from the system, and the current number
    of bytes allocated via malloc (or realloc, etc) but not yet
    freed. (Note that this is the number of bytes allocated, not the
    number requested. It will be larger than the number requested
    because of alignment and bookkeeping overhead.) When not compiled
    for multiple threads, the maximum amount of allocated memory
    (which may be more than current if malloc_trim and/or munmap got
    called) is also reported. When using mmap(), prints the maximum
    number of simultaneous mmap regions used, too.
*/
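/* Illustrative only (not part of the original source): malloc_stats()
   takes no arguments and writes its report to stderr.  Guarded by #if 0;
   the prototype normally comes from <malloc.h>. */
#if 0
#include <stdlib.h>
#include <malloc.h>

static void stats_sketch(void)
{
  void *p = malloc(1000);
  malloc_stats();   /* prints "system bytes"/"in use bytes" per arena */
  free(p);
}
#endif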
void mALLOC_STATs()
{
  int i;
  arena *ar_ptr;
  struct mallinfo mi;
  unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
#if THREAD_STATS
  long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
#endif

  for(i=0, ar_ptr = &main_arena;; i++) {
    malloc_update_mallinfo(ar_ptr, &mi);
    fprintf(stderr, "Arena %d:\n", i);
    fprintf(stderr, "system bytes     = %10u\n", (unsigned int)mi.arena);
    fprintf(stderr, "in use bytes     = %10u\n", (unsigned int)mi.uordblks);
    system_b += mi.arena;
    in_use_b += mi.uordblks;
#if THREAD_STATS
    stat_lock_direct += ar_ptr->stat_lock_direct;
    stat_lock_loop += ar_ptr->stat_lock_loop;
    stat_lock_wait += ar_ptr->stat_lock_wait;
#endif
#if !defined(NO_THREADS) && MALLOC_DEBUG > 1
    if(ar_ptr != &main_arena) {
      heap_info *heap;
      (void)mutex_lock(&ar_ptr->mutex);
      heap = heap_for_ptr(top(ar_ptr));
      while(heap) { dump_heap(heap); heap = heap->prev; }
      (void)mutex_unlock(&ar_ptr->mutex);
    }
#endif
    ar_ptr = ar_ptr->next;
    if(ar_ptr == &main_arena) break;
  }
#if HAVE_MMAP
  fprintf(stderr, "Total (incl. mmap):\n");
#else
  fprintf(stderr, "Total:\n");
#endif
  fprintf(stderr, "system bytes     = %10u\n", system_b);
  fprintf(stderr, "in use bytes     = %10u\n", in_use_b);
#ifdef NO_THREADS
  fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
#endif
#if HAVE_MMAP
  fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
  fprintf(stderr, "max mmap bytes   = %10lu\n", max_mmapped_mem);
#endif
#if THREAD_STATS
  fprintf(stderr, "heaps created    = %10d\n",  stat_n_heaps);
  fprintf(stderr, "locked directly  = %10ld\n", stat_lock_direct);
  fprintf(stderr, "locked in loop   = %10ld\n", stat_lock_loop);
  fprintf(stderr, "locked waiting   = %10ld\n", stat_lock_wait);
  fprintf(stderr, "locked total     = %10ld\n",
          stat_lock_direct + stat_lock_loop + stat_lock_wait);
#endif
}
/*
  mallinfo returns a copy of updated current mallinfo.
  The information reported is for the arena last used by the thread.
*/

struct mallinfo mALLINFo()
{
  struct mallinfo mi;
  Void_t *vptr = NULL;

#ifndef NO_THREADS
  tsd_getspecific(arena_key, vptr);
#endif
  malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
  return mi;
}
/*
  mallopt:

    mallopt is the general SVID/XPG interface to tunable parameters.
    The format is to provide a (parameter-number, parameter-value) pair.
    mallopt then sets the corresponding parameter to the argument
    value if it can (i.e., so long as the value is meaningful),
    and returns 1 if successful else 0.

    See descriptions of tunable parameters above.
*/
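/* Illustrative only (not part of the original source): tuning two of the
   parameters listed above via mallopt().  Guarded by #if 0;
   M_TRIM_THRESHOLD and M_MMAP_THRESHOLD are the SVID/XPG parameter
   numbers normally exposed through <malloc.h>. */
#if 0
#include <malloc.h>

static void mallopt_sketch(void)
{
  /* The return value is 1 on success, 0 if the value was rejected. */
  if (!mallopt(M_TRIM_THRESHOLD, 256 * 1024)) {
    /* value not accepted */
  }
  if (!mallopt(M_MMAP_THRESHOLD, 512 * 1024)) {
    /* value not accepted */
  }
}
#endif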
#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  switch(param_number)
  {
    case M_TRIM_THRESHOLD:
      trim_threshold = value; return 1;
    case M_TOP_PAD:
      top_pad = value; return 1;
    case M_MMAP_THRESHOLD:
#ifndef NO_THREADS
      /* Forbid setting the threshold too high. */
      if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
#endif
      mmap_threshold = value; return 1;
    case M_MMAP_MAX:
#if HAVE_MMAP
      n_mmaps_max = value; return 1;
#else
      if (value != 0) return 0; else n_mmaps_max = value; return 1;
#endif
    case M_CHECK_ACTION:
      check_action = value; return 1;

    default:
      return 0;
  }
}
/* Get/set state: malloc_get_state() records the current state of all
   malloc variables (_except_ for the actual heap contents and `hook'
   function pointers) in a system dependent, opaque data structure.
   This data structure is dynamically allocated and can be free()d
   after use. malloc_set_state() restores the state of all malloc
   variables to the previously obtained state. This is especially
   useful when using this malloc as part of a shared library, and when
   the heap contents are saved/restored via some other method. The
   primary example for this is GNU Emacs with its `dumping' procedure.
   `Hook' function pointers are never saved or restored by these
   functions. */
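/* Illustrative only (not part of the original source): the intended
   save/restore sequence for the state functions described above, assuming
   the usual public names malloc_get_state()/malloc_set_state().  Guarded
   by #if 0. */
#if 0
static void dump_and_restore_sketch(void)
{
  /* At dump time (e.g. before Emacs `dumps' itself): */
  void *state = malloc_get_state();     /* opaque, malloc()ed snapshot */

  /* ... heap contents are saved/restored by some other mechanism ... */

  /* After the heap contents have been restored: */
  if (state && malloc_set_state(state) == 0)
    free(state);                        /* snapshot no longer needed */
}
#endif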
#define MALLOC_STATE_MAGIC   0x444c4541l
#define MALLOC_STATE_VERSION (0*0x100l + 0l) /* major*0x100 + minor */

struct malloc_state {
  long          magic;
  long          version;
  mbinptr       av[NAV * 2 + 2];
  char*         sbrk_base;
  int           sbrked_mem_bytes;
  unsigned long trim_threshold;
  unsigned long top_pad;
  unsigned int  n_mmaps_max;
  unsigned long mmap_threshold;
  int           check_action;
  unsigned long max_sbrked_mem;
  unsigned long max_total_mem;
  unsigned int  n_mmaps;
  unsigned int  max_n_mmaps;
  unsigned long mmapped_mem;
  unsigned long max_mmapped_mem;
};
Void_t*
mALLOC_GET_STATe()
{
  struct malloc_state* ms;
  int i;
  mbinptr b;
  mchunkptr victim;

  (void)mutex_lock(&main_arena.mutex);
  victim = chunk_alloc(&main_arena, request2size(sizeof(*ms)));
  if(!victim) {
    (void)mutex_unlock(&main_arena.mutex);
    return 0;
  }
  ms = (struct malloc_state*)chunk2mem(victim);
  ms->magic = MALLOC_STATE_MAGIC;
  ms->version = MALLOC_STATE_VERSION;
  ms->av[0] = main_arena.av[0];
  ms->av[1] = main_arena.av[1];
  for(i=0; i<NAV; i++) {
    b = bin_at(&main_arena, i);
    if(first(b) == b)
      ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */
    else {
      ms->av[2*i+2] = first(b);
      ms->av[2*i+3] = last(b);
    }
  }
  ms->sbrk_base = sbrk_base;
  ms->sbrked_mem_bytes = sbrked_mem;
  ms->trim_threshold = trim_threshold;
  ms->top_pad = top_pad;
  ms->n_mmaps_max = n_mmaps_max;
  ms->mmap_threshold = mmap_threshold;
  ms->check_action = check_action;
  ms->max_sbrked_mem = max_sbrked_mem;
#ifdef NO_THREADS
  ms->max_total_mem = max_total_mem;
#else
  ms->max_total_mem = 0;
#endif
  ms->n_mmaps = n_mmaps;
  ms->max_n_mmaps = max_n_mmaps;
  ms->mmapped_mem = mmapped_mem;
  ms->max_mmapped_mem = max_mmapped_mem;
  (void)mutex_unlock(&main_arena.mutex);
  return (Void_t*)ms;
}
int
#if __STD_C
mALLOC_SET_STATe(Void_t* msptr)
#else
mALLOC_SET_STATe(msptr) Void_t* msptr;
#endif
{
  struct malloc_state* ms = (struct malloc_state*)msptr;
  int i;
  mbinptr b;

  if(ms->magic != MALLOC_STATE_MAGIC) return -1;
  /* Must fail if the major version is too high. */
  if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2;
  (void)mutex_lock(&main_arena.mutex);
  main_arena.av[0] = ms->av[0];
  main_arena.av[1] = ms->av[1];
  for(i=0; i<NAV; i++) {
    b = bin_at(&main_arena, i);
    if(ms->av[2*i+2] == 0)
      first(b) = last(b) = b;
    else {
      first(b) = ms->av[2*i+2];
      last(b) = ms->av[2*i+3];
      /* Make sure the links to the `av'-bins in the heap are correct. */
      first(b)->bk = b;
      last(b)->fd = b;
    }
  }
  sbrk_base = ms->sbrk_base;
  sbrked_mem = ms->sbrked_mem_bytes;
  trim_threshold = ms->trim_threshold;
  top_pad = ms->top_pad;
  n_mmaps_max = ms->n_mmaps_max;
  mmap_threshold = ms->mmap_threshold;
  check_action = ms->check_action;
  max_sbrked_mem = ms->max_sbrked_mem;
#ifdef NO_THREADS
  max_total_mem = ms->max_total_mem;
#endif
  n_mmaps = ms->n_mmaps;
  max_n_mmaps = ms->max_n_mmaps;
  mmapped_mem = ms->mmapped_mem;
  max_mmapped_mem = ms->max_mmapped_mem;
  /* add version-dependent code here */
  (void)mutex_unlock(&main_arena.mutex);
  return 0;
}
#if defined _LIBC || defined MALLOC_HOOKS

/* A simple, standard set of debugging hooks.  Overhead is `only' one
   byte per chunk; still this will catch most cases of double frees or
   overruns. */

#define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )

/* Convert a pointer to be free()d or realloc()ed to a valid chunk
   pointer.  If the provided pointer is not valid, return NULL.  The
   goal here is to avoid crashes, unlike in the MALLOC_DEBUG code. */
static mchunkptr
#if __STD_C
mem2chunk_check(Void_t* mem)
#else
mem2chunk_check(mem) Void_t* mem;
#endif
{
  mchunkptr p;
  INTERNAL_SIZE_T sz;

  p = mem2chunk(mem);
  if(!aligned_OK(p)) return NULL;
  if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
    /* Must be a chunk in conventional heap memory. */
    if(chunk_is_mmapped(p) ||
       ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
       sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
       ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
                            (long)prev_chunk(p)<(long)sbrk_base ||
                            next_chunk(prev_chunk(p))!=p) ))
      return NULL;
    if(*((unsigned char*)p + sz + (SIZE_SZ-1)) != MAGICBYTE(p))
      return NULL;
    *((unsigned char*)p + sz + (SIZE_SZ-1)) ^= 0xFF;
  } else {
    unsigned long offset, page_mask = malloc_getpagesize-1;

    /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
       alignment relative to the beginning of a page.  Check this
       first. */
    offset = (unsigned long)mem & page_mask;
    if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
        offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
        offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000) ||
       !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
       ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
       ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
      return NULL;
    if(*((unsigned char*)p + sz - 1) != MAGICBYTE(p))
      return NULL;
    *((unsigned char*)p + sz - 1) ^= 0xFF;
  }
  return p;
}
static Void_t*
#if __STD_C
malloc_check(size_t sz, const Void_t *caller)
#else
malloc_check(sz, caller) size_t sz; const Void_t *caller;
#endif
{
  mchunkptr victim;
  INTERNAL_SIZE_T nb = request2size(sz + 1);

  (void)mutex_lock(&main_arena.mutex);
  victim = chunk_alloc(&main_arena, nb);
  (void)mutex_unlock(&main_arena.mutex);
  if(!victim) return NULL;
  nb = chunksize(victim);
  if(chunk_is_mmapped(victim))
    --nb;
  else
    nb += SIZE_SZ - 1;
  *((unsigned char*)victim + nb) = MAGICBYTE(victim);
  return chunk2mem(victim);
}
4197 free_check(Void_t
* mem
, const Void_t
*caller
)
4199 free_check(mem
, caller
) Void_t
* mem
; const Void_t
*caller
;
4205 (void)mutex_lock(&main_arena
.mutex
);
4206 p
= mem2chunk_check(mem
);
4208 (void)mutex_unlock(&main_arena
.mutex
);
4209 switch(check_action
) {
4211 fprintf(stderr
, "free(): invalid pointer %lx!\n", (long)(mem
));
4219 if (chunk_is_mmapped(p
)) {
4220 (void)mutex_unlock(&main_arena
.mutex
);
4225 #if 0 /* Erase freed memory. */
4226   memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
4228   chunk_free(&main_arena, p);
4229   (void)mutex_unlock(&main_arena.mutex);
4234 realloc_check(Void_t* oldmem, size_t bytes, const Void_t *caller)
4236 realloc_check(oldmem, bytes, caller)
4237      Void_t* oldmem; size_t bytes; const Void_t *caller;
4240   mchunkptr oldp, newp;
4241   INTERNAL_SIZE_T nb, oldsize;
4243   if (oldmem == 0) return malloc_check(bytes, NULL);
4244   (void)mutex_lock(&main_arena.mutex);
4245   oldp = mem2chunk_check(oldmem);
4247     (void)mutex_unlock(&main_arena.mutex);
4248     switch(check_action) {
4250       fprintf(stderr, "realloc(): invalid pointer %lx!\n", (long)(oldmem));
4255     return malloc_check(bytes, NULL);
4257   oldsize = chunksize(oldp);
4259   nb = request2size(bytes+1);
4262   if (chunk_is_mmapped(oldp)) {
4264     newp = mremap_chunk(oldp, nb);
4267 /* Note the extra SIZE_SZ overhead. */
4268       if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
4270 /* Must alloc, copy, free. */
4271         newp = chunk_alloc(&main_arena, nb);
4273           MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - 2*SIZE_SZ);
4281 #endif /* HAVE_MMAP */
4282     newp = chunk_realloc(&main_arena, oldp, oldsize, nb);
4283 #if 0 /* Erase freed memory. */
4284   nb = chunksize(newp);
4285   if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
4286     memset((char*)oldmem + 2*sizeof(mbinptr), 0,
4287            oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
4288   } else if(nb > oldsize+SIZE_SZ) {
4289     memset((char*)chunk2mem(newp) + oldsize, 0, nb - (oldsize+SIZE_SZ));
4295   (void)mutex_unlock(&main_arena.mutex);
4297   if(!newp) return NULL;
4298   nb = chunksize(newp);
4299   if(chunk_is_mmapped(newp))
4303   *((unsigned char*)newp + nb) = MAGICBYTE(newp);
4304   return chunk2mem(newp);
4309 memalign_check(size_t alignment, size_t bytes, const Void_t *caller)
4311 memalign_check(alignment, bytes, caller)
4312      size_t alignment; size_t bytes; const Void_t *caller;
4318   if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes, NULL);
4319   if (alignment < MINSIZE) alignment = MINSIZE;
4321   nb = request2size(bytes+1);
4322   (void)mutex_lock(&main_arena.mutex);
4323   p = chunk_align(&main_arena, nb, alignment);
4324   (void)mutex_unlock(&main_arena.mutex);
4327   if(chunk_is_mmapped(p))
4331   *((unsigned char*)p + nb) = MAGICBYTE(p);
4332   return chunk2mem(p);
4335 /* The following hooks are used when the global initialization in
4336 ptmalloc_init() hasn't completed yet. */
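/* A standalone sketch (not part of this allocator, all names such as
   alloc_hook and my_alloc are hypothetical) of the general pattern such
   starter hooks implement: while one-time initialization is still running,
   allocation requests are redirected to a minimal fallback so that the
   initialization code itself may allocate. */
#if 0
#include <stdio.h>
#include <stdlib.h>

static void *(*alloc_hook)(size_t) = NULL;   /* hypothetical hook pointer */
static int initialized = 0;

/* Minimal allocator used only while initialization is in progress.
   A real starter would typically carve memory from a static pool. */
static void *starter_alloc(size_t sz)
{
  return malloc(sz);
}

static void one_time_init(void)
{
  alloc_hook = starter_alloc;   /* requests made during init use the starter */
  /* ... one-time setup that may itself call my_alloc() ... */
  alloc_hook = NULL;            /* restore the normal path */
  initialized = 1;
}

static void *my_alloc(size_t sz)
{
  if (alloc_hook != NULL)       /* initialization still in progress */
    return (*alloc_hook)(sz);
  if (!initialized)
    one_time_init();
  return malloc(sz);
}

int main(void)
{
  void *p = my_alloc(32);
  printf("allocated %p\n", p);
  free(p);
  return 0;
}
#endif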
4340 malloc_starter(size_t sz, const Void_t *caller)
4342 malloc_starter(sz, caller) size_t sz; const Void_t *caller;
4345   mchunkptr victim = chunk_alloc(&main_arena, request2size(sz));
4347   return victim ? chunk2mem(victim) : 0;
4352 free_starter(Void_t* mem, const Void_t *caller)
4354 free_starter(mem, caller) Void_t* mem; const Void_t *caller;
4362   if (chunk_is_mmapped(p)) {
4367   chunk_free(&main_arena, p);
4370 /* The following hooks are used while the `atfork' handling mechanism
4375 malloc_atfork (size_t sz, const Void_t *caller)
4377 malloc_atfork(sz, caller) size_t sz; const Void_t *caller;
4380   Void_t *vptr = NULL;
4382   tsd_getspecific(arena_key, vptr);
4384     mchunkptr victim = chunk_alloc(&main_arena, request2size(sz));
4385     return victim ? chunk2mem(victim) : 0;
4387 /* Suspend the thread until the `atfork' handlers have completed.
4388 By that time, the hooks will have been reset as well, so that
4389 mALLOc() can be used again. */
4390     (void)mutex_lock(&list_lock);
4391     (void)mutex_unlock(&list_lock);
4398 free_atfork(Void_t* mem, const Void_t *caller)
4400 free_atfork(mem, caller) Void_t* mem; const Void_t *caller;
4403   Void_t *vptr = NULL;
4405   mchunkptr p;                              /* chunk corresponding to mem */
4407   if (mem == 0)                              /* free(0) has no effect */
4413   if (chunk_is_mmapped(p))                   /* release mmapped memory. */
4420   ar_ptr = arena_for_ptr(p);
4421   tsd_getspecific(arena_key, vptr);
4423     (void)mutex_lock(&ar_ptr->mutex);
4424   chunk_free(ar_ptr, p);
4426     (void)mutex_unlock(&ar_ptr->mutex);
4429 #endif /* defined _LIBC || defined MALLOC_HOOKS */
4434 weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
4435 weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
4436 weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
4437 weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
4438 weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
4439 weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
4440 weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
4441 weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
4442 weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
4443 weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
4445 weak_alias (__malloc_stats, malloc_stats)
4446 weak_alias (__malloc_usable_size, malloc_usable_size)
4447 weak_alias (__malloc_trim, malloc_trim)
4448 weak_alias (__malloc_get_state, malloc_get_state)
4449 weak_alias (__malloc_set_state, malloc_set_state)
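/* The weak_alias lines above make the familiar public names (malloc, free,
   realloc, ...) weak aliases for the internal __libc_ and __malloc_ entry
   points, so an application can still interpose its own definitions.  A
   rough standalone approximation with GCC attributes follows; the names are
   hypothetical and the real weak_alias macro is defined elsewhere in glibc. */
#if 0
#include <stdio.h>

/* With GCC, `my_func' becomes a weak symbol resolving to `__my_func'
   unless some other object file provides a strong definition. */
int __my_func(int x) { return x + 1; }
extern __typeof (__my_func) my_func __attribute__ ((weak, alias ("__my_func")));

int main(void)
{
  printf("%d\n", my_func(41));   /* prints 42 */
  return 0;
}
#endif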
4456 V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4457 * Added malloc_get/set_state() (mainly for use in GNU emacs),
4458 using interface from Marcus Daniels
4459 * All parameters are now adjustable via environment variables
4461 V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4462 * Added debugging hooks
4463 * Fixed possible deadlock in realloc() when out of memory
4464 * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
4466 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4467 * Very minor updates from the released 2.6.4 version.
4468 * Trimmed include file down to exported data structures.
4469 * Changes from H.J. Lu for glibc-2.0.
4471 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4472 * Many changes for multiple threads
4473 * Introduced arenas and heaps
4475 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
4476 * Added pvalloc, as recommended by H.J. Liu
4477 * Added 64bit pointer support mainly from Wolfram Gloger
4478 * Added anonymously donated WIN32 sbrk emulation
4479 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4480 * malloc_extend_top: fix mask error that caused wastage after
4482 * Add linux mremap support code from HJ Liu
4484 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4485 * Integrated most documentation with the code.
4486 * Add support for mmap, with help from
4487 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4488 * Use last_remainder in more cases.
4489 * Pack bins using idea from colin@nyx10.cs.du.edu
4490 * Use ordered bins instead of best-fit threshold
4491 * Eliminate block-local decls to simplify tracing and debugging.
4492 * Support another case of realloc via move into top
4493 * Fix error occurring when initial sbrk_base not word-aligned.
4494 * Rely on page size for units instead of SBRK_UNIT to
4495 avoid surprises about sbrk alignment conventions.
4496 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4497 (raymond@es.ele.tue.nl) for the suggestion.
4498 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4499 * More precautions for cases where other routines call sbrk,
4500 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4501 * Added macros etc., allowing use in linux libc from
4502 H.J. Lu (hjl@gnu.ai.mit.edu)
4503 * Inverted this history list
4505 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
4506 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4507 * Removed all preallocation code since under current scheme
4508 the work required to undo bad preallocations exceeds
4509 the work saved in good cases for most test programs.
4510 * No longer use return list or unconsolidated bins since
4511 no scheme using them consistently outperforms those that don't
4512 given above changes.
4513 * Use best fit for very large chunks to prevent some worst-cases.
4514 * Added some support for debugging
4516 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
4517 * Removed footers when chunks are in use. Thanks to
4518 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4520 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
4521 * Added malloc_trim, with help from Wolfram Gloger
4522 (wmglo@Dent.MED.Uni-Muenchen.DE).
4524 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
4526 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
4527 * realloc: try to expand in both directions
4528 * malloc: swap order of clean-bin strategy;
4529 * realloc: only conditionally expand backwards
4530 * Try not to scavenge used bins
4531 * Use bin counts as a guide to preallocation
4532 * Occasionally bin return list chunks in first scan
4533 * Add a few optimizations from colin@nyx10.cs.du.edu
4535 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
4536 * faster bin computation & slightly different binning
4537 * merged all consolidations to one part of malloc proper
4538 (eliminating old malloc_find_space & malloc_clean_bin)
4539 * Scan 2 returns chunks (not just 1)
4540 * Propagate failure in realloc if malloc returns 0
4541 * Add stuff to allow compilation on non-ANSI compilers
4542 from kpv@research.att.com
4544 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
4545 * removed potential for odd address access in prev_chunk
4546 * removed dependency on getpagesize.h
4547 * misc cosmetics and a bit more internal documentation
4548 * anticosmetics: mangled names in macros to evade debugger strangeness
4549 * tested on sparc, hp-700, dec-mips, rs6000
4550 with gcc & native cc (hp, dec only) allowing
4551 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4553 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
4554 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4555 structure of old version, but most details differ.)