1 /* Malloc implementation for multiple threads without lock contention.
2 Copyright (C) 1996, 1997, 1998, 1999, 2000 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
22 /* $Id$
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
27 ftp://g.oswego.edu/pub/misc/malloc.c
29 Most of the original comments are reproduced in the code below.
31 * Why use this malloc?
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
48 * Synopsis of public routines
50 (Much fuller descriptions are contained in the program documentation below.)
52 ptmalloc_init();
53 Initialize global configuration. When compiled for multiple threads,
54 this function must be called once before any other function in the
55 package. It is not required otherwise. It is called automatically
 56       in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
57 malloc(size_t n);
58 Return a pointer to a newly allocated chunk of at least n bytes, or null
59 if no space is available.
60 free(Void_t* p);
61 Release the chunk of memory pointed to by p, or no effect if p is null.
62 realloc(Void_t* p, size_t n);
63 Return a pointer to a chunk of size n that contains the same data
64 as does chunk p up to the minimum of (n, p's size) bytes, or null
65 if no space is available. The returned pointer may or may not be
66 the same as p. If p is null, equivalent to malloc. Unless the
67 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
68 size argument of zero (re)allocates a minimum-sized chunk.
69 memalign(size_t alignment, size_t n);
70 Return a pointer to a newly allocated chunk of n bytes, aligned
71 in accord with the alignment argument, which must be a power of
72 two.
73 valloc(size_t n);
74 Equivalent to memalign(pagesize, n), where pagesize is the page
75 size of the system (or as near to this as can be figured out from
76 all the includes/defines below.)
77 pvalloc(size_t n);
78 Equivalent to valloc(minimum-page-that-holds(n)), that is,
79 round up n to nearest pagesize.
80 calloc(size_t unit, size_t quantity);
81 Returns a pointer to quantity * unit bytes, with all locations
82 set to zero.
83 cfree(Void_t* p);
84 Equivalent to free(p).
85 malloc_trim(size_t pad);
86 Release all but pad bytes of freed top-most memory back
87 to the system. Return 1 if successful, else 0.
88 malloc_usable_size(Void_t* p);
 89       Report the number of usable allocated bytes associated with allocated
90 chunk p. This may or may not report more bytes than were requested,
91 due to alignment and minimum size constraints.
92 malloc_stats();
93 Prints brief summary statistics on stderr.
94 mallinfo()
95 Returns (by copy) a struct containing various summary statistics.
96 mallopt(int parameter_number, int parameter_value)
97 Changes one of the tunable parameters described below. Returns
98 1 if successful in changing the parameter, else 0.
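
  A quick usage sketch of these routines (an editor-added illustration, not
  part of the original documentation; names and values are only examples):

    #include <stdlib.h>
    #include <malloc.h>

    int main(void)
    {
      char *buf = malloc(100), *tmp;        // at least 100 usable bytes
      if (buf == NULL) return 1;
      tmp = realloc(buf, 200);              // may move the block
      if (tmp == NULL) { free(buf); return 1; }
      buf = tmp;
      mallopt(M_MMAP_THRESHOLD, 64*1024);   // tune a parameter; returns 1 on success
      malloc_stats();                       // brief summary statistics on stderr
      free(buf);
      return 0;
    }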
100 * Vital statistics:
102 Alignment: 8-byte
103 8 byte alignment is currently hardwired into the design. This
104 seems to suffice for all current machines and C compilers.
106 Assumed pointer representation: 4 or 8 bytes
107 Code for 8-byte pointers is untested by me but has worked
 108       reliably for Wolfram Gloger, who contributed most of the
109 changes supporting this.
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
 119                           8-byte ptrs:  24/32 bytes (including 4/8 overhead)
121 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
122 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
138 Requests for sizes with a negative sign bit will return a
139 minimum-sized chunk.
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
143 Alignment demands, plus the minimum allocatable size restriction
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
145 more bytes will be allocated than were requested in malloc), with
146 two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
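
  (Editor's worked illustration: with 4-byte pointers, a 1-byte request is
  carried by the 16-byte minimum chunk, i.e. 15 bytes more than were
  requested -- the normal worst case quoted above.)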
153 * Limitations
155 Here are some features that are NOT currently supported
157 * No automated mechanism for fully checking that all accesses
158 to malloced memory stay within their bounds.
159 * No support for compaction.
161 * Synopsis of compile-time options:
163 People have reported using previous versions of this malloc on all
164 versions of Unix, sometimes by tweaking some of the defines
165 below. It has been tested most extensively on Solaris and
166 Linux. People have also reported adapting this malloc for use in
167 stand-alone embedded systems.
169 The implementation is in straight, hand-tuned ANSI C. Among other
170 consequences, it uses a lot of macros. Because of this, to be at
171 all usable, this code should be compiled using an optimizing compiler
172 (for example gcc -O2) that can simplify expressions and control
173 paths.
175 __STD_C (default: derived from C compiler defines)
176 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
177 a C compiler sufficiently close to ANSI to get away with it.
178 MALLOC_DEBUG (default: NOT defined)
179 Define to enable debugging. Adds fairly extensive assertion-based
180 checking to help track down memory errors, but noticeably slows down
181 execution.
182 MALLOC_HOOKS (default: NOT defined)
 183     Define to enable support for run-time replacement of the allocation
184 functions through user-defined `hooks'.
185 REALLOC_ZERO_BYTES_FREES (default: defined)
186 Define this if you think that realloc(p, 0) should be equivalent
187 to free(p). (The C standard requires this behaviour, therefore
188 it is the default.) Otherwise, since malloc returns a unique
189 pointer for malloc(0), so does realloc(p, 0).
190 HAVE_MEMCPY (default: defined)
191 Define if you are not otherwise using ANSI STD C, but still
192 have memcpy and memset in your C library and want to use them.
193 Otherwise, simple internal versions are supplied.
194 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
195 Define as 1 if you want the C library versions of memset and
196 memcpy called in realloc and calloc (otherwise macro versions are used).
197 At least on some platforms, the simple macro versions usually
198 outperform libc versions.
199 HAVE_MMAP (default: defined as 1)
200 Define to non-zero to optionally make malloc() use mmap() to
201 allocate very large blocks.
202 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
203 Define to non-zero to optionally make realloc() use mremap() to
204 reallocate very large blocks.
205 USE_ARENAS (default: the same as HAVE_MMAP)
206 Enable support for multiple arenas, allocated using mmap().
207 malloc_getpagesize (default: derived from system #includes)
208 Either a constant or routine call returning the system page size.
209 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
210 Optionally define if you are on a system with a /usr/include/malloc.h
211 that declares struct mallinfo. It is not at all necessary to
 212     define this even if you do, but doing so will ensure consistency.
213 INTERNAL_SIZE_T (default: size_t)
214 Define to a 32-bit type (probably `unsigned int') if you are on a
215 64-bit machine, yet do not want or need to allow malloc requests of
216 greater than 2^31 to be handled. This saves space, especially for
217 very small chunks.
218 _LIBC (default: NOT defined)
219 Defined only when compiled as part of the Linux libc/glibc.
220 Also note that there is some odd internal name-mangling via defines
221 (for example, internally, `malloc' is named `mALLOc') needed
222 when compiling in this case. These look funny but don't otherwise
223 affect anything.
224 LACKS_UNISTD_H (default: undefined)
225 Define this if your system does not have a <unistd.h>.
226 MORECORE (default: sbrk)
227 The name of the routine to call to obtain more memory from the system.
228 MORECORE_FAILURE (default: -1)
229 The value returned upon failure of MORECORE.
230 MORECORE_CLEARS (default 1)
231 The degree to which the routine mapped to MORECORE zeroes out
232 memory: never (0), only for newly allocated space (1) or always
233 (2). The distinction between (1) and (2) is necessary because on
234 some systems, if the application first decrements and then
235 increments the break value, the contents of the reallocated space
236 are unspecified.
237 DEFAULT_TRIM_THRESHOLD
238 DEFAULT_TOP_PAD
239 DEFAULT_MMAP_THRESHOLD
240 DEFAULT_MMAP_MAX
241 Default values of tunable parameters (described in detail below)
242 controlling interaction with host system routines (sbrk, mmap, etc).
243 These values may also be changed dynamically via mallopt(). The
244 preset defaults are those that give best performance for typical
245 programs/systems.
246 DEFAULT_CHECK_ACTION
247 When the standard debugging hooks are in place, and a pointer is
248 detected as corrupt, do nothing (0), print an error message (1),
249 or call abort() (2).
256 * Compile-time options for multiple threads:
258 USE_PTHREADS, USE_THR, USE_SPROC
259 Define one of these as 1 to select the thread interface:
260 POSIX threads, Solaris threads or SGI sproc's, respectively.
261 If none of these is defined as non-zero, you get a `normal'
262 malloc implementation which is not thread-safe. Support for
263 multiple threads requires HAVE_MMAP=1. As an exception, when
264 compiling for GNU libc, i.e. when _LIBC is defined, then none of
265 the USE_... symbols have to be defined.
267 HEAP_MIN_SIZE
268 HEAP_MAX_SIZE
269 When thread support is enabled, additional `heap's are created
270 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
271 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
272 of two for alignment reasons. HEAP_MAX_SIZE should be at least
273 twice as large as the mmap threshold.
274 THREAD_STATS
275 When this is defined as non-zero, some statistics on mutex locking
276 are computed.
283 /* Preliminaries */
285 #ifndef __STD_C
286 #if defined (__STDC__)
287 #define __STD_C 1
288 #else
289 #if __cplusplus
290 #define __STD_C 1
291 #else
292 #define __STD_C 0
293 #endif /*__cplusplus*/
294 #endif /*__STDC__*/
295 #endif /*__STD_C*/
297 #ifndef Void_t
298 #if __STD_C
299 #define Void_t void
300 #else
301 #define Void_t char
302 #endif
303 #endif /*Void_t*/
305 #if __STD_C
306 # include <stddef.h> /* for size_t */
307 # if defined _LIBC || defined MALLOC_HOOKS
308 # include <stdlib.h> /* for getenv(), abort() */
309 # endif
310 #else
311 # include <sys/types.h>
312 # if defined _LIBC || defined MALLOC_HOOKS
313 extern char* getenv();
314 # endif
315 #endif
317 #ifndef _LIBC
318 # define __secure_getenv(Str) getenv (Str)
319 #endif
321 /* Macros for handling mutexes and thread-specific data. This is
322 included early, because some thread-related header files (such as
323 pthread.h) should be included before any others. */
324 #include "thread-m.h"
326 #ifdef __cplusplus
327 extern "C" {
328 #endif
330 #include <errno.h>
331 #include <stdio.h> /* needed for malloc_stats */
335 Compile-time options
340 Debugging:
342 Because freed chunks may be overwritten with link fields, this
343 malloc will often die when freed memory is overwritten by user
344 programs. This can be very effective (albeit in an annoying way)
345 in helping track down dangling pointers.
347 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
348 enabled that will catch more memory errors. You probably won't be
349 able to make much sense of the actual assertion errors, but they
350 should help you locate incorrectly overwritten memory. The
351 checking is fairly extensive, and will slow down execution
352 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
353 attempt to check every non-mmapped allocated and free chunk in the
354 course of computing the summaries. (By nature, mmapped regions
355 cannot be checked very much automatically.)
357 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
358 this code. The assertions in the check routines spell out in more
359 detail the assumptions and invariants underlying the algorithms.
363 #if MALLOC_DEBUG
364 #include <assert.h>
365 #else
366 #define assert(x) ((void)0)
367 #endif
371 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
372 of chunk sizes. On a 64-bit machine, you can reduce malloc
373 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
374 at the expense of not being able to handle requests greater than
375 2^31. This limitation is hardly ever a concern; you are encouraged
376 to set this. However, the default version is the same as size_t.
379 #ifndef INTERNAL_SIZE_T
380 #define INTERNAL_SIZE_T size_t
381 #endif
384 REALLOC_ZERO_BYTES_FREES should be set if a call to realloc with
385 zero bytes should be the same as a call to free. The C standard
386 requires this. Otherwise, since this malloc returns a unique pointer
387 for malloc(0), so does realloc(p, 0).
391 #define REALLOC_ZERO_BYTES_FREES
395 HAVE_MEMCPY should be defined if you are not otherwise using
396 ANSI STD C, but still have memcpy and memset in your C library
397 and want to use them in calloc and realloc. Otherwise simple
398 macro versions are defined here.
400 USE_MEMCPY should be defined as 1 if you actually want to
401 have memset and memcpy called. People report that the macro
402 versions are often enough faster than libc versions on many
403 systems that it is better to use them.
407 #define HAVE_MEMCPY 1
409 #ifndef USE_MEMCPY
410 #ifdef HAVE_MEMCPY
411 #define USE_MEMCPY 1
412 #else
413 #define USE_MEMCPY 0
414 #endif
415 #endif
417 #if (__STD_C || defined(HAVE_MEMCPY))
419 #if __STD_C
420 void* memset(void*, int, size_t);
421 void* memcpy(void*, const void*, size_t);
422 #else
423 Void_t* memset();
424 Void_t* memcpy();
425 #endif
426 #endif
428 #if USE_MEMCPY
430 /* The following macros are only invoked with (2n+1)-multiples of
431 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
432 for fast inline execution when n is small. */
434 #define MALLOC_ZERO(charp, nbytes) \
435 do { \
436 INTERNAL_SIZE_T mzsz = (nbytes); \
437 if(mzsz <= 9*sizeof(mzsz)) { \
438 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
439 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
440 *mz++ = 0; \
441 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
442 *mz++ = 0; \
443 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
444 *mz++ = 0; }}} \
445 *mz++ = 0; \
446 *mz++ = 0; \
447 *mz = 0; \
448 } else memset((charp), 0, mzsz); \
449 } while(0)
451 #define MALLOC_COPY(dest,src,nbytes) \
452 do { \
453 INTERNAL_SIZE_T mcsz = (nbytes); \
454 if(mcsz <= 9*sizeof(mcsz)) { \
455 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
456 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
457 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
458 *mcdst++ = *mcsrc++; \
459 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
460 *mcdst++ = *mcsrc++; \
461 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
462 *mcdst++ = *mcsrc++; }}} \
463 *mcdst++ = *mcsrc++; \
464 *mcdst++ = *mcsrc++; \
465 *mcdst = *mcsrc ; \
466 } else memcpy(dest, src, mcsz); \
467 } while(0)
469 #else /* !USE_MEMCPY */
471 /* Use Duff's device for good zeroing/copying performance. */
473 #define MALLOC_ZERO(charp, nbytes) \
474 do { \
475 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
476 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
477 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
478 switch (mctmp) { \
479 case 0: for(;;) { *mzp++ = 0; \
480 case 7: *mzp++ = 0; \
481 case 6: *mzp++ = 0; \
482 case 5: *mzp++ = 0; \
483 case 4: *mzp++ = 0; \
484 case 3: *mzp++ = 0; \
485 case 2: *mzp++ = 0; \
486 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
488 } while(0)
490 #define MALLOC_COPY(dest,src,nbytes) \
491 do { \
492 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
493 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
494 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
495 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
496 switch (mctmp) { \
497 case 0: for(;;) { *mcdst++ = *mcsrc++; \
498 case 7: *mcdst++ = *mcsrc++; \
499 case 6: *mcdst++ = *mcsrc++; \
500 case 5: *mcdst++ = *mcsrc++; \
501 case 4: *mcdst++ = *mcsrc++; \
502 case 3: *mcdst++ = *mcsrc++; \
503 case 2: *mcdst++ = *mcsrc++; \
504 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
506 } while(0)
508 #endif
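
/* Editor's illustration (not part of the original source): the Duff's
   device pattern used in the !USE_MEMCPY macros above, written out as a
   stand-alone word-copy routine.  The function name is hypothetical; it
   only shows how the interleaved switch/do-while copies `nwords' words
   with the loop unrolled eight ways.

     #include <stddef.h>

     static void example_word_copy(long *dst, const long *src, size_t nwords)
     {
       size_t blocks;
       if (nwords == 0) return;
       blocks = (nwords + 7) / 8;     // number of passes through the unrolled body
       switch (nwords % 8) {          // jump into the middle of the first pass
       case 0: do { *dst++ = *src++;
       case 7:      *dst++ = *src++;
       case 6:      *dst++ = *src++;
       case 5:      *dst++ = *src++;
       case 4:      *dst++ = *src++;
       case 3:      *dst++ = *src++;
       case 2:      *dst++ = *src++;
       case 1:      *dst++ = *src++;
                  } while (--blocks > 0);
       }
     }
*/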
511 #ifndef LACKS_UNISTD_H
512 # include <unistd.h>
513 #endif
516 Define HAVE_MMAP to optionally make malloc() use mmap() to allocate
517 very large blocks. These will be returned to the operating system
518 immediately after a free(). HAVE_MMAP is also a prerequisite to
519 support multiple `arenas' (see USE_ARENAS below).
522 #ifndef HAVE_MMAP
523 # ifdef _POSIX_MAPPED_FILES
524 # define HAVE_MMAP 1
525 # endif
526 #endif
529 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
530 large blocks. This is currently only possible on Linux with
531 kernel versions newer than 1.3.77.
534 #ifndef HAVE_MREMAP
535 #define HAVE_MREMAP defined(__linux__) && !defined(__arm__)
536 #endif
538 /* Define USE_ARENAS to enable support for multiple `arenas'. These
539 are allocated using mmap(), are necessary for threads and
540 occasionally useful to overcome address space limitations affecting
541 sbrk(). */
543 #ifndef USE_ARENAS
544 #define USE_ARENAS HAVE_MMAP
545 #endif
547 #if HAVE_MMAP
549 #include <unistd.h>
550 #include <fcntl.h>
551 #include <sys/mman.h>
553 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
554 #define MAP_ANONYMOUS MAP_ANON
555 #endif
556 #if !defined(MAP_FAILED)
557 #define MAP_FAILED ((char*)-1)
558 #endif
560 #ifndef MAP_NORESERVE
561 # ifdef MAP_AUTORESRV
562 # define MAP_NORESERVE MAP_AUTORESRV
563 # else
564 # define MAP_NORESERVE 0
565 # endif
566 #endif
568 #endif /* HAVE_MMAP */
571 Access to system page size. To the extent possible, this malloc
572 manages memory from the system in page-size units.
574 The following mechanics for getpagesize were adapted from
575 bsd/gnu getpagesize.h
578 #ifndef malloc_getpagesize
579 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
580 # ifndef _SC_PAGE_SIZE
581 # define _SC_PAGE_SIZE _SC_PAGESIZE
582 # endif
583 # endif
584 # ifdef _SC_PAGE_SIZE
585 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
586 # else
587 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
588 extern size_t getpagesize();
589 # define malloc_getpagesize getpagesize()
590 # else
591 # include <sys/param.h>
592 # ifdef EXEC_PAGESIZE
593 # define malloc_getpagesize EXEC_PAGESIZE
594 # else
595 # ifdef NBPG
596 # ifndef CLSIZE
597 # define malloc_getpagesize NBPG
598 # else
599 # define malloc_getpagesize (NBPG * CLSIZE)
600 # endif
601 # else
602 # ifdef NBPC
603 # define malloc_getpagesize NBPC
604 # else
605 # ifdef PAGESIZE
606 # define malloc_getpagesize PAGESIZE
607 # else
608 # define malloc_getpagesize (4096) /* just guess */
609 # endif
610 # endif
611 # endif
612 # endif
613 # endif
614 # endif
615 #endif
621 This version of malloc supports the standard SVID/XPG mallinfo
622 routine that returns a struct containing the same kind of
623 information you can get from malloc_stats. It should work on
624 any SVID/XPG compliant system that has a /usr/include/malloc.h
625 defining struct mallinfo. (If you'd like to install such a thing
626 yourself, cut out the preliminary declarations as described above
627 and below and save them in a malloc.h file. But there's no
628 compelling reason to bother to do this.)
630 The main declaration needed is the mallinfo struct that is returned
 631     (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
632 bunch of fields, most of which are not even meaningful in this
 633     version of malloc.  Some of these fields are instead filled by
634 mallinfo() with other numbers that might possibly be of interest.
636 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
637 /usr/include/malloc.h file that includes a declaration of struct
638 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
639 version is declared below. These must be precisely the same for
640 mallinfo() to work.
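
/* Editor's illustration (not part of the original source): minimal use of
   mallinfo() and malloc_stats().  The field names are the standard
   SVID/XPG ones declared in malloc.h; as noted above, not every field is
   meaningful in this implementation.

     #include <stdio.h>
     #include <malloc.h>

     static void example_report(void)
     {
       struct mallinfo mi = mallinfo();
       printf("space allocated from system: %d\n", mi.arena);
       printf("bytes in use:                %d\n", mi.uordblks);
       printf("bytes in free chunks:        %d\n", mi.fordblks);
       malloc_stats();               // fuller per-arena summary on stderr
     }
*/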
644 /* #define HAVE_USR_INCLUDE_MALLOC_H */
646 #if HAVE_USR_INCLUDE_MALLOC_H
647 # include "/usr/include/malloc.h"
648 #else
649 # ifdef _LIBC
650 # include "malloc.h"
651 # else
652 # include "ptmalloc.h"
653 # endif
654 #endif
658 #ifndef DEFAULT_TRIM_THRESHOLD
659 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
660 #endif
663 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
664 to keep before releasing via malloc_trim in free().
666 Automatic trimming is mainly useful in long-lived programs.
667 Because trimming via sbrk can be slow on some systems, and can
668 sometimes be wasteful (in cases where programs immediately
669 afterward allocate more large chunks) the value should be high
670 enough so that your overall system performance would improve by
671 releasing.
673 The trim threshold and the mmap control parameters (see below)
674 can be traded off with one another. Trimming and mmapping are
675 two different ways of releasing unused memory back to the
676 system. Between these two, it is often possible to keep
677 system-level demands of a long-lived program down to a bare
678 minimum. For example, in one test suite of sessions measuring
679 the XF86 X server on Linux, using a trim threshold of 128K and a
680 mmap threshold of 192K led to near-minimal long term resource
681 consumption.
683 If you are using this malloc in a long-lived program, it should
684 pay to experiment with these values. As a rough guide, you
685 might set to a value close to the average size of a process
686 (program) running on your system. Releasing this much memory
687 would allow such a process to run in memory. Generally, it's
688 worth it to tune for trimming rather than memory mapping when a
689 program undergoes phases where several large chunks are
690 allocated and released in ways that can reuse each other's
691 storage, perhaps mixed with phases where there are no such
692 chunks at all. And in well-behaved long-lived programs,
693 controlling release of large blocks via trimming versus mapping
694 is usually faster.
696 However, in most programs, these parameters serve mainly as
697 protection against the system-level effects of carrying around
698 massive amounts of unneeded memory. Since frequent calls to
699 sbrk, mmap, and munmap otherwise degrade performance, the default
700 parameters are set to relatively high values that serve only as
701 safeguards.
703 The default trim value is high enough to cause trimming only in
704 fairly extreme (by current memory consumption standards) cases.
705 It must be greater than page size to have any useful effect. To
706 disable trimming completely, you can set to (unsigned long)(-1);
712 #ifndef DEFAULT_TOP_PAD
713 #define DEFAULT_TOP_PAD (0)
714 #endif
717 M_TOP_PAD is the amount of extra `padding' space to allocate or
718 retain whenever sbrk is called. It is used in two ways internally:
720 * When sbrk is called to extend the top of the arena to satisfy
721 a new malloc request, this much padding is added to the sbrk
722 request.
724 * When malloc_trim is called automatically from free(),
725 it is used as the `pad' argument.
727 In both cases, the actual amount of padding is rounded
728 so that the end of the arena is always a system page boundary.
730 The main reason for using padding is to avoid calling sbrk so
731 often. Having even a small pad greatly reduces the likelihood
732 that nearly every malloc request during program start-up (or
733 after trimming) will invoke sbrk, which needlessly wastes
734 time.
736 Automatic rounding-up to page-size units is normally sufficient
737 to avoid measurable overhead, so the default is 0. However, in
738 systems where sbrk is relatively slow, it can pay to increase
739 this value, at the expense of carrying around more memory than
740 the program needs.
745 #ifndef DEFAULT_MMAP_THRESHOLD
746 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
747 #endif
751 M_MMAP_THRESHOLD is the request size threshold for using mmap()
752 to service a request. Requests of at least this size that cannot
753 be allocated using already-existing space will be serviced via mmap.
754 (If enough normal freed space already exists it is used instead.)
756 Using mmap segregates relatively large chunks of memory so that
757 they can be individually obtained and released from the host
758 system. A request serviced through mmap is never reused by any
759 other request (at least not directly; the system may just so
760 happen to remap successive requests to the same locations).
762 Segregating space in this way has the benefit that mmapped space
763 can ALWAYS be individually released back to the system, which
764 helps keep the system level memory demands of a long-lived
765 program low. Mapped memory can never become `locked' between
766 other chunks, as can happen with normally allocated chunks, which
 767       means that even trimming via malloc_trim would not release them.
769 However, it has the disadvantages that:
771 1. The space cannot be reclaimed, consolidated, and then
772 used to service later requests, as happens with normal chunks.
773 2. It can lead to more wastage because of mmap page alignment
774 requirements
775 3. It causes malloc performance to be more dependent on host
776 system memory management support routines which may vary in
777 implementation quality and may impose arbitrary
778 limitations. Generally, servicing a request via normal
779 malloc steps is faster than going through a system's mmap.
781 All together, these considerations should lead you to use mmap
782 only for relatively large requests.
789 #ifndef DEFAULT_MMAP_MAX
790 #if HAVE_MMAP
791 #define DEFAULT_MMAP_MAX (1024)
792 #else
793 #define DEFAULT_MMAP_MAX (0)
794 #endif
795 #endif
798 M_MMAP_MAX is the maximum number of requests to simultaneously
799 service using mmap. This parameter exists because:
801 1. Some systems have a limited number of internal tables for
802 use by mmap.
803 2. In most systems, overreliance on mmap can degrade overall
804 performance.
805 3. If a program allocates many large regions, it is probably
806 better off using normal sbrk-based allocation routines that
807 can reclaim and reallocate normal heap memory. Using a
808 small value allows transition into this mode after the
809 first few allocations.
811 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
812 the default value is 0, and attempts to set it to non-zero values
813 in mallopt will fail.
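
/* Editor's illustration (not part of the original source): tuning the
   parameters described above at run time through mallopt().  The numeric
   values are only examples; each call returns 1 on success and 0 if the
   parameter or value is rejected.

     #include <malloc.h>

     static void example_tune(void)
     {
       mallopt(M_TRIM_THRESHOLD, 256*1024);  // keep up to 256K of free top memory
       mallopt(M_TOP_PAD,        64*1024);   // request 64K of extra sbrk padding
       mallopt(M_MMAP_THRESHOLD, 256*1024);  // use mmap only for requests >= 256K
       mallopt(M_MMAP_MAX,       64);        // at most 64 simultaneous mmapped chunks
     }
*/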
818 #ifndef DEFAULT_CHECK_ACTION
819 #define DEFAULT_CHECK_ACTION 1
820 #endif
822 /* What to do if the standard debugging hooks are in place and a
823 corrupt pointer is detected: do nothing (0), print an error message
824 (1), or call abort() (2). */
828 #define HEAP_MIN_SIZE (32*1024)
829 #define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
831 /* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
832 that are dynamically created for multi-threaded programs. The
833 maximum size must be a power of two, for fast determination of
834 which heap belongs to a chunk. It should be much larger than
835 the mmap threshold, so that requests with a size just below that
836 threshold can be fulfilled without creating too many heaps.
841 #ifndef THREAD_STATS
842 #define THREAD_STATS 0
843 #endif
845 /* If THREAD_STATS is non-zero, some statistics on mutex locking are
846 computed. */
849 /* Macro to set errno. */
850 #ifndef __set_errno
851 # define __set_errno(val) errno = (val)
852 #endif
854 /* On some platforms we can compile internal, not exported functions better.
855 Let the environment provide a macro and define it to be empty if it
856 is not available. */
857 #ifndef internal_function
858 # define internal_function
859 #endif
864 Special defines for the Linux/GNU C library.
869 #ifdef _LIBC
871 #if __STD_C
873 Void_t * __default_morecore (ptrdiff_t);
874 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
876 #else
878 Void_t * __default_morecore ();
879 Void_t *(*__morecore)() = __default_morecore;
881 #endif
883 #define MORECORE (*__morecore)
884 #define MORECORE_FAILURE 0
886 #ifndef MORECORE_CLEARS
887 #define MORECORE_CLEARS 1
888 #endif
890 static size_t __libc_pagesize;
892 #define mmap __mmap
893 #define munmap __munmap
894 #define mremap __mremap
895 #define mprotect __mprotect
896 #undef malloc_getpagesize
897 #define malloc_getpagesize __libc_pagesize
899 #else /* _LIBC */
901 #if __STD_C
902 extern Void_t* sbrk(ptrdiff_t);
903 #else
904 extern Void_t* sbrk();
905 #endif
907 #ifndef MORECORE
908 #define MORECORE sbrk
909 #endif
911 #ifndef MORECORE_FAILURE
912 #define MORECORE_FAILURE -1
913 #endif
915 #ifndef MORECORE_CLEARS
916 #define MORECORE_CLEARS 1
917 #endif
919 #endif /* _LIBC */
921 #ifdef _LIBC
923 #define cALLOc __libc_calloc
924 #define fREe __libc_free
925 #define mALLOc __libc_malloc
926 #define mEMALIGn __libc_memalign
927 #define rEALLOc __libc_realloc
928 #define vALLOc __libc_valloc
929 #define pvALLOc __libc_pvalloc
930 #define mALLINFo __libc_mallinfo
931 #define mALLOPt __libc_mallopt
932 #define mALLOC_STATs __malloc_stats
933 #define mALLOC_USABLE_SIZe __malloc_usable_size
934 #define mALLOC_TRIm __malloc_trim
935 #define mALLOC_GET_STATe __malloc_get_state
936 #define mALLOC_SET_STATe __malloc_set_state
938 #else
940 #define cALLOc calloc
941 #define fREe free
942 #define mALLOc malloc
943 #define mEMALIGn memalign
944 #define rEALLOc realloc
945 #define vALLOc valloc
946 #define pvALLOc pvalloc
947 #define mALLINFo mallinfo
948 #define mALLOPt mallopt
949 #define mALLOC_STATs malloc_stats
950 #define mALLOC_USABLE_SIZe malloc_usable_size
951 #define mALLOC_TRIm malloc_trim
952 #define mALLOC_GET_STATe malloc_get_state
953 #define mALLOC_SET_STATe malloc_set_state
955 #endif
957 /* Public routines */
959 #if __STD_C
961 #ifndef _LIBC
962 void ptmalloc_init(void);
963 #endif
964 Void_t* mALLOc(size_t);
965 void fREe(Void_t*);
966 Void_t* rEALLOc(Void_t*, size_t);
967 Void_t* mEMALIGn(size_t, size_t);
968 Void_t* vALLOc(size_t);
969 Void_t* pvALLOc(size_t);
970 Void_t* cALLOc(size_t, size_t);
971 void cfree(Void_t*);
972 int mALLOC_TRIm(size_t);
973 size_t mALLOC_USABLE_SIZe(Void_t*);
974 void mALLOC_STATs(void);
975 int mALLOPt(int, int);
976 struct mallinfo mALLINFo(void);
977 Void_t* mALLOC_GET_STATe(void);
978 int mALLOC_SET_STATe(Void_t*);
980 #else /* !__STD_C */
982 #ifndef _LIBC
983 void ptmalloc_init();
984 #endif
985 Void_t* mALLOc();
986 void fREe();
987 Void_t* rEALLOc();
988 Void_t* mEMALIGn();
989 Void_t* vALLOc();
990 Void_t* pvALLOc();
991 Void_t* cALLOc();
992 void cfree();
993 int mALLOC_TRIm();
994 size_t mALLOC_USABLE_SIZe();
995 void mALLOC_STATs();
996 int mALLOPt();
997 struct mallinfo mALLINFo();
998 Void_t* mALLOC_GET_STATe();
999 int mALLOC_SET_STATe();
1001 #endif /* __STD_C */
1004 #ifdef __cplusplus
1005 } /* end of extern "C" */
1006 #endif
1008 #if !defined(NO_THREADS) && !HAVE_MMAP
1009 "Can't have threads support without mmap"
1010 #endif
1011 #if USE_ARENAS && !HAVE_MMAP
1012 "Can't have multiple arenas without mmap"
1013 #endif
1017 Type declarations
1021 struct malloc_chunk
1023 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1024 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1025 struct malloc_chunk* fd; /* double links -- used only if free. */
1026 struct malloc_chunk* bk;
1029 typedef struct malloc_chunk* mchunkptr;
1033 malloc_chunk details:
1035 (The following includes lightly edited explanations by Colin Plumb.)
1037 Chunks of memory are maintained using a `boundary tag' method as
1038 described in e.g., Knuth or Standish. (See the paper by Paul
1039 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1040 survey of such techniques.) Sizes of free chunks are stored both
1041 in the front of each chunk and at the end. This makes
1042 consolidating fragmented chunks into bigger chunks very fast. The
1043 size fields also hold bits representing whether chunks are free or
1044 in use.
1046 An allocated chunk looks like this:
1049 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1050 | Size of previous chunk, if allocated | |
1051 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1052 | Size of chunk, in bytes |P|
1053 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1054 | User data starts here... .
1056 . (malloc_usable_space() bytes) .
1058 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1059 | Size of chunk |
1060 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1063 Where "chunk" is the front of the chunk for the purpose of most of
1064 the malloc code, but "mem" is the pointer that is returned to the
1065 user. "Nextchunk" is the beginning of the next contiguous chunk.
1067 Chunks always begin on even word boundaries, so the mem portion
1068 (which is returned to the user) is also on an even word boundary, and
1069 thus double-word aligned.
1071 Free chunks are stored in circular doubly-linked lists, and look like this:
1073 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1074 | Size of previous chunk |
1075 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1076 `head:' | Size of chunk, in bytes |P|
1077 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1078 | Forward pointer to next chunk in list |
1079 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1080 | Back pointer to previous chunk in list |
1081 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1082 | Unused space (may be 0 bytes long) .
1085 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1086 `foot:' | Size of chunk, in bytes |
1087 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1089 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1090 chunk size (which is always a multiple of two words), is an in-use
1091 bit for the *previous* chunk. If that bit is *clear*, then the
1092 word before the current chunk size contains the previous chunk
1093 size, and can be used to find the front of the previous chunk.
1094 (The very first chunk allocated always has this bit set,
1095 preventing access to non-existent (or non-owned) memory.)
1097 Note that the `foot' of the current chunk is actually represented
1098 as the prev_size of the NEXT chunk. (This makes it easier to
1099 deal with alignments etc).
1101 The two exceptions to all this are
1103 1. The special chunk `top', which doesn't bother using the
1104 trailing size field since there is no
1105 next contiguous chunk that would have to index off it. (After
1106 initialization, `top' is forced to always exist. If it would
1107 become less than MINSIZE bytes long, it is replenished via
1108 malloc_extend_top.)
1110 2. Chunks allocated via mmap, which have the second-lowest-order
1111 bit (IS_MMAPPED) set in their size fields. Because they are
1112 never merged or traversed from any other chunk, they have no
1113 foot size or inuse information.
1115 Available chunks are kept in any of several places (all declared below):
1117 * `av': An array of chunks serving as bin headers for consolidated
1118 chunks. Each bin is doubly linked. The bins are approximately
1119 proportionally (log) spaced. There are a lot of these bins
1120 (128). This may look excessive, but works very well in
1121 practice. All procedures maintain the invariant that no
1122 consolidated chunk physically borders another one. Chunks in
1123 bins are kept in size order, with ties going to the
1124 approximately least recently used chunk.
1126 The chunks in each bin are maintained in decreasing sorted order by
1127 size. This is irrelevant for the small bins, which all contain
1128 the same-sized chunks, but facilitates best-fit allocation for
1129 larger chunks. (These lists are just sequential. Keeping them in
1130 order almost never requires enough traversal to warrant using
1131 fancier ordered data structures.) Chunks of the same size are
1132 linked with the most recently freed at the front, and allocations
1133 are taken from the back. This results in LRU or FIFO allocation
1134 order, which tends to give each chunk an equal opportunity to be
1135 consolidated with adjacent freed chunks, resulting in larger free
1136 chunks and less fragmentation.
1138 * `top': The top-most available chunk (i.e., the one bordering the
1139 end of available memory) is treated specially. It is never
1140 included in any bin, is used only if no other chunk is
1141 available, and is released back to the system if it is very
1142 large (see M_TRIM_THRESHOLD).
1144 * `last_remainder': A bin holding only the remainder of the
1145 most recently split (non-top) chunk. This bin is checked
1146 before other non-fitting chunks, so as to provide better
1147 locality for runs of sequentially allocated chunks.
1149 * Implicitly, through the host system's memory mapping tables.
1150 If supported, requests greater than a threshold are usually
1151 serviced via calls to mmap, and then later released via munmap.
1156 Bins
1158 The bins are an array of pairs of pointers serving as the
1159 heads of (initially empty) doubly-linked lists of chunks, laid out
1160 in a way so that each pair can be treated as if it were in a
1161 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1162 and chunks are the same).
1164 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1165 8 bytes apart. Larger bins are approximately logarithmically
1166 spaced. (See the table below.)
1168 Bin layout:
1170 64 bins of size 8
1171 32 bins of size 64
1172 16 bins of size 512
1173 8 bins of size 4096
1174 4 bins of size 32768
1175 2 bins of size 262144
1176 1 bin of size what's left
1178 There is actually a little bit of slop in the numbers in bin_index
1179 for the sake of speed. This makes no difference elsewhere.
1181 The special chunks `top' and `last_remainder' get their own bins,
1182 (this is implemented via yet more trickery with the av array),
1183 although `top' is never properly linked to its bin since it is
1184 always handled specially.
1188 #define NAV 128 /* number of bins */
1190 typedef struct malloc_chunk* mbinptr;
1192 /* An arena is a configuration of malloc_chunks together with an array
1193 of bins. With multiple threads, it must be locked via a mutex
1194 before changing its data structures. One or more `heaps' are
1195 associated with each arena, except for the main_arena, which is
1196 associated only with the `main heap', i.e. the conventional free
1197 store obtained with calls to MORECORE() (usually sbrk). The `av'
1198 array is never mentioned directly in the code, but instead used via
1199 bin access macros. */
1201 typedef struct _arena {
1202 mbinptr av[2*NAV + 2];
1203 struct _arena *next;
1204 size_t size;
1205 #if THREAD_STATS
1206 long stat_lock_direct, stat_lock_loop, stat_lock_wait;
1207 #endif
1208 mutex_t mutex;
1209 } arena;
1212 /* A heap is a single contiguous memory region holding (coalesceable)
1213 malloc_chunks. It is allocated with mmap() and always starts at an
1214 address aligned to HEAP_MAX_SIZE. Not used unless compiling with
1215 USE_ARENAS. */
1217 typedef struct _heap_info {
1218 arena *ar_ptr; /* Arena for this heap. */
1219 struct _heap_info *prev; /* Previous heap. */
1220 size_t size; /* Current size in bytes. */
1221 size_t pad; /* Make sure the following data is properly aligned. */
1222 } heap_info;
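
/* Editor's note (illustration, not part of the original source): because
   every mmap()ed heap starts at an address aligned to HEAP_MAX_SIZE, a
   power of two, the heap_info descriptor owning any chunk in such a heap
   can be found by masking off the low-order address bits, roughly:

     #define example_heap_for_ptr(ptr) \
       ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE - 1)))

   and the owning arena is then example_heap_for_ptr(ptr)->ar_ptr.  (The
   real code uses an equivalent internal macro; the name above is made up
   for this sketch.) */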
1226 Static functions (forward declarations)
1229 #if __STD_C
1231 static void chunk_free(arena *ar_ptr, mchunkptr p) internal_function;
1232 static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size)
1233 internal_function;
1234 static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
1235 INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb)
1236 internal_function;
1237 static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
1238 size_t alignment) internal_function;
1239 static int main_trim(size_t pad) internal_function;
1240 #if USE_ARENAS
1241 static int heap_trim(heap_info *heap, size_t pad) internal_function;
1242 #endif
1243 #if defined _LIBC || defined MALLOC_HOOKS
1244 static Void_t* malloc_check(size_t sz, const Void_t *caller);
1245 static void free_check(Void_t* mem, const Void_t *caller);
1246 static Void_t* realloc_check(Void_t* oldmem, size_t bytes,
1247 const Void_t *caller);
1248 static Void_t* memalign_check(size_t alignment, size_t bytes,
1249 const Void_t *caller);
1250 #ifndef NO_THREADS
1251 static Void_t* malloc_starter(size_t sz, const Void_t *caller);
1252 static void free_starter(Void_t* mem, const Void_t *caller);
1253 static Void_t* malloc_atfork(size_t sz, const Void_t *caller);
1254 static void free_atfork(Void_t* mem, const Void_t *caller);
1255 #endif
1256 #endif
1258 #else
1260 static void chunk_free();
1261 static mchunkptr chunk_alloc();
1262 static mchunkptr chunk_realloc();
1263 static mchunkptr chunk_align();
1264 static int main_trim();
1265 #if USE_ARENAS
1266 static int heap_trim();
1267 #endif
1268 #if defined _LIBC || defined MALLOC_HOOKS
1269 static Void_t* malloc_check();
1270 static void free_check();
1271 static Void_t* realloc_check();
1272 static Void_t* memalign_check();
1273 #ifndef NO_THREADS
1274 static Void_t* malloc_starter();
1275 static void free_starter();
1276 static Void_t* malloc_atfork();
1277 static void free_atfork();
1278 #endif
1279 #endif
1281 #endif
1285 /* sizes, alignments */
1287 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1288 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1289 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1290 #define MINSIZE (sizeof(struct malloc_chunk))
1292 /* conversion from malloc headers to user pointers, and back */
1294 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1295 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1297 /* pad request bytes into a usable size, return non-zero on overflow */
1299 #define request2size(req, nb) \
1300 ((nb = (req) + (SIZE_SZ + MALLOC_ALIGN_MASK)),\
1301 ((long)nb <= 0 || nb < (INTERNAL_SIZE_T) (req) \
1302 ? (__set_errno (ENOMEM), 1) \
1303 : ((nb < (MINSIZE + MALLOC_ALIGN_MASK) \
1304 ? (nb = MINSIZE) : (nb &= ~MALLOC_ALIGN_MASK)), 0)))
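
/* Editor's worked example for request2size (not part of the original
   source), assuming 4-byte INTERNAL_SIZE_T so that SIZE_SZ == 4,
   MALLOC_ALIGN_MASK == 7 and MINSIZE == 16:

     request2size(20, nb):  nb = 20 + 4 + 7 = 31;  31 >= MINSIZE + 7,
                            so nb &= ~7 gives nb == 24; result 0 (no overflow).
     request2size(2, nb):   nb = 2 + 4 + 7 = 13 < MINSIZE + 7,
                            so nb = MINSIZE == 16; result 0.

   A request close to the top of the address range makes nb wrap or go
   negative, so the macro sets errno to ENOMEM and yields 1 instead. */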
1306 /* Check if m has acceptable alignment */
1308 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1314 Physical chunk operations
1318 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1320 #define PREV_INUSE 0x1UL
1322 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1324 #define IS_MMAPPED 0x2UL
1326 /* Bits to mask off when extracting size */
1328 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1331 /* Ptr to next physical malloc_chunk. */
1333 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1335 /* Ptr to previous physical malloc_chunk */
1337 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1340 /* Treat space at ptr + offset as a chunk */
1342 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1348 Dealing with use bits
1351 /* extract p's inuse bit */
1353 #define inuse(p) \
1354 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1356 /* extract inuse bit of previous chunk */
1358 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1360 /* check for mmap()'ed chunk */
1362 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1364 /* set/clear chunk as in use without otherwise disturbing */
1366 #define set_inuse(p) \
1367 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1369 #define clear_inuse(p) \
1370 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1372 /* check/set/clear inuse bits in known places */
1374 #define inuse_bit_at_offset(p, s)\
1375 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1377 #define set_inuse_bit_at_offset(p, s)\
1378 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1380 #define clear_inuse_bit_at_offset(p, s)\
1381 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1387 Dealing with size fields
1390 /* Get size, ignoring use bits */
1392 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1394 /* Set size at head, without disturbing its use bit */
1396 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1398 /* Set size/use ignoring previous bits in header */
1400 #define set_head(p, s) ((p)->size = (s))
1402 /* Set size at footer (only when chunk is not in use) */
1404 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
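
/* Editor's illustration (not part of the original source): how the macros
   above cooperate when a chunk p is freed and coalesced with its physical
   neighbours (a sketch of the idea only, not the actual chunk_free code):

     mchunkptr next, prev;
     int prev_free, next_free;

     next = next_chunk(p);                         // p + chunksize(p)
     prev_free = !prev_inuse(p);                   // PREV_INUSE bit in p's own size field
     next_free = !inuse_bit_at_offset(next, chunksize(next));
     prev = prev_free ? prev_chunk(p) : (mchunkptr) 0;   // located via p->prev_size

   A free chunk that absorbs a neighbour gets its new length recorded both
   at its head (set_head_size) and at its foot (set_foot), so that a later
   coalescing pass can find the chunk from either side. */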
1410 /* access macros */
1412 #define bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ))
1413 #define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i))
1414 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1415 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1418 The first 2 bins are never indexed. The corresponding av cells are instead
1419 used for bookkeeping. This is not to save space, but to simplify
1420 indexing, maintain locality, and avoid some initialization tests.
1423 #define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1424 #define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1425 #define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1428 Because top initially points to its own bin with initial
1429 zero size, thus forcing extension on the first malloc request,
1430 we avoid having any special code in malloc to check whether
 1431    it even exists yet.  But we still need to check in malloc_extend_top.
1434 #define initial_top(a) ((mchunkptr)bin_at(a, 0))
1438 /* field-extraction macros */
1440 #define first(b) ((b)->fd)
1441 #define last(b) ((b)->bk)
1444 Indexing into bins
1447 #define bin_index(sz) \
1448 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3):\
1449 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6):\
1450 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9):\
1451 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12):\
1452 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15):\
1453 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
1454 126)
1456 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1457 identically sized chunks. This is exploited in malloc.
1460 #define MAX_SMALLBIN 63
1461 #define MAX_SMALLBIN_SIZE 512
1462 #define SMALLBIN_WIDTH 8
1464 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1467 Requests are `small' if both the corresponding and the next bin are small
1470 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
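
/* Editor's worked examples for bin_index (not part of the original source):

     sz =    40:     40 >> 9 ==  0          ->  index   40 >> 3           ==   5
     sz =   600:    600 >> 9 ==  1 (<=  4)  ->  index  56 + (600 >> 6)    ==  65
     sz = 20000:  20000 >> 9 == 39 (<= 84)  ->  index 110 + (20000 >> 12) == 114

   Small requests (below MAX_SMALLBIN_SIZE) therefore map to a bin holding
   exactly one chunk size, while larger requests share logarithmically
   spaced bins. */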
1475 To help compensate for the large number of bins, a one-level index
1476 structure is used for bin-by-bin searching. `binblocks' is a
1477 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1478 have any (possibly) non-empty bins, so they can be skipped over
 1479    all at once during traversals.  The bits are NOT always
1480 cleared as soon as all bins in a block are empty, but instead only
1481 when all are noticed to be empty during traversal in malloc.
1484 #define BINBLOCKWIDTH 4 /* bins per block */
1486 /* bin<->block macros */
1488 #define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1489 #define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1490 #define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
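
/* Editor's example (not part of the original source): with BINBLOCKWIDTH
   == 4, bin index 37 belongs to block 37/4 == 9, so idx2binblock(37) is
   (1 << 9); mark_binblock() sets that bit in binblocks(a), and a malloc
   traversal can skip any block whose bit is clear, i.e. whose four bins
   are all known to be empty. */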
1495 /* Static bookkeeping data */
1497 /* Helper macro to initialize bins */
1498 #define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i)
1500 static arena main_arena = {
1502 0, 0,
1503 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1504 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1505 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1506 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1507 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1508 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1509 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1510 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1511 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1512 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1513 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1514 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1515 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1516 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1517 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1518 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1520 &main_arena, /* next */
1521 0, /* size */
1522 #if THREAD_STATS
1523 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
1524 #endif
1525 MUTEX_INITIALIZER /* mutex */
1528 #undef IAV
1530 /* Thread specific data */
1532 static tsd_key_t arena_key;
1533 static mutex_t list_lock = MUTEX_INITIALIZER;
1535 #if THREAD_STATS
1536 static int stat_n_heaps = 0;
1537 #define THREAD_STAT(x) x
1538 #else
1539 #define THREAD_STAT(x) do ; while(0)
1540 #endif
1542 /* variables holding tunable values */
1544 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1545 static unsigned long top_pad = DEFAULT_TOP_PAD;
1546 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1547 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1548 static int check_action = DEFAULT_CHECK_ACTION;
1550 /* The first value returned from sbrk */
1551 static char* sbrk_base = (char*)(-1);
1553 /* The maximum memory obtained from system via sbrk */
1554 static unsigned long max_sbrked_mem = 0;
1556 /* The maximum via either sbrk or mmap (too difficult to track with threads) */
1557 #ifdef NO_THREADS
1558 static unsigned long max_total_mem = 0;
1559 #endif
1561 /* The total memory obtained from system via sbrk */
1562 #define sbrked_mem (main_arena.size)
1564 /* Tracking mmaps */
1566 static unsigned int n_mmaps = 0;
1567 static unsigned int max_n_mmaps = 0;
1568 static unsigned long mmapped_mem = 0;
1569 static unsigned long max_mmapped_mem = 0;
1571 /* Mapped memory in non-main arenas (reliable only for NO_THREADS). */
1572 static unsigned long arena_mem = 0;
1576 #ifndef _LIBC
1577 #define weak_variable
1578 #else
1579 /* In GNU libc we want the hook variables to be weak definitions to
1580 avoid a problem with Emacs. */
1581 #define weak_variable weak_function
1582 #endif
1584 /* Already initialized? */
1585 int __malloc_initialized = -1;
1588 #ifndef NO_THREADS
1590 /* The following two functions are registered via thread_atfork() to
1591 make sure that the mutexes remain in a consistent state in the
1592 fork()ed version of a thread. Also adapt the malloc and free hooks
1593 temporarily, because the `atfork' handler mechanism may use
1594 malloc/free internally (e.g. in LinuxThreads). */
1596 #if defined _LIBC || defined MALLOC_HOOKS
1597 static __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size,
1598 const __malloc_ptr_t));
1599 static void (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1600 const __malloc_ptr_t));
1601 static Void_t* save_arena;
1602 #endif
1604 static void
1605 ptmalloc_lock_all __MALLOC_P((void))
1607 arena *ar_ptr;
1609 (void)mutex_lock(&list_lock);
1610 for(ar_ptr = &main_arena;;) {
1611 (void)mutex_lock(&ar_ptr->mutex);
1612 ar_ptr = ar_ptr->next;
1613 if(ar_ptr == &main_arena) break;
1615 #if defined _LIBC || defined MALLOC_HOOKS
1616 save_malloc_hook = __malloc_hook;
1617 save_free_hook = __free_hook;
1618 __malloc_hook = malloc_atfork;
1619 __free_hook = free_atfork;
1620 /* Only the current thread may perform malloc/free calls now. */
1621 tsd_getspecific(arena_key, save_arena);
1622 tsd_setspecific(arena_key, (Void_t*)0);
1623 #endif
1626 static void
1627 ptmalloc_unlock_all __MALLOC_P((void))
1629 arena *ar_ptr;
1631 #if defined _LIBC || defined MALLOC_HOOKS
1632 tsd_setspecific(arena_key, save_arena);
1633 __malloc_hook = save_malloc_hook;
1634 __free_hook = save_free_hook;
1635 #endif
1636 for(ar_ptr = &main_arena;;) {
1637 (void)mutex_unlock(&ar_ptr->mutex);
1638 ar_ptr = ar_ptr->next;
1639 if(ar_ptr == &main_arena) break;
1641 (void)mutex_unlock(&list_lock);
1644 static void
1645 ptmalloc_init_all __MALLOC_P((void))
1647 arena *ar_ptr;
1649 #if defined _LIBC || defined MALLOC_HOOKS
1650 tsd_setspecific(arena_key, save_arena);
1651 __malloc_hook = save_malloc_hook;
1652 __free_hook = save_free_hook;
1653 #endif
1654 for(ar_ptr = &main_arena;;) {
1655 (void)mutex_init(&ar_ptr->mutex);
1656 ar_ptr = ar_ptr->next;
1657 if(ar_ptr == &main_arena) break;
1659 (void)mutex_init(&list_lock);
1662 #endif /* !defined NO_THREADS */
1664 /* Initialization routine. */
1665 #if defined(_LIBC)
1666 #if 0
1667 static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
1668 #endif
1670 static void
1671 ptmalloc_init __MALLOC_P((void))
1672 #else
1673 void
1674 ptmalloc_init __MALLOC_P((void))
1675 #endif
1677 #if defined _LIBC || defined MALLOC_HOOKS
1678 # if __STD_C
1679 const char* s;
1680 # else
1681 char* s;
1682 # endif
1683 #endif
1685 if(__malloc_initialized >= 0) return;
1686 __malloc_initialized = 0;
1687 #ifdef _LIBC
1688 __libc_pagesize = __getpagesize();
1689 #endif
1690 #ifndef NO_THREADS
1691 #if defined _LIBC || defined MALLOC_HOOKS
1692 /* With some threads implementations, creating thread-specific data
1693 or initializing a mutex may call malloc() itself. Provide a
1694 simple starter version (realloc() won't work). */
1695 save_malloc_hook = __malloc_hook;
1696 save_free_hook = __free_hook;
1697 __malloc_hook = malloc_starter;
1698 __free_hook = free_starter;
1699 #endif
1700 #ifdef _LIBC
1701 /* Initialize the pthreads interface. */
1702 if (__pthread_initialize != NULL)
1703 __pthread_initialize();
1704 #endif
1705 #endif /* !defined NO_THREADS */
1706 mutex_init(&main_arena.mutex);
1707 mutex_init(&list_lock);
1708 tsd_key_create(&arena_key, NULL);
1709 tsd_setspecific(arena_key, (Void_t *)&main_arena);
1710 thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_init_all);
1711 #if defined _LIBC || defined MALLOC_HOOKS
1712 if((s = __secure_getenv("MALLOC_TRIM_THRESHOLD_")))
1713 mALLOPt(M_TRIM_THRESHOLD, atoi(s));
1714 if((s = __secure_getenv("MALLOC_TOP_PAD_")))
1715 mALLOPt(M_TOP_PAD, atoi(s));
1716 if((s = __secure_getenv("MALLOC_MMAP_THRESHOLD_")))
1717 mALLOPt(M_MMAP_THRESHOLD, atoi(s));
1718 if((s = __secure_getenv("MALLOC_MMAP_MAX_")))
1719 mALLOPt(M_MMAP_MAX, atoi(s));
1720 s = getenv("MALLOC_CHECK_");
1721 #ifndef NO_THREADS
1722 __malloc_hook = save_malloc_hook;
1723 __free_hook = save_free_hook;
1724 #endif
1725 if(s && (! __libc_enable_secure || access ("/etc/suid-debug", F_OK) == 0)) {
1726 if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
1727 __malloc_check_init();
1729 if(__malloc_initialize_hook != NULL)
1730 (*__malloc_initialize_hook)();
1731 #endif
1732 __malloc_initialized = 1;
1735 /* There are platforms (e.g. Hurd) with a link-time hook mechanism. */
1736 #ifdef thread_atfork_static
1737 thread_atfork_static(ptmalloc_lock_all, ptmalloc_unlock_all, \
1738 ptmalloc_init_all)
1739 #endif
1741 #if defined _LIBC || defined MALLOC_HOOKS
1743 /* Hooks for debugging versions. The initial hooks just call the
1744 initialization routine, then do the normal work. */
1746 static Void_t*
1747 #if __STD_C
1748 malloc_hook_ini(size_t sz, const __malloc_ptr_t caller)
1749 #else
1750 malloc_hook_ini(sz, caller)
1751 size_t sz; const __malloc_ptr_t caller;
1752 #endif
1754 __malloc_hook = NULL;
1755 ptmalloc_init();
1756 return mALLOc(sz);
1759 static Void_t*
1760 #if __STD_C
1761 realloc_hook_ini(Void_t* ptr, size_t sz, const __malloc_ptr_t caller)
1762 #else
1763 realloc_hook_ini(ptr, sz, caller)
1764 Void_t* ptr; size_t sz; const __malloc_ptr_t caller;
1765 #endif
1767 __malloc_hook = NULL;
1768 __realloc_hook = NULL;
1769 ptmalloc_init();
1770 return rEALLOc(ptr, sz);
1773 static Void_t*
1774 #if __STD_C
1775 memalign_hook_ini(size_t sz, size_t alignment, const __malloc_ptr_t caller)
1776 #else
1777 memalign_hook_ini(sz, alignment, caller)
1778 size_t sz; size_t alignment; const __malloc_ptr_t caller;
1779 #endif
1781 __memalign_hook = NULL;
1782 ptmalloc_init();
1783 return mEMALIGn(sz, alignment);
1786 void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
1787 void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1788 const __malloc_ptr_t)) = NULL;
1789 __malloc_ptr_t weak_variable (*__malloc_hook)
1790 __MALLOC_P ((size_t __size, const __malloc_ptr_t)) = malloc_hook_ini;
1791 __malloc_ptr_t weak_variable (*__realloc_hook)
1792 __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size, const __malloc_ptr_t))
1793 = realloc_hook_ini;
1794 __malloc_ptr_t weak_variable (*__memalign_hook)
1795 __MALLOC_P ((size_t __size, size_t __alignment, const __malloc_ptr_t))
1796 = memalign_hook_ini;
1797 void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
1799 /* Whether we are using malloc checking. */
1800 static int using_malloc_checking;
1802 /* A flag that is set by malloc_set_state, to signal that malloc checking
1803 must not be enabled on the request from the user (via the MALLOC_CHECK_
1804 environment variable). It is reset by __malloc_check_init to tell
1805 malloc_set_state that the user has requested malloc checking.
1807 The purpose of this flag is to make sure that malloc checking is not
1808 enabled when the heap to be restored was constructed without malloc
1809 checking, and thus does not contain the required magic bytes.
1810 Otherwise the heap would be corrupted by calls to free and realloc. If
1811 it turns out that the heap was created with malloc checking and the
1812 user has requested it, malloc_set_state just calls __malloc_check_init
1813 again to enable it. On the other hand, reusing such a heap without
1814 further malloc checking is safe. */
1815 static int disallow_malloc_check;
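/* Illustrative usage sketch: the checking hooks are normally enabled by
   starting a program with the MALLOC_CHECK_ environment variable set, e.g.

       $ MALLOC_CHECK_=1 ./a.out

   ptmalloc_init() above then feeds the first digit of the value to
   mALLOPt(M_CHECK_ACTION, ...) and calls __malloc_check_init().  (As an
   assumption about this vintage of the code: bit 0 of check_action makes
   the hooks print a diagnostic on detected corruption and bit 1 makes them
   abort; see the uses of check_action for the authoritative behavior.) */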
1817 /* Activate a standard set of debugging hooks. */
1818 void
1819 __malloc_check_init()
1821 if (disallow_malloc_check) {
1822 disallow_malloc_check = 0;
1823 return;
1825 using_malloc_checking = 1;
1826 __malloc_hook = malloc_check;
1827 __free_hook = free_check;
1828 __realloc_hook = realloc_check;
1829 __memalign_hook = memalign_check;
1830 if(check_action & 1)
1831 fprintf(stderr, "malloc: using debugging hooks\n");
1834 #endif
1840 /* Routines dealing with mmap(). */
1842 #if HAVE_MMAP
1844 #ifndef MAP_ANONYMOUS
1846 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1848 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1849 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1850 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1851 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1853 #else
1855 #define MMAP(addr, size, prot, flags) \
1856 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1858 #endif
1860 #if defined __GNUC__ && __GNUC__ >= 2
1861 /* This function is only called from one place, inline it. */
1862 __inline__
1863 #endif
1864 static mchunkptr
1865 internal_function
1866 #if __STD_C
1867 mmap_chunk(size_t size)
1868 #else
1869 mmap_chunk(size) size_t size;
1870 #endif
1872 size_t page_mask = malloc_getpagesize - 1;
1873 mchunkptr p;
1875 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1877 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1878 * there is no following chunk whose prev_size field could be used.
1880 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1882 p = (mchunkptr)MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE);
1883 if(p == (mchunkptr) MAP_FAILED) return 0;
1885 n_mmaps++;
1886 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1888 /* We demand that eight bytes into a page must be 8-byte aligned. */
1889 assert(aligned_OK(chunk2mem(p)));
1891 /* The offset to the start of the mmapped region is stored
1892 * in the prev_size field of the chunk; normally it is zero,
1893 * but that can be changed in memalign().
1895 p->prev_size = 0;
1896 set_head(p, size|IS_MMAPPED);
1898 mmapped_mem += size;
1899 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1900 max_mmapped_mem = mmapped_mem;
1901 #ifdef NO_THREADS
1902 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
1903 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
1904 #endif
1905 return p;
1908 static void
1909 internal_function
1910 #if __STD_C
1911 munmap_chunk(mchunkptr p)
1912 #else
1913 munmap_chunk(p) mchunkptr p;
1914 #endif
1916 INTERNAL_SIZE_T size = chunksize(p);
1917 int ret;
1919 assert (chunk_is_mmapped(p));
1920 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1921 assert((n_mmaps > 0));
1922 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1924 n_mmaps--;
1925 mmapped_mem -= (size + p->prev_size);
1927 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1929 /* munmap returns non-zero on failure */
1930 assert(ret == 0);
1933 #if HAVE_MREMAP
1935 static mchunkptr
1936 internal_function
1937 #if __STD_C
1938 mremap_chunk(mchunkptr p, size_t new_size)
1939 #else
1940 mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1941 #endif
1943 size_t page_mask = malloc_getpagesize - 1;
1944 INTERNAL_SIZE_T offset = p->prev_size;
1945 INTERNAL_SIZE_T size = chunksize(p);
1946 char *cp;
1948 assert (chunk_is_mmapped(p));
1949 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1950 assert((n_mmaps > 0));
1951 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1953 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1954 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1956 cp = (char *)mremap((char *)p - offset, size + offset, new_size,
1957 MREMAP_MAYMOVE);
1959 if (cp == MAP_FAILED) return 0;
1961 p = (mchunkptr)(cp + offset);
1963 assert(aligned_OK(chunk2mem(p)));
1965 assert((p->prev_size == offset));
1966 set_head(p, (new_size - offset)|IS_MMAPPED);
1968 mmapped_mem -= size + offset;
1969 mmapped_mem += new_size;
1970 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1971 max_mmapped_mem = mmapped_mem;
1972 #ifdef NO_THREADS
1973 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
1974 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
1975 #endif
1976 return p;
1979 #endif /* HAVE_MREMAP */
1981 #endif /* HAVE_MMAP */
1985 /* Managing heaps and arenas (for concurrent threads) */
1987 #if USE_ARENAS
1989 /* Create a new heap. size is automatically rounded up to a multiple
1990 of the page size. */
1992 static heap_info *
1993 internal_function
1994 #if __STD_C
1995 new_heap(size_t size)
1996 #else
1997 new_heap(size) size_t size;
1998 #endif
2000 size_t page_mask = malloc_getpagesize - 1;
2001 char *p1, *p2;
2002 unsigned long ul;
2003 heap_info *h;
2005 if(size+top_pad < HEAP_MIN_SIZE)
2006 size = HEAP_MIN_SIZE;
2007 else if(size+top_pad <= HEAP_MAX_SIZE)
2008 size += top_pad;
2009 else if(size > HEAP_MAX_SIZE)
2010 return 0;
2011 else
2012 size = HEAP_MAX_SIZE;
2013 size = (size + page_mask) & ~page_mask;
2015 /* A memory region aligned to a multiple of HEAP_MAX_SIZE is needed.
2016 No swap space needs to be reserved for the following large
2017 mapping (on Linux, this is the case for all non-writable mappings
2018 anyway). */
2019 p1 = (char *)MMAP(0, HEAP_MAX_SIZE<<1, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE);
2020 if(p1 == MAP_FAILED)
2021 return 0;
2022 p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
2023 ul = p2 - p1;
2024 munmap(p1, ul);
2025 munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
2026 if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
2027 munmap(p2, HEAP_MAX_SIZE);
2028 return 0;
2030 h = (heap_info *)p2;
2031 h->size = size;
2032 THREAD_STAT(stat_n_heaps++);
2033 return h;
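/* Worked example of the alignment trick above, assuming the default
   HEAP_MAX_SIZE of 0x100000 (1 MB): if the PROT_NONE mapping of twice that
   size comes back at p1 = 0x400e3000, then
       p2 = (p1 + 0x100000) & ~0xfffff = 0x40100000,
   the 0x1d000 bytes in [p1, p2) and the 0xe3000 bytes beyond p2 + 0x100000
   are munmap()ed again, and exactly one HEAP_MAX_SIZE-aligned,
   HEAP_MAX_SIZE-sized region remains, of which only the first `size' bytes
   are made readable/writable. */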
2036 /* Grow or shrink a heap. size is automatically rounded up to a
2037 multiple of the page size if it is positive. */
2039 static int
2040 #if __STD_C
2041 grow_heap(heap_info *h, long diff)
2042 #else
2043 grow_heap(h, diff) heap_info *h; long diff;
2044 #endif
2046 size_t page_mask = malloc_getpagesize - 1;
2047 long new_size;
2049 if(diff >= 0) {
2050 diff = (diff + page_mask) & ~page_mask;
2051 new_size = (long)h->size + diff;
2052 if(new_size > HEAP_MAX_SIZE)
2053 return -1;
2054 if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
2055 return -2;
2056 } else {
2057 new_size = (long)h->size + diff;
2058 if(new_size < (long)sizeof(*h))
2059 return -1;
2060 /* Try to re-map the extra heap space freshly to save memory, and
2061 make it inaccessible. */
2062 if((char *)MMAP((char *)h + new_size, -diff, PROT_NONE,
2063 MAP_PRIVATE|MAP_FIXED) == (char *) MAP_FAILED)
2064 return -2;
2066 h->size = new_size;
2067 return 0;
2070 /* Delete a heap. */
2072 #define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
2074 /* arena_get() acquires an arena and locks the corresponding mutex.
2075 First, try the one last locked successfully by this thread. (This
2076 is the common case and handled with a macro for speed.) Then, loop
2077 once over the circularly linked list of arenas. If no arena is
2078 readily available, create a new one. In this latter case, `size'
2079 is just a hint as to how much memory will be required immediately
2080 in the new arena. */
2082 #define arena_get(ptr, size) do { \
2083 Void_t *vptr = NULL; \
2084 ptr = (arena *)tsd_getspecific(arena_key, vptr); \
2085 if(ptr && !mutex_trylock(&ptr->mutex)) { \
2086 THREAD_STAT(++(ptr->stat_lock_direct)); \
2087 } else \
2088 ptr = arena_get2(ptr, (size)); \
2089 } while(0)
2091 static arena *
2092 internal_function
2093 #if __STD_C
2094 arena_get2(arena *a_tsd, size_t size)
2095 #else
2096 arena_get2(a_tsd, size) arena *a_tsd; size_t size;
2097 #endif
2099 arena *a;
2100 heap_info *h;
2101 char *ptr;
2102 int i;
2103 unsigned long misalign;
2105 if(!a_tsd)
2106 a = a_tsd = &main_arena;
2107 else {
2108 a = a_tsd->next;
2109 if(!a) {
2110 /* This can only happen while initializing the new arena. */
2111 (void)mutex_lock(&main_arena.mutex);
2112 THREAD_STAT(++(main_arena.stat_lock_wait));
2113 return &main_arena;
2117 /* Check the global, circularly linked list for available arenas. */
2118 repeat:
2119 do {
2120 if(!mutex_trylock(&a->mutex)) {
2121 THREAD_STAT(++(a->stat_lock_loop));
2122 tsd_setspecific(arena_key, (Void_t *)a);
2123 return a;
2125 a = a->next;
2126 } while(a != a_tsd);
2128 /* If not even the list_lock can be obtained, try again. This can
2129 happen during `atfork', or for example on systems where thread
2130 creation makes it temporarily impossible to obtain _any_
2131 locks. */
2132 if(mutex_trylock(&list_lock)) {
2133 a = a_tsd;
2134 goto repeat;
2136 (void)mutex_unlock(&list_lock);
2138 /* Nothing immediately available, so generate a new arena. */
2139 h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
2140 if(!h) {
2141 /* Maybe size is too large to fit in a single heap. So, just try
2142 to create a minimally-sized arena and let chunk_alloc() attempt
2143 to deal with the large request via mmap_chunk(). */
2144 h = new_heap(sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT);
2145 if(!h)
2146 return 0;
2148 a = h->ar_ptr = (arena *)(h+1);
2149 for(i=0; i<NAV; i++)
2150 init_bin(a, i);
2151 a->next = NULL;
2152 a->size = h->size;
2153 arena_mem += h->size;
2154 #ifdef NO_THREADS
2155 if((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2156 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2157 #endif
2158 tsd_setspecific(arena_key, (Void_t *)a);
2159 mutex_init(&a->mutex);
2160 i = mutex_lock(&a->mutex); /* remember result */
2162 /* Set up the top chunk, with proper alignment. */
2163 ptr = (char *)(a + 1);
2164 misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
2165 if (misalign > 0)
2166 ptr += MALLOC_ALIGNMENT - misalign;
2167 top(a) = (mchunkptr)ptr;
2168 set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
2170 /* Add the new arena to the list. */
2171 (void)mutex_lock(&list_lock);
2172 a->next = main_arena.next;
2173 main_arena.next = a;
2174 (void)mutex_unlock(&list_lock);
2176 if(i) /* locking failed; keep arena for further attempts later */
2177 return 0;
2179 THREAD_STAT(++(a->stat_lock_loop));
2180 return a;
2183 /* find the heap and corresponding arena for a given ptr */
2185 #define heap_for_ptr(ptr) \
2186 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
2187 #define arena_for_ptr(ptr) \
2188 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
2189 &main_arena : heap_for_ptr(ptr)->ar_ptr)
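/* Example of the mapping above, again assuming HEAP_MAX_SIZE == 0x100000:
   because new_heap() never lets a heap cross a HEAP_MAX_SIZE boundary,
   heap_for_ptr(0x40138c20) just masks off the low 20 bits and yields the
   heap_info at 0x40100000.  arena_for_ptr() first checks whether the
   pointer lies in the sbrk()ed main arena (between sbrk_base and its top
   chunk) and only otherwise goes through the containing heap's ar_ptr. */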
2191 #else /* !USE_ARENAS */
2193 /* There is only one arena, main_arena. */
2195 #define arena_get(ptr, sz) (ptr = &main_arena)
2196 #define arena_for_ptr(ptr) (&main_arena)
2198 #endif /* USE_ARENAS */
2203 Debugging support
2206 #if MALLOC_DEBUG
2210 These routines make a number of assertions about the states
2211 of data structures that should be true at all times. If any
2212 are not true, it's very likely that a user program has somehow
2213 trashed memory. (It's also possible that there is a coding error
2214 in malloc, in which case please report it!)
2217 #if __STD_C
2218 static void do_check_chunk(arena *ar_ptr, mchunkptr p)
2219 #else
2220 static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2221 #endif
2223 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2225 /* No checkable chunk is mmapped */
2226 assert(!chunk_is_mmapped(p));
2228 #if USE_ARENAS
2229 if(ar_ptr != &main_arena) {
2230 heap_info *heap = heap_for_ptr(p);
2231 assert(heap->ar_ptr == ar_ptr);
2232 if(p != top(ar_ptr))
2233 assert((char *)p + sz <= (char *)heap + heap->size);
2234 else
2235 assert((char *)p + sz == (char *)heap + heap->size);
2236 return;
2238 #endif
2240 /* Check for legal address ... */
2241 assert((char*)p >= sbrk_base);
2242 if (p != top(ar_ptr))
2243 assert((char*)p + sz <= (char*)top(ar_ptr));
2244 else
2245 assert((char*)p + sz <= sbrk_base + sbrked_mem);
2250 #if __STD_C
2251 static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
2252 #else
2253 static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2254 #endif
2256 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2257 mchunkptr next = chunk_at_offset(p, sz);
2259 do_check_chunk(ar_ptr, p);
2261 /* Check whether it claims to be free ... */
2262 assert(!inuse(p));
2264 /* Must have OK size and fields */
2265 assert((long)sz >= (long)MINSIZE);
2266 assert((sz & MALLOC_ALIGN_MASK) == 0);
2267 assert(aligned_OK(chunk2mem(p)));
2268 /* ... matching footer field */
2269 assert(next->prev_size == sz);
2270 /* ... and is fully consolidated */
2271 assert(prev_inuse(p));
2272 assert (next == top(ar_ptr) || inuse(next));
2274 /* ... and has minimally sane links */
2275 assert(p->fd->bk == p);
2276 assert(p->bk->fd == p);
2279 #if __STD_C
2280 static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
2281 #else
2282 static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2283 #endif
2285 mchunkptr next = next_chunk(p);
2286 do_check_chunk(ar_ptr, p);
2288 /* Check whether it claims to be in use ... */
2289 assert(inuse(p));
2291 /* ... whether its size is OK (it might be a fencepost) ... */
2292 assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
2294 /* ... and is surrounded by OK chunks.
2295 Since more things can be checked with free chunks than inuse ones,
2296 if an inuse chunk borders them and debug is on, it's worth doing them.
2298 if (!prev_inuse(p))
2300 mchunkptr prv = prev_chunk(p);
2301 assert(next_chunk(prv) == p);
2302 do_check_free_chunk(ar_ptr, prv);
2304 if (next == top(ar_ptr))
2306 assert(prev_inuse(next));
2307 assert(chunksize(next) >= MINSIZE);
2309 else if (!inuse(next))
2310 do_check_free_chunk(ar_ptr, next);
2314 #if __STD_C
2315 static void do_check_malloced_chunk(arena *ar_ptr,
2316 mchunkptr p, INTERNAL_SIZE_T s)
2317 #else
2318 static void do_check_malloced_chunk(ar_ptr, p, s)
2319 arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
2320 #endif
2322 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2323 long room = sz - s;
2325 do_check_inuse_chunk(ar_ptr, p);
2327 /* Legal size ... */
2328 assert((long)sz >= (long)MINSIZE);
2329 assert((sz & MALLOC_ALIGN_MASK) == 0);
2330 assert(room >= 0);
2331 assert(room < (long)MINSIZE);
2333 /* ... and alignment */
2334 assert(aligned_OK(chunk2mem(p)));
2337 /* ... and was allocated at front of an available chunk */
2338 assert(prev_inuse(p));
2343 #define check_free_chunk(A,P) do_check_free_chunk(A,P)
2344 #define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
2345 #define check_chunk(A,P) do_check_chunk(A,P)
2346 #define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
2347 #else
2348 #define check_free_chunk(A,P)
2349 #define check_inuse_chunk(A,P)
2350 #define check_chunk(A,P)
2351 #define check_malloced_chunk(A,P,N)
2352 #endif
2357 Macro-based internal utilities
2362 Linking chunks in bin lists.
2363 Call these only with variables, not arbitrary expressions, as arguments.
2367 Place chunk p of size s in its bin, in size order,
2368 putting it ahead of others of same size.
2372 #define frontlink(A, P, S, IDX, BK, FD) \
2374 if (S < MAX_SMALLBIN_SIZE) \
2376 IDX = smallbin_index(S); \
2377 mark_binblock(A, IDX); \
2378 BK = bin_at(A, IDX); \
2379 FD = BK->fd; \
2380 P->bk = BK; \
2381 P->fd = FD; \
2382 FD->bk = BK->fd = P; \
2384 else \
2386 IDX = bin_index(S); \
2387 BK = bin_at(A, IDX); \
2388 FD = BK->fd; \
2389 if (FD == BK) mark_binblock(A, IDX); \
2390 else \
2392 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2393 BK = FD->bk; \
2395 P->bk = BK; \
2396 P->fd = FD; \
2397 FD->bk = BK->fd = P; \
2402 /* take a chunk off a list */
2404 #define unlink(P, BK, FD) \
2406 BK = P->bk; \
2407 FD = P->fd; \
2408 FD->bk = BK; \
2409 BK->fd = FD; \
2412 /* Place p as the last remainder */
2414 #define link_last_remainder(A, P) \
2416 last_remainder(A)->fd = last_remainder(A)->bk = P; \
2417 P->fd = P->bk = last_remainder(A); \
2420 /* Clear the last_remainder bin */
2422 #define clear_last_remainder(A) \
2423 (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
2430 Extend the top-most chunk by obtaining memory from system.
2431 Main interface to sbrk (but see also malloc_trim).
2434 #if defined __GNUC__ && __GNUC__ >= 2
2435 /* This function is called only from one place, inline it. */
2436 __inline__
2437 #endif
2438 static void
2439 internal_function
2440 #if __STD_C
2441 malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
2442 #else
2443 malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2444 #endif
2446 unsigned long pagesz = malloc_getpagesize;
2447 mchunkptr old_top = top(ar_ptr); /* Record state of old top */
2448 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2449 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2451 #if USE_ARENAS
2452 if(ar_ptr == &main_arena) {
2453 #endif
2455 char* brk; /* return value from sbrk */
2456 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2457 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2458 char* new_brk; /* return of 2nd sbrk call */
2459 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2461 /* Pad request with top_pad plus minimal overhead */
2462 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2464 /* If not the first time through, round to preserve page boundary */
2465 /* Otherwise, we need to correct to a page size below anyway. */
2466 /* (We also correct below if there was an intervening foreign sbrk call.) */
2468 if (sbrk_base != (char*)(-1))
2469 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2471 brk = (char*)(MORECORE (sbrk_size));
2473 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2474 if (brk == (char*)(MORECORE_FAILURE) ||
2475 (brk < old_end && old_top != initial_top(&main_arena)))
2476 return;
2478 #if defined _LIBC || defined MALLOC_HOOKS
2479 /* Call the `morecore' hook if necessary. */
2480 if (__after_morecore_hook)
2481 (*__after_morecore_hook) ();
2482 #endif
2484 sbrked_mem += sbrk_size;
2486 if (brk == old_end) { /* can just add bytes to current top */
2487 top_size = sbrk_size + old_top_size;
2488 set_head(old_top, top_size | PREV_INUSE);
2489 old_top = 0; /* don't free below */
2490 } else {
2491 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2492 sbrk_base = brk;
2493 else
2494 /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2495 sbrked_mem += brk - (char*)old_end;
2497 /* Guarantee alignment of first new chunk made from this space */
2498 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2499 if (front_misalign > 0) {
2500 correction = (MALLOC_ALIGNMENT) - front_misalign;
2501 brk += correction;
2502 } else
2503 correction = 0;
2505 /* Guarantee the next brk will be at a page boundary */
2506 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
2508 /* Allocate correction */
2509 new_brk = (char*)(MORECORE (correction));
2510 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2512 #if defined _LIBC || defined MALLOC_HOOKS
2513 /* Call the `morecore' hook if necessary. */
2514 if (__after_morecore_hook)
2515 (*__after_morecore_hook) ();
2516 #endif
2518 sbrked_mem += correction;
2520 top(&main_arena) = (mchunkptr)brk;
2521 top_size = new_brk - brk + correction;
2522 set_head(top(&main_arena), top_size | PREV_INUSE);
2524 if (old_top == initial_top(&main_arena))
2525 old_top = 0; /* don't free below */
2528 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2529 max_sbrked_mem = sbrked_mem;
2530 #ifdef NO_THREADS
2531 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2532 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2533 #endif
2535 #if USE_ARENAS
2536 } else { /* ar_ptr != &main_arena */
2537 heap_info *old_heap, *heap;
2538 size_t old_heap_size;
2540 if(old_top_size < MINSIZE) /* this should never happen */
2541 return;
2543 /* First try to extend the current heap. */
2544 if(MINSIZE + nb <= old_top_size)
2545 return;
2546 old_heap = heap_for_ptr(old_top);
2547 old_heap_size = old_heap->size;
2548 if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
2549 ar_ptr->size += old_heap->size - old_heap_size;
2550 arena_mem += old_heap->size - old_heap_size;
2551 #ifdef NO_THREADS
2552 if(mmapped_mem + arena_mem + sbrked_mem > max_total_mem)
2553 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2554 #endif
2555 top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
2556 set_head(old_top, top_size | PREV_INUSE);
2557 return;
2560 /* A new heap must be created. */
2561 heap = new_heap(nb + (MINSIZE + sizeof(*heap)));
2562 if(!heap)
2563 return;
2564 heap->ar_ptr = ar_ptr;
2565 heap->prev = old_heap;
2566 ar_ptr->size += heap->size;
2567 arena_mem += heap->size;
2568 #ifdef NO_THREADS
2569 if((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2570 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2571 #endif
2573 /* Set up the new top, so we can safely use chunk_free() below. */
2574 top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
2575 top_size = heap->size - sizeof(*heap);
2576 set_head(top(ar_ptr), top_size | PREV_INUSE);
2578 #endif /* USE_ARENAS */
2580 /* We always land on a page boundary */
2581 assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
2583 /* Setup fencepost and free the old top chunk. */
2584 if(old_top) {
2585 /* The fencepost takes at least MINSIZE bytes, because it might
2586 become the top chunk again later. Note that a footer is set
2587 up, too, although the chunk is marked in use. */
2588 old_top_size -= MINSIZE;
2589 set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
2590 if(old_top_size >= MINSIZE) {
2591 set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
2592 set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
2593 set_head_size(old_top, old_top_size);
2594 chunk_free(ar_ptr, old_top);
2595 } else {
2596 set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
2597 set_foot(old_top, (old_top_size + 2*SIZE_SZ));
2605 /* Main public routines */
2609 Malloc Algorithm:
2611 The requested size is first converted into a usable form, `nb'.
2612 This currently means to add 4 bytes overhead plus possibly more to
2613 obtain 8-byte alignment and/or to obtain a size of at least
2614 MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
2615 size. (All fits are considered `exact' if they are within MINSIZE
2616 bytes.)
2618 From there, the first of the following steps that succeeds is taken:
2620 1. The bin corresponding to the request size is scanned, and if
2621 a chunk of exactly the right size is found, it is taken.
2623 2. The most recently remaindered chunk is used if it is big
2624 enough. This is a form of (roving) first fit, used only in
2625 the absence of exact fits. Runs of consecutive requests use
2626 the remainder of the chunk used for the previous such request
2627 whenever possible. This limited use of a first-fit style
2628 allocation strategy tends to give contiguous chunks
2629 coextensive lifetimes, which improves locality and can reduce
2630 fragmentation in the long run.
2632 3. Other bins are scanned in increasing size order, using a
2633 chunk big enough to fulfill the request, and splitting off
2634 any remainder. This search is strictly by best-fit; i.e.,
2635 the smallest (with ties going to approximately the least
2636 recently used) chunk that fits is selected.
2638 4. If large enough, the chunk bordering the end of memory
2639 (`top') is split off. (This use of `top' is in accord with
2640 the best-fit search rule. In effect, `top' is treated as
2641 larger (and thus less well fitting) than any other available
2642 chunk since it can be extended to be as large as necessary
2643 (up to system limitations).
2645 5. If the request size meets the mmap threshold and the
2646 system supports mmap, and there are few enough currently
2647 allocated mmapped regions, and a call to mmap succeeds,
2648 the request is allocated via direct memory mapping.
2650 6. Otherwise, the top of memory is extended by
2651 obtaining more space from the system (normally using sbrk,
2652 but definable to anything else via the MORECORE macro).
2653 Memory is gathered from the system (in system page-sized
2654 units) in a way that allows chunks obtained across different
2655 sbrk calls to be consolidated, but does not require
2656 contiguous memory. Thus, it should be safe to intersperse
2657 mallocs with other sbrk calls.
2660 All allocations are made from the `lowest' part of any found
2661 chunk. (The implementation invariant is that prev_inuse is
2662 always true of any allocated chunk; i.e., that each allocated
2663 chunk borders either a previously allocated and still in-use chunk,
2664 or the base of its memory arena.)
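/* Illustration of the size conversion described above, assuming a typical
   32-bit configuration with SIZE_SZ == 4, MALLOC_ALIGNMENT == 8 and
   MINSIZE == 16: a request for 13 bytes becomes nb = 24 (13 + 4 bytes of
   header overhead, rounded up to a multiple of 8), while a request for
   1 byte is padded all the way up to nb = MINSIZE = 16.  Requests at or
   above mmap_threshold (128 kB with the usual default) that reach step 5
   bypass the bins entirely and go to mmap_chunk(). */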
2668 #if __STD_C
2669 Void_t* mALLOc(size_t bytes)
2670 #else
2671 Void_t* mALLOc(bytes) size_t bytes;
2672 #endif
2674 arena *ar_ptr;
2675 INTERNAL_SIZE_T nb; /* padded request size */
2676 mchunkptr victim;
2678 #if defined _LIBC || defined MALLOC_HOOKS
2679 if (__malloc_hook != NULL) {
2680 Void_t* result;
2682 #if defined __GNUC__ && __GNUC__ >= 2
2683 result = (*__malloc_hook)(bytes, __builtin_return_address (0));
2684 #else
2685 result = (*__malloc_hook)(bytes, NULL);
2686 #endif
2687 return result;
2689 #endif
2691 if(request2size(bytes, nb))
2692 return 0;
2693 arena_get(ar_ptr, nb);
2694 if(!ar_ptr)
2695 return 0;
2696 victim = chunk_alloc(ar_ptr, nb);
2697 if(!victim) {
2698 /* Maybe the failure is due to running out of mmapped areas. */
2699 if(ar_ptr != &main_arena) {
2700 (void)mutex_unlock(&ar_ptr->mutex);
2701 (void)mutex_lock(&main_arena.mutex);
2702 victim = chunk_alloc(&main_arena, nb);
2703 (void)mutex_unlock(&main_arena.mutex);
2704 } else {
2705 #if USE_ARENAS
2706 /* ... or sbrk() has failed and there is still a chance to mmap() */
2707 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
2708 (void)mutex_unlock(&main_arena.mutex);
2709 if(ar_ptr) {
2710 victim = chunk_alloc(ar_ptr, nb);
2711 (void)mutex_unlock(&ar_ptr->mutex);
2713 #endif
2715 if(!victim) return 0;
2716 } else
2717 (void)mutex_unlock(&ar_ptr->mutex);
2718 return chunk2mem(victim);
2721 static mchunkptr
2722 internal_function
2723 #if __STD_C
2724 chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
2725 #else
2726 chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2727 #endif
2729 mchunkptr victim; /* inspected/selected chunk */
2730 INTERNAL_SIZE_T victim_size; /* its size */
2731 int idx; /* index for bin traversal */
2732 mbinptr bin; /* associated bin */
2733 mchunkptr remainder; /* remainder from a split */
2734 long remainder_size; /* its size */
2735 int remainder_index; /* its bin index */
2736 unsigned long block; /* block traverser bit */
2737 int startidx; /* first bin of a traversed block */
2738 mchunkptr fwd; /* misc temp for linking */
2739 mchunkptr bck; /* misc temp for linking */
2740 mbinptr q; /* misc temp */
2743 /* Check for exact match in a bin */
2745 if (is_small_request(nb)) /* Faster version for small requests */
2747 idx = smallbin_index(nb);
2749 /* No traversal or size check necessary for small bins. */
2751 q = bin_at(ar_ptr, idx);
2752 victim = last(q);
2754 /* Also scan the next one, since it would have a remainder < MINSIZE */
2755 if (victim == q)
2757 q = next_bin(q);
2758 victim = last(q);
2760 if (victim != q)
2762 victim_size = chunksize(victim);
2763 unlink(victim, bck, fwd);
2764 set_inuse_bit_at_offset(victim, victim_size);
2765 check_malloced_chunk(ar_ptr, victim, nb);
2766 return victim;
2769 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2772 else
2774 idx = bin_index(nb);
2775 bin = bin_at(ar_ptr, idx);
2777 for (victim = last(bin); victim != bin; victim = victim->bk)
2779 victim_size = chunksize(victim);
2780 remainder_size = victim_size - nb;
2782 if (remainder_size >= (long)MINSIZE) /* too big */
2784 --idx; /* adjust to rescan below after checking last remainder */
2785 break;
2788 else if (remainder_size >= 0) /* exact fit */
2790 unlink(victim, bck, fwd);
2791 set_inuse_bit_at_offset(victim, victim_size);
2792 check_malloced_chunk(ar_ptr, victim, nb);
2793 return victim;
2797 ++idx;
2801 /* Try to use the last split-off remainder */
2803 if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
2805 victim_size = chunksize(victim);
2806 remainder_size = victim_size - nb;
2808 if (remainder_size >= (long)MINSIZE) /* re-split */
2810 remainder = chunk_at_offset(victim, nb);
2811 set_head(victim, nb | PREV_INUSE);
2812 link_last_remainder(ar_ptr, remainder);
2813 set_head(remainder, remainder_size | PREV_INUSE);
2814 set_foot(remainder, remainder_size);
2815 check_malloced_chunk(ar_ptr, victim, nb);
2816 return victim;
2819 clear_last_remainder(ar_ptr);
2821 if (remainder_size >= 0) /* exhaust */
2823 set_inuse_bit_at_offset(victim, victim_size);
2824 check_malloced_chunk(ar_ptr, victim, nb);
2825 return victim;
2828 /* Else place in bin */
2830 frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
2834 If there are any possibly nonempty big-enough blocks,
2835 search for best fitting chunk by scanning bins in blockwidth units.
2838 if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
2841 /* Get to the first marked block */
2843 if ( (block & binblocks(ar_ptr)) == 0)
2845 /* force to an even block boundary */
2846 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2847 block <<= 1;
2848 while ((block & binblocks(ar_ptr)) == 0)
2850 idx += BINBLOCKWIDTH;
2851 block <<= 1;
2855 /* For each possibly nonempty block ... */
2856 for (;;)
2858 startidx = idx; /* (track incomplete blocks) */
2859 q = bin = bin_at(ar_ptr, idx);
2861 /* For each bin in this block ... */
2864 /* Find and use first big enough chunk ... */
2866 for (victim = last(bin); victim != bin; victim = victim->bk)
2868 victim_size = chunksize(victim);
2869 remainder_size = victim_size - nb;
2871 if (remainder_size >= (long)MINSIZE) /* split */
2873 remainder = chunk_at_offset(victim, nb);
2874 set_head(victim, nb | PREV_INUSE);
2875 unlink(victim, bck, fwd);
2876 link_last_remainder(ar_ptr, remainder);
2877 set_head(remainder, remainder_size | PREV_INUSE);
2878 set_foot(remainder, remainder_size);
2879 check_malloced_chunk(ar_ptr, victim, nb);
2880 return victim;
2883 else if (remainder_size >= 0) /* take */
2885 set_inuse_bit_at_offset(victim, victim_size);
2886 unlink(victim, bck, fwd);
2887 check_malloced_chunk(ar_ptr, victim, nb);
2888 return victim;
2893 bin = next_bin(bin);
2895 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2897 /* Clear out the block bit. */
2899 do /* Possibly backtrack to try to clear a partial block */
2901 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2903 binblocks(ar_ptr) &= ~block;
2904 break;
2906 --startidx;
2907 q = prev_bin(q);
2908 } while (first(q) == q);
2910 /* Get to the next possibly nonempty block */
2912 if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
2914 while ((block & binblocks(ar_ptr)) == 0)
2916 idx += BINBLOCKWIDTH;
2917 block <<= 1;
2920 else
2921 break;
2926 /* Try to use top chunk */
2928 /* Require that there be a remainder, ensuring top always exists */
2929 if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2932 #if HAVE_MMAP
2933 /* If big and would otherwise need to extend, try to use mmap instead */
2934 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2935 (victim = mmap_chunk(nb)) != 0)
2936 return victim;
2937 #endif
2939 /* Try to extend */
2940 malloc_extend_top(ar_ptr, nb);
2941 if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2942 return 0; /* propagate failure */
2945 victim = top(ar_ptr);
2946 set_head(victim, nb | PREV_INUSE);
2947 top(ar_ptr) = chunk_at_offset(victim, nb);
2948 set_head(top(ar_ptr), remainder_size | PREV_INUSE);
2949 check_malloced_chunk(ar_ptr, victim, nb);
2950 return victim;
2959 free() algorithm :
2961 cases:
2963 1. free(0) has no effect.
2965 2. If the chunk was allocated via mmap, it is released via munmap().
2967 3. If a returned chunk borders the current high end of memory,
2968 it is consolidated into the top, and if the total unused
2969 topmost memory exceeds the trim threshold, malloc_trim is
2970 called.
2972 4. Other chunks are consolidated as they arrive, and
2973 placed in corresponding bins. (This includes the case of
2974 consolidating with the current `last_remainder').
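/* A hedged usage sketch of the cases above (illustration only, not part of
   the allocator; assumes the usual default mmap threshold of about 128 kB,
   so the large block below comes from mmap_chunk()). */
#if 0
#include <stdlib.h>

static void
free_cases_example (void)
{
  char *big   = malloc (1024 * 1024);  /* large: served by mmap (case 2)   */
  char *small = malloc (64);           /* small: carved out of an arena    */

  free (big);    /* mmapped chunk is munmap()ed back to the system at once */
  free (small);  /* consolidated with neighbors and binned, or merged into
                    top and possibly trimmed (cases 3 and 4)               */
  free (NULL);   /* case 1: no effect */
}
#endif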
2979 #if __STD_C
2980 void fREe(Void_t* mem)
2981 #else
2982 void fREe(mem) Void_t* mem;
2983 #endif
2985 arena *ar_ptr;
2986 mchunkptr p; /* chunk corresponding to mem */
2988 #if defined _LIBC || defined MALLOC_HOOKS
2989 if (__free_hook != NULL) {
2990 #if defined __GNUC__ && __GNUC__ >= 2
2991 (*__free_hook)(mem, __builtin_return_address (0));
2992 #else
2993 (*__free_hook)(mem, NULL);
2994 #endif
2995 return;
2997 #endif
2999 if (mem == 0) /* free(0) has no effect */
3000 return;
3002 p = mem2chunk(mem);
3004 #if HAVE_MMAP
3005 if (chunk_is_mmapped(p)) /* release mmapped memory. */
3007 munmap_chunk(p);
3008 return;
3010 #endif
3012 ar_ptr = arena_for_ptr(p);
3013 #if THREAD_STATS
3014 if(!mutex_trylock(&ar_ptr->mutex))
3015 ++(ar_ptr->stat_lock_direct);
3016 else {
3017 (void)mutex_lock(&ar_ptr->mutex);
3018 ++(ar_ptr->stat_lock_wait);
3020 #else
3021 (void)mutex_lock(&ar_ptr->mutex);
3022 #endif
3023 chunk_free(ar_ptr, p);
3024 (void)mutex_unlock(&ar_ptr->mutex);
3027 static void
3028 internal_function
3029 #if __STD_C
3030 chunk_free(arena *ar_ptr, mchunkptr p)
3031 #else
3032 chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
3033 #endif
3035 INTERNAL_SIZE_T hd = p->size; /* its head field */
3036 INTERNAL_SIZE_T sz; /* its size */
3037 int idx; /* its bin index */
3038 mchunkptr next; /* next contiguous chunk */
3039 INTERNAL_SIZE_T nextsz; /* its size */
3040 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
3041 mchunkptr bck; /* misc temp for linking */
3042 mchunkptr fwd; /* misc temp for linking */
3043 int islr; /* track whether merging with last_remainder */
3045 check_inuse_chunk(ar_ptr, p);
3047 sz = hd & ~PREV_INUSE;
3048 next = chunk_at_offset(p, sz);
3049 nextsz = chunksize(next);
3051 if (next == top(ar_ptr)) /* merge with top */
3053 sz += nextsz;
3055 if (!(hd & PREV_INUSE)) /* consolidate backward */
3057 prevsz = p->prev_size;
3058 p = chunk_at_offset(p, -(long)prevsz);
3059 sz += prevsz;
3060 unlink(p, bck, fwd);
3063 set_head(p, sz | PREV_INUSE);
3064 top(ar_ptr) = p;
3066 #if USE_ARENAS
3067 if(ar_ptr == &main_arena) {
3068 #endif
3069 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
3070 main_trim(top_pad);
3071 #if USE_ARENAS
3072 } else {
3073 heap_info *heap = heap_for_ptr(p);
3075 assert(heap->ar_ptr == ar_ptr);
3077 /* Try to get rid of completely empty heaps, if possible. */
3078 if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
3079 p == chunk_at_offset(heap, sizeof(*heap)))
3080 heap_trim(heap, top_pad);
3082 #endif
3083 return;
3086 islr = 0;
3088 if (!(hd & PREV_INUSE)) /* consolidate backward */
3090 prevsz = p->prev_size;
3091 p = chunk_at_offset(p, -(long)prevsz);
3092 sz += prevsz;
3094 if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */
3095 islr = 1;
3096 else
3097 unlink(p, bck, fwd);
3100 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
3102 sz += nextsz;
3104 if (!islr && next->fd == last_remainder(ar_ptr))
3105 /* re-insert last_remainder */
3107 islr = 1;
3108 link_last_remainder(ar_ptr, p);
3110 else
3111 unlink(next, bck, fwd);
3113 next = chunk_at_offset(p, sz);
3115 else
3116 set_head(next, nextsz); /* clear inuse bit */
3118 set_head(p, sz | PREV_INUSE);
3119 next->prev_size = sz;
3120 if (!islr)
3121 frontlink(ar_ptr, p, sz, idx, bck, fwd);
3123 #if USE_ARENAS
3124 /* Check whether the heap containing top can go away now. */
3125 if(next->size < MINSIZE &&
3126 (unsigned long)sz > trim_threshold &&
3127 ar_ptr != &main_arena) { /* fencepost */
3128 heap_info *heap = heap_for_ptr(top(ar_ptr));
3130 if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) &&
3131 heap->prev == heap_for_ptr(p))
3132 heap_trim(heap, top_pad);
3134 #endif
3143 Realloc algorithm:
3145 Chunks that were obtained via mmap cannot be extended or shrunk
3146 unless HAVE_MREMAP is defined, in which case mremap is used.
3147 Otherwise, if their reallocation is for additional space, they are
3148 copied. If for less, they are just left alone.
3150 Otherwise, if the reallocation is for additional space, and the
3151 chunk can be extended, it is, else a malloc-copy-free sequence is
3152 taken. There are several different ways that a chunk could be
3153 extended. All are tried:
3155 * Extending forward into following adjacent free chunk.
3156 * Shifting backwards, joining preceding adjacent space
3157 * Both shifting backwards and extending forward.
3158 * Extending into newly sbrked space
3160 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
3161 size argument of zero (re)allocates a minimum-sized chunk.
3163 If the reallocation is for less space, and the new request is for
3164 a `small' (<512 bytes) size, then the newly unused space is lopped
3165 off and freed.
3167 The old unix realloc convention of allowing the last-free'd chunk
3168 to be used as an argument to realloc is no longer supported.
3169 I don't know of any programs still relying on this feature,
3170 and allowing it would also allow too many other incorrect
3171 usages of realloc to be sensible.
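/* A hedged usage sketch of the behavior described above (illustration only,
   not part of the allocator). */
#if 0
#include <stdlib.h>
#include <string.h>

static int
realloc_example (void)
{
  char *p, *q;

  p = realloc (NULL, 100);        /* same as malloc(100) */
  if (p == NULL) return -1;
  memset (p, 'x', 100);

  q = realloc (p, 4000);          /* grown in place if possible, otherwise
                                     malloc-copy-free; contents preserved  */
  if (q == NULL) { free (p); return -1; }
  p = q;

  p = realloc (p, 50);            /* shrink: the unused tail is split off
                                     and freed                             */
  free (p);
  return 0;
}
#endif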
3177 #if __STD_C
3178 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3179 #else
3180 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3181 #endif
3183 arena *ar_ptr;
3184 INTERNAL_SIZE_T nb; /* padded request size */
3186 mchunkptr oldp; /* chunk corresponding to oldmem */
3187 INTERNAL_SIZE_T oldsize; /* its size */
3189 mchunkptr newp; /* chunk to return */
3191 #if defined _LIBC || defined MALLOC_HOOKS
3192 if (__realloc_hook != NULL) {
3193 Void_t* result;
3195 #if defined __GNUC__ && __GNUC__ >= 2
3196 result = (*__realloc_hook)(oldmem, bytes, __builtin_return_address (0));
3197 #else
3198 result = (*__realloc_hook)(oldmem, bytes, NULL);
3199 #endif
3200 return result;
3202 #endif
3204 #ifdef REALLOC_ZERO_BYTES_FREES
3205 if (bytes == 0 && oldmem != NULL) { fREe(oldmem); return 0; }
3206 #endif
3208 /* realloc of null is supposed to be same as malloc */
3209 if (oldmem == 0) return mALLOc(bytes);
3211 oldp = mem2chunk(oldmem);
3212 oldsize = chunksize(oldp);
3214 if(request2size(bytes, nb))
3215 return 0;
3217 #if HAVE_MMAP
3218 if (chunk_is_mmapped(oldp))
3220 Void_t* newmem;
3222 #if HAVE_MREMAP
3223 newp = mremap_chunk(oldp, nb);
3224 if(newp) return chunk2mem(newp);
3225 #endif
3226 /* Note the extra SIZE_SZ overhead. */
3227 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
3228 /* Must alloc, copy, free. */
3229 newmem = mALLOc(bytes);
3230 if (newmem == 0) return 0; /* propagate failure */
3231 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
3232 munmap_chunk(oldp);
3233 return newmem;
3235 #endif
3237 ar_ptr = arena_for_ptr(oldp);
3238 #if THREAD_STATS
3239 if(!mutex_trylock(&ar_ptr->mutex))
3240 ++(ar_ptr->stat_lock_direct);
3241 else {
3242 (void)mutex_lock(&ar_ptr->mutex);
3243 ++(ar_ptr->stat_lock_wait);
3245 #else
3246 (void)mutex_lock(&ar_ptr->mutex);
3247 #endif
3249 #ifndef NO_THREADS
3250 /* As in malloc(), remember this arena for the next allocation. */
3251 tsd_setspecific(arena_key, (Void_t *)ar_ptr);
3252 #endif
3254 newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);
3256 (void)mutex_unlock(&ar_ptr->mutex);
3257 return newp ? chunk2mem(newp) : NULL;
3260 static mchunkptr
3261 internal_function
3262 #if __STD_C
3263 chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
3264 INTERNAL_SIZE_T nb)
3265 #else
3266 chunk_realloc(ar_ptr, oldp, oldsize, nb)
3267 arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
3268 #endif
3270 mchunkptr newp = oldp; /* chunk to return */
3271 INTERNAL_SIZE_T newsize = oldsize; /* its size */
3273 mchunkptr next; /* next contiguous chunk after oldp */
3274 INTERNAL_SIZE_T nextsize; /* its size */
3276 mchunkptr prev; /* previous contiguous chunk before oldp */
3277 INTERNAL_SIZE_T prevsize; /* its size */
3279 mchunkptr remainder; /* holds split off extra space from newp */
3280 INTERNAL_SIZE_T remainder_size; /* its size */
3282 mchunkptr bck; /* misc temp for linking */
3283 mchunkptr fwd; /* misc temp for linking */
3285 check_inuse_chunk(ar_ptr, oldp);
3287 if ((long)(oldsize) < (long)(nb))
3290 /* Try expanding forward */
3292 next = chunk_at_offset(oldp, oldsize);
3293 if (next == top(ar_ptr) || !inuse(next))
3295 nextsize = chunksize(next);
3297 /* Forward into top only if a remainder */
3298 if (next == top(ar_ptr))
3300 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
3302 newsize += nextsize;
3303 top(ar_ptr) = chunk_at_offset(oldp, nb);
3304 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3305 set_head_size(oldp, nb);
3306 return oldp;
3310 /* Forward into next chunk */
3311 else if (((long)(nextsize + newsize) >= (long)(nb)))
3313 unlink(next, bck, fwd);
3314 newsize += nextsize;
3315 goto split;
3318 else
3320 next = 0;
3321 nextsize = 0;
3324 /* Try shifting backwards. */
3326 if (!prev_inuse(oldp))
3328 prev = prev_chunk(oldp);
3329 prevsize = chunksize(prev);
3331 /* try forward + backward first to save a later consolidation */
3333 if (next != 0)
3335 /* into top */
3336 if (next == top(ar_ptr))
3338 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
3340 unlink(prev, bck, fwd);
3341 newp = prev;
3342 newsize += prevsize + nextsize;
3343 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3344 top(ar_ptr) = chunk_at_offset(newp, nb);
3345 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3346 set_head_size(newp, nb);
3347 return newp;
3351 /* into next chunk */
3352 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
3354 unlink(next, bck, fwd);
3355 unlink(prev, bck, fwd);
3356 newp = prev;
3357 newsize += nextsize + prevsize;
3358 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3359 goto split;
3363 /* backward only */
3364 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
3366 unlink(prev, bck, fwd);
3367 newp = prev;
3368 newsize += prevsize;
3369 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3370 goto split;
3374 /* Must allocate */
3376 newp = chunk_alloc (ar_ptr, nb);
3378 if (newp == 0) {
3379 /* Maybe the failure is due to running out of mmapped areas. */
3380 if (ar_ptr != &main_arena) {
3381 (void)mutex_lock(&main_arena.mutex);
3382 newp = chunk_alloc(&main_arena, nb);
3383 (void)mutex_unlock(&main_arena.mutex);
3384 } else {
3385 #if USE_ARENAS
3386 /* ... or sbrk() has failed and there is still a chance to mmap() */
3387 arena* ar_ptr2 = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
3388 if(ar_ptr2) {
3389 newp = chunk_alloc(ar_ptr2, nb);
3390 (void)mutex_unlock(&ar_ptr2->mutex);
3392 #endif
3394 if (newp == 0) /* propagate failure */
3395 return 0;
3398 /* Avoid copy if newp is next chunk after oldp. */
3399 /* (This can only happen when new chunk is sbrk'ed.) */
3401 if ( newp == next_chunk(oldp))
3403 newsize += chunksize(newp);
3404 newp = oldp;
3405 goto split;
3408 /* Otherwise copy, free, and exit */
3409 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3410 chunk_free(ar_ptr, oldp);
3411 return newp;
3415 split: /* split off extra room in old or expanded chunk */
3417 if (newsize - nb >= MINSIZE) /* split off remainder */
3419 remainder = chunk_at_offset(newp, nb);
3420 remainder_size = newsize - nb;
3421 set_head_size(newp, nb);
3422 set_head(remainder, remainder_size | PREV_INUSE);
3423 set_inuse_bit_at_offset(remainder, remainder_size);
3424 chunk_free(ar_ptr, remainder);
3426 else
3428 set_head_size(newp, newsize);
3429 set_inuse_bit_at_offset(newp, newsize);
3432 check_inuse_chunk(ar_ptr, newp);
3433 return newp;
3441 memalign algorithm:
3443 memalign requests more than enough space from malloc, finds a spot
3444 within that chunk that meets the alignment request, and then
3445 possibly frees the leading and trailing space.
3447 The alignment argument must be a power of two. This property is not
3448 checked by memalign, so misuse may result in random runtime errors.
3450 8-byte alignment is guaranteed by normal malloc calls, so don't
3451 bother calling memalign with an argument of 8 or less.
3453 Overreliance on memalign is a sure way to fragment space.
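/* A hedged usage sketch (illustration only): the alignment must be a power
   of two, and anything <= MALLOC_ALIGNMENT is simply relayed to malloc as
   noted above.  The <malloc.h> prototype for memalign is a glibc
   assumption. */
#if 0
#include <stdlib.h>
#include <malloc.h>

static int
memalign_example (void)
{
  void *p = memalign (64, 1000);   /* 1000 bytes, 64-byte aligned */
  if (p == NULL) return -1;
  /* On success, ((unsigned long) p % 64) == 0 holds. */
  free (p);
  return 0;
}
#endif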
3458 #if __STD_C
3459 Void_t* mEMALIGn(size_t alignment, size_t bytes)
3460 #else
3461 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
3462 #endif
3464 arena *ar_ptr;
3465 INTERNAL_SIZE_T nb; /* padded request size */
3466 mchunkptr p;
3468 #if defined _LIBC || defined MALLOC_HOOKS
3469 if (__memalign_hook != NULL) {
3470 Void_t* result;
3472 #if defined __GNUC__ && __GNUC__ >= 2
3473 result = (*__memalign_hook)(alignment, bytes,
3474 __builtin_return_address (0));
3475 #else
3476 result = (*__memalign_hook)(alignment, bytes, NULL);
3477 #endif
3478 return result;
3480 #endif
3482 /* If need less alignment than we give anyway, just relay to malloc */
3484 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
3486 /* Otherwise, ensure that it is at least a minimum chunk size */
3488 if (alignment < MINSIZE) alignment = MINSIZE;
3490 if(request2size(bytes, nb))
3491 return 0;
3492 arena_get(ar_ptr, nb + alignment + MINSIZE);
3493 if(!ar_ptr)
3494 return 0;
3495 p = chunk_align(ar_ptr, nb, alignment);
3496 (void)mutex_unlock(&ar_ptr->mutex);
3497 if(!p) {
3498 /* Maybe the failure is due to running out of mmapped areas. */
3499 if(ar_ptr != &main_arena) {
3500 (void)mutex_lock(&main_arena.mutex);
3501 p = chunk_align(&main_arena, nb, alignment);
3502 (void)mutex_unlock(&main_arena.mutex);
3503 } else {
3504 #if USE_ARENAS
3505 /* ... or sbrk() has failed and there is still a chance to mmap() */
3506 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
3507 if(ar_ptr) {
3508 p = chunk_align(ar_ptr, nb, alignment);
3509 (void)mutex_unlock(&ar_ptr->mutex);
3511 #endif
3513 if(!p) return 0;
3515 return chunk2mem(p);
3518 static mchunkptr
3519 internal_function
3520 #if __STD_C
3521 chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
3522 #else
3523 chunk_align(ar_ptr, nb, alignment)
3524 arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
3525 #endif
3527 char* m; /* memory returned by malloc call */
3528 mchunkptr p; /* corresponding chunk */
3529 char* brk; /* alignment point within p */
3530 mchunkptr newp; /* chunk to return */
3531 INTERNAL_SIZE_T newsize; /* its size */
3532 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3533 mchunkptr remainder; /* spare room at end to split off */
3534 long remainder_size; /* its size */
3536 /* Call chunk_alloc with worst case padding to hit alignment. */
3537 p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
3538 if (p == 0)
3539 return 0; /* propagate failure */
3541 m = (char*)chunk2mem(p);
3543 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3545 #if HAVE_MMAP
3546 if(chunk_is_mmapped(p)) {
3547 return p; /* nothing more to do */
3549 #endif
3551 else /* misaligned */
3554 Find an aligned spot inside chunk.
3555 Since we need to give back leading space in a chunk of at
3556 least MINSIZE, if the first calculation places us at
3557 a spot with less than MINSIZE leader, we can move to the
3558 next aligned spot -- we've allocated enough total room so that
3559 this is always possible.
3562 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) &
3563 -(long)alignment);
3564 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;
3566 newp = (mchunkptr)brk;
3567 leadsize = brk - (char*)(p);
3568 newsize = chunksize(p) - leadsize;
3570 #if HAVE_MMAP
3571 if(chunk_is_mmapped(p))
3573 newp->prev_size = p->prev_size + leadsize;
3574 set_head(newp, newsize|IS_MMAPPED);
3575 return newp;
3577 #endif
3579 /* give back leader, use the rest */
3581 set_head(newp, newsize | PREV_INUSE);
3582 set_inuse_bit_at_offset(newp, newsize);
3583 set_head_size(p, leadsize);
3584 chunk_free(ar_ptr, p);
3585 p = newp;
3587 assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3590 /* Also give back spare room at the end */
3592 remainder_size = chunksize(p) - nb;
3594 if (remainder_size >= (long)MINSIZE)
3596 remainder = chunk_at_offset(p, nb);
3597 set_head(remainder, remainder_size | PREV_INUSE);
3598 set_head_size(p, nb);
3599 chunk_free(ar_ptr, remainder);
3602 check_inuse_chunk(ar_ptr, p);
3603 return p;
3610 valloc just invokes memalign with alignment argument equal
3611 to the page size of the system (or as near to this as can
3612 be figured out from all the includes/defines above.)
3615 #if __STD_C
3616 Void_t* vALLOc(size_t bytes)
3617 #else
3618 Void_t* vALLOc(bytes) size_t bytes;
3619 #endif
3621 if(__malloc_initialized < 0)
3622 ptmalloc_init ();
3623 return mEMALIGn (malloc_getpagesize, bytes);
3627 pvalloc just invokes valloc for the nearest pagesize
3628 that will accommodate the request
3632 #if __STD_C
3633 Void_t* pvALLOc(size_t bytes)
3634 #else
3635 Void_t* pvALLOc(bytes) size_t bytes;
3636 #endif
3638 size_t pagesize;
3639 if(__malloc_initialized < 0)
3640 ptmalloc_init ();
3641 pagesize = malloc_getpagesize;
3642 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
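/* A hedged usage sketch (illustration only), assuming a 4096-byte page and
   glibc's <malloc.h> prototypes: valloc(100) returns a page-aligned block
   with at least 100 usable bytes, while pvalloc(100) first rounds the size
   itself up to a whole page, i.e. behaves like memalign(4096, 4096). */
#if 0
#include <stdlib.h>
#include <malloc.h>

static void
page_alloc_example (void)
{
  void *v  = valloc (100);    /* page-aligned, >= 100 bytes  */
  void *pv = pvalloc (100);   /* page-aligned, >= 4096 bytes */
  free (v);
  free (pv);
}
#endif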
3647 calloc calls chunk_alloc, then zeroes out the allocated chunk.
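/* Note that the element count and size are multiplied below without an
   explicit overflow check, so callers must ensure n * elem_size fits in a
   size_t.  The clearing itself is skipped when the memory is already known
   to be zero: freshly mmap()ed chunks, and (with MORECORE_CLEARS) freshly
   sbrk()ed memory, as tracked by the oldtop/oldtopsize logic below. */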
3651 #if __STD_C
3652 Void_t* cALLOc(size_t n, size_t elem_size)
3653 #else
3654 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3655 #endif
3657 arena *ar_ptr;
3658 mchunkptr p, oldtop;
3659 INTERNAL_SIZE_T sz, csz, oldtopsize;
3660 Void_t* mem;
3662 #if defined _LIBC || defined MALLOC_HOOKS
3663 if (__malloc_hook != NULL) {
3664 sz = n * elem_size;
3665 #if defined __GNUC__ && __GNUC__ >= 2
3666 mem = (*__malloc_hook)(sz, __builtin_return_address (0));
3667 #else
3668 mem = (*__malloc_hook)(sz, NULL);
3669 #endif
3670 if(mem == 0)
3671 return 0;
3672 #ifdef HAVE_MEMSET
3673 return memset(mem, 0, sz);
3674 #else
3675 while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
3676 return mem;
3677 #endif
3679 #endif
3681 if(request2size(n * elem_size, sz))
3682 return 0;
3683 arena_get(ar_ptr, sz);
3684 if(!ar_ptr)
3685 return 0;
3687 /* Check if malloc_extend_top was called, in which case there may be
3688 no need to clear. */
3689 #if MORECORE_CLEARS
3690 oldtop = top(ar_ptr);
3691 oldtopsize = chunksize(top(ar_ptr));
3692 #if MORECORE_CLEARS < 2
3693 /* Only newly allocated memory is guaranteed to be cleared. */
3694 if (ar_ptr == &main_arena &&
3695 oldtopsize < sbrk_base + max_sbrked_mem - (char *)oldtop)
3696 oldtopsize = (sbrk_base + max_sbrked_mem - (char *)oldtop);
3697 #endif
3698 #endif
3699 p = chunk_alloc (ar_ptr, sz);
3701 /* Only clearing follows, so we can unlock early. */
3702 (void)mutex_unlock(&ar_ptr->mutex);
3704 if (p == 0) {
3705 /* Maybe the failure is due to running out of mmapped areas. */
3706 if(ar_ptr != &main_arena) {
3707 (void)mutex_lock(&main_arena.mutex);
3708 p = chunk_alloc(&main_arena, sz);
3709 (void)mutex_unlock(&main_arena.mutex);
3710 } else {
3711 #if USE_ARENAS
3712 /* ... or sbrk() has failed and there is still a chance to mmap() */
3713 (void)mutex_lock(&main_arena.mutex);
3714 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, sz);
3715 (void)mutex_unlock(&main_arena.mutex);
3716 if(ar_ptr) {
3717 p = chunk_alloc(ar_ptr, sz);
3718 (void)mutex_unlock(&ar_ptr->mutex);
3720 #endif
3722 if (p == 0) return 0;
3724 mem = chunk2mem(p);
3726 /* Two optional cases in which clearing is not necessary */
3728 #if HAVE_MMAP
3729 if (chunk_is_mmapped(p)) return mem;
3730 #endif
3732 csz = chunksize(p);
3734 #if MORECORE_CLEARS
3735 if (p == oldtop && csz > oldtopsize) {
3736 /* clear only the bytes from non-freshly-sbrked memory */
3737 csz = oldtopsize;
3739 #endif
3741 MALLOC_ZERO(mem, csz - SIZE_SZ);
3742 return mem;
3747 cfree just calls free. It is needed/defined on some systems
3748 that pair it with calloc, presumably for odd historical reasons.
3752 #if !defined(_LIBC)
3753 #if __STD_C
3754 void cfree(Void_t *mem)
3755 #else
3756 void cfree(mem) Void_t *mem;
3757 #endif
3759 fREe(mem);
3761 #endif
3767 Malloc_trim gives memory back to the system (via negative
3768 arguments to sbrk) if there is unused memory at the `high' end of
3769 the malloc pool. You can call this after freeing large blocks of
3770 memory to potentially reduce the system-level memory requirements
3771 of a program. However, it cannot guarantee to reduce memory usage. Under
3772 some allocation patterns, some large free blocks of memory will be
3773 locked between two used chunks, so they cannot be given back to
3774 the system.
3776 The `pad' argument to malloc_trim represents the amount of free
3777 trailing space to leave untrimmed. If this argument is zero,
3778 only the minimum amount of memory to maintain internal data
3779 structures will be left (one page or less). Non-zero arguments
3780 can be supplied to maintain enough trailing space to service
3781 future expected allocations without having to re-obtain memory
3782 from the system.
3784 Malloc_trim returns 1 if it actually released any memory, else 0.
3788 #if __STD_C
3789 int mALLOC_TRIm(size_t pad)
3790 #else
3791 int mALLOC_TRIm(pad) size_t pad;
3792 #endif
3794 int res;
3796 (void)mutex_lock(&main_arena.mutex);
3797 res = main_trim(pad);
3798 (void)mutex_unlock(&main_arena.mutex);
3799 return res;
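/* Illustrative usage sketch: after releasing a large working set, ask
   the allocator to give unused top-of-heap memory back to the system.
   The 128k pad is an arbitrary example value. */
#if 0
#include <malloc.h>
#include <stdlib.h>

static void
release_blocks (void **blocks, int count)
{
  int i;
  for (i = 0; i < count; i++)
    free (blocks[i]);
  /* Returns 1 if memory was actually returned to the system, else 0. */
  (void) malloc_trim (128 * 1024);
}
#endif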
3802 /* Trim the main arena. */
3804 static int
3805 internal_function
3806 #if __STD_C
3807 main_trim(size_t pad)
3808 #else
3809 main_trim(pad) size_t pad;
3810 #endif
3812 mchunkptr top_chunk; /* The current top chunk */
3813 long top_size; /* Amount of top-most memory */
3814 long extra; /* Amount to release */
3815 char* current_brk; /* address returned by pre-check sbrk call */
3816 char* new_brk; /* address returned by negative sbrk call */
3818 unsigned long pagesz = malloc_getpagesize;
3820 top_chunk = top(&main_arena);
3821 top_size = chunksize(top_chunk);
3822 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
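  /* For example, with pagesz == 4096, pad == 0 and (on a typical 32-bit
     configuration) MINSIZE == 16, a top chunk of 20000 bytes gives
     extra = ((20000 - 0 - 16 + 4095)/4096 - 1) * 4096 = 4*4096 = 16384,
     i.e. whole pages are released while at least pad + MINSIZE bytes
     (plus part of a page) remain in top. */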
3824 if (extra < (long)pagesz) /* Not enough memory to release */
3825 return 0;
3827 /* Test to make sure no one else called sbrk */
3828 current_brk = (char*)(MORECORE (0));
3829 if (current_brk != (char*)(top_chunk) + top_size)
3830 return 0; /* Apparently we don't own memory; must fail */
3832 new_brk = (char*)(MORECORE (-extra));
3834 #if defined _LIBC || defined MALLOC_HOOKS
3835 /* Call the `morecore' hook if necessary. */
3836 if (__after_morecore_hook)
3837 (*__after_morecore_hook) ();
3838 #endif
3840 if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
3841 /* Try to figure out what we have */
3842 current_brk = (char*)(MORECORE (0));
3843 top_size = current_brk - (char*)top_chunk;
3844 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3846 sbrked_mem = current_brk - sbrk_base;
3847 set_head(top_chunk, top_size | PREV_INUSE);
3849 check_chunk(&main_arena, top_chunk);
3850 return 0;
3852 sbrked_mem -= extra;
3854 /* Success. Adjust top accordingly. */
3855 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3856 check_chunk(&main_arena, top_chunk);
3857 return 1;
3860 #if USE_ARENAS
3862 static int
3863 internal_function
3864 #if __STD_C
3865 heap_trim(heap_info *heap, size_t pad)
3866 #else
3867 heap_trim(heap, pad) heap_info *heap; size_t pad;
3868 #endif
3870 unsigned long pagesz = malloc_getpagesize;
3871 arena *ar_ptr = heap->ar_ptr;
3872 mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
3873 heap_info *prev_heap;
3874 long new_size, top_size, extra;
3876 /* Can this heap go away completely ? */
3877 while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
3878 prev_heap = heap->prev;
3879 p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
3880 assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
3881 p = prev_chunk(p);
3882 new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
3883 assert(new_size>0 && new_size<(long)(2*MINSIZE));
3884 if(!prev_inuse(p))
3885 new_size += p->prev_size;
3886 assert(new_size>0 && new_size<HEAP_MAX_SIZE);
3887 if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
3888 break;
3889 ar_ptr->size -= heap->size;
3890 arena_mem -= heap->size;
3891 delete_heap(heap);
3892 heap = prev_heap;
3893 if(!prev_inuse(p)) { /* consolidate backward */
3894 p = prev_chunk(p);
3895 unlink(p, bck, fwd);
3897 assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
3898 assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
3899 top(ar_ptr) = top_chunk = p;
3900 set_head(top_chunk, new_size | PREV_INUSE);
3901 check_chunk(ar_ptr, top_chunk);
3903 top_size = chunksize(top_chunk);
3904 extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
3905 if(extra < (long)pagesz)
3906 return 0;
3907 /* Try to shrink. */
3908 if(grow_heap(heap, -extra) != 0)
3909 return 0;
3910 ar_ptr->size -= extra;
3911 arena_mem -= extra;
3913 /* Success. Adjust top accordingly. */
3914 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3915 check_chunk(ar_ptr, top_chunk);
3916 return 1;
3919 #endif /* USE_ARENAS */
3924 malloc_usable_size:
3926 This routine tells you how many bytes you can actually use in an
3927 allocated chunk, which may be more than you requested (although
3928 often not). You can use this many bytes without worrying about
3929 overwriting other allocated objects. Not a particularly great
3930 programming practice, but still sometimes useful.
3934 #if __STD_C
3935 size_t mALLOC_USABLE_SIZe(Void_t* mem)
3936 #else
3937 size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
3938 #endif
3940 mchunkptr p;
3942 if (mem == 0)
3943 return 0;
3944 else
3946 p = mem2chunk(mem);
3947 if(!chunk_is_mmapped(p))
3949 if (!inuse(p)) return 0;
3950 check_inuse_chunk(arena_for_ptr(mem), p);
3951 return chunksize(p) - SIZE_SZ;
3953 return chunksize(p) - 2*SIZE_SZ;
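/* Illustrative usage sketch: the usable size may exceed the request
   because of alignment and bookkeeping granularity. */
#if 0
#include <malloc.h>
#include <stdlib.h>
#include <stdio.h>

int
main (void)
{
  void *p = malloc (25);
  if (p != NULL)
    printf ("requested 25, usable %lu\n",
            (unsigned long) malloc_usable_size (p));
  free (p);
  return 0;
}
#endif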
3960 /* Utility to update mallinfo for malloc_stats() and mallinfo() */
3962 static void
3963 #if __STD_C
3964 malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
3965 #else
3966 malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
3967 #endif
3969 int i, navail;
3970 mbinptr b;
3971 mchunkptr p;
3972 #if MALLOC_DEBUG
3973 mchunkptr q;
3974 #endif
3975 INTERNAL_SIZE_T avail;
3977 (void)mutex_lock(&ar_ptr->mutex);
3978 avail = chunksize(top(ar_ptr));
3979 navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3981 for (i = 1; i < NAV; ++i)
3983 b = bin_at(ar_ptr, i);
3984 for (p = last(b); p != b; p = p->bk)
3986 #if MALLOC_DEBUG
3987 check_free_chunk(ar_ptr, p);
3988 for (q = next_chunk(p);
3989 q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
3990 q = next_chunk(q))
3991 check_inuse_chunk(ar_ptr, q);
3992 #endif
3993 avail += chunksize(p);
3994 navail++;
3998 mi->arena = ar_ptr->size;
3999 mi->ordblks = navail;
4000 mi->smblks = mi->usmblks = mi->fsmblks = 0; /* clear unused fields */
4001 mi->uordblks = ar_ptr->size - avail;
4002 mi->fordblks = avail;
4003 mi->hblks = n_mmaps;
4004 mi->hblkhd = mmapped_mem;
4005 mi->keepcost = chunksize(top(ar_ptr));
4007 (void)mutex_unlock(&ar_ptr->mutex);
4010 #if USE_ARENAS && MALLOC_DEBUG > 1
4012 /* Print the complete contents of a single heap to stderr. */
4014 static void
4015 #if __STD_C
4016 dump_heap(heap_info *heap)
4017 #else
4018 dump_heap(heap) heap_info *heap;
4019 #endif
4021 char *ptr;
4022 mchunkptr p;
4024 fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
4025 ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
4026 (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
4027 p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
4028 ~MALLOC_ALIGN_MASK);
4029 for(;;) {
4030 fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
4031 if(p == top(heap->ar_ptr)) {
4032 fprintf(stderr, " (top)\n");
4033 break;
4034 } else if(p->size == (0|PREV_INUSE)) {
4035 fprintf(stderr, " (fence)\n");
4036 break;
4038 fprintf(stderr, "\n");
4039 p = next_chunk(p);
4043 #endif
4049 malloc_stats:
4051 For all arenas separately and in total, prints on stderr the
4052 amount of space obtained from the system, and the current number
4053 of bytes allocated via malloc (or realloc, etc) but not yet
4054 freed. (Note that this is the number of bytes allocated, not the
4055 number requested. It will be larger than the number requested
4056 because of alignment and bookkeeping overhead.) When not compiled
4057 for multiple threads, the maximum amount of allocated memory
4058 (which may be more than current if malloc_trim and/or munmap got
4059 called) is also reported. When using mmap(), prints the maximum
4060 number of simultaneous mmap regions used, too.
4064 void mALLOC_STATs()
4066 int i;
4067 arena *ar_ptr;
4068 struct mallinfo mi;
4069 unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
4070 #if THREAD_STATS
4071 long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
4072 #endif
4074 for(i=0, ar_ptr = &main_arena;; i++) {
4075 malloc_update_mallinfo(ar_ptr, &mi);
4076 fprintf(stderr, "Arena %d:\n", i);
4077 fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena);
4078 fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks);
4079 system_b += mi.arena;
4080 in_use_b += mi.uordblks;
4081 #if THREAD_STATS
4082 stat_lock_direct += ar_ptr->stat_lock_direct;
4083 stat_lock_loop += ar_ptr->stat_lock_loop;
4084 stat_lock_wait += ar_ptr->stat_lock_wait;
4085 #endif
4086 #if USE_ARENAS && MALLOC_DEBUG > 1
4087 if(ar_ptr != &main_arena) {
4088 heap_info *heap;
4089 (void)mutex_lock(&ar_ptr->mutex);
4090 heap = heap_for_ptr(top(ar_ptr));
4091 while(heap) { dump_heap(heap); heap = heap->prev; }
4092 (void)mutex_unlock(&ar_ptr->mutex);
4094 #endif
4095 ar_ptr = ar_ptr->next;
4096 if(ar_ptr == &main_arena) break;
4098 #if HAVE_MMAP
4099 fprintf(stderr, "Total (incl. mmap):\n");
4100 #else
4101 fprintf(stderr, "Total:\n");
4102 #endif
4103 fprintf(stderr, "system bytes = %10u\n", system_b);
4104 fprintf(stderr, "in use bytes = %10u\n", in_use_b);
4105 #ifdef NO_THREADS
4106 fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
4107 #endif
4108 #if HAVE_MMAP
4109 fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
4110 fprintf(stderr, "max mmap bytes = %10lu\n", max_mmapped_mem);
4111 #endif
4112 #if THREAD_STATS
4113 fprintf(stderr, "heaps created = %10d\n", stat_n_heaps);
4114 fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct);
4115 fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop);
4116 fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait);
4117 fprintf(stderr, "locked total = %10ld\n",
4118 stat_lock_direct + stat_lock_loop + stat_lock_wait);
4119 #endif
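/* Illustrative usage sketch: print the per-arena and total figures to
   stderr, e.g. just before program exit. */
#if 0
#include <malloc.h>
#include <stdlib.h>

int
main (void)
{
  void *p = malloc (1 << 20);
  free (p);
  malloc_stats ();               /* writes the report to stderr */
  return 0;
}
#endif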
4123 mallinfo returns an up-to-date copy of the mallinfo statistics.
4124 The information reported is for the arena last used by the thread.
4127 struct mallinfo mALLINFo()
4129 struct mallinfo mi;
4130 Void_t *vptr = NULL;
4132 #ifndef NO_THREADS
4133 tsd_getspecific(arena_key, vptr);
4134 #endif
4135 malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
4136 return mi;
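/* Illustrative usage sketch: inspect the statistics for the arena last
   used by the calling thread.  The struct mallinfo fields are ints. */
#if 0
#include <malloc.h>
#include <stdio.h>

static void
report_heap_usage (void)
{
  struct mallinfo mi = mallinfo ();
  printf ("arena size      : %d bytes\n", mi.arena);
  printf ("in use          : %d bytes\n", mi.uordblks);
  printf ("free            : %d bytes\n", mi.fordblks);
  printf ("mmapped regions : %d (%d bytes)\n", mi.hblks, mi.hblkhd);
}
#endif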
4143 mallopt:
4145 mallopt is the general SVID/XPG interface to tunable parameters.
4146 The format is to provide a (parameter-number, parameter-value) pair.
4147 mallopt then sets the corresponding parameter to the argument
4148 value if it can (i.e., so long as the value is meaningful),
4149 and returns 1 if successful else 0.
4151 See descriptions of tunable parameters above.
4155 #if __STD_C
4156 int mALLOPt(int param_number, int value)
4157 #else
4158 int mALLOPt(param_number, value) int param_number; int value;
4159 #endif
4161 switch(param_number)
4163 case M_TRIM_THRESHOLD:
4164 trim_threshold = value; return 1;
4165 case M_TOP_PAD:
4166 top_pad = value; return 1;
4167 case M_MMAP_THRESHOLD:
4168 #if USE_ARENAS
4169 /* Forbid setting the threshold too high. */
4170 if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
4171 #endif
4172 mmap_threshold = value; return 1;
4173 case M_MMAP_MAX:
4174 #if HAVE_MMAP
4175 n_mmaps_max = value; return 1;
4176 #else
4177 if (value != 0) return 0; else n_mmaps_max = value; return 1;
4178 #endif
4179 case M_CHECK_ACTION:
4180 check_action = value; return 1;
4182 default:
4183 return 0;
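/* Illustrative usage sketch: tune the allocator at startup.  The
   numeric values are examples, not recommendations. */
#if 0
#include <malloc.h>

static void
tune_allocator (void)
{
  mallopt (M_TRIM_THRESHOLD, 256 * 1024); /* trim when >256k free at top */
  mallopt (M_TOP_PAD, 64 * 1024);         /* keep 64k of slack after sbrk */
  mallopt (M_MMAP_THRESHOLD, 512 * 1024); /* mmap() requests above 512k */
  mallopt (M_MMAP_MAX, 0);                /* disable mmap()ed allocations */
}
#endif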
4189 /* Get/set state: malloc_get_state() records the current state of all
4190 malloc variables (_except_ for the actual heap contents and `hook'
4191 function pointers) in a system dependent, opaque data structure.
4192 This data structure is dynamically allocated and can be free()d
4193 after use. malloc_set_state() restores the state of all malloc
4194 variables to the previously obtained state. This is especially
4195 useful when using this malloc as part of a shared library, and when
4196 the heap contents are saved/restored via some other method. The
4197 primary example for this is GNU Emacs with its `dumping' procedure.
4198 `Hook' function pointers are never saved or restored by these
4199 functions, with two exceptions: If malloc checking was in use when
4200 malloc_get_state() was called, then malloc_set_state() calls
4201 __malloc_check_init() if possible; if malloc checking was not in
4202 use in the recorded state but the user requested malloc checking,
4203 then the hooks are reset to 0. */
4205 #define MALLOC_STATE_MAGIC 0x444c4541l
4206 #define MALLOC_STATE_VERSION (0*0x100l + 1l) /* major*0x100 + minor */
4208 struct malloc_state {
4209 long magic;
4210 long version;
4211 mbinptr av[NAV * 2 + 2];
4212 char* sbrk_base;
4213 int sbrked_mem_bytes;
4214 unsigned long trim_threshold;
4215 unsigned long top_pad;
4216 unsigned int n_mmaps_max;
4217 unsigned long mmap_threshold;
4218 int check_action;
4219 unsigned long max_sbrked_mem;
4220 unsigned long max_total_mem;
4221 unsigned int n_mmaps;
4222 unsigned int max_n_mmaps;
4223 unsigned long mmapped_mem;
4224 unsigned long max_mmapped_mem;
4225 int using_malloc_checking;
4228 Void_t*
4229 mALLOC_GET_STATe()
4231 struct malloc_state* ms;
4232 int i;
4233 mbinptr b;
4235 ms = (struct malloc_state*)mALLOc(sizeof(*ms));
4236 if (!ms)
4237 return 0;
4238 (void)mutex_lock(&main_arena.mutex);
4239 ms->magic = MALLOC_STATE_MAGIC;
4240 ms->version = MALLOC_STATE_VERSION;
4241 ms->av[0] = main_arena.av[0];
4242 ms->av[1] = main_arena.av[1];
4243 for(i=0; i<NAV; i++) {
4244 b = bin_at(&main_arena, i);
4245 if(first(b) == b)
4246 ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */
4247 else {
4248 ms->av[2*i+2] = first(b);
4249 ms->av[2*i+3] = last(b);
4252 ms->sbrk_base = sbrk_base;
4253 ms->sbrked_mem_bytes = sbrked_mem;
4254 ms->trim_threshold = trim_threshold;
4255 ms->top_pad = top_pad;
4256 ms->n_mmaps_max = n_mmaps_max;
4257 ms->mmap_threshold = mmap_threshold;
4258 ms->check_action = check_action;
4259 ms->max_sbrked_mem = max_sbrked_mem;
4260 #ifdef NO_THREADS
4261 ms->max_total_mem = max_total_mem;
4262 #else
4263 ms->max_total_mem = 0;
4264 #endif
4265 ms->n_mmaps = n_mmaps;
4266 ms->max_n_mmaps = max_n_mmaps;
4267 ms->mmapped_mem = mmapped_mem;
4268 ms->max_mmapped_mem = max_mmapped_mem;
4269 #if defined _LIBC || defined MALLOC_HOOKS
4270 ms->using_malloc_checking = using_malloc_checking;
4271 #else
4272 ms->using_malloc_checking = 0;
4273 #endif
4274 (void)mutex_unlock(&main_arena.mutex);
4275 return (Void_t*)ms;
int
4279 #if __STD_C
4280 mALLOC_SET_STATe(Void_t* msptr)
4281 #else
4282 mALLOC_SET_STATe(msptr) Void_t* msptr;
4283 #endif
4285 struct malloc_state* ms = (struct malloc_state*)msptr;
4286 int i;
4287 mbinptr b;
4289 #if defined _LIBC || defined MALLOC_HOOKS
4290 disallow_malloc_check = 1;
4291 #endif
4292 ptmalloc_init();
4293 if(ms->magic != MALLOC_STATE_MAGIC) return -1;
4294 /* Must fail if the major version is too high. */
4295 if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2;
4296 (void)mutex_lock(&main_arena.mutex);
4297 main_arena.av[0] = ms->av[0];
4298 main_arena.av[1] = ms->av[1];
4299 for(i=0; i<NAV; i++) {
4300 b = bin_at(&main_arena, i);
4301 if(ms->av[2*i+2] == 0)
4302 first(b) = last(b) = b;
4303 else {
4304 first(b) = ms->av[2*i+2];
4305 last(b) = ms->av[2*i+3];
4306 if(i > 0) {
4307 /* Make sure the links to the `av'-bins in the heap are correct. */
4308 first(b)->bk = b;
4309 last(b)->fd = b;
4313 sbrk_base = ms->sbrk_base;
4314 sbrked_mem = ms->sbrked_mem_bytes;
4315 trim_threshold = ms->trim_threshold;
4316 top_pad = ms->top_pad;
4317 n_mmaps_max = ms->n_mmaps_max;
4318 mmap_threshold = ms->mmap_threshold;
4319 check_action = ms->check_action;
4320 max_sbrked_mem = ms->max_sbrked_mem;
4321 #ifdef NO_THREADS
4322 max_total_mem = ms->max_total_mem;
4323 #endif
4324 n_mmaps = ms->n_mmaps;
4325 max_n_mmaps = ms->max_n_mmaps;
4326 mmapped_mem = ms->mmapped_mem;
4327 max_mmapped_mem = ms->max_mmapped_mem;
4328 /* add version-dependent code here */
4329 if (ms->version >= 1) {
4330 #if defined _LIBC || defined MALLOC_HOOKS
4331 /* Check whether it is safe to enable malloc checking, or whether
4332 it is necessary to disable it. */
4333 if (ms->using_malloc_checking && !using_malloc_checking &&
4334 !disallow_malloc_check)
4335 __malloc_check_init ();
4336 else if (!ms->using_malloc_checking && using_malloc_checking) {
4337 __malloc_hook = 0;
4338 __free_hook = 0;
4339 __realloc_hook = 0;
4340 __memalign_hook = 0;
4341 using_malloc_checking = 0;
4343 #endif
4346 (void)mutex_unlock(&main_arena.mutex);
4347 return 0;
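/* Illustrative usage sketch; the save/reload of the heap contents
   themselves is application-specific and only hinted at here. */
#if 0
#include <malloc.h>
#include <stdlib.h>

static void
snapshot_and_restore (void)
{
  void *state = malloc_get_state ();   /* opaque, malloc()ed snapshot */
  if (state == NULL)
    return;
  /* ... save the heap contents and `state' externally, then later,
     after they have been mapped back in ... */
  if (malloc_set_state (state) != 0)
    abort ();                          /* bad magic or version too new */
  free (state);
}
#endif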
4352 #if defined _LIBC || defined MALLOC_HOOKS
4354 /* A simple, standard set of debugging hooks. Overhead is `only' one
4355 byte per chunk; still this will catch most cases of double frees or
4356 overruns. The goal here is to avoid obscure crashes due to invalid
4357 usage, unlike in the MALLOC_DEBUG code. */
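/* Usage note (sketch): in glibc these checking wrappers are normally
   installed via __malloc_check_init(), e.g. when the MALLOC_CHECK_
   environment variable is set before the program starts; its value is
   used as check_action above (1 = print a diagnostic, 2 = abort,
   3 = both). */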
4359 #define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )
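/* Worked example: for a chunk at p == 0x8049e20,
   ((size_t)p >> 3) & 0xFF == 0xc4 and ((size_t)p >> 11) & 0xFF == 0x93,
   so MAGICBYTE(p) == (0xc4 ^ 0x93) == 0x57. */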
4361 /* Instrument a chunk with overrun detector byte(s) and convert it
4362 into a user pointer with requested size sz. */
4364 static Void_t*
4365 internal_function
4366 #if __STD_C
4367 chunk2mem_check(mchunkptr p, size_t sz)
4368 #else
4369 chunk2mem_check(p, sz) mchunkptr p; size_t sz;
4370 #endif
4372 unsigned char* m_ptr = (unsigned char*)chunk2mem(p);
4373 size_t i;
4375 for(i = chunksize(p) - (chunk_is_mmapped(p) ? 2*SIZE_SZ+1 : SIZE_SZ+1);
4376 i > sz;
4377 i -= 0xFF) {
4378 if(i-sz < 0x100) {
4379 m_ptr[i] = (unsigned char)(i-sz);
4380 break;
4382 m_ptr[i] = 0xFF;
4384 m_ptr[sz] = MAGICBYTE(p);
4385 return (Void_t*)m_ptr;
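/* Example of the trailer encoding: if the last trailer byte sits at
   offset 40 from the user pointer and the request was sz == 30, the
   loop above stores the distance 40 - 30 == 10 there and the magic
   byte MAGICBYTE(p) at offset 30; mem2chunk_check() below follows
   these bytes back to verify the chunk and recover the request end. */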
4388 /* Convert a pointer to be free()d or realloc()ed to a valid chunk
4389 pointer. If the provided pointer is not valid, return NULL. */
4391 static mchunkptr
4392 internal_function
4393 #if __STD_C
4394 mem2chunk_check(Void_t* mem)
4395 #else
4396 mem2chunk_check(mem) Void_t* mem;
4397 #endif
4399 mchunkptr p;
4400 INTERNAL_SIZE_T sz, c;
4401 unsigned char magic;
4403 p = mem2chunk(mem);
4404 if(!aligned_OK(p)) return NULL;
4405 if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
4406 /* Must be a chunk in conventional heap memory. */
4407 if(chunk_is_mmapped(p) ||
4408 ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
4409 sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
4410 ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
4411 (long)prev_chunk(p)<(long)sbrk_base ||
4412 next_chunk(prev_chunk(p))!=p) ))
4413 return NULL;
4414 magic = MAGICBYTE(p);
4415 for(sz += SIZE_SZ-1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4416 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4418 ((unsigned char*)p)[sz] ^= 0xFF;
4419 } else {
4420 unsigned long offset, page_mask = malloc_getpagesize-1;
4422 /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
4423 alignment relative to the beginning of a page. Check this
4424 first. */
4425 offset = (unsigned long)mem & page_mask;
4426 if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
4427 offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
4428 offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 &&
4429 offset<0x2000) ||
4430 !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
4431 ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
4432 ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
4433 return NULL;
4434 magic = MAGICBYTE(p);
4435 for(sz -= 1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4436 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4438 ((unsigned char*)p)[sz] ^= 0xFF;
4440 return p;
4443 /* Check for corruption of the top chunk, and try to recover if
4444 necessary. */
4446 static int
4447 internal_function
4448 #if __STD_C
4449 top_check(void)
4450 #else
4451 top_check()
4452 #endif
4454 mchunkptr t = top(&main_arena);
4455 char* brk, * new_brk;
4456 INTERNAL_SIZE_T front_misalign, sbrk_size;
4457 unsigned long pagesz = malloc_getpagesize;
4459 if((char*)t + chunksize(t) == sbrk_base + sbrked_mem ||
4460 t == initial_top(&main_arena)) return 0;
4462 if(check_action & 1)
4463 fprintf(stderr, "malloc: top chunk is corrupt\n");
4464 if(check_action & 2)
4465 abort();
4467 /* Try to set up a new top chunk. */
4468 brk = MORECORE(0);
4469 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
4470 if (front_misalign > 0)
4471 front_misalign = MALLOC_ALIGNMENT - front_misalign;
4472 sbrk_size = front_misalign + top_pad + MINSIZE;
4473 sbrk_size += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
4474 new_brk = (char*)(MORECORE (sbrk_size));
4475 if (new_brk == (char*)(MORECORE_FAILURE)) return -1;
4476 sbrked_mem = (new_brk - sbrk_base) + sbrk_size;
4478 top(&main_arena) = (mchunkptr)(brk + front_misalign);
4479 set_head(top(&main_arena), (sbrk_size - front_misalign) | PREV_INUSE);
4481 return 0;
4484 static Void_t*
4485 #if __STD_C
4486 malloc_check(size_t sz, const Void_t *caller)
4487 #else
4488 malloc_check(sz, caller) size_t sz; const Void_t *caller;
4489 #endif
4491 mchunkptr victim;
4492 INTERNAL_SIZE_T nb;
4494 if(request2size(sz+1, nb))
4495 return 0;
4496 (void)mutex_lock(&main_arena.mutex);
4497 victim = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4498 (void)mutex_unlock(&main_arena.mutex);
4499 if(!victim) return NULL;
4500 return chunk2mem_check(victim, sz);
4503 static void
4504 #if __STD_C
4505 free_check(Void_t* mem, const Void_t *caller)
4506 #else
4507 free_check(mem, caller) Void_t* mem; const Void_t *caller;
4508 #endif
4510 mchunkptr p;
4512 if(!mem) return;
4513 (void)mutex_lock(&main_arena.mutex);
4514 p = mem2chunk_check(mem);
4515 if(!p) {
4516 (void)mutex_unlock(&main_arena.mutex);
4517 if(check_action & 1)
4518 fprintf(stderr, "free(): invalid pointer %p!\n", mem);
4519 if(check_action & 2)
4520 abort();
4521 return;
4523 #if HAVE_MMAP
4524 if (chunk_is_mmapped(p)) {
4525 (void)mutex_unlock(&main_arena.mutex);
4526 munmap_chunk(p);
4527 return;
4529 #endif
4530 #if 0 /* Erase freed memory. */
4531 memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
4532 #endif
4533 chunk_free(&main_arena, p);
4534 (void)mutex_unlock(&main_arena.mutex);
4537 static Void_t*
4538 #if __STD_C
4539 realloc_check(Void_t* oldmem, size_t bytes, const Void_t *caller)
4540 #else
4541 realloc_check(oldmem, bytes, caller)
4542 Void_t* oldmem; size_t bytes; const Void_t *caller;
4543 #endif
4545 mchunkptr oldp, newp;
4546 INTERNAL_SIZE_T nb, oldsize;
4548 if (oldmem == 0) return malloc_check(bytes, NULL);
4549 (void)mutex_lock(&main_arena.mutex);
4550 oldp = mem2chunk_check(oldmem);
4551 if(!oldp) {
4552 (void)mutex_unlock(&main_arena.mutex);
4553 if(check_action & 1)
4554 fprintf(stderr, "realloc(): invalid pointer %p!\n", oldmem);
4555 if(check_action & 2)
4556 abort();
4557 return malloc_check(bytes, NULL);
4559 oldsize = chunksize(oldp);
4561 if(request2size(bytes+1, nb)) {
4562 (void)mutex_unlock(&main_arena.mutex);
4563 return 0;
4566 #if HAVE_MMAP
4567 if (chunk_is_mmapped(oldp)) {
4568 #if HAVE_MREMAP
4569 newp = mremap_chunk(oldp, nb);
4570 if(!newp) {
4571 #endif
4572 /* Note the extra SIZE_SZ overhead. */
4573 if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
4574 else {
4575 /* Must alloc, copy, free. */
4576 newp = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4577 if (newp) {
4578 MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - 2*SIZE_SZ);
4579 munmap_chunk(oldp);
4582 #if HAVE_MREMAP
4584 #endif
4585 } else {
4586 #endif /* HAVE_MMAP */
4587 newp = (top_check() >= 0) ?
4588 chunk_realloc(&main_arena, oldp, oldsize, nb) : NULL;
4589 #if 0 /* Erase freed memory. */
4590 nb = chunksize(newp);
4591 if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
4592 memset((char*)oldmem + 2*sizeof(mbinptr), 0,
4593 oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
4594 } else if(nb > oldsize+SIZE_SZ) {
4595 memset((char*)chunk2mem(newp) + oldsize, 0, nb - (oldsize+SIZE_SZ));
4597 #endif
4598 #if HAVE_MMAP
4600 #endif
4601 (void)mutex_unlock(&main_arena.mutex);
4603 if(!newp) return NULL;
4604 return chunk2mem_check(newp, bytes);
4607 static Void_t*
4608 #if __STD_C
4609 memalign_check(size_t alignment, size_t bytes, const Void_t *caller)
4610 #else
4611 memalign_check(alignment, bytes, caller)
4612 size_t alignment; size_t bytes; const Void_t *caller;
4613 #endif
4615 INTERNAL_SIZE_T nb;
4616 mchunkptr p;
4618 if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes, NULL);
4619 if (alignment < MINSIZE) alignment = MINSIZE;
4621 if(request2size(bytes+1, nb))
4622 return 0;
4623 (void)mutex_lock(&main_arena.mutex);
4624 p = (top_check() >= 0) ? chunk_align(&main_arena, nb, alignment) : NULL;
4625 (void)mutex_unlock(&main_arena.mutex);
4626 if(!p) return NULL;
4627 return chunk2mem_check(p, bytes);
4630 #ifndef NO_THREADS
4632 /* The following hooks are used when the global initialization in
4633 ptmalloc_init() hasn't completed yet. */
4635 static Void_t*
4636 #if __STD_C
4637 malloc_starter(size_t sz, const Void_t *caller)
4638 #else
4639 malloc_starter(sz, caller) size_t sz; const Void_t *caller;
4640 #endif
4642 INTERNAL_SIZE_T nb;
4643 mchunkptr victim;
4645 if(request2size(sz, nb))
4646 return 0;
4647 victim = chunk_alloc(&main_arena, nb);
4649 return victim ? chunk2mem(victim) : 0;
4652 static void
4653 #if __STD_C
4654 free_starter(Void_t* mem, const Void_t *caller)
4655 #else
4656 free_starter(mem, caller) Void_t* mem; const Void_t *caller;
4657 #endif
4659 mchunkptr p;
4661 if(!mem) return;
4662 p = mem2chunk(mem);
4663 #if HAVE_MMAP
4664 if (chunk_is_mmapped(p)) {
4665 munmap_chunk(p);
4666 return;
4668 #endif
4669 chunk_free(&main_arena, p);
4672 /* The following hooks are used while the `atfork' handling mechanism
4673 is active. */
4675 static Void_t*
4676 #if __STD_C
4677 malloc_atfork (size_t sz, const Void_t *caller)
4678 #else
4679 malloc_atfork(sz, caller) size_t sz; const Void_t *caller;
4680 #endif
4682 Void_t *vptr = NULL;
4683 INTERNAL_SIZE_T nb;
4684 mchunkptr victim;
4686 tsd_getspecific(arena_key, vptr);
4687 if(!vptr) {
4688 if(save_malloc_hook != malloc_check) {
4689 if(request2size(sz, nb))
4690 return 0;
4691 victim = chunk_alloc(&main_arena, nb);
4692 return victim ? chunk2mem(victim) : 0;
4693 } else {
4694 if(top_check()<0 || request2size(sz+1, nb))
4695 return 0;
4696 victim = chunk_alloc(&main_arena, nb);
4697 return victim ? chunk2mem_check(victim, sz) : 0;
4699 } else {
4700 /* Suspend the thread until the `atfork' handlers have completed.
4701 By that time, the hooks will have been reset as well, so that
4702 mALLOc() can be used again. */
4703 (void)mutex_lock(&list_lock);
4704 (void)mutex_unlock(&list_lock);
4705 return mALLOc(sz);
4709 static void
4710 #if __STD_C
4711 free_atfork(Void_t* mem, const Void_t *caller)
4712 #else
4713 free_atfork(mem, caller) Void_t* mem; const Void_t *caller;
4714 #endif
4716 Void_t *vptr = NULL;
4717 arena *ar_ptr;
4718 mchunkptr p; /* chunk corresponding to mem */
4720 if (mem == 0) /* free(0) has no effect */
4721 return;
4723 p = mem2chunk(mem); /* do not bother to replicate free_check here */
4725 #if HAVE_MMAP
4726 if (chunk_is_mmapped(p)) /* release mmapped memory. */
4728 munmap_chunk(p);
4729 return;
4731 #endif
4733 ar_ptr = arena_for_ptr(p);
4734 tsd_getspecific(arena_key, vptr);
4735 if(vptr)
4736 (void)mutex_lock(&ar_ptr->mutex);
4737 chunk_free(ar_ptr, p);
4738 if(vptr)
4739 (void)mutex_unlock(&ar_ptr->mutex);
4742 #endif /* !defined NO_THREADS */
4744 #endif /* defined _LIBC || defined MALLOC_HOOKS */
4748 #ifdef _LIBC
4749 /* We need a wrapper function for one of the additions of POSIX. */
int
4751 __posix_memalign (void **memptr, size_t alignment, size_t size)
4753 void *mem;
4755 /* Test whether the ALIGNMENT argument is valid.  It must be a power of
4756 two multiple of sizeof (void *).  */
4757 if (alignment % sizeof (void *) != 0 || (alignment & (alignment - 1)) != 0 || alignment == 0)
4758 return EINVAL;
4760 mem = __libc_memalign (alignment, size);
4762 if (mem != NULL)
4764 *memptr = mem;
4765 return 0;
4768 return ENOMEM;
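/* Illustrative usage sketch of the POSIX interface defined above. */
#if 0
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

int
main (void)
{
  void *buf;
  int err = posix_memalign (&buf, 64, 1024);  /* 64-byte aligned, 1k */
  if (err != 0)        /* returns an error number, does not set errno */
    {
      fprintf (stderr, "posix_memalign: %s\n", strerror (err));
      return 1;
    }
  free (buf);
  return 0;
}
#endif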
4770 weak_alias (__posix_memalign, posix_memalign)
4772 weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
4773 weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
4774 weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
4775 weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
4776 weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
4777 weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
4778 weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
4779 weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
4780 weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
4781 weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
4783 weak_alias (__malloc_stats, malloc_stats)
4784 weak_alias (__malloc_usable_size, malloc_usable_size)
4785 weak_alias (__malloc_trim, malloc_trim)
4786 weak_alias (__malloc_get_state, malloc_get_state)
4787 weak_alias (__malloc_set_state, malloc_set_state)
4788 #endif
4792 History:
4794 V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4795 * Added malloc_get/set_state() (mainly for use in GNU emacs),
4796 using interface from Marcus Daniels
4797 * All parameters are now adjustable via environment variables
4799 V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4800 * Added debugging hooks
4801 * Fixed possible deadlock in realloc() when out of memory
4802 * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
4804 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4805 * Very minor updates from the released 2.6.4 version.
4806 * Trimmed include file down to exported data structures.
4807 * Changes from H.J. Lu for glibc-2.0.
4809 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4810 * Many changes for multiple threads
4811 * Introduced arenas and heaps
4813 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
4814 * Added pvalloc, as recommended by H.J. Liu
4815 * Added 64bit pointer support mainly from Wolfram Gloger
4816 * Added anonymously donated WIN32 sbrk emulation
4817 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4818 * malloc_extend_top: fix mask error that caused wastage after
4819 foreign sbrks
4820 * Add linux mremap support code from HJ Liu
4822 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4823 * Integrated most documentation with the code.
4824 * Add support for mmap, with help from
4825 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4826 * Use last_remainder in more cases.
4827 * Pack bins using idea from colin@nyx10.cs.du.edu
4828 * Use ordered bins instead of best-fit threshold
4829 * Eliminate block-local decls to simplify tracing and debugging.
4830 * Support another case of realloc via move into top
4831 * Fix error occurring when initial sbrk_base not word-aligned.
4832 * Rely on page size for units instead of SBRK_UNIT to
4833 avoid surprises about sbrk alignment conventions.
4834 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4835 (raymond@es.ele.tue.nl) for the suggestion.
4836 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4837 * More precautions for cases where other routines call sbrk,
4838 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4839 * Added macros etc., allowing use in linux libc from
4840 H.J. Lu (hjl@gnu.ai.mit.edu)
4841 * Inverted this history list
4843 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
4844 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4845 * Removed all preallocation code since under current scheme
4846 the work required to undo bad preallocations exceeds
4847 the work saved in good cases for most test programs.
4848 * No longer use return list or unconsolidated bins since
4849 no scheme using them consistently outperforms those that don't
4850 given above changes.
4851 * Use best fit for very large chunks to prevent some worst-cases.
4852 * Added some support for debugging
4854 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
4855 * Removed footers when chunks are in use. Thanks to
4856 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4858 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
4859 * Added malloc_trim, with help from Wolfram Gloger
4860 (wmglo@Dent.MED.Uni-Muenchen.DE).
4862 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
4864 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
4865 * realloc: try to expand in both directions
4866 * malloc: swap order of clean-bin strategy;
4867 * realloc: only conditionally expand backwards
4868 * Try not to scavenge used bins
4869 * Use bin counts as a guide to preallocation
4870 * Occasionally bin return list chunks in first scan
4871 * Add a few optimizations from colin@nyx10.cs.du.edu
4873 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
4874 * faster bin computation & slightly different binning
4875 * merged all consolidations to one part of malloc proper
4876 (eliminating old malloc_find_space & malloc_clean_bin)
4877 * Scan 2 returns chunks (not just 1)
4878 * Propagate failure in realloc if malloc returns 0
4879 * Add stuff to allow compilation on non-ANSI compilers
4880 from kpv@research.att.com
4882 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
4883 * removed potential for odd address access in prev_chunk
4884 * removed dependency on getpagesize.h
4885 * misc cosmetics and a bit more internal documentation
4886 * anticosmetics: mangled names in macros to evade debugger strangeness
4887 * tested on sparc, hp-700, dec-mips, rs6000
4888 with gcc & native cc (hp, dec only) allowing
4889 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4891 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
4892 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4893 structure of old version, but most details differ.)