1 /* Malloc implementation for multiple threads without lock contention.
2 Copyright (C) 1996, 1997, 1998, 1999, 2000 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
22 /* $Id$
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
27 ftp://g.oswego.edu/pub/misc/malloc.c
29 Most of the original comments are reproduced in the code below.
31 * Why use this malloc?
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
48 * Synopsis of public routines
50 (Much fuller descriptions are contained in the program documentation below.)
52 ptmalloc_init();
53 Initialize global configuration. When compiled for multiple threads,
54 this function must be called once before any other function in the
55 package. It is not required otherwise. It is called automatically
 56        in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
57 malloc(size_t n);
58 Return a pointer to a newly allocated chunk of at least n bytes, or null
59 if no space is available.
60 free(Void_t* p);
61 Release the chunk of memory pointed to by p, or no effect if p is null.
62 realloc(Void_t* p, size_t n);
63 Return a pointer to a chunk of size n that contains the same data
64 as does chunk p up to the minimum of (n, p's size) bytes, or null
65 if no space is available. The returned pointer may or may not be
66 the same as p. If p is null, equivalent to malloc. Unless the
67 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
68 size argument of zero (re)allocates a minimum-sized chunk.
69 memalign(size_t alignment, size_t n);
70 Return a pointer to a newly allocated chunk of n bytes, aligned
71 in accord with the alignment argument, which must be a power of
72 two.
73 valloc(size_t n);
74 Equivalent to memalign(pagesize, n), where pagesize is the page
75 size of the system (or as near to this as can be figured out from
76 all the includes/defines below.)
77 pvalloc(size_t n);
78 Equivalent to valloc(minimum-page-that-holds(n)), that is,
79 round up n to nearest pagesize.
80 calloc(size_t unit, size_t quantity);
81 Returns a pointer to quantity * unit bytes, with all locations
82 set to zero.
83 cfree(Void_t* p);
84 Equivalent to free(p).
85 malloc_trim(size_t pad);
86 Release all but pad bytes of freed top-most memory back
87 to the system. Return 1 if successful, else 0.
88 malloc_usable_size(Void_t* p);
 89        Report the number of usable allocated bytes associated with allocated
90 chunk p. This may or may not report more bytes than were requested,
91 due to alignment and minimum size constraints.
92 malloc_stats();
93 Prints brief summary statistics on stderr.
94 mallinfo()
95 Returns (by copy) a struct containing various summary statistics.
96 mallopt(int parameter_number, int parameter_value)
97 Changes one of the tunable parameters described below. Returns
98 1 if successful in changing the parameter, else 0.
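    (Illustrative only, not part of the original synopsis: a minimal usage
    sketch of these routines. Error handling is reduced to bare null checks,
    and M_TRIM_THRESHOLD is one of the mallopt parameters described further
    below.)

        int example(void)
        {
          char *p, *tmp;
          void *q;

          p = malloc(100);
          if (p == NULL) return -1;
          tmp = realloc(p, 200);
          if (tmp == NULL) { free(p); return -1; }
          p = tmp;
          q = memalign(64, 256);
          mallopt(M_TRIM_THRESHOLD, 64*1024);
          free(q);
          free(p);
          malloc_trim(0);
          return 0;
        }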
100 * Vital statistics:
102 Alignment: 8-byte
103 8 byte alignment is currently hardwired into the design. This
104 seems to suffice for all current machines and C compilers.
106 Assumed pointer representation: 4 or 8 bytes
107 Code for 8-byte pointers is untested by me but has worked
 108       reliably for Wolfram Gloger, who contributed most of the
109 changes supporting this.
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
 119                               8-byte ptrs:  24/32 bytes (including 4/8 overhead)
121 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
122 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
138 Requests for sizes with a negative sign bit will return a
139 minimum-sized chunk.
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
143 Alignment demands, plus the minimum allocatable size restriction
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
145 more bytes will be allocated than were requested in malloc), with
146 two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
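    (A worked illustration, not part of the original text: with 4-byte
    size_t, a request for 1 byte needs 1 + 4 bytes including the header and
    is rounded up to the 16-byte minimum chunk, i.e. 15 bytes more than
    requested, the worst case noted above; a request for 13 bytes needs
    13 + 4 = 17 bytes and is rounded up to a 24-byte chunk, i.e. 11 extra
    bytes.)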
153 * Limitations
155 Here are some features that are NOT currently supported
157 * No automated mechanism for fully checking that all accesses
158 to malloced memory stay within their bounds.
159 * No support for compaction.
161 * Synopsis of compile-time options:
163 People have reported using previous versions of this malloc on all
164 versions of Unix, sometimes by tweaking some of the defines
165 below. It has been tested most extensively on Solaris and
166 Linux. People have also reported adapting this malloc for use in
167 stand-alone embedded systems.
169 The implementation is in straight, hand-tuned ANSI C. Among other
170 consequences, it uses a lot of macros. Because of this, to be at
171 all usable, this code should be compiled using an optimizing compiler
172 (for example gcc -O2) that can simplify expressions and control
173 paths.
175 __STD_C (default: derived from C compiler defines)
176 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
177 a C compiler sufficiently close to ANSI to get away with it.
178 MALLOC_DEBUG (default: NOT defined)
179 Define to enable debugging. Adds fairly extensive assertion-based
180 checking to help track down memory errors, but noticeably slows down
181 execution.
182 MALLOC_HOOKS (default: NOT defined)
 183     Define to enable support for run-time replacement of the allocation
184 functions through user-defined `hooks'.
185 REALLOC_ZERO_BYTES_FREES (default: defined)
186 Define this if you think that realloc(p, 0) should be equivalent
187 to free(p). (The C standard requires this behaviour, therefore
188 it is the default.) Otherwise, since malloc returns a unique
189 pointer for malloc(0), so does realloc(p, 0).
190 HAVE_MEMCPY (default: defined)
191 Define if you are not otherwise using ANSI STD C, but still
192 have memcpy and memset in your C library and want to use them.
193 Otherwise, simple internal versions are supplied.
194 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
195 Define as 1 if you want the C library versions of memset and
196 memcpy called in realloc and calloc (otherwise macro versions are used).
197 At least on some platforms, the simple macro versions usually
198 outperform libc versions.
199 HAVE_MMAP (default: defined as 1)
200 Define to non-zero to optionally make malloc() use mmap() to
201 allocate very large blocks.
202 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
203 Define to non-zero to optionally make realloc() use mremap() to
204 reallocate very large blocks.
205 USE_ARENAS (default: the same as HAVE_MMAP)
206 Enable support for multiple arenas, allocated using mmap().
207 malloc_getpagesize (default: derived from system #includes)
208 Either a constant or routine call returning the system page size.
209 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
210 Optionally define if you are on a system with a /usr/include/malloc.h
211 that declares struct mallinfo. It is not at all necessary to
 212     define this even if you do, but doing so will ensure consistency.
213 INTERNAL_SIZE_T (default: size_t)
214 Define to a 32-bit type (probably `unsigned int') if you are on a
215 64-bit machine, yet do not want or need to allow malloc requests of
216 greater than 2^31 to be handled. This saves space, especially for
217 very small chunks.
218 _LIBC (default: NOT defined)
219 Defined only when compiled as part of the Linux libc/glibc.
220 Also note that there is some odd internal name-mangling via defines
221 (for example, internally, `malloc' is named `mALLOc') needed
222 when compiling in this case. These look funny but don't otherwise
223 affect anything.
224 LACKS_UNISTD_H (default: undefined)
225 Define this if your system does not have a <unistd.h>.
226 MORECORE (default: sbrk)
227 The name of the routine to call to obtain more memory from the system.
228 MORECORE_FAILURE (default: -1)
229 The value returned upon failure of MORECORE.
230 MORECORE_CLEARS (default 1)
231 The degree to which the routine mapped to MORECORE zeroes out
232 memory: never (0), only for newly allocated space (1) or always
233 (2). The distinction between (1) and (2) is necessary because on
234 some systems, if the application first decrements and then
235 increments the break value, the contents of the reallocated space
236 are unspecified.
237 DEFAULT_TRIM_THRESHOLD
238 DEFAULT_TOP_PAD
239 DEFAULT_MMAP_THRESHOLD
240 DEFAULT_MMAP_MAX
241 Default values of tunable parameters (described in detail below)
242 controlling interaction with host system routines (sbrk, mmap, etc).
243 These values may also be changed dynamically via mallopt(). The
244 preset defaults are those that give best performance for typical
245 programs/systems.
246 DEFAULT_CHECK_ACTION
247 When the standard debugging hooks are in place, and a pointer is
248 detected as corrupt, do nothing (0), print an error message (1),
249 or call abort() (2).
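    (An illustrative sketch, not part of the original text: a stand-alone,
    non-glibc build might pre-define some of these options in a small
    wrapper file before including this source. The names my_morecore,
    my_heap and my_brk below are hypothetical; my_morecore mimics sbrk by
    returning the previous break, or MORECORE_FAILURE when the static
    region is exhausted.)

        #define MORECORE my_morecore
        #define MORECORE_FAILURE (-1)
        #define MORECORE_CLEARS 0
        #define DEFAULT_TRIM_THRESHOLD (256 * 1024)

        static char my_heap[1024 * 1024];
        static char *my_brk = my_heap;

        static void *my_morecore(ptrdiff_t increment)
        {
          char *old = my_brk;
          if (increment > 0 && my_heap + sizeof(my_heap) - my_brk < increment)
            return (void *) MORECORE_FAILURE;
          if (increment < 0 && my_brk - my_heap < -increment)
            return (void *) MORECORE_FAILURE;
          my_brk += increment;
          return old;
        }

        #include "malloc.c"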
256 * Compile-time options for multiple threads:
258 USE_PTHREADS, USE_THR, USE_SPROC
259 Define one of these as 1 to select the thread interface:
260 POSIX threads, Solaris threads or SGI sproc's, respectively.
261 If none of these is defined as non-zero, you get a `normal'
262 malloc implementation which is not thread-safe. Support for
263 multiple threads requires HAVE_MMAP=1. As an exception, when
264 compiling for GNU libc, i.e. when _LIBC is defined, then none of
265 the USE_... symbols have to be defined.
267 HEAP_MIN_SIZE
268 HEAP_MAX_SIZE
269 When thread support is enabled, additional `heap's are created
270 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
271 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
272 of two for alignment reasons. HEAP_MAX_SIZE should be at least
273 twice as large as the mmap threshold.
274 THREAD_STATS
275 When this is defined as non-zero, some statistics on mutex locking
276 are computed.
283 /* Preliminaries */
285 #ifndef __STD_C
286 #if defined (__STDC__)
287 #define __STD_C 1
288 #else
289 #if __cplusplus
290 #define __STD_C 1
291 #else
292 #define __STD_C 0
293 #endif /*__cplusplus*/
294 #endif /*__STDC__*/
295 #endif /*__STD_C*/
297 #ifndef Void_t
298 #if __STD_C
299 #define Void_t void
300 #else
301 #define Void_t char
302 #endif
303 #endif /*Void_t*/
305 #if __STD_C
306 # include <stddef.h> /* for size_t */
307 # if defined _LIBC || defined MALLOC_HOOKS
308 # include <stdlib.h> /* for getenv(), abort() */
309 # endif
310 #else
311 # include <sys/types.h>
312 # if defined _LIBC || defined MALLOC_HOOKS
313 extern char* getenv();
314 # endif
315 #endif
317 /* Macros for handling mutexes and thread-specific data. This is
318 included early, because some thread-related header files (such as
319 pthread.h) should be included before any others. */
320 #include "thread-m.h"
322 #ifdef __cplusplus
323 extern "C" {
324 #endif
326 #include <errno.h>
327 #include <stdio.h> /* needed for malloc_stats */
331 Compile-time options
336 Debugging:
338 Because freed chunks may be overwritten with link fields, this
339 malloc will often die when freed memory is overwritten by user
340 programs. This can be very effective (albeit in an annoying way)
341 in helping track down dangling pointers.
343 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
344 enabled that will catch more memory errors. You probably won't be
345 able to make much sense of the actual assertion errors, but they
346 should help you locate incorrectly overwritten memory. The
347 checking is fairly extensive, and will slow down execution
348 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
349 attempt to check every non-mmapped allocated and free chunk in the
350 course of computing the summaries. (By nature, mmapped regions
351 cannot be checked very much automatically.)
353 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
354 this code. The assertions in the check routines spell out in more
355 detail the assumptions and invariants underlying the algorithms.
359 #if MALLOC_DEBUG
360 #include <assert.h>
361 #else
362 #define assert(x) ((void)0)
363 #endif
367 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
368 of chunk sizes. On a 64-bit machine, you can reduce malloc
369 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
370 at the expense of not being able to handle requests greater than
371 2^31. This limitation is hardly ever a concern; you are encouraged
372 to set this. However, the default version is the same as size_t.
375 #ifndef INTERNAL_SIZE_T
376 #define INTERNAL_SIZE_T size_t
377 #endif
380 REALLOC_ZERO_BYTES_FREES should be set if a call to realloc with
381 zero bytes should be the same as a call to free. The C standard
382 requires this. Otherwise, since this malloc returns a unique pointer
383 for malloc(0), so does realloc(p, 0).
387 #define REALLOC_ZERO_BYTES_FREES
391 HAVE_MEMCPY should be defined if you are not otherwise using
392 ANSI STD C, but still have memcpy and memset in your C library
393 and want to use them in calloc and realloc. Otherwise simple
394 macro versions are defined here.
396 USE_MEMCPY should be defined as 1 if you actually want to
397 have memset and memcpy called. People report that the macro
398 versions are often enough faster than libc versions on many
399 systems that it is better to use them.
403 #define HAVE_MEMCPY 1
405 #ifndef USE_MEMCPY
406 #ifdef HAVE_MEMCPY
407 #define USE_MEMCPY 1
408 #else
409 #define USE_MEMCPY 0
410 #endif
411 #endif
413 #if (__STD_C || defined(HAVE_MEMCPY))
415 #if __STD_C
416 void* memset(void*, int, size_t);
417 void* memcpy(void*, const void*, size_t);
418 #else
419 Void_t* memset();
420 Void_t* memcpy();
421 #endif
422 #endif
424 #if USE_MEMCPY
426 /* The following macros are only invoked with (2n+1)-multiples of
427 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
428 for fast inline execution when n is small. */
430 #define MALLOC_ZERO(charp, nbytes) \
431 do { \
432 INTERNAL_SIZE_T mzsz = (nbytes); \
433 if(mzsz <= 9*sizeof(mzsz)) { \
434 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
435 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
436 *mz++ = 0; \
437 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
438 *mz++ = 0; \
439 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
440 *mz++ = 0; }}} \
441 *mz++ = 0; \
442 *mz++ = 0; \
443 *mz = 0; \
444 } else memset((charp), 0, mzsz); \
445 } while(0)
447 #define MALLOC_COPY(dest,src,nbytes) \
448 do { \
449 INTERNAL_SIZE_T mcsz = (nbytes); \
450 if(mcsz <= 9*sizeof(mcsz)) { \
451 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
452 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
453 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
454 *mcdst++ = *mcsrc++; \
455 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
456 *mcdst++ = *mcsrc++; \
457 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
458 *mcdst++ = *mcsrc++; }}} \
459 *mcdst++ = *mcsrc++; \
460 *mcdst++ = *mcsrc++; \
461 *mcdst = *mcsrc ; \
462 } else memcpy(dest, src, mcsz); \
463 } while(0)
465 #else /* !USE_MEMCPY */
467 /* Use Duff's device for good zeroing/copying performance. */
469 #define MALLOC_ZERO(charp, nbytes) \
470 do { \
471 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
472 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
473 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
474 switch (mctmp) { \
475 case 0: for(;;) { *mzp++ = 0; \
476 case 7: *mzp++ = 0; \
477 case 6: *mzp++ = 0; \
478 case 5: *mzp++ = 0; \
479 case 4: *mzp++ = 0; \
480 case 3: *mzp++ = 0; \
481 case 2: *mzp++ = 0; \
482 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
 483     }                                                                       \
 484 } while(0)
486 #define MALLOC_COPY(dest,src,nbytes) \
487 do { \
488 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
489 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
490 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
491 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
492 switch (mctmp) { \
493 case 0: for(;;) { *mcdst++ = *mcsrc++; \
494 case 7: *mcdst++ = *mcsrc++; \
495 case 6: *mcdst++ = *mcsrc++; \
496 case 5: *mcdst++ = *mcsrc++; \
497 case 4: *mcdst++ = *mcsrc++; \
498 case 3: *mcdst++ = *mcsrc++; \
499 case 2: *mcdst++ = *mcsrc++; \
500 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
 501     }                                                                       \
 502 } while(0)
504 #endif
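/* Illustrative usage sketch (not from the original source): both variants
   above are only invoked on word-aligned regions whose length is an odd
   multiple (at least 3) of sizeof(INTERNAL_SIZE_T), e.g.

     INTERNAL_SIZE_T buf1[7], buf2[7];
     MALLOC_ZERO(buf1, 7 * sizeof(INTERNAL_SIZE_T));
     MALLOC_COPY(buf2, buf1, 7 * sizeof(INTERNAL_SIZE_T));

   which zeroes buf1 and copies it into buf2 without calling memset or
   memcpy when the inline or Duff's-device variants are selected. */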
507 #ifndef LACKS_UNISTD_H
508 # include <unistd.h>
509 #endif
512 Define HAVE_MMAP to optionally make malloc() use mmap() to allocate
513 very large blocks. These will be returned to the operating system
514 immediately after a free(). HAVE_MMAP is also a prerequisite to
515 support multiple `arenas' (see USE_ARENAS below).
518 #ifndef HAVE_MMAP
519 # ifdef _POSIX_MAPPED_FILES
520 # define HAVE_MMAP 1
521 # endif
522 #endif
525 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
526 large blocks. This is currently only possible on Linux with
527 kernel versions newer than 1.3.77.
530 #ifndef HAVE_MREMAP
531 #define HAVE_MREMAP defined(__linux__)
532 #endif
534 /* Define USE_ARENAS to enable support for multiple `arenas'. These
535 are allocated using mmap(), are necessary for threads and
536 occasionally useful to overcome address space limitations affecting
537 sbrk(). */
539 #ifndef USE_ARENAS
540 #define USE_ARENAS HAVE_MMAP
541 #endif
543 #if HAVE_MMAP
545 #include <unistd.h>
546 #include <fcntl.h>
547 #include <sys/mman.h>
549 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
550 #define MAP_ANONYMOUS MAP_ANON
551 #endif
552 #if !defined(MAP_FAILED)
553 #define MAP_FAILED ((char*)-1)
554 #endif
556 #ifndef MAP_NORESERVE
557 # ifdef MAP_AUTORESRV
558 # define MAP_NORESERVE MAP_AUTORESRV
559 # else
560 # define MAP_NORESERVE 0
561 # endif
562 #endif
564 #endif /* HAVE_MMAP */
567 Access to system page size. To the extent possible, this malloc
568 manages memory from the system in page-size units.
570 The following mechanics for getpagesize were adapted from
571 bsd/gnu getpagesize.h
574 #ifndef malloc_getpagesize
575 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
576 # ifndef _SC_PAGE_SIZE
577 # define _SC_PAGE_SIZE _SC_PAGESIZE
578 # endif
579 # endif
580 # ifdef _SC_PAGE_SIZE
581 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
582 # else
583 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
584 extern size_t getpagesize();
585 # define malloc_getpagesize getpagesize()
586 # else
587 # include <sys/param.h>
588 # ifdef EXEC_PAGESIZE
589 # define malloc_getpagesize EXEC_PAGESIZE
590 # else
591 # ifdef NBPG
592 # ifndef CLSIZE
593 # define malloc_getpagesize NBPG
594 # else
595 # define malloc_getpagesize (NBPG * CLSIZE)
596 # endif
597 # else
598 # ifdef NBPC
599 # define malloc_getpagesize NBPC
600 # else
601 # ifdef PAGESIZE
602 # define malloc_getpagesize PAGESIZE
603 # else
604 # define malloc_getpagesize (4096) /* just guess */
605 # endif
606 # endif
607 # endif
608 # endif
609 # endif
610 # endif
611 #endif
617 This version of malloc supports the standard SVID/XPG mallinfo
618 routine that returns a struct containing the same kind of
619 information you can get from malloc_stats. It should work on
620 any SVID/XPG compliant system that has a /usr/include/malloc.h
621 defining struct mallinfo. (If you'd like to install such a thing
622 yourself, cut out the preliminary declarations as described above
623 and below and save them in a malloc.h file. But there's no
624 compelling reason to bother to do this.)
626 The main declaration needed is the mallinfo struct that is returned
 627   (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
628 bunch of fields, most of which are not even meaningful in this
 629   version of malloc.  Some of these fields are instead filled by
630 mallinfo() with other numbers that might possibly be of interest.
632 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
633 /usr/include/malloc.h file that includes a declaration of struct
634 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
635 version is declared below. These must be precisely the same for
636 mallinfo() to work.
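  (An illustrative sketch, not part of the original text, of typical use:

      struct mallinfo mi = mallinfo();
      printf("space obtained from sbrk:  %d\n", mi.arena);
      printf("total allocated space:     %d\n", mi.uordblks);
      printf("total free space:          %d\n", mi.fordblks);
      printf("space in mmapped regions:  %d\n", mi.hblkhd);

   The fields are plain ints in the SVID/XPG struct, so they can wrap for
   very large heaps.)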
640 /* #define HAVE_USR_INCLUDE_MALLOC_H */
642 #if HAVE_USR_INCLUDE_MALLOC_H
643 # include "/usr/include/malloc.h"
644 #else
645 # ifdef _LIBC
646 # include "malloc.h"
647 # else
648 # include "ptmalloc.h"
649 # endif
650 #endif
652 #include <bp-checks.h>
654 #ifndef DEFAULT_TRIM_THRESHOLD
655 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
656 #endif
659 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
660 to keep before releasing via malloc_trim in free().
662 Automatic trimming is mainly useful in long-lived programs.
663 Because trimming via sbrk can be slow on some systems, and can
664 sometimes be wasteful (in cases where programs immediately
665 afterward allocate more large chunks) the value should be high
666 enough so that your overall system performance would improve by
667 releasing.
669 The trim threshold and the mmap control parameters (see below)
670 can be traded off with one another. Trimming and mmapping are
671 two different ways of releasing unused memory back to the
672 system. Between these two, it is often possible to keep
673 system-level demands of a long-lived program down to a bare
674 minimum. For example, in one test suite of sessions measuring
675 the XF86 X server on Linux, using a trim threshold of 128K and a
676 mmap threshold of 192K led to near-minimal long term resource
677 consumption.
679 If you are using this malloc in a long-lived program, it should
680 pay to experiment with these values. As a rough guide, you
 681       might set it to a value close to the average size of a process
682 (program) running on your system. Releasing this much memory
683 would allow such a process to run in memory. Generally, it's
684 worth it to tune for trimming rather than memory mapping when a
685 program undergoes phases where several large chunks are
686 allocated and released in ways that can reuse each other's
687 storage, perhaps mixed with phases where there are no such
688 chunks at all. And in well-behaved long-lived programs,
689 controlling release of large blocks via trimming versus mapping
690 is usually faster.
692 However, in most programs, these parameters serve mainly as
693 protection against the system-level effects of carrying around
694 massive amounts of unneeded memory. Since frequent calls to
695 sbrk, mmap, and munmap otherwise degrade performance, the default
696 parameters are set to relatively high values that serve only as
697 safeguards.
699 The default trim value is high enough to cause trimming only in
700 fairly extreme (by current memory consumption standards) cases.
701 It must be greater than page size to have any useful effect. To
 702       disable trimming completely, you can set it to (unsigned long)(-1).
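      (An illustrative sketch, not part of the original text: a long-lived
      server that wants freed memory returned to the system more eagerly
      might lower the threshold and occasionally trim by hand,

          mallopt(M_TRIM_THRESHOLD, 64 * 1024);
          malloc_trim(32 * 1024);

      while passing -1 to mallopt(M_TRIM_THRESHOLD, ...) effectively
      disables automatic trimming, as described above.)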
708 #ifndef DEFAULT_TOP_PAD
709 #define DEFAULT_TOP_PAD (0)
710 #endif
713 M_TOP_PAD is the amount of extra `padding' space to allocate or
714 retain whenever sbrk is called. It is used in two ways internally:
716 * When sbrk is called to extend the top of the arena to satisfy
717 a new malloc request, this much padding is added to the sbrk
718 request.
720 * When malloc_trim is called automatically from free(),
721 it is used as the `pad' argument.
723 In both cases, the actual amount of padding is rounded
724 so that the end of the arena is always a system page boundary.
726 The main reason for using padding is to avoid calling sbrk so
727 often. Having even a small pad greatly reduces the likelihood
728 that nearly every malloc request during program start-up (or
729 after trimming) will invoke sbrk, which needlessly wastes
730 time.
732 Automatic rounding-up to page-size units is normally sufficient
733 to avoid measurable overhead, so the default is 0. However, in
734 systems where sbrk is relatively slow, it can pay to increase
735 this value, at the expense of carrying around more memory than
736 the program needs.
741 #ifndef DEFAULT_MMAP_THRESHOLD
742 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
743 #endif
747 M_MMAP_THRESHOLD is the request size threshold for using mmap()
748 to service a request. Requests of at least this size that cannot
749 be allocated using already-existing space will be serviced via mmap.
750 (If enough normal freed space already exists it is used instead.)
752 Using mmap segregates relatively large chunks of memory so that
753 they can be individually obtained and released from the host
754 system. A request serviced through mmap is never reused by any
755 other request (at least not directly; the system may just so
756 happen to remap successive requests to the same locations).
758 Segregating space in this way has the benefit that mmapped space
759 can ALWAYS be individually released back to the system, which
760 helps keep the system level memory demands of a long-lived
761 program low. Mapped memory can never become `locked' between
762 other chunks, as can happen with normally allocated chunks, which
 763       means that even trimming via malloc_trim would not release them.
765 However, it has the disadvantages that:
767 1. The space cannot be reclaimed, consolidated, and then
768 used to service later requests, as happens with normal chunks.
769 2. It can lead to more wastage because of mmap page alignment
770 requirements
771 3. It causes malloc performance to be more dependent on host
772 system memory management support routines which may vary in
773 implementation quality and may impose arbitrary
774 limitations. Generally, servicing a request via normal
775 malloc steps is faster than going through a system's mmap.
777 All together, these considerations should lead you to use mmap
778 only for relatively large requests.
785 #ifndef DEFAULT_MMAP_MAX
786 #if HAVE_MMAP
787 #define DEFAULT_MMAP_MAX (1024)
788 #else
789 #define DEFAULT_MMAP_MAX (0)
790 #endif
791 #endif
794 M_MMAP_MAX is the maximum number of requests to simultaneously
795 service using mmap. This parameter exists because:
797 1. Some systems have a limited number of internal tables for
798 use by mmap.
799 2. In most systems, overreliance on mmap can degrade overall
800 performance.
801 3. If a program allocates many large regions, it is probably
802 better off using normal sbrk-based allocation routines that
803 can reclaim and reallocate normal heap memory. Using a
804 small value allows transition into this mode after the
805 first few allocations.
807 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
808 the default value is 0, and attempts to set it to non-zero values
809 in mallopt will fail.
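      (An illustrative sketch, not part of the original text: a program that
      wants only multi-megabyte buffers to be mmapped, but many of them,
      might use

          mallopt(M_MMAP_THRESHOLD, 1024 * 1024);
          mallopt(M_MMAP_MAX, 4096);

      while mallopt(M_MMAP_MAX, 0) disables the use of mmap for servicing
      requests entirely, as described above.)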
814 #ifndef DEFAULT_CHECK_ACTION
815 #define DEFAULT_CHECK_ACTION 1
816 #endif
818 /* What to do if the standard debugging hooks are in place and a
819 corrupt pointer is detected: do nothing (0), print an error message
820 (1), or call abort() (2). */
824 #define HEAP_MIN_SIZE (32*1024)
825 #define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
827 /* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
828 that are dynamically created for multi-threaded programs. The
829 maximum size must be a power of two, for fast determination of
830 which heap belongs to a chunk. It should be much larger than
831 the mmap threshold, so that requests with a size just below that
832 threshold can be fulfilled without creating too many heaps.
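   (Illustrative note, not part of the original text: because every heap
   starts at an address that is a multiple of HEAP_MAX_SIZE, the heap_info
   descriptor for a chunk can be recovered by simple masking, roughly

       heap_info *h = (heap_info *)((unsigned long)ptr & ~(HEAP_MAX_SIZE - 1));

   which is why HEAP_MAX_SIZE must be a power of two.)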
837 #ifndef THREAD_STATS
838 #define THREAD_STATS 0
839 #endif
841 /* If THREAD_STATS is non-zero, some statistics on mutex locking are
842 computed. */
845 /* Macro to set errno. */
846 #ifndef __set_errno
847 # define __set_errno(val) errno = (val)
848 #endif
850 /* On some platforms we can compile internal, not exported functions better.
851 Let the environment provide a macro and define it to be empty if it
852 is not available. */
853 #ifndef internal_function
854 # define internal_function
855 #endif
860 Special defines for the Linux/GNU C library.
865 #ifdef _LIBC
867 #if __STD_C
869 Void_t * __default_morecore (ptrdiff_t);
870 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
872 #else
874 Void_t * __default_morecore ();
875 Void_t *(*__morecore)() = __default_morecore;
877 #endif
879 #define MORECORE (*__morecore)
880 #define MORECORE_FAILURE 0
882 #ifndef MORECORE_CLEARS
883 #define MORECORE_CLEARS 1
884 #endif
886 static size_t __libc_pagesize;
888 #define access __access
889 #define mmap __mmap
890 #define munmap __munmap
891 #define mremap __mremap
892 #define mprotect __mprotect
893 #undef malloc_getpagesize
894 #define malloc_getpagesize __libc_pagesize
896 #else /* _LIBC */
898 #if __STD_C
899 extern Void_t* sbrk(ptrdiff_t);
900 #else
901 extern Void_t* sbrk();
902 #endif
904 #ifndef MORECORE
905 #define MORECORE sbrk
906 #endif
908 #ifndef MORECORE_FAILURE
909 #define MORECORE_FAILURE -1
910 #endif
912 #ifndef MORECORE_CLEARS
913 #define MORECORE_CLEARS 1
914 #endif
916 #endif /* _LIBC */
918 #ifdef _LIBC
920 #define cALLOc __libc_calloc
921 #define fREe __libc_free
922 #define mALLOc __libc_malloc
923 #define mEMALIGn __libc_memalign
924 #define rEALLOc __libc_realloc
925 #define vALLOc __libc_valloc
926 #define pvALLOc __libc_pvalloc
927 #define mALLINFo __libc_mallinfo
928 #define mALLOPt __libc_mallopt
929 #define mALLOC_STATs __malloc_stats
930 #define mALLOC_USABLE_SIZe __malloc_usable_size
931 #define mALLOC_TRIm __malloc_trim
932 #define mALLOC_GET_STATe __malloc_get_state
933 #define mALLOC_SET_STATe __malloc_set_state
935 #else
937 #define cALLOc calloc
938 #define fREe free
939 #define mALLOc malloc
940 #define mEMALIGn memalign
941 #define rEALLOc realloc
942 #define vALLOc valloc
943 #define pvALLOc pvalloc
944 #define mALLINFo mallinfo
945 #define mALLOPt mallopt
946 #define mALLOC_STATs malloc_stats
947 #define mALLOC_USABLE_SIZe malloc_usable_size
948 #define mALLOC_TRIm malloc_trim
949 #define mALLOC_GET_STATe malloc_get_state
950 #define mALLOC_SET_STATe malloc_set_state
952 #endif
954 /* Public routines */
956 #if __STD_C
958 #ifndef _LIBC
959 void ptmalloc_init(void);
960 #endif
961 Void_t* mALLOc(size_t);
962 void fREe(Void_t*);
963 Void_t* rEALLOc(Void_t*, size_t);
964 Void_t* mEMALIGn(size_t, size_t);
965 Void_t* vALLOc(size_t);
966 Void_t* pvALLOc(size_t);
967 Void_t* cALLOc(size_t, size_t);
968 void cfree(Void_t*);
969 int mALLOC_TRIm(size_t);
970 size_t mALLOC_USABLE_SIZe(Void_t*);
971 void mALLOC_STATs(void);
972 int mALLOPt(int, int);
973 struct mallinfo mALLINFo(void);
974 Void_t* mALLOC_GET_STATe(void);
975 int mALLOC_SET_STATe(Void_t*);
977 #else /* !__STD_C */
979 #ifndef _LIBC
980 void ptmalloc_init();
981 #endif
982 Void_t* mALLOc();
983 void fREe();
984 Void_t* rEALLOc();
985 Void_t* mEMALIGn();
986 Void_t* vALLOc();
987 Void_t* pvALLOc();
988 Void_t* cALLOc();
989 void cfree();
990 int mALLOC_TRIm();
991 size_t mALLOC_USABLE_SIZe();
992 void mALLOC_STATs();
993 int mALLOPt();
994 struct mallinfo mALLINFo();
995 Void_t* mALLOC_GET_STATe();
996 int mALLOC_SET_STATe();
998 #endif /* __STD_C */
1001 #ifdef __cplusplus
1002 } /* end of extern "C" */
1003 #endif
1005 #if !defined(NO_THREADS) && !HAVE_MMAP
1006 "Can't have threads support without mmap"
1007 #endif
1008 #if USE_ARENAS && !HAVE_MMAP
1009 "Can't have multiple arenas without mmap"
1010 #endif
1014 Type declarations
 1018 struct malloc_chunk
 1019 {
 1020   INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1021 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1022 struct malloc_chunk* fd; /* double links -- used only if free. */
 1023   struct malloc_chunk* bk;
 1024 };
 1026 typedef struct malloc_chunk* mchunkptr;
1030 malloc_chunk details:
1032 (The following includes lightly edited explanations by Colin Plumb.)
1034 Chunks of memory are maintained using a `boundary tag' method as
1035 described in e.g., Knuth or Standish. (See the paper by Paul
1036 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1037 survey of such techniques.) Sizes of free chunks are stored both
1038 in the front of each chunk and at the end. This makes
1039 consolidating fragmented chunks into bigger chunks very fast. The
1040 size fields also hold bits representing whether chunks are free or
1041 in use.
1043 An allocated chunk looks like this:
1046 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1047 | Size of previous chunk, if allocated | |
1048 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1049 | Size of chunk, in bytes |P|
1050 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1051 | User data starts here... .
1053 . (malloc_usable_space() bytes) .
1055 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1056 | Size of chunk |
1057 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1060 Where "chunk" is the front of the chunk for the purpose of most of
1061 the malloc code, but "mem" is the pointer that is returned to the
1062 user. "Nextchunk" is the beginning of the next contiguous chunk.
1064 Chunks always begin on even word boundaries, so the mem portion
1065 (which is returned to the user) is also on an even word boundary, and
1066 thus double-word aligned.
1068 Free chunks are stored in circular doubly-linked lists, and look like this:
1070 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1071 | Size of previous chunk |
1072 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1073 `head:' | Size of chunk, in bytes |P|
1074 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1075 | Forward pointer to next chunk in list |
1076 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1077 | Back pointer to previous chunk in list |
1078 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1079 | Unused space (may be 0 bytes long) .
1082 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1083 `foot:' | Size of chunk, in bytes |
1084 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1086 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1087 chunk size (which is always a multiple of two words), is an in-use
1088 bit for the *previous* chunk. If that bit is *clear*, then the
1089 word before the current chunk size contains the previous chunk
1090 size, and can be used to find the front of the previous chunk.
1091 (The very first chunk allocated always has this bit set,
1092 preventing access to non-existent (or non-owned) memory.)
1094 Note that the `foot' of the current chunk is actually represented
1095 as the prev_size of the NEXT chunk. (This makes it easier to
1096 deal with alignments etc).
1098 The two exceptions to all this are
1100 1. The special chunk `top', which doesn't bother using the
1101 trailing size field since there is no
1102 next contiguous chunk that would have to index off it. (After
1103 initialization, `top' is forced to always exist. If it would
1104 become less than MINSIZE bytes long, it is replenished via
1105 malloc_extend_top.)
1107 2. Chunks allocated via mmap, which have the second-lowest-order
1108 bit (IS_MMAPPED) set in their size fields. Because they are
1109 never merged or traversed from any other chunk, they have no
1110 foot size or inuse information.
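    As a worked illustration (not part of the original text), with 4-byte
    INTERNAL_SIZE_T the `mem' pointer handed to the user lies 8 bytes past
    the start of the chunk header, and navigation uses only the size
    fields, e.g.

        mchunkptr p = mem2chunk(mem);
        INTERNAL_SIZE_T sz = chunksize(p);
        mchunkptr nxt = chunk_at_offset(p, sz);
        int p_is_in_use = nxt->size & PREV_INUSE;

    using the mem2chunk, chunksize, chunk_at_offset and PREV_INUSE macros
    defined below.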
1112 Available chunks are kept in any of several places (all declared below):
1114 * `av': An array of chunks serving as bin headers for consolidated
1115 chunks. Each bin is doubly linked. The bins are approximately
1116 proportionally (log) spaced. There are a lot of these bins
1117 (128). This may look excessive, but works very well in
1118 practice. All procedures maintain the invariant that no
1119 consolidated chunk physically borders another one. Chunks in
1120 bins are kept in size order, with ties going to the
1121 approximately least recently used chunk.
1123 The chunks in each bin are maintained in decreasing sorted order by
1124 size. This is irrelevant for the small bins, which all contain
1125 the same-sized chunks, but facilitates best-fit allocation for
1126 larger chunks. (These lists are just sequential. Keeping them in
1127 order almost never requires enough traversal to warrant using
1128 fancier ordered data structures.) Chunks of the same size are
1129 linked with the most recently freed at the front, and allocations
1130 are taken from the back. This results in LRU or FIFO allocation
1131 order, which tends to give each chunk an equal opportunity to be
1132 consolidated with adjacent freed chunks, resulting in larger free
1133 chunks and less fragmentation.
1135 * `top': The top-most available chunk (i.e., the one bordering the
1136 end of available memory) is treated specially. It is never
1137 included in any bin, is used only if no other chunk is
1138 available, and is released back to the system if it is very
1139 large (see M_TRIM_THRESHOLD).
1141 * `last_remainder': A bin holding only the remainder of the
1142 most recently split (non-top) chunk. This bin is checked
1143 before other non-fitting chunks, so as to provide better
1144 locality for runs of sequentially allocated chunks.
1146 * Implicitly, through the host system's memory mapping tables.
1147 If supported, requests greater than a threshold are usually
1148 serviced via calls to mmap, and then later released via munmap.
1153 Bins
1155 The bins are an array of pairs of pointers serving as the
1156 heads of (initially empty) doubly-linked lists of chunks, laid out
1157 in a way so that each pair can be treated as if it were in a
1158 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1159 and chunks are the same).
1161 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1162 8 bytes apart. Larger bins are approximately logarithmically
1163 spaced. (See the table below.)
1165 Bin layout:
1167 64 bins of size 8
1168 32 bins of size 64
1169 16 bins of size 512
1170 8 bins of size 4096
1171 4 bins of size 32768
1172 2 bins of size 262144
1173 1 bin of size what's left
1175 There is actually a little bit of slop in the numbers in bin_index
1176 for the sake of speed. This makes no difference elsewhere.
1178 The special chunks `top' and `last_remainder' get their own bins,
1179 (this is implemented via yet more trickery with the av array),
1180 although `top' is never properly linked to its bin since it is
1181 always handled specially.
1185 #define NAV 128 /* number of bins */
1187 typedef struct malloc_chunk* mbinptr;
1189 /* An arena is a configuration of malloc_chunks together with an array
1190 of bins. With multiple threads, it must be locked via a mutex
1191 before changing its data structures. One or more `heaps' are
1192 associated with each arena, except for the main_arena, which is
1193 associated only with the `main heap', i.e. the conventional free
1194 store obtained with calls to MORECORE() (usually sbrk). The `av'
1195 array is never mentioned directly in the code, but instead used via
1196 bin access macros. */
1198 typedef struct _arena {
1199 mbinptr av[2*NAV + 2];
1200 struct _arena *next;
1201 size_t size;
1202 #if THREAD_STATS
1203 long stat_lock_direct, stat_lock_loop, stat_lock_wait;
1204 #endif
1205 mutex_t mutex;
1206 } arena;
1209 /* A heap is a single contiguous memory region holding (coalesceable)
1210 malloc_chunks. It is allocated with mmap() and always starts at an
1211 address aligned to HEAP_MAX_SIZE. Not used unless compiling with
1212 USE_ARENAS. */
1214 typedef struct _heap_info {
1215 arena *ar_ptr; /* Arena for this heap. */
1216 struct _heap_info *prev; /* Previous heap. */
1217 size_t size; /* Current size in bytes. */
1218 size_t pad; /* Make sure the following data is properly aligned. */
1219 } heap_info;
1223 Static functions (forward declarations)
1226 #if __STD_C
1228 static void chunk_free(arena *ar_ptr, mchunkptr p) internal_function;
1229 static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size)
1230 internal_function;
1231 static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
1232 INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb)
1233 internal_function;
1234 static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
1235 size_t alignment) internal_function;
1236 static int main_trim(size_t pad) internal_function;
1237 #if USE_ARENAS
1238 static int heap_trim(heap_info *heap, size_t pad) internal_function;
1239 #endif
1240 #if defined _LIBC || defined MALLOC_HOOKS
1241 static Void_t* malloc_check(size_t sz, const Void_t *caller);
1242 static void free_check(Void_t* mem, const Void_t *caller);
1243 static Void_t* realloc_check(Void_t* oldmem, size_t bytes,
1244 const Void_t *caller);
1245 static Void_t* memalign_check(size_t alignment, size_t bytes,
1246 const Void_t *caller);
1247 #ifndef NO_THREADS
1248 static Void_t* malloc_starter(size_t sz, const Void_t *caller);
1249 static void free_starter(Void_t* mem, const Void_t *caller);
1250 static Void_t* malloc_atfork(size_t sz, const Void_t *caller);
1251 static void free_atfork(Void_t* mem, const Void_t *caller);
1252 #endif
1253 #endif
1255 #else
1257 static void chunk_free();
1258 static mchunkptr chunk_alloc();
1259 static mchunkptr chunk_realloc();
1260 static mchunkptr chunk_align();
1261 static int main_trim();
1262 #if USE_ARENAS
1263 static int heap_trim();
1264 #endif
1265 #if defined _LIBC || defined MALLOC_HOOKS
1266 static Void_t* malloc_check();
1267 static void free_check();
1268 static Void_t* realloc_check();
1269 static Void_t* memalign_check();
1270 #ifndef NO_THREADS
1271 static Void_t* malloc_starter();
1272 static void free_starter();
1273 static Void_t* malloc_atfork();
1274 static void free_atfork();
1275 #endif
1276 #endif
1278 #endif
1282 /* sizes, alignments */
1284 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1285 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1286 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1287 #define MINSIZE (sizeof(struct malloc_chunk))
1289 /* conversion from malloc headers to user pointers, and back */
1291 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1292 #define mem2chunk(mem) chunk_at_offset((mem), -2*SIZE_SZ)
1294 /* pad request bytes into a usable size, return non-zero on overflow */
1296 #define request2size(req, nb) \
1297 ((nb = (req) + (SIZE_SZ + MALLOC_ALIGN_MASK)),\
1298 ((long)nb <= 0 || nb < (INTERNAL_SIZE_T) (req) \
1299 ? (__set_errno (ENOMEM), 1) \
1300 : ((nb < (MINSIZE + MALLOC_ALIGN_MASK) \
1301 ? (nb = MINSIZE) : (nb &= ~MALLOC_ALIGN_MASK)), 0)))
1303 /* Check if m has acceptable alignment */
1305 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1311 Physical chunk operations
1315 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1317 #define PREV_INUSE 0x1UL
1319 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1321 #define IS_MMAPPED 0x2UL
1323 /* Bits to mask off when extracting size */
1325 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1328 /* Ptr to next physical malloc_chunk. */
1330 #define next_chunk(p) chunk_at_offset((p), (p)->size & ~PREV_INUSE)
1332 /* Ptr to previous physical malloc_chunk */
1334 #define prev_chunk(p) chunk_at_offset((p), -(p)->prev_size)
1337 /* Treat space at ptr + offset as a chunk */
1339 #define chunk_at_offset(p, s) BOUNDED_1((mchunkptr)(((char*)(p)) + (s)))
1345 Dealing with use bits
1348 /* extract p's inuse bit */
1350 #define inuse(p) (next_chunk(p)->size & PREV_INUSE)
1352 /* extract inuse bit of previous chunk */
1354 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1356 /* check for mmap()'ed chunk */
1358 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1360 /* set/clear chunk as in use without otherwise disturbing */
1362 #define set_inuse(p) (next_chunk(p)->size |= PREV_INUSE)
1364 #define clear_inuse(p) (next_chunk(p)->size &= ~PREV_INUSE)
1366 /* check/set/clear inuse bits in known places */
1368 #define inuse_bit_at_offset(p, s) \
1369 (chunk_at_offset((p), (s))->size & PREV_INUSE)
1371 #define set_inuse_bit_at_offset(p, s) \
1372 (chunk_at_offset((p), (s))->size |= PREV_INUSE)
1374 #define clear_inuse_bit_at_offset(p, s) \
1375 (chunk_at_offset((p), (s))->size &= ~(PREV_INUSE))
1381 Dealing with size fields
1384 /* Get size, ignoring use bits */
1386 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1388 /* Set size at head, without disturbing its use bit */
1390 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1392 /* Set size/use ignoring previous bits in header */
1394 #define set_head(p, s) ((p)->size = (s))
1396 /* Set size at footer (only when chunk is not in use) */
1398 #define set_foot(p, s) (chunk_at_offset(p, s)->prev_size = (s))
1404 /* access macros */
1406 #define bin_at(a, i) BOUNDED_1(_bin_at(a, i))
1407 #define _bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i)+2]) - 2*SIZE_SZ))
1408 #define init_bin(a, i) ((a)->av[2*(i)+2] = (a)->av[2*(i)+3] = bin_at((a), (i)))
1409 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(((arena*)0)->av[0])))
1410 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(((arena*)0)->av[0])))
1413 The first 2 bins are never indexed. The corresponding av cells are instead
1414 used for bookkeeping. This is not to save space, but to simplify
1415 indexing, maintain locality, and avoid some initialization tests.
1418 #define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1419 #define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1420 #define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1423 Because top initially points to its own bin with initial
1424 zero size, thus forcing extension on the first malloc request,
1425 we avoid having any special code in malloc to check whether
 1426    it even exists yet.  But we still need to check in malloc_extend_top.
1429 #define initial_top(a) ((mchunkptr)bin_at(a, 0))
1433 /* field-extraction macros */
1435 #define first(b) ((b)->fd)
1436 #define last(b) ((b)->bk)
1439 Indexing into bins
1442 #define bin_index(sz) \
1443 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3):\
1444 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6):\
1445 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9):\
1446 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12):\
1447 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15):\
1448 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
1449 126)
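/* Worked examples (illustrative, not from the original source):
   bin_index(40)   = 40 >> 3          =   5   (small bin, 8-byte spacing)
   bin_index(600)  = 56 + (600 >> 6)  =  65   (64-byte spacing)
   bin_index(5000) = 91 + (5000 >> 9) = 100   (512-byte spacing) */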
1451 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1452 identically sized chunks. This is exploited in malloc.
1455 #define MAX_SMALLBIN 63
1456 #define MAX_SMALLBIN_SIZE 512
1457 #define SMALLBIN_WIDTH 8
1459 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1462 Requests are `small' if both the corresponding and the next bin are small
1465 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1470 To help compensate for the large number of bins, a one-level index
1471 structure is used for bin-by-bin searching. `binblocks' is a
1472 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1473 have any (possibly) non-empty bins, so they can be skipped over
 1474    all at once during traversals.  The bits are NOT always
1475 cleared as soon as all bins in a block are empty, but instead only
1476 when all are noticed to be empty during traversal in malloc.
1479 #define BINBLOCKWIDTH 4 /* bins per block */
1481 /* bin<->block macros */
1483 #define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1484 #define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1485 #define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
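/* Illustrative example (not from the original source): bin index 37 lies
   in block 37/BINBLOCKWIDTH = 9, so mark_binblock(a, 37) sets bit (1 << 9)
   in binblocks(a); an unmarked block lets the malloc scan skip all
   BINBLOCKWIDTH of its bins at once. */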
1490 /* Static bookkeeping data */
1492 /* Helper macro to initialize bins */
1493 #define IAV(i) _bin_at(&main_arena, i), _bin_at(&main_arena, i)
 1495 static arena main_arena = {
 1496     {
 1497  0, 0,
1498 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1499 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1500 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1501 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1502 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1503 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1504 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1505 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1506 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1507 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1508 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1509 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1510 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1511 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1512 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 1513  IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
 1514     },
 1515     &main_arena, /* next */
1516 0, /* size */
1517 #if THREAD_STATS
1518 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
1519 #endif
 1520     MUTEX_INITIALIZER /* mutex */
 1521 };
 1523 #undef IAV
1525 /* Thread specific data */
1527 static tsd_key_t arena_key;
1528 static mutex_t list_lock = MUTEX_INITIALIZER;
1530 #if THREAD_STATS
1531 static int stat_n_heaps = 0;
1532 #define THREAD_STAT(x) x
1533 #else
1534 #define THREAD_STAT(x) do ; while(0)
1535 #endif
1537 /* variables holding tunable values */
1539 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1540 static unsigned long top_pad = DEFAULT_TOP_PAD;
1541 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1542 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1543 static int check_action = DEFAULT_CHECK_ACTION;
1545 /* The first value returned from sbrk */
1546 static char* sbrk_base = (char*)(-1);
1548 /* The maximum memory obtained from system via sbrk */
1549 static unsigned long max_sbrked_mem = 0;
1551 /* The maximum via either sbrk or mmap (too difficult to track with threads) */
1552 #ifdef NO_THREADS
1553 static unsigned long max_total_mem = 0;
1554 #endif
1556 /* The total memory obtained from system via sbrk */
1557 #define sbrked_mem (main_arena.size)
1559 /* Tracking mmaps */
1561 static unsigned int n_mmaps = 0;
1562 static unsigned int max_n_mmaps = 0;
1563 static unsigned long mmapped_mem = 0;
1564 static unsigned long max_mmapped_mem = 0;
1566 /* Mapped memory in non-main arenas (reliable only for NO_THREADS). */
1567 static unsigned long arena_mem = 0;
1571 #ifndef _LIBC
1572 #define weak_variable
1573 #else
1574 /* In GNU libc we want the hook variables to be weak definitions to
1575 avoid a problem with Emacs. */
1576 #define weak_variable weak_function
1577 #endif
1579 /* Already initialized? */
1580 int __malloc_initialized = -1;
1583 #ifndef NO_THREADS
1585 /* The following two functions are registered via thread_atfork() to
1586 make sure that the mutexes remain in a consistent state in the
1587 fork()ed version of a thread. Also adapt the malloc and free hooks
1588 temporarily, because the `atfork' handler mechanism may use
1589 malloc/free internally (e.g. in LinuxThreads). */
1591 #if defined _LIBC || defined MALLOC_HOOKS
1592 static __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size,
1593 const __malloc_ptr_t));
1594 static void (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1595 const __malloc_ptr_t));
1596 static Void_t* save_arena;
1597 #endif
1599 static void
1600 ptmalloc_lock_all __MALLOC_P((void))
 1601 {
 1602   arena *ar_ptr;
1604 (void)mutex_lock(&list_lock);
1605 for(ar_ptr = &main_arena;;) {
1606 (void)mutex_lock(&ar_ptr->mutex);
1607 ar_ptr = ar_ptr->next;
 1608     if(ar_ptr == &main_arena) break;
 1609   }
1610 #if defined _LIBC || defined MALLOC_HOOKS
1611 save_malloc_hook = __malloc_hook;
1612 save_free_hook = __free_hook;
1613 __malloc_hook = malloc_atfork;
1614 __free_hook = free_atfork;
1615 /* Only the current thread may perform malloc/free calls now. */
1616 tsd_getspecific(arena_key, save_arena);
1617 tsd_setspecific(arena_key, (Void_t*)0);
 1618 #endif
 1619 }
1621 static void
1622 ptmalloc_unlock_all __MALLOC_P((void))
 1623 {
 1624   arena *ar_ptr;
1626 #if defined _LIBC || defined MALLOC_HOOKS
1627 tsd_setspecific(arena_key, save_arena);
1628 __malloc_hook = save_malloc_hook;
1629 __free_hook = save_free_hook;
1630 #endif
1631 for(ar_ptr = &main_arena;;) {
1632 (void)mutex_unlock(&ar_ptr->mutex);
1633 ar_ptr = ar_ptr->next;
 1634     if(ar_ptr == &main_arena) break;
 1635   }
 1636   (void)mutex_unlock(&list_lock);
 1637 }
1639 static void
1640 ptmalloc_init_all __MALLOC_P((void))
 1641 {
 1642   arena *ar_ptr;
1644 #if defined _LIBC || defined MALLOC_HOOKS
1645 tsd_setspecific(arena_key, save_arena);
1646 __malloc_hook = save_malloc_hook;
1647 __free_hook = save_free_hook;
1648 #endif
1649 for(ar_ptr = &main_arena;;) {
1650 (void)mutex_init(&ar_ptr->mutex);
1651 ar_ptr = ar_ptr->next;
 1652     if(ar_ptr == &main_arena) break;
 1653   }
 1654   (void)mutex_init(&list_lock);
 1655 }
1657 #endif /* !defined NO_THREADS */
1659 /* Initialization routine. */
1660 #if defined(_LIBC)
1661 #if 0
1662 static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
1663 #endif
1665 static void
1666 ptmalloc_init __MALLOC_P((void))
1667 #else
1668 void
1669 ptmalloc_init __MALLOC_P((void))
1670 #endif
 1671 {
 1672 #if defined _LIBC || defined MALLOC_HOOKS
1673 # if __STD_C
1674 const char* s;
1675 # else
1676 char* s;
1677 # endif
1678 #endif
1679 int secure;
1681 if(__malloc_initialized >= 0) return;
1682 __malloc_initialized = 0;
1683 #ifdef _LIBC
1684 __libc_pagesize = __getpagesize();
1685 #endif
1686 #ifndef NO_THREADS
1687 #if defined _LIBC || defined MALLOC_HOOKS
1688 /* With some threads implementations, creating thread-specific data
1689 or initializing a mutex may call malloc() itself. Provide a
1690 simple starter version (realloc() won't work). */
1691 save_malloc_hook = __malloc_hook;
1692 save_free_hook = __free_hook;
1693 __malloc_hook = malloc_starter;
1694 __free_hook = free_starter;
1695 #endif
1696 #ifdef _LIBC
1697 /* Initialize the pthreads interface. */
1698 if (__pthread_initialize != NULL)
1699 __pthread_initialize();
1700 #endif
1701 #endif /* !defined NO_THREADS */
1702 mutex_init(&main_arena.mutex);
1703 mutex_init(&list_lock);
1704 tsd_key_create(&arena_key, NULL);
1705 tsd_setspecific(arena_key, (Void_t *)&main_arena);
1706 thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_init_all);
1707 #if defined _LIBC || defined MALLOC_HOOKS
1708 #ifndef NO_THREADS
1709 __malloc_hook = save_malloc_hook;
1710 __free_hook = save_free_hook;
1711 #endif
1712 secure = __libc_enable_secure;
1713 if (! secure)
1715 if((s = getenv("MALLOC_TRIM_THRESHOLD_")))
1716 mALLOPt(M_TRIM_THRESHOLD, atoi(s));
1717 if((s = getenv("MALLOC_TOP_PAD_")))
1718 mALLOPt(M_TOP_PAD, atoi(s));
1719 if((s = getenv("MALLOC_MMAP_THRESHOLD_")))
1720 mALLOPt(M_MMAP_THRESHOLD, atoi(s));
1721 if((s = getenv("MALLOC_MMAP_MAX_")))
1722 mALLOPt(M_MMAP_MAX, atoi(s));
1724 s = getenv("MALLOC_CHECK_");
1725 if(s) {
1726 if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
1727 __malloc_check_init();
1729 if(__malloc_initialize_hook != NULL)
1730 (*__malloc_initialize_hook)();
1731 #endif
1732 __malloc_initialized = 1;
1735 /* There are platforms (e.g. Hurd) with a link-time hook mechanism. */
1736 #ifdef thread_atfork_static
1737 thread_atfork_static(ptmalloc_lock_all, ptmalloc_unlock_all, \
1738 ptmalloc_init_all)
1739 #endif
1741 #if defined _LIBC || defined MALLOC_HOOKS
1743 /* Hooks for debugging versions. The initial hooks just call the
1744 initialization routine, then do the normal work. */
1746 static Void_t*
1747 #if __STD_C
1748 malloc_hook_ini(size_t sz, const __malloc_ptr_t caller)
1749 #else
1750 malloc_hook_ini(sz, caller)
1751 size_t sz; const __malloc_ptr_t caller;
1752 #endif
1754 __malloc_hook = NULL;
1755 ptmalloc_init();
1756 return mALLOc(sz);
1759 static Void_t*
1760 #if __STD_C
1761 realloc_hook_ini(Void_t* ptr, size_t sz, const __malloc_ptr_t caller)
1762 #else
1763 realloc_hook_ini(ptr, sz, caller)
1764 Void_t* ptr; size_t sz; const __malloc_ptr_t caller;
1765 #endif
1767 __malloc_hook = NULL;
1768 __realloc_hook = NULL;
1769 ptmalloc_init();
1770 return rEALLOc(ptr, sz);
1773 static Void_t*
1774 #if __STD_C
1775 memalign_hook_ini(size_t alignment, size_t sz, const __malloc_ptr_t caller)
1776 #else
1777 memalign_hook_ini(alignment, sz, caller)
1778 size_t alignment; size_t sz; const __malloc_ptr_t caller;
1779 #endif
1781 __memalign_hook = NULL;
1782 ptmalloc_init();
1783 return mEMALIGn(alignment, sz);
1786 void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
1787 void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1788 const __malloc_ptr_t)) = NULL;
1789 __malloc_ptr_t weak_variable (*__malloc_hook)
1790 __MALLOC_P ((size_t __size, const __malloc_ptr_t)) = malloc_hook_ini;
1791 __malloc_ptr_t weak_variable (*__realloc_hook)
1792 __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size, const __malloc_ptr_t))
1793 = realloc_hook_ini;
1794 __malloc_ptr_t weak_variable (*__memalign_hook)
1795 __MALLOC_P ((size_t __alignment, size_t __size, const __malloc_ptr_t))
1796 = memalign_hook_ini;
1797 void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
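/* A typical debugging use of these hooks, in the spirit of the GNU libc
   manual: install a tracing malloc hook from __malloc_initialize_hook.
   This is only a sketch; the save/restore dance shown here is not
   thread-safe, and the names are made up for the example. */
#if 0
#include <stdio.h>
#include <malloc.h>

static __malloc_ptr_t (*old_malloc_hook) (size_t, const __malloc_ptr_t);

static __malloc_ptr_t
trace_malloc_hook(size_t size, const __malloc_ptr_t caller)
{
  __malloc_ptr_t result;
  /* Temporarily restore the old hook so the malloc() call below is not
     traced recursively. */
  __malloc_hook = old_malloc_hook;
  result = malloc(size);
  old_malloc_hook = __malloc_hook;
  fprintf(stderr, "malloc(%lu) called from %p returns %p\n",
          (unsigned long)size, caller, result);
  /* Put the tracing hook back in place. */
  __malloc_hook = trace_malloc_hook;
  return result;
}

static void
trace_init_hook(void)
{
  old_malloc_hook = __malloc_hook;
  __malloc_hook = trace_malloc_hook;
}

/* Let the library call us back once it has initialized itself. */
void (*__malloc_initialize_hook) (void) = trace_init_hook;
#endif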
1799 /* Whether we are using malloc checking. */
1800 static int using_malloc_checking;
1802 /* A flag that is set by malloc_set_state, to signal that malloc checking
1803 must not be enabled on the request from the user (via the MALLOC_CHECK_
1804 environment variable). It is reset by __malloc_check_init to tell
1805 malloc_set_state that the user has requested malloc checking.
1807 The purpose of this flag is to make sure that malloc checking is not
1808 enabled when the heap to be restored was constructed without malloc
1809 checking, and thus does not contain the required magic bytes.
1810 Otherwise the heap would be corrupted by calls to free and realloc. If
1811 it turns out that the heap was created with malloc checking and the
1812 user has requested it, malloc_set_state just calls __malloc_check_init
1813 again to enable it. On the other hand, reusing such a heap without
1814 further malloc checking is safe. */
1815 static int disallow_malloc_check;
1817 /* Activate a standard set of debugging hooks. */
1818 void
1819 __malloc_check_init()
1821 if (disallow_malloc_check) {
1822 disallow_malloc_check = 0;
1823 return;
1825 using_malloc_checking = 1;
1826 __malloc_hook = malloc_check;
1827 __free_hook = free_check;
1828 __realloc_hook = realloc_check;
1829 __memalign_hook = memalign_check;
1830 if(check_action & 1)
1831 fprintf(stderr, "malloc: using debugging hooks\n");
1834 #endif
1840 /* Routines dealing with mmap(). */
1842 #if HAVE_MMAP
1844 #ifndef MAP_ANONYMOUS
1846 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1848 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1849 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1850 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1851 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1853 #else
1855 #define MMAP(addr, size, prot, flags) \
1856 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1858 #endif
1860 #if defined __GNUC__ && __GNUC__ >= 2
1861 /* This function is only called from one place, inline it. */
1862 __inline__
1863 #endif
1864 static mchunkptr
1865 internal_function
1866 #if __STD_C
1867 mmap_chunk(size_t size)
1868 #else
1869 mmap_chunk(size) size_t size;
1870 #endif
1872 size_t page_mask = malloc_getpagesize - 1;
1873 mchunkptr p;
1875 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1876 * there is no following chunk whose prev_size field could be used.
1878 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1880 p = (mchunkptr)MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE);
1881 if(p == (mchunkptr) MAP_FAILED) return 0;
1883 n_mmaps++;
1884 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1886 /* We demand that the address eight bytes into a page be 8-byte aligned. */
1887 assert(aligned_OK(chunk2mem(p)));
1889 /* The offset to the start of the mmapped region is stored
1890 * in the prev_size field of the chunk; normally it is zero,
1891 * but that can be changed in memalign().
1893 p->prev_size = 0;
1894 set_head(p, size|IS_MMAPPED);
1896 mmapped_mem += size;
1897 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1898 max_mmapped_mem = mmapped_mem;
1899 #ifdef NO_THREADS
1900 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
1901 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
1902 #endif
1903 return p;
1906 static void
1907 internal_function
1908 #if __STD_C
1909 munmap_chunk(mchunkptr p)
1910 #else
1911 munmap_chunk(p) mchunkptr p;
1912 #endif
1914 INTERNAL_SIZE_T size = chunksize(p);
1915 int ret;
1917 assert (chunk_is_mmapped(p));
1918 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1919 assert((n_mmaps > 0));
1920 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1922 n_mmaps--;
1923 mmapped_mem -= (size + p->prev_size);
1925 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1927 /* munmap returns non-zero on failure */
1928 assert(ret == 0);
1931 #if HAVE_MREMAP
1933 static mchunkptr
1934 internal_function
1935 #if __STD_C
1936 mremap_chunk(mchunkptr p, size_t new_size)
1937 #else
1938 mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1939 #endif
1941 size_t page_mask = malloc_getpagesize - 1;
1942 INTERNAL_SIZE_T offset = p->prev_size;
1943 INTERNAL_SIZE_T size = chunksize(p);
1944 char *cp;
1946 assert (chunk_is_mmapped(p));
1947 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1948 assert((n_mmaps > 0));
1949 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1951 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1952 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1954 cp = (char *)mremap((char *)p - offset, size + offset, new_size,
1955 MREMAP_MAYMOVE);
1957 if (cp == MAP_FAILED) return 0;
1959 p = (mchunkptr)(cp + offset);
1961 assert(aligned_OK(chunk2mem(p)));
1963 assert((p->prev_size == offset));
1964 set_head(p, (new_size - offset)|IS_MMAPPED);
1966 mmapped_mem -= size + offset;
1967 mmapped_mem += new_size;
1968 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1969 max_mmapped_mem = mmapped_mem;
1970 #ifdef NO_THREADS
1971 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
1972 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
1973 #endif
1974 return p;
1977 #endif /* HAVE_MREMAP */
1979 #endif /* HAVE_MMAP */
1983 /* Managing heaps and arenas (for concurrent threads) */
1985 #if USE_ARENAS
1987 /* Create a new heap. size is automatically rounded up to a multiple
1988 of the page size. */
1990 static heap_info *
1991 internal_function
1992 #if __STD_C
1993 new_heap(size_t size)
1994 #else
1995 new_heap(size) size_t size;
1996 #endif
1998 size_t page_mask = malloc_getpagesize - 1;
1999 char *p1, *p2;
2000 unsigned long ul;
2001 heap_info *h;
2003 if(size+top_pad < HEAP_MIN_SIZE)
2004 size = HEAP_MIN_SIZE;
2005 else if(size+top_pad <= HEAP_MAX_SIZE)
2006 size += top_pad;
2007 else if(size > HEAP_MAX_SIZE)
2008 return 0;
2009 else
2010 size = HEAP_MAX_SIZE;
2011 size = (size + page_mask) & ~page_mask;
2013 /* A memory region aligned to a multiple of HEAP_MAX_SIZE is needed.
2014 No swap space needs to be reserved for the following large
2015 mapping (on Linux, this is the case for all non-writable mappings
2016 anyway). */
2017 p1 = (char *)MMAP(0, HEAP_MAX_SIZE<<1, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE);
2018 if(p1 != MAP_FAILED) {
2019 p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
2020 ul = p2 - p1;
2021 munmap(p1, ul);
2022 munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
2023 } else {
2024 /* Try to take the chance that an allocation of only HEAP_MAX_SIZE
2025 is already aligned. */
2026 p2 = (char *)MMAP(0, HEAP_MAX_SIZE, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE);
2027 if(p2 == MAP_FAILED)
2028 return 0;
2029 if((unsigned long)p2 & (HEAP_MAX_SIZE-1)) {
2030 munmap(p2, HEAP_MAX_SIZE);
2031 return 0;
2034 if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
2035 munmap(p2, HEAP_MAX_SIZE);
2036 return 0;
2038 h = (heap_info *)p2;
2039 h->size = size;
2040 THREAD_STAT(stat_n_heaps++);
2041 return h;
2044 /* Grow or shrink a heap. The size change (diff) is automatically rounded up to a
2045 multiple of the page size if it is positive. */
2047 static int
2048 #if __STD_C
2049 grow_heap(heap_info *h, long diff)
2050 #else
2051 grow_heap(h, diff) heap_info *h; long diff;
2052 #endif
2054 size_t page_mask = malloc_getpagesize - 1;
2055 long new_size;
2057 if(diff >= 0) {
2058 diff = (diff + page_mask) & ~page_mask;
2059 new_size = (long)h->size + diff;
2060 if(new_size > HEAP_MAX_SIZE)
2061 return -1;
2062 if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
2063 return -2;
2064 } else {
2065 new_size = (long)h->size + diff;
2066 if(new_size < (long)sizeof(*h))
2067 return -1;
2068 /* Try to re-map the extra heap space freshly to save memory, and
2069 make it inaccessible. */
2070 if((char *)MMAP((char *)h + new_size, -diff, PROT_NONE,
2071 MAP_PRIVATE|MAP_FIXED) == (char *) MAP_FAILED)
2072 return -2;
2074 h->size = new_size;
2075 return 0;
2078 /* Delete a heap. */
2080 #define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
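/* The alignment trick used in new_heap() above can be reproduced in
   isolation: reserve twice the desired alignment with PROT_NONE, trim the
   mapping down to an aligned window, and only then enable access to the
   part that is actually needed.  A self-contained sketch, assuming
   MAP_ANONYMOUS is available and that size never exceeds the alignment
   (EXAMPLE_ALIGN is an arbitrary power of two picked for illustration): */
#if 0
#include <sys/mman.h>

#define EXAMPLE_ALIGN (1UL << 20)   /* 1 megabyte, for the example only */

static void *
map_aligned(size_t size)
{
  char *p1, *p2;
  unsigned long lead;

  /* Over-reserve so that an aligned window of EXAMPLE_ALIGN bytes must
     exist somewhere inside the mapping. */
  p1 = (char *)mmap(0, EXAMPLE_ALIGN<<1, PROT_NONE,
                    MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0);
  if(p1 == (char *)MAP_FAILED)
    return 0;
  p2 = (char *)(((unsigned long)p1 + EXAMPLE_ALIGN) & ~(EXAMPLE_ALIGN-1));
  lead = p2 - p1;
  /* Give back the unaligned slack on both sides of the window. */
  if(lead > 0)
    munmap(p1, lead);
  if(EXAMPLE_ALIGN - lead > 0)
    munmap(p2 + EXAMPLE_ALIGN, EXAMPLE_ALIGN - lead);
  /* Make only the requested prefix readable and writable. */
  if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
    munmap(p2, EXAMPLE_ALIGN);
    return 0;
  }
  return p2;
}
#endif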
2082 /* arena_get() acquires an arena and locks the corresponding mutex.
2083 First, try the one last locked successfully by this thread. (This
2084 is the common case and handled with a macro for speed.) Then, loop
2085 once over the circularly linked list of arenas. If no arena is
2086 readily available, create a new one. In this latter case, `size'
2087 is just a hint as to how much memory will be required immediately
2088 in the new arena. */
2090 #define arena_get(ptr, size) do { \
2091 Void_t *vptr = NULL; \
2092 ptr = (arena *)tsd_getspecific(arena_key, vptr); \
2093 if(ptr && !mutex_trylock(&ptr->mutex)) { \
2094 THREAD_STAT(++(ptr->stat_lock_direct)); \
2095 } else \
2096 ptr = arena_get2(ptr, (size)); \
2097 } while(0)
2099 static arena *
2100 internal_function
2101 #if __STD_C
2102 arena_get2(arena *a_tsd, size_t size)
2103 #else
2104 arena_get2(a_tsd, size) arena *a_tsd; size_t size;
2105 #endif
2107 arena *a;
2108 heap_info *h;
2109 char *ptr;
2110 int i;
2111 unsigned long misalign;
2113 if(!a_tsd)
2114 a = a_tsd = &main_arena;
2115 else {
2116 a = a_tsd->next;
2117 if(!a) {
2118 /* This can only happen while initializing the new arena. */
2119 (void)mutex_lock(&main_arena.mutex);
2120 THREAD_STAT(++(main_arena.stat_lock_wait));
2121 return &main_arena;
2125 /* Check the global, circularly linked list for available arenas. */
2126 repeat:
2127 do {
2128 if(!mutex_trylock(&a->mutex)) {
2129 THREAD_STAT(++(a->stat_lock_loop));
2130 tsd_setspecific(arena_key, (Void_t *)a);
2131 return a;
2133 a = a->next;
2134 } while(a != a_tsd);
2136 /* If not even the list_lock can be obtained, try again. This can
2137 happen during `atfork', or for example on systems where thread
2138 creation makes it temporarily impossible to obtain _any_
2139 locks. */
2140 if(mutex_trylock(&list_lock)) {
2141 a = a_tsd;
2142 goto repeat;
2144 (void)mutex_unlock(&list_lock);
2146 /* Nothing immediately available, so generate a new arena. */
2147 h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
2148 if(!h) {
2149 /* Maybe size is too large to fit in a single heap. So, just try
2150 to create a minimally-sized arena and let chunk_alloc() attempt
2151 to deal with the large request via mmap_chunk(). */
2152 h = new_heap(sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT);
2153 if(!h)
2154 return 0;
2156 a = h->ar_ptr = (arena *)(h+1);
2157 for(i=0; i<NAV; i++)
2158 init_bin(a, i);
2159 a->next = NULL;
2160 a->size = h->size;
2161 arena_mem += h->size;
2162 #ifdef NO_THREADS
2163 if((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2164 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2165 #endif
2166 tsd_setspecific(arena_key, (Void_t *)a);
2167 mutex_init(&a->mutex);
2168 i = mutex_lock(&a->mutex); /* remember result */
2170 /* Set up the top chunk, with proper alignment. */
2171 ptr = (char *)(a + 1);
2172 misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
2173 if (misalign > 0)
2174 ptr += MALLOC_ALIGNMENT - misalign;
2175 top(a) = (mchunkptr)ptr;
2176 set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
2178 /* Add the new arena to the list. */
2179 (void)mutex_lock(&list_lock);
2180 a->next = main_arena.next;
2181 main_arena.next = a;
2182 (void)mutex_unlock(&list_lock);
2184 if(i) /* locking failed; keep arena for further attempts later */
2185 return 0;
2187 THREAD_STAT(++(a->stat_lock_loop));
2188 return a;
2191 /* find the heap and corresponding arena for a given ptr */
2193 #define heap_for_ptr(ptr) \
2194 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
2195 #define arena_for_ptr(ptr) \
2196 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
2197 &main_arena : heap_for_ptr(ptr)->ar_ptr)
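/* Worked example: if HEAP_MAX_SIZE were 1 megabyte (0x100000), a chunk at
   address 0x40234567 inside a non-main heap would be mapped to the
   heap_info at 0x40234567 & ~0xfffff == 0x40200000, and the ar_ptr field
   stored there names the owning arena.  This only works because new_heap()
   places every heap on a HEAP_MAX_SIZE boundary; chunks belonging to the
   main arena never reach heap_for_ptr(), since arena_for_ptr() filters
   them out first with the sbrk_base/top test. */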
2199 #else /* !USE_ARENAS */
2201 /* There is only one arena, main_arena. */
2203 #define arena_get(ptr, sz) (ptr = &main_arena)
2204 #define arena_for_ptr(ptr) (&main_arena)
2206 #endif /* USE_ARENAS */
2211 Debugging support
2214 #if MALLOC_DEBUG
2218 These routines make a number of assertions about the states
2219 of data structures that should be true at all times. If any
2220 are not true, it's very likely that a user program has somehow
2221 trashed memory. (It's also possible that there is a coding error
2222 in malloc. In which case, please report it!)
2225 #if __STD_C
2226 static void do_check_chunk(arena *ar_ptr, mchunkptr p)
2227 #else
2228 static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2229 #endif
2231 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2233 /* No checkable chunk is mmapped */
2234 assert(!chunk_is_mmapped(p));
2236 #if USE_ARENAS
2237 if(ar_ptr != &main_arena) {
2238 heap_info *heap = heap_for_ptr(p);
2239 assert(heap->ar_ptr == ar_ptr);
2240 if(p != top(ar_ptr))
2241 assert((char *)p + sz <= (char *)heap + heap->size);
2242 else
2243 assert((char *)p + sz == (char *)heap + heap->size);
2244 return;
2246 #endif
2248 /* Check for legal address ... */
2249 assert((char*)p >= sbrk_base);
2250 if (p != top(ar_ptr))
2251 assert((char*)p + sz <= (char*)top(ar_ptr));
2252 else
2253 assert((char*)p + sz <= sbrk_base + sbrked_mem);
2258 #if __STD_C
2259 static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
2260 #else
2261 static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2262 #endif
2264 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2265 mchunkptr next = chunk_at_offset(p, sz);
2267 do_check_chunk(ar_ptr, p);
2269 /* Check whether it claims to be free ... */
2270 assert(!inuse(p));
2272 /* Must have OK size and fields */
2273 assert((long)sz >= (long)MINSIZE);
2274 assert((sz & MALLOC_ALIGN_MASK) == 0);
2275 assert(aligned_OK(chunk2mem(p)));
2276 /* ... matching footer field */
2277 assert(next->prev_size == sz);
2278 /* ... and is fully consolidated */
2279 assert(prev_inuse(p));
2280 assert (next == top(ar_ptr) || inuse(next));
2282 /* ... and has minimally sane links */
2283 assert(p->fd->bk == p);
2284 assert(p->bk->fd == p);
2287 #if __STD_C
2288 static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
2289 #else
2290 static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2291 #endif
2293 mchunkptr next = next_chunk(p);
2294 do_check_chunk(ar_ptr, p);
2296 /* Check whether it claims to be in use ... */
2297 assert(inuse(p));
2299 /* ... whether its size is OK (it might be a fencepost) ... */
2300 assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
2302 /* ... and is surrounded by OK chunks.
2303 Since more things can be checked with free chunks than inuse ones,
2304 if an inuse chunk borders them and debug is on, it's worth doing them.
2306 if (!prev_inuse(p))
2308 mchunkptr prv = prev_chunk(p);
2309 assert(next_chunk(prv) == p);
2310 do_check_free_chunk(ar_ptr, prv);
2312 if (next == top(ar_ptr))
2314 assert(prev_inuse(next));
2315 assert(chunksize(next) >= MINSIZE);
2317 else if (!inuse(next))
2318 do_check_free_chunk(ar_ptr, next);
2322 #if __STD_C
2323 static void do_check_malloced_chunk(arena *ar_ptr,
2324 mchunkptr p, INTERNAL_SIZE_T s)
2325 #else
2326 static void do_check_malloced_chunk(ar_ptr, p, s)
2327 arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
2328 #endif
2330 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2331 long room = sz - s;
2333 do_check_inuse_chunk(ar_ptr, p);
2335 /* Legal size ... */
2336 assert((long)sz >= (long)MINSIZE);
2337 assert((sz & MALLOC_ALIGN_MASK) == 0);
2338 assert(room >= 0);
2339 assert(room < (long)MINSIZE);
2341 /* ... and alignment */
2342 assert(aligned_OK(chunk2mem(p)));
2345 /* ... and was allocated at front of an available chunk */
2346 assert(prev_inuse(p));
2351 #define check_free_chunk(A,P) do_check_free_chunk(A,P)
2352 #define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
2353 #define check_chunk(A,P) do_check_chunk(A,P)
2354 #define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
2355 #else
2356 #define check_free_chunk(A,P)
2357 #define check_inuse_chunk(A,P)
2358 #define check_chunk(A,P)
2359 #define check_malloced_chunk(A,P,N)
2360 #endif
2365 Macro-based internal utilities
2370 Linking chunks in bin lists.
2371 Call these only with variables, not arbitrary expressions, as arguments.
2375 Place chunk p of size s in its bin, in size order,
2376 putting it ahead of others of same size.
2380 #define frontlink(A, P, S, IDX, BK, FD) \
2382 if (S < MAX_SMALLBIN_SIZE) \
2384 IDX = smallbin_index(S); \
2385 mark_binblock(A, IDX); \
2386 BK = bin_at(A, IDX); \
2387 FD = BK->fd; \
2388 P->bk = BK; \
2389 P->fd = FD; \
2390 FD->bk = BK->fd = P; \
2392 else \
2394 IDX = bin_index(S); \
2395 BK = bin_at(A, IDX); \
2396 FD = BK->fd; \
2397 if (FD == BK) mark_binblock(A, IDX); \
2398 else \
2400 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2401 BK = FD->bk; \
2403 P->bk = BK; \
2404 P->fd = FD; \
2405 FD->bk = BK->fd = P; \
2410 /* take a chunk off a list */
2412 #define unlink(P, BK, FD) \
2414 BK = P->bk; \
2415 FD = P->fd; \
2416 FD->bk = BK; \
2417 BK->fd = FD; \
2420 /* Place p as the last remainder */
2422 #define link_last_remainder(A, P) \
2424 last_remainder(A)->fd = last_remainder(A)->bk = P; \
2425 P->fd = P->bk = last_remainder(A); \
2428 /* Clear the last_remainder bin */
2430 #define clear_last_remainder(A) \
2431 (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
2438 Extend the top-most chunk by obtaining memory from system.
2439 Main interface to sbrk (but see also malloc_trim).
2442 #if defined __GNUC__ && __GNUC__ >= 2
2443 /* This function is called only from one place, inline it. */
2444 __inline__
2445 #endif
2446 static void
2447 internal_function
2448 #if __STD_C
2449 malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
2450 #else
2451 malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2452 #endif
2454 unsigned long pagesz = malloc_getpagesize;
2455 mchunkptr old_top = top(ar_ptr); /* Record state of old top */
2456 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2457 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2459 #if USE_ARENAS
2460 if(ar_ptr == &main_arena) {
2461 #endif
2463 char* brk; /* return value from sbrk */
2464 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2465 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2466 char* new_brk; /* return of 2nd sbrk call */
2467 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2469 /* Pad request with top_pad plus minimal overhead */
2470 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2472 /* If not the first time through, round to preserve page boundary */
2473 /* Otherwise, we need to correct to a page size below anyway. */
2474 /* (We also correct below if there was an intervening foreign sbrk call.) */
2476 if (sbrk_base != (char*)(-1))
2477 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2479 brk = (char*)(MORECORE (sbrk_size));
2481 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2482 if (brk == (char*)(MORECORE_FAILURE) ||
2483 (brk < old_end && old_top != initial_top(&main_arena)))
2484 return;
2486 #if defined _LIBC || defined MALLOC_HOOKS
2487 /* Call the `morecore' hook if necessary. */
2488 if (__after_morecore_hook)
2489 (*__after_morecore_hook) ();
2490 #endif
2492 sbrked_mem += sbrk_size;
2494 if (brk == old_end) { /* can just add bytes to current top */
2495 top_size = sbrk_size + old_top_size;
2496 set_head(old_top, top_size | PREV_INUSE);
2497 old_top = 0; /* don't free below */
2498 } else {
2499 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2500 sbrk_base = brk;
2501 else
2502 /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2503 sbrked_mem += brk - (char*)old_end;
2505 /* Guarantee alignment of first new chunk made from this space */
2506 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2507 if (front_misalign > 0) {
2508 correction = (MALLOC_ALIGNMENT) - front_misalign;
2509 brk += correction;
2510 } else
2511 correction = 0;
2513 /* Guarantee the next brk will be at a page boundary */
2514 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
2516 /* Allocate correction */
2517 new_brk = (char*)(MORECORE (correction));
2518 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2520 #if defined _LIBC || defined MALLOC_HOOKS
2521 /* Call the `morecore' hook if necessary. */
2522 if (__after_morecore_hook)
2523 (*__after_morecore_hook) ();
2524 #endif
2526 sbrked_mem += correction;
2528 top(&main_arena) = chunk_at_offset(brk, 0);
2529 top_size = new_brk - brk + correction;
2530 set_head(top(&main_arena), top_size | PREV_INUSE);
2532 if (old_top == initial_top(&main_arena))
2533 old_top = 0; /* don't free below */
2536 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2537 max_sbrked_mem = sbrked_mem;
2538 #ifdef NO_THREADS
2539 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2540 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2541 #endif
2543 #if USE_ARENAS
2544 } else { /* ar_ptr != &main_arena */
2545 heap_info *old_heap, *heap;
2546 size_t old_heap_size;
2548 if(old_top_size < MINSIZE) /* this should never happen */
2549 return;
2551 /* First try to extend the current heap. */
2552 if(MINSIZE + nb <= old_top_size)
2553 return;
2554 old_heap = heap_for_ptr(old_top);
2555 old_heap_size = old_heap->size;
2556 if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
2557 ar_ptr->size += old_heap->size - old_heap_size;
2558 arena_mem += old_heap->size - old_heap_size;
2559 #ifdef NO_THREADS
2560 if(mmapped_mem + arena_mem + sbrked_mem > max_total_mem)
2561 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2562 #endif
2563 top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
2564 set_head(old_top, top_size | PREV_INUSE);
2565 return;
2568 /* A new heap must be created. */
2569 heap = new_heap(nb + (MINSIZE + sizeof(*heap)));
2570 if(!heap)
2571 return;
2572 heap->ar_ptr = ar_ptr;
2573 heap->prev = old_heap;
2574 ar_ptr->size += heap->size;
2575 arena_mem += heap->size;
2576 #ifdef NO_THREADS
2577 if((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2578 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2579 #endif
2581 /* Set up the new top, so we can safely use chunk_free() below. */
2582 top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
2583 top_size = heap->size - sizeof(*heap);
2584 set_head(top(ar_ptr), top_size | PREV_INUSE);
2586 #endif /* USE_ARENAS */
2588 /* We always land on a page boundary */
2589 assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
2591 /* Setup fencepost and free the old top chunk. */
2592 if(old_top) {
2593 /* The fencepost takes at least MINSIZE bytes, because it might
2594 become the top chunk again later. Note that a footer is set
2595 up, too, although the chunk is marked in use. */
2596 old_top_size -= MINSIZE;
2597 set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
2598 if(old_top_size >= MINSIZE) {
2599 set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
2600 set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
2601 set_head_size(old_top, old_top_size);
2602 chunk_free(ar_ptr, old_top);
2603 } else {
2604 set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
2605 set_foot(old_top, (old_top_size + 2*SIZE_SZ));
2613 /* Main public routines */
2617 Malloc Algorithm:
2619 The requested size is first converted into a usable form, `nb'.
2620 This currently means to add 4 bytes overhead plus possibly more to
2621 obtain 8-byte alignment and/or to obtain a size of at least
2622 MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
2623 size. (All fits are considered `exact' if they are within MINSIZE
2624 bytes.)
2626 From there, the first of the following steps that succeeds is taken:
2628 1. The bin corresponding to the request size is scanned, and if
2629 a chunk of exactly the right size is found, it is taken.
2631 2. The most recently remaindered chunk is used if it is big
2632 enough. This is a form of (roving) first fit, used only in
2633 the absence of exact fits. Runs of consecutive requests use
2634 the remainder of the chunk used for the previous such request
2635 whenever possible. This limited use of a first-fit style
2636 allocation strategy tends to give contiguous chunks
2637 coextensive lifetimes, which improves locality and can reduce
2638 fragmentation in the long run.
2640 3. Other bins are scanned in increasing size order, using a
2641 chunk big enough to fulfill the request, and splitting off
2642 any remainder. This search is strictly by best-fit; i.e.,
2643 the smallest (with ties going to approximately the least
2644 recently used) chunk that fits is selected.
2646 4. If large enough, the chunk bordering the end of memory
2647 (`top') is split off. (This use of `top' is in accord with
2648 the best-fit search rule. In effect, `top' is treated as
2649 larger (and thus less well fitting) than any other available
2650 chunk since it can be extended to be as large as necessary
2651 (up to system limitations).)
2653 5. If the request size meets the mmap threshold and the
2654 system supports mmap, and there are few enough currently
2655 allocated mmapped regions, and a call to mmap succeeds,
2656 the request is allocated via direct memory mapping.
2658 6. Otherwise, the top of memory is extended by
2659 obtaining more space from the system (normally using sbrk,
2660 but definable to anything else via the MORECORE macro).
2661 Memory is gathered from the system (in system page-sized
2662 units) in a way that allows chunks obtained across different
2663 sbrk calls to be consolidated, but does not require
2664 contiguous memory. Thus, it should be safe to intersperse
2665 mallocs with other sbrk calls.
2668 All allocations are made from the `lowest' part of any found
2669 chunk. (The implementation invariant is that prev_inuse is
2670 always true of any allocated chunk; i.e., that each allocated
2671 chunk borders either a previously allocated and still in-use chunk,
2672 or the base of its memory arena.)
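/* Worked example of the size conversion above, on a configuration where
   SIZE_SZ is 4 and MALLOC_ALIGNMENT is 8 (so MINSIZE is 16): a request for
   20 bytes becomes nb = 20 + 4 = 24, which is already 8-byte aligned; a
   request for 1 byte becomes 1 + 4 = 5, rounded up to 8 for alignment and
   then raised to the minimum chunk size, giving nb = 16.  Any chunk within
   MINSIZE bytes of nb counts as an exact fit in the steps above. */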
2676 #if __STD_C
2677 Void_t* mALLOc(size_t bytes)
2678 #else
2679 Void_t* mALLOc(bytes) size_t bytes;
2680 #endif
2682 arena *ar_ptr;
2683 INTERNAL_SIZE_T nb; /* padded request size */
2684 mchunkptr victim;
2686 #if defined _LIBC || defined MALLOC_HOOKS
2687 if (__malloc_hook != NULL) {
2688 Void_t* result;
2690 #if defined __GNUC__ && __GNUC__ >= 2
2691 result = (*__malloc_hook)(bytes, RETURN_ADDRESS (0));
2692 #else
2693 result = (*__malloc_hook)(bytes, NULL);
2694 #endif
2695 return result;
2697 #endif
2699 if(request2size(bytes, nb))
2700 return 0;
2701 arena_get(ar_ptr, nb);
2702 if(!ar_ptr)
2703 return 0;
2704 victim = chunk_alloc(ar_ptr, nb);
2705 if(!victim) {
2706 /* Maybe the failure is due to running out of mmapped areas. */
2707 if(ar_ptr != &main_arena) {
2708 (void)mutex_unlock(&ar_ptr->mutex);
2709 (void)mutex_lock(&main_arena.mutex);
2710 victim = chunk_alloc(&main_arena, nb);
2711 (void)mutex_unlock(&main_arena.mutex);
2712 } else {
2713 #if USE_ARENAS
2714 /* ... or sbrk() has failed and there is still a chance to mmap() */
2715 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
2716 (void)mutex_unlock(&main_arena.mutex);
2717 if(ar_ptr) {
2718 victim = chunk_alloc(ar_ptr, nb);
2719 (void)mutex_unlock(&ar_ptr->mutex);
2721 #endif
2723 if(!victim) return 0;
2724 } else
2725 (void)mutex_unlock(&ar_ptr->mutex);
2726 return BOUNDED_N(chunk2mem(victim), bytes);
2729 static mchunkptr
2730 internal_function
2731 #if __STD_C
2732 chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
2733 #else
2734 chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2735 #endif
2737 mchunkptr victim; /* inspected/selected chunk */
2738 INTERNAL_SIZE_T victim_size; /* its size */
2739 int idx; /* index for bin traversal */
2740 mbinptr bin; /* associated bin */
2741 mchunkptr remainder; /* remainder from a split */
2742 long remainder_size; /* its size */
2743 int remainder_index; /* its bin index */
2744 unsigned long block; /* block traverser bit */
2745 int startidx; /* first bin of a traversed block */
2746 mchunkptr fwd; /* misc temp for linking */
2747 mchunkptr bck; /* misc temp for linking */
2748 mbinptr q; /* misc temp */
2751 /* Check for exact match in a bin */
2753 if (is_small_request(nb)) /* Faster version for small requests */
2755 idx = smallbin_index(nb);
2757 /* No traversal or size check necessary for small bins. */
2759 q = _bin_at(ar_ptr, idx);
2760 victim = last(q);
2762 /* Also scan the next one, since it would have a remainder < MINSIZE */
2763 if (victim == q)
2765 q = next_bin(q);
2766 victim = last(q);
2768 if (victim != q)
2770 victim_size = chunksize(victim);
2771 unlink(victim, bck, fwd);
2772 set_inuse_bit_at_offset(victim, victim_size);
2773 check_malloced_chunk(ar_ptr, victim, nb);
2774 return victim;
2777 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2780 else
2782 idx = bin_index(nb);
2783 bin = bin_at(ar_ptr, idx);
2785 for (victim = last(bin); victim != bin; victim = victim->bk)
2787 victim_size = chunksize(victim);
2788 remainder_size = victim_size - nb;
2790 if (remainder_size >= (long)MINSIZE) /* too big */
2792 --idx; /* adjust to rescan below after checking last remainder */
2793 break;
2796 else if (remainder_size >= 0) /* exact fit */
2798 unlink(victim, bck, fwd);
2799 set_inuse_bit_at_offset(victim, victim_size);
2800 check_malloced_chunk(ar_ptr, victim, nb);
2801 return victim;
2805 ++idx;
2809 /* Try to use the last split-off remainder */
2811 if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
2813 victim_size = chunksize(victim);
2814 remainder_size = victim_size - nb;
2816 if (remainder_size >= (long)MINSIZE) /* re-split */
2818 remainder = chunk_at_offset(victim, nb);
2819 set_head(victim, nb | PREV_INUSE);
2820 link_last_remainder(ar_ptr, remainder);
2821 set_head(remainder, remainder_size | PREV_INUSE);
2822 set_foot(remainder, remainder_size);
2823 check_malloced_chunk(ar_ptr, victim, nb);
2824 return victim;
2827 clear_last_remainder(ar_ptr);
2829 if (remainder_size >= 0) /* exhaust */
2831 set_inuse_bit_at_offset(victim, victim_size);
2832 check_malloced_chunk(ar_ptr, victim, nb);
2833 return victim;
2836 /* Else place in bin */
2838 frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
2842 If there are any possibly nonempty big-enough blocks,
2843 search for best fitting chunk by scanning bins in blockwidth units.
2846 if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
2849 /* Get to the first marked block */
2851 if ( (block & binblocks(ar_ptr)) == 0)
2853 /* force to an even block boundary */
2854 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2855 block <<= 1;
2856 while ((block & binblocks(ar_ptr)) == 0)
2858 idx += BINBLOCKWIDTH;
2859 block <<= 1;
2863 /* For each possibly nonempty block ... */
2864 for (;;)
2866 startidx = idx; /* (track incomplete blocks) */
2867 q = bin = _bin_at(ar_ptr, idx);
2869 /* For each bin in this block ... */
2872 /* Find and use first big enough chunk ... */
2874 for (victim = last(bin); victim != bin; victim = victim->bk)
2876 victim_size = chunksize(victim);
2877 remainder_size = victim_size - nb;
2879 if (remainder_size >= (long)MINSIZE) /* split */
2881 remainder = chunk_at_offset(victim, nb);
2882 set_head(victim, nb | PREV_INUSE);
2883 unlink(victim, bck, fwd);
2884 link_last_remainder(ar_ptr, remainder);
2885 set_head(remainder, remainder_size | PREV_INUSE);
2886 set_foot(remainder, remainder_size);
2887 check_malloced_chunk(ar_ptr, victim, nb);
2888 return victim;
2891 else if (remainder_size >= 0) /* take */
2893 set_inuse_bit_at_offset(victim, victim_size);
2894 unlink(victim, bck, fwd);
2895 check_malloced_chunk(ar_ptr, victim, nb);
2896 return victim;
2901 bin = next_bin(bin);
2903 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2905 /* Clear out the block bit. */
2907 do /* Possibly backtrack to try to clear a partial block */
2909 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2911 binblocks(ar_ptr) &= ~block;
2912 break;
2914 --startidx;
2915 q = prev_bin(q);
2916 } while (first(q) == q);
2918 /* Get to the next possibly nonempty block */
2920 if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
2922 while ((block & binblocks(ar_ptr)) == 0)
2924 idx += BINBLOCKWIDTH;
2925 block <<= 1;
2928 else
2929 break;
2934 /* Try to use top chunk */
2936 /* Require that there be a remainder, ensuring top always exists */
2937 if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2940 #if HAVE_MMAP
2941 /* If the request is big and there are not yet too many regions,
2942 and we would otherwise need to extend, try to use mmap instead. */
2943 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2944 n_mmaps < n_mmaps_max &&
2945 (victim = mmap_chunk(nb)) != 0)
2946 return victim;
2947 #endif
2949 /* Try to extend */
2950 malloc_extend_top(ar_ptr, nb);
2951 if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2953 #if HAVE_MMAP
2954 /* A last attempt: when we are out of address space in a
2955 non-main arena, try mmap anyway, as long as it is allowed at
2956 all. */
2957 if (ar_ptr != &main_arena &&
2958 n_mmaps_max > 0 &&
2959 (victim = mmap_chunk(nb)) != 0)
2960 return victim;
2961 #endif
2962 return 0; /* propagate failure */
2966 victim = top(ar_ptr);
2967 set_head(victim, nb | PREV_INUSE);
2968 top(ar_ptr) = chunk_at_offset(victim, nb);
2969 set_head(top(ar_ptr), remainder_size | PREV_INUSE);
2970 check_malloced_chunk(ar_ptr, victim, nb);
2971 return victim;
2980 free() algorithm :
2982 cases:
2984 1. free(0) has no effect.
2986 2. If the chunk was allocated via mmap, it is released via munmap().
2988 3. If a returned chunk borders the current high end of memory,
2989 it is consolidated into the top, and if the total unused
2990 topmost memory exceeds the trim threshold, malloc_trim is
2991 called.
2993 4. Other chunks are consolidated as they arrive, and
2994 placed in corresponding bins. (This includes the case of
2995 consolidating with the current `last_remainder').
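/* Caller-side view of the cases above; plain standard C, with nothing
   specific to this allocator: */
#if 0
#include <stdlib.h>

static void
free_example(void)
{
  char *buf = malloc(4096);
  if(buf == NULL)
    return;                   /* nothing was allocated, nothing to free */
  /* ... use buf ... */
  free(buf);                  /* chunk is consolidated and binned, or merged
                                 into top and possibly trimmed */
  free(NULL);                 /* case 1 above: explicitly a no-op */
}
#endif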
3000 #if __STD_C
3001 void fREe(Void_t* mem)
3002 #else
3003 void fREe(mem) Void_t* mem;
3004 #endif
3006 arena *ar_ptr;
3007 mchunkptr p; /* chunk corresponding to mem */
3009 #if defined _LIBC || defined MALLOC_HOOKS
3010 if (__free_hook != NULL) {
3011 #if defined __GNUC__ && __GNUC__ >= 2
3012 (*__free_hook)(mem, RETURN_ADDRESS (0));
3013 #else
3014 (*__free_hook)(mem, NULL);
3015 #endif
3016 return;
3018 #endif
3020 if (mem == 0) /* free(0) has no effect */
3021 return;
3023 p = mem2chunk(mem);
3025 #if HAVE_MMAP
3026 if (chunk_is_mmapped(p)) /* release mmapped memory. */
3028 munmap_chunk(p);
3029 return;
3031 #endif
3033 ar_ptr = arena_for_ptr(p);
3034 #if THREAD_STATS
3035 if(!mutex_trylock(&ar_ptr->mutex))
3036 ++(ar_ptr->stat_lock_direct);
3037 else {
3038 (void)mutex_lock(&ar_ptr->mutex);
3039 ++(ar_ptr->stat_lock_wait);
3041 #else
3042 (void)mutex_lock(&ar_ptr->mutex);
3043 #endif
3044 chunk_free(ar_ptr, p);
3045 (void)mutex_unlock(&ar_ptr->mutex);
3048 static void
3049 internal_function
3050 #if __STD_C
3051 chunk_free(arena *ar_ptr, mchunkptr p)
3052 #else
3053 chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
3054 #endif
3056 INTERNAL_SIZE_T hd = p->size; /* its head field */
3057 INTERNAL_SIZE_T sz; /* its size */
3058 int idx; /* its bin index */
3059 mchunkptr next; /* next contiguous chunk */
3060 INTERNAL_SIZE_T nextsz; /* its size */
3061 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
3062 mchunkptr bck; /* misc temp for linking */
3063 mchunkptr fwd; /* misc temp for linking */
3064 int islr; /* track whether merging with last_remainder */
3066 check_inuse_chunk(ar_ptr, p);
3068 sz = hd & ~PREV_INUSE;
3069 next = chunk_at_offset(p, sz);
3070 nextsz = chunksize(next);
3072 if (next == top(ar_ptr)) /* merge with top */
3074 sz += nextsz;
3076 if (!(hd & PREV_INUSE)) /* consolidate backward */
3078 prevsz = p->prev_size;
3079 p = chunk_at_offset(p, -(long)prevsz);
3080 sz += prevsz;
3081 unlink(p, bck, fwd);
3084 set_head(p, sz | PREV_INUSE);
3085 top(ar_ptr) = p;
3087 #if USE_ARENAS
3088 if(ar_ptr == &main_arena) {
3089 #endif
3090 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
3091 main_trim(top_pad);
3092 #if USE_ARENAS
3093 } else {
3094 heap_info *heap = heap_for_ptr(p);
3096 assert(heap->ar_ptr == ar_ptr);
3098 /* Try to get rid of completely empty heaps, if possible. */
3099 if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
3100 p == chunk_at_offset(heap, sizeof(*heap)))
3101 heap_trim(heap, top_pad);
3103 #endif
3104 return;
3107 islr = 0;
3109 if (!(hd & PREV_INUSE)) /* consolidate backward */
3111 prevsz = p->prev_size;
3112 p = chunk_at_offset(p, -(long)prevsz);
3113 sz += prevsz;
3115 if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */
3116 islr = 1;
3117 else
3118 unlink(p, bck, fwd);
3121 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
3123 sz += nextsz;
3125 if (!islr && next->fd == last_remainder(ar_ptr))
3126 /* re-insert last_remainder */
3128 islr = 1;
3129 link_last_remainder(ar_ptr, p);
3131 else
3132 unlink(next, bck, fwd);
3134 next = chunk_at_offset(p, sz);
3136 else
3137 set_head(next, nextsz); /* clear inuse bit */
3139 set_head(p, sz | PREV_INUSE);
3140 next->prev_size = sz;
3141 if (!islr)
3142 frontlink(ar_ptr, p, sz, idx, bck, fwd);
3144 #if USE_ARENAS
3145 /* Check whether the heap containing top can go away now. */
3146 if(next->size < MINSIZE &&
3147 (unsigned long)sz > trim_threshold &&
3148 ar_ptr != &main_arena) { /* fencepost */
3149 heap_info *heap = heap_for_ptr(top(ar_ptr));
3151 if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) &&
3152 heap->prev == heap_for_ptr(p))
3153 heap_trim(heap, top_pad);
3155 #endif
3164 Realloc algorithm:
3166 Chunks that were obtained via mmap cannot be extended or shrunk
3167 unless HAVE_MREMAP is defined, in which case mremap is used.
3168 Otherwise, if their reallocation is for additional space, they are
3169 copied. If for less, they are just left alone.
3171 Otherwise, if the reallocation is for additional space, and the
3172 chunk can be extended, it is, else a malloc-copy-free sequence is
3173 taken. There are several different ways that a chunk could be
3174 extended. All are tried:
3176 * Extending forward into following adjacent free chunk.
3177 * Shifting backwards, joining preceding adjacent space
3178 * Both shifting backwards and extending forward.
3179 * Extending into newly sbrked space
3181 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
3182 size argument of zero (re)allocates a minimum-sized chunk.
3184 If the reallocation is for less space, and the new request is for
3185 a `small' (<512 bytes) size, then the newly unused space is lopped
3186 off and freed.
3188 The old unix realloc convention of allowing the last-free'd chunk
3189 to be used as an argument to realloc is no longer supported.
3190 I don't know of any programs still relying on this feature,
3191 and allowing it would also allow too many other incorrect
3192 usages of realloc to be sensible.
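/* Caller-side sketch of growing a buffer with realloc: keep the old
   pointer until the call is known to have succeeded, because a failed
   realloc returns null and leaves the original block intact.  The helper
   below and its doubling policy are made up for the example: */
#if 0
#include <stdlib.h>

static int
grow_buffer(char **bufp, size_t *capp)
{
  size_t new_cap = *capp ? *capp * 2 : 64;
  char *tmp = realloc(*bufp, new_cap);
  if(tmp == NULL)
    return -1;                /* *bufp is still valid and unchanged */
  *bufp = tmp;
  *capp = new_cap;
  return 0;
}
#endif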
3198 #if __STD_C
3199 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3200 #else
3201 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3202 #endif
3204 arena *ar_ptr;
3205 INTERNAL_SIZE_T nb; /* padded request size */
3207 mchunkptr oldp; /* chunk corresponding to oldmem */
3208 INTERNAL_SIZE_T oldsize; /* its size */
3210 mchunkptr newp; /* chunk to return */
3212 #if defined _LIBC || defined MALLOC_HOOKS
3213 if (__realloc_hook != NULL) {
3214 Void_t* result;
3216 #if defined __GNUC__ && __GNUC__ >= 2
3217 result = (*__realloc_hook)(oldmem, bytes, RETURN_ADDRESS (0));
3218 #else
3219 result = (*__realloc_hook)(oldmem, bytes, NULL);
3220 #endif
3221 return result;
3223 #endif
3225 #ifdef REALLOC_ZERO_BYTES_FREES
3226 if (bytes == 0 && oldmem != NULL) { fREe(oldmem); return 0; }
3227 #endif
3229 /* realloc of null is supposed to be same as malloc */
3230 if (oldmem == 0) return mALLOc(bytes);
3232 oldp = mem2chunk(oldmem);
3233 oldsize = chunksize(oldp);
3235 if(request2size(bytes, nb))
3236 return 0;
3238 #if HAVE_MMAP
3239 if (chunk_is_mmapped(oldp))
3241 Void_t* newmem;
3243 #if HAVE_MREMAP
3244 newp = mremap_chunk(oldp, nb);
3245 if(newp)
3246 return BOUNDED_N(chunk2mem(newp), bytes);
3247 #endif
3248 /* Note the extra SIZE_SZ overhead. */
3249 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
3250 /* Must alloc, copy, free. */
3251 newmem = mALLOc(bytes);
3252 if (newmem == 0) return 0; /* propagate failure */
3253 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
3254 munmap_chunk(oldp);
3255 return newmem;
3257 #endif
3259 ar_ptr = arena_for_ptr(oldp);
3260 #if THREAD_STATS
3261 if(!mutex_trylock(&ar_ptr->mutex))
3262 ++(ar_ptr->stat_lock_direct);
3263 else {
3264 (void)mutex_lock(&ar_ptr->mutex);
3265 ++(ar_ptr->stat_lock_wait);
3267 #else
3268 (void)mutex_lock(&ar_ptr->mutex);
3269 #endif
3271 #ifndef NO_THREADS
3272 /* As in malloc(), remember this arena for the next allocation. */
3273 tsd_setspecific(arena_key, (Void_t *)ar_ptr);
3274 #endif
3276 newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);
3278 (void)mutex_unlock(&ar_ptr->mutex);
3279 return newp ? BOUNDED_N(chunk2mem(newp), bytes) : NULL;
3282 static mchunkptr
3283 internal_function
3284 #if __STD_C
3285 chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
3286 INTERNAL_SIZE_T nb)
3287 #else
3288 chunk_realloc(ar_ptr, oldp, oldsize, nb)
3289 arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
3290 #endif
3292 mchunkptr newp = oldp; /* chunk to return */
3293 INTERNAL_SIZE_T newsize = oldsize; /* its size */
3295 mchunkptr next; /* next contiguous chunk after oldp */
3296 INTERNAL_SIZE_T nextsize; /* its size */
3298 mchunkptr prev; /* previous contiguous chunk before oldp */
3299 INTERNAL_SIZE_T prevsize; /* its size */
3301 mchunkptr remainder; /* holds split off extra space from newp */
3302 INTERNAL_SIZE_T remainder_size; /* its size */
3304 mchunkptr bck; /* misc temp for linking */
3305 mchunkptr fwd; /* misc temp for linking */
3307 check_inuse_chunk(ar_ptr, oldp);
3309 if ((long)(oldsize) < (long)(nb))
3311 Void_t* oldmem = BOUNDED_N(chunk2mem(oldp), oldsize);
3313 /* Try expanding forward */
3315 next = chunk_at_offset(oldp, oldsize);
3316 if (next == top(ar_ptr) || !inuse(next))
3318 nextsize = chunksize(next);
3320 /* Forward into top only if a remainder */
3321 if (next == top(ar_ptr))
3323 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
3325 newsize += nextsize;
3326 top(ar_ptr) = chunk_at_offset(oldp, nb);
3327 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3328 set_head_size(oldp, nb);
3329 return oldp;
3333 /* Forward into next chunk */
3334 else if (((long)(nextsize + newsize) >= (long)(nb)))
3336 unlink(next, bck, fwd);
3337 newsize += nextsize;
3338 goto split;
3341 else
3343 next = 0;
3344 nextsize = 0;
3347 oldsize -= SIZE_SZ;
3349 /* Try shifting backwards. */
3351 if (!prev_inuse(oldp))
3353 prev = prev_chunk(oldp);
3354 prevsize = chunksize(prev);
3356 /* try forward + backward first to save a later consolidation */
3358 if (next != 0)
3360 /* into top */
3361 if (next == top(ar_ptr))
3363 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
3365 unlink(prev, bck, fwd);
3366 newp = prev;
3367 newsize += prevsize + nextsize;
3368 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), oldsize), oldmem, oldsize);
3369 top(ar_ptr) = chunk_at_offset(newp, nb);
3370 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3371 set_head_size(newp, nb);
3372 return newp;
3376 /* into next chunk */
3377 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
3379 unlink(next, bck, fwd);
3380 unlink(prev, bck, fwd);
3381 newp = prev;
3382 newsize += nextsize + prevsize;
3383 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), oldsize), oldmem, oldsize);
3384 goto split;
3388 /* backward only */
3389 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
3391 unlink(prev, bck, fwd);
3392 newp = prev;
3393 newsize += prevsize;
3394 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), oldsize), oldmem, oldsize);
3395 goto split;
3399 /* Must allocate */
3401 newp = chunk_alloc (ar_ptr, nb);
3403 if (newp == 0) {
3404 /* Maybe the failure is due to running out of mmapped areas. */
3405 if (ar_ptr != &main_arena) {
3406 (void)mutex_lock(&main_arena.mutex);
3407 newp = chunk_alloc(&main_arena, nb);
3408 (void)mutex_unlock(&main_arena.mutex);
3409 } else {
3410 #if USE_ARENAS
3411 /* ... or sbrk() has failed and there is still a chance to mmap() */
3412 arena* ar_ptr2 = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
3413 if(ar_ptr2) {
3414 newp = chunk_alloc(ar_ptr2, nb);
3415 (void)mutex_unlock(&ar_ptr2->mutex);
3417 #endif
3419 if (newp == 0) /* propagate failure */
3420 return 0;
3423 /* Avoid copy if newp is next chunk after oldp. */
3424 /* (This can only happen when new chunk is sbrk'ed.) */
3426 if ( newp == next_chunk(oldp))
3428 newsize += chunksize(newp);
3429 newp = oldp;
3430 goto split;
3433 /* Otherwise copy, free, and exit */
3434 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), oldsize), oldmem, oldsize);
3435 chunk_free(ar_ptr, oldp);
3436 return newp;
3440 split: /* split off extra room in old or expanded chunk */
3442 if (newsize - nb >= MINSIZE) /* split off remainder */
3444 remainder = chunk_at_offset(newp, nb);
3445 remainder_size = newsize - nb;
3446 set_head_size(newp, nb);
3447 set_head(remainder, remainder_size | PREV_INUSE);
3448 set_inuse_bit_at_offset(remainder, remainder_size);
3449 chunk_free(ar_ptr, remainder);
3451 else
3453 set_head_size(newp, newsize);
3454 set_inuse_bit_at_offset(newp, newsize);
3457 check_inuse_chunk(ar_ptr, newp);
3458 return newp;
3466 memalign algorithm:
3468 memalign requests more than enough space from malloc, finds a spot
3469 within that chunk that meets the alignment request, and then
3470 possibly frees the leading and trailing space.
3472 The alignment argument must be a power of two. This property is not
3473 checked by memalign, so misuse may result in random runtime errors.
3475 8-byte alignment is guaranteed by normal malloc calls, so don't
3476 bother calling memalign with an argument of 8 or less.
3478 Overreliance on memalign is a sure way to fragment space.
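/* Caller-side sketch: the alignment passed to memalign must be a power of
   two (this is not checked, as noted above), and alignments of 8 or less
   only duplicate what malloc already guarantees: */
#if 0
#include <malloc.h>

static void *
aligned_example(size_t bytes)
{
  /* 64-byte alignment, e.g. for a cache-line-sized structure. */
  return memalign(64, bytes);
}
#endif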
3483 #if __STD_C
3484 Void_t* mEMALIGn(size_t alignment, size_t bytes)
3485 #else
3486 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
3487 #endif
3489 arena *ar_ptr;
3490 INTERNAL_SIZE_T nb; /* padded request size */
3491 mchunkptr p;
3493 #if defined _LIBC || defined MALLOC_HOOKS
3494 if (__memalign_hook != NULL) {
3495 Void_t* result;
3497 #if defined __GNUC__ && __GNUC__ >= 2
3498 result = (*__memalign_hook)(alignment, bytes, RETURN_ADDRESS (0));
3499 #else
3500 result = (*__memalign_hook)(alignment, bytes, NULL);
3501 #endif
3502 return result;
3504 #endif
3506 /* If need less alignment than we give anyway, just relay to malloc */
3508 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
3510 /* Otherwise, ensure that it is at least a minimum chunk size */
3512 if (alignment < MINSIZE) alignment = MINSIZE;
3514 if(request2size(bytes, nb))
3515 return 0;
3516 arena_get(ar_ptr, nb + alignment + MINSIZE);
3517 if(!ar_ptr)
3518 return 0;
3519 p = chunk_align(ar_ptr, nb, alignment);
3520 (void)mutex_unlock(&ar_ptr->mutex);
3521 if(!p) {
3522 /* Maybe the failure is due to running out of mmapped areas. */
3523 if(ar_ptr != &main_arena) {
3524 (void)mutex_lock(&main_arena.mutex);
3525 p = chunk_align(&main_arena, nb, alignment);
3526 (void)mutex_unlock(&main_arena.mutex);
3527 } else {
3528 #if USE_ARENAS
3529 /* ... or sbrk() has failed and there is still a chance to mmap() */
3530 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
3531 if(ar_ptr) {
3532 p = chunk_align(ar_ptr, nb, alignment);
3533 (void)mutex_unlock(&ar_ptr->mutex);
3535 #endif
3537 if(!p) return 0;
3539 return BOUNDED_N(chunk2mem(p), bytes);
3542 static mchunkptr
3543 internal_function
3544 #if __STD_C
3545 chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
3546 #else
3547 chunk_align(ar_ptr, nb, alignment)
3548 arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
3549 #endif
3551 unsigned long m; /* memory returned by malloc call */
3552 mchunkptr p; /* corresponding chunk */
3553 char* brk; /* alignment point within p */
3554 mchunkptr newp; /* chunk to return */
3555 INTERNAL_SIZE_T newsize; /* its size */
3556 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3557 mchunkptr remainder; /* spare room at end to split off */
3558 long remainder_size; /* its size */
3560 /* Call chunk_alloc with worst case padding to hit alignment. */
3561 p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
3562 if (p == 0)
3563 return 0; /* propagate failure */
3565 m = (unsigned long)chunk2mem(p);
3567 if ((m % alignment) == 0) /* aligned */
3569 #if HAVE_MMAP
3570 if(chunk_is_mmapped(p)) {
3571 return p; /* nothing more to do */
3573 #endif
3575 else /* misaligned */
3578 Find an aligned spot inside chunk.
3579 Since we need to give back leading space in a chunk of at
3580 least MINSIZE, if the first calculation places us at
3581 a spot with less than MINSIZE leader, we can move to the
3582 next aligned spot -- we've allocated enough total room so that
3583 this is always possible.
3586 brk = (char*)mem2chunk(((m + alignment - 1)) & -(long)alignment);
3587 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;
3589 newp = chunk_at_offset(brk, 0);
3590 leadsize = brk - (char*)(p);
3591 newsize = chunksize(p) - leadsize;
3593 #if HAVE_MMAP
3594 if(chunk_is_mmapped(p))
3596 newp->prev_size = p->prev_size + leadsize;
3597 set_head(newp, newsize|IS_MMAPPED);
3598 return newp;
3600 #endif
3602 /* give back leader, use the rest */
3604 set_head(newp, newsize | PREV_INUSE);
3605 set_inuse_bit_at_offset(newp, newsize);
3606 set_head_size(p, leadsize);
3607 chunk_free(ar_ptr, p);
3608 p = newp;
3610 assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3613 /* Also give back spare room at the end */
3615 remainder_size = chunksize(p) - nb;
3617 if (remainder_size >= (long)MINSIZE)
3619 remainder = chunk_at_offset(p, nb);
3620 set_head(remainder, remainder_size | PREV_INUSE);
3621 set_head_size(p, nb);
3622 chunk_free(ar_ptr, remainder);
3625 check_inuse_chunk(ar_ptr, p);
3626 return p;
3633 valloc just invokes memalign with alignment argument equal
3634 to the page size of the system (or as near to this as can
3635 be figured out from all the includes/defines above.)
3638 #if __STD_C
3639 Void_t* vALLOc(size_t bytes)
3640 #else
3641 Void_t* vALLOc(bytes) size_t bytes;
3642 #endif
3644 if(__malloc_initialized < 0)
3645 ptmalloc_init ();
3646 return mEMALIGn (malloc_getpagesize, bytes);
3650 pvalloc just allocates page-aligned memory (like valloc), rounding the
3651 request up to the nearest multiple of the page size
3655 #if __STD_C
3656 Void_t* pvALLOc(size_t bytes)
3657 #else
3658 Void_t* pvALLOc(bytes) size_t bytes;
3659 #endif
3661 size_t pagesize;
3662 if(__malloc_initialized < 0)
3663 ptmalloc_init ();
3664 pagesize = malloc_getpagesize;
3665 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3670 calloc calls chunk_alloc, then zeroes out the allocated chunk.
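/* Caller-side sketch: calloc takes an element count and an element size
   and returns zero-filled memory for their product.  As the code below
   shows, the clearing can be skipped when the memory is known to be
   freshly obtained from the system: */
#if 0
#include <stdlib.h>

static double *
zeroed_array(size_t n)
{
  /* Behaves like malloc(n * sizeof(double)) followed by clearing. */
  return calloc(n, sizeof(double));
}
#endif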
3674 #if __STD_C
3675 Void_t* cALLOc(size_t n, size_t elem_size)
3676 #else
3677 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3678 #endif
3680 arena *ar_ptr;
3681 mchunkptr p, oldtop;
3682 INTERNAL_SIZE_T sz, csz, oldtopsize;
3683 Void_t* mem;
3685 #if defined _LIBC || defined MALLOC_HOOKS
3686 if (__malloc_hook != NULL) {
3687 sz = n * elem_size;
3688 #if defined __GNUC__ && __GNUC__ >= 2
3689 mem = (*__malloc_hook)(sz, RETURN_ADDRESS (0));
3690 #else
3691 mem = (*__malloc_hook)(sz, NULL);
3692 #endif
3693 if(mem == 0)
3694 return 0;
3695 #ifdef HAVE_MEMSET
3696 return memset(mem, 0, sz);
3697 #else
3698 while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
3699 return mem;
3700 #endif
3702 #endif
3704 if(request2size(n * elem_size, sz))
3705 return 0;
3706 arena_get(ar_ptr, sz);
3707 if(!ar_ptr)
3708 return 0;
3710 /* Check whether malloc_extend_top was called, in which case there may be
3711 no need to clear. */
3712 #if MORECORE_CLEARS
3713 oldtop = top(ar_ptr);
3714 oldtopsize = chunksize(top(ar_ptr));
3715 #if MORECORE_CLEARS < 2
3716 /* Only newly allocated memory is guaranteed to be cleared. */
3717 if (ar_ptr == &main_arena &&
3718 oldtopsize < sbrk_base + max_sbrked_mem - (char *)oldtop)
3719 oldtopsize = (sbrk_base + max_sbrked_mem - (char *)oldtop);
3720 #endif
3721 #endif
3722 p = chunk_alloc (ar_ptr, sz);
3724 /* Only clearing follows, so we can unlock early. */
3725 (void)mutex_unlock(&ar_ptr->mutex);
3727 if (p == 0) {
3728 /* Maybe the failure is due to running out of mmapped areas. */
3729 if(ar_ptr != &main_arena) {
3730 (void)mutex_lock(&main_arena.mutex);
3731 p = chunk_alloc(&main_arena, sz);
3732 (void)mutex_unlock(&main_arena.mutex);
3733 } else {
3734 #if USE_ARENAS
3735 /* ... or sbrk() has failed and there is still a chance to mmap() */
3736 (void)mutex_lock(&main_arena.mutex);
3737 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, sz);
3738 (void)mutex_unlock(&main_arena.mutex);
3739 if(ar_ptr) {
3740 p = chunk_alloc(ar_ptr, sz);
3741 (void)mutex_unlock(&ar_ptr->mutex);
3743 #endif
3745 if (p == 0) return 0;
3747 mem = BOUNDED_N(chunk2mem(p), n * elem_size);
3749 /* Two optional cases in which clearing is not necessary */
3751 #if HAVE_MMAP
3752 if (chunk_is_mmapped(p)) return mem;
3753 #endif
3755 csz = chunksize(p);
3757 #if MORECORE_CLEARS
3758 if (p == oldtop && csz > oldtopsize) {
3759 /* clear only the bytes from non-freshly-sbrked memory */
3760 csz = oldtopsize;
3762 #endif
3764 csz -= SIZE_SZ;
3765 MALLOC_ZERO(BOUNDED_N(chunk2mem(p), csz), csz);
3766 return mem;
3771 cfree just calls free. It is needed/defined on some systems
3772 that pair it with calloc, presumably for odd historical reasons.
3776 #if !defined(_LIBC)
3777 #if __STD_C
3778 void cfree(Void_t *mem)
3779 #else
3780 void cfree(mem) Void_t *mem;
3781 #endif
3783 fREe(mem);
3785 #endif
3791 Malloc_trim gives memory back to the system (via negative
3792 arguments to sbrk) if there is unused memory at the `high' end of
3793 the malloc pool. You can call this after freeing large blocks of
3794 memory to potentially reduce the system-level memory requirements
3795 of a program. However, it cannot guarantee to reduce memory usage. Under
3796 some allocation patterns, some large free blocks of memory will be
3797 locked between two used chunks, so they cannot be given back to
3798 the system.
3800 The `pad' argument to malloc_trim represents the amount of free
3801 trailing space to leave untrimmed. If this argument is zero,
3802 only the minimum amount of memory to maintain internal data
3803 structures will be left (one page or less). Non-zero arguments
3804 can be supplied to maintain enough trailing space to service
3805 future expected allocations without having to re-obtain memory
3806 from the system.
3808 Malloc_trim returns 1 if it actually released any memory, else 0.
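/* Illustrative sketch, not part of the allocator itself: releasing memory
   back to the system after a large, temporary allocation phase. The pad
   value of 128*1024 is an arbitrary example, chosen to keep some headroom
   for future allocations. */
#if 0
void example_trim(void)
{
  void *big = malloc(16 * 1024 * 1024);
  /* ... use the buffer ... */
  free(big);

  /* Ask the allocator to return unused memory at the top of the heap,
     keeping roughly 128kB of free trailing space untrimmed. The return
     value is 1 only if memory was actually given back. */
  if (malloc_trim(128 * 1024))
    fprintf(stderr, "returned unused heap memory to the system\n");
}
#endif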
3812 #if __STD_C
3813 int mALLOC_TRIm(size_t pad)
3814 #else
3815 int mALLOC_TRIm(pad) size_t pad;
3816 #endif
3818 int res;
3820 (void)mutex_lock(&main_arena.mutex);
3821 res = main_trim(pad);
3822 (void)mutex_unlock(&main_arena.mutex);
3823 return res;
3826 /* Trim the main arena. */
3828 static int
3829 internal_function
3830 #if __STD_C
3831 main_trim(size_t pad)
3832 #else
3833 main_trim(pad) size_t pad;
3834 #endif
3836 mchunkptr top_chunk; /* The current top chunk */
3837 long top_size; /* Amount of top-most memory */
3838 long extra; /* Amount to release */
3839 char* current_brk; /* address returned by pre-check sbrk call */
3840 char* new_brk; /* address returned by negative sbrk call */
3842 unsigned long pagesz = malloc_getpagesize;
3844 top_chunk = top(&main_arena);
3845 top_size = chunksize(top_chunk);
3846 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3848 if (extra < (long)pagesz) /* Not enough memory to release */
3849 return 0;
3851 /* Test to make sure no one else called sbrk */
3852 current_brk = (char*)(MORECORE (0));
3853 if (current_brk != (char*)(top_chunk) + top_size)
3854 return 0; /* Apparently we don't own memory; must fail */
3856 new_brk = (char*)(MORECORE (-extra));
3858 #if defined _LIBC || defined MALLOC_HOOKS
3859 /* Call the `morecore' hook if necessary. */
3860 if (__after_morecore_hook)
3861 (*__after_morecore_hook) ();
3862 #endif
3864 if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
3865 /* Try to figure out what we have */
3866 current_brk = (char*)(MORECORE (0));
3867 top_size = current_brk - (char*)top_chunk;
3868 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3870 sbrked_mem = current_brk - sbrk_base;
3871 set_head(top_chunk, top_size | PREV_INUSE);
3873 check_chunk(&main_arena, top_chunk);
3874 return 0;
3876 sbrked_mem -= extra;
3878 /* Success. Adjust top accordingly. */
3879 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3880 check_chunk(&main_arena, top_chunk);
3881 return 1;
3884 #if USE_ARENAS
3886 static int
3887 internal_function
3888 #if __STD_C
3889 heap_trim(heap_info *heap, size_t pad)
3890 #else
3891 heap_trim(heap, pad) heap_info *heap; size_t pad;
3892 #endif
3894 unsigned long pagesz = malloc_getpagesize;
3895 arena *ar_ptr = heap->ar_ptr;
3896 mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
3897 heap_info *prev_heap;
3898 long new_size, top_size, extra;
3900 /* Can this heap go away completely? */
3901 while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
3902 prev_heap = heap->prev;
3903 p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
3904 assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
3905 p = prev_chunk(p);
3906 new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
3907 assert(new_size>0 && new_size<(long)(2*MINSIZE));
3908 if(!prev_inuse(p))
3909 new_size += p->prev_size;
3910 assert(new_size>0 && new_size<HEAP_MAX_SIZE);
3911 if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
3912 break;
3913 ar_ptr->size -= heap->size;
3914 arena_mem -= heap->size;
3915 delete_heap(heap);
3916 heap = prev_heap;
3917 if(!prev_inuse(p)) { /* consolidate backward */
3918 p = prev_chunk(p);
3919 unlink(p, bck, fwd);
3921 assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
3922 assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
3923 top(ar_ptr) = top_chunk = p;
3924 set_head(top_chunk, new_size | PREV_INUSE);
3925 check_chunk(ar_ptr, top_chunk);
3927 top_size = chunksize(top_chunk);
3928 extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
3929 if(extra < (long)pagesz)
3930 return 0;
3931 /* Try to shrink. */
3932 if(grow_heap(heap, -extra) != 0)
3933 return 0;
3934 ar_ptr->size -= extra;
3935 arena_mem -= extra;
3937 /* Success. Adjust top accordingly. */
3938 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3939 check_chunk(ar_ptr, top_chunk);
3940 return 1;
3943 #endif /* USE_ARENAS */
3948 malloc_usable_size:
3950 This routine tells you how many bytes you can actually use in an
3951 allocated chunk, which may be more than you requested (although
3952 often not). You can use this many bytes without worrying about
3953 overwriting other allocated objects. Not a particularly great
3954 programming practice, but still sometimes useful.
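/* Illustrative sketch, not part of the allocator itself: the usable size
   of a chunk may exceed the requested size because of alignment and
   binning, and all of it can be written safely. Function name and sizes
   are arbitrary. */
#if 0
void example_usable_size(void)
{
  char *p = malloc(100);
  if (p != NULL) {
    size_t n = malloc_usable_size(p);   /* n >= 100 */
    memset(p, 0, n);                    /* safe: whole usable area */
    free(p);
  }
}
#endif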
3958 #if __STD_C
3959 size_t mALLOC_USABLE_SIZe(Void_t* mem)
3960 #else
3961 size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
3962 #endif
3964 mchunkptr p;
3966 if (mem == 0)
3967 return 0;
3968 else
3970 p = mem2chunk(mem);
3971 if(!chunk_is_mmapped(p))
3973 if (!inuse(p)) return 0;
3974 check_inuse_chunk(arena_for_ptr(mem), p);
3975 return chunksize(p) - SIZE_SZ;
3977 return chunksize(p) - 2*SIZE_SZ;
3984 /* Utility to update mallinfo for malloc_stats() and mallinfo() */
3986 static void
3987 #if __STD_C
3988 malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
3989 #else
3990 malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
3991 #endif
3993 int i, navail;
3994 mbinptr b;
3995 mchunkptr p;
3996 #if MALLOC_DEBUG
3997 mchunkptr q;
3998 #endif
3999 INTERNAL_SIZE_T avail;
4001 (void)mutex_lock(&ar_ptr->mutex);
4002 avail = chunksize(top(ar_ptr));
4003 navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
4005 for (i = 1; i < NAV; ++i)
4007 b = bin_at(ar_ptr, i);
4008 for (p = last(b); p != b; p = p->bk)
4010 #if MALLOC_DEBUG
4011 check_free_chunk(ar_ptr, p);
4012 for (q = next_chunk(p);
4013 q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
4014 q = next_chunk(q))
4015 check_inuse_chunk(ar_ptr, q);
4016 #endif
4017 avail += chunksize(p);
4018 navail++;
4022 mi->arena = ar_ptr->size;
4023 mi->ordblks = navail;
4024 mi->smblks = mi->usmblks = mi->fsmblks = 0; /* clear unused fields */
4025 mi->uordblks = ar_ptr->size - avail;
4026 mi->fordblks = avail;
4027 mi->hblks = n_mmaps;
4028 mi->hblkhd = mmapped_mem;
4029 mi->keepcost = chunksize(top(ar_ptr));
4031 (void)mutex_unlock(&ar_ptr->mutex);
4034 #if USE_ARENAS && MALLOC_DEBUG > 1
4036 /* Print the complete contents of a single heap to stderr. */
4038 static void
4039 #if __STD_C
4040 dump_heap(heap_info *heap)
4041 #else
4042 dump_heap(heap) heap_info *heap;
4043 #endif
4045 char *ptr;
4046 mchunkptr p;
4048 fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
4049 ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
4050 (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
4051 p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
4052 ~MALLOC_ALIGN_MASK);
4053 for(;;) {
4054 fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
4055 if(p == top(heap->ar_ptr)) {
4056 fprintf(stderr, " (top)\n");
4057 break;
4058 } else if(p->size == (0|PREV_INUSE)) {
4059 fprintf(stderr, " (fence)\n");
4060 break;
4062 fprintf(stderr, "\n");
4063 p = next_chunk(p);
4067 #endif
4073 malloc_stats:
4075 For all arenas separately and in total, prints on stderr the
4076 amount of space obtained from the system, and the current number
4077 of bytes allocated via malloc (or realloc, etc) but not yet
4078 freed. (Note that this is the number of bytes allocated, not the
4079 number requested. It will be larger than the number requested
4080 because of alignment and bookkeeping overhead.) When not compiled
4081 for multiple threads, the maximum amount of allocated memory
4082 (which may be more than current if malloc_trim and/or munmap got
4083 called) is also reported. When using mmap(), prints the maximum
4084 number of simultaneous mmap regions used, too.
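/* Illustrative sketch, not part of the allocator itself: printing the
   statistics once at program exit via an atexit() handler. The handler
   name is hypothetical. */
#if 0
static void report_malloc_usage(void)
{
  malloc_stats();               /* per-arena and total usage on stderr */
}

void example_install_report(void)
{
  atexit(report_malloc_usage);
}
#endif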
4088 void mALLOC_STATs()
4090 int i;
4091 arena *ar_ptr;
4092 struct mallinfo mi;
4093 unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
4094 #if THREAD_STATS
4095 long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
4096 #endif
4098 for(i=0, ar_ptr = &main_arena;; i++) {
4099 malloc_update_mallinfo(ar_ptr, &mi);
4100 fprintf(stderr, "Arena %d:\n", i);
4101 fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena);
4102 fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks);
4103 system_b += mi.arena;
4104 in_use_b += mi.uordblks;
4105 #if THREAD_STATS
4106 stat_lock_direct += ar_ptr->stat_lock_direct;
4107 stat_lock_loop += ar_ptr->stat_lock_loop;
4108 stat_lock_wait += ar_ptr->stat_lock_wait;
4109 #endif
4110 #if USE_ARENAS && MALLOC_DEBUG > 1
4111 if(ar_ptr != &main_arena) {
4112 heap_info *heap;
4113 (void)mutex_lock(&ar_ptr->mutex);
4114 heap = heap_for_ptr(top(ar_ptr));
4115 while(heap) { dump_heap(heap); heap = heap->prev; }
4116 (void)mutex_unlock(&ar_ptr->mutex);
4118 #endif
4119 ar_ptr = ar_ptr->next;
4120 if(ar_ptr == &main_arena) break;
4122 #if HAVE_MMAP
4123 fprintf(stderr, "Total (incl. mmap):\n");
4124 #else
4125 fprintf(stderr, "Total:\n");
4126 #endif
4127 fprintf(stderr, "system bytes = %10u\n", system_b);
4128 fprintf(stderr, "in use bytes = %10u\n", in_use_b);
4129 #ifdef NO_THREADS
4130 fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
4131 #endif
4132 #if HAVE_MMAP
4133 fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
4134 fprintf(stderr, "max mmap bytes = %10lu\n", max_mmapped_mem);
4135 #endif
4136 #if THREAD_STATS
4137 fprintf(stderr, "heaps created = %10d\n", stat_n_heaps);
4138 fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct);
4139 fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop);
4140 fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait);
4141 fprintf(stderr, "locked total = %10ld\n",
4142 stat_lock_direct + stat_lock_loop + stat_lock_wait);
4143 #endif
4147 mallinfo returns a copy of the current, freshly updated mallinfo structure.
4148 The information reported is for the arena last used by the thread.
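/* Illustrative sketch, not part of the allocator itself: querying the
   statistics programmatically instead of printing them. The fields shown
   are the ones filled in by malloc_update_mallinfo() above. */
#if 0
void example_mallinfo(void)
{
  struct mallinfo mi = mallinfo();
  fprintf(stderr, "bytes from system : %d\n", mi.arena);
  fprintf(stderr, "bytes in use      : %d\n", mi.uordblks);
  fprintf(stderr, "bytes free        : %d\n", mi.fordblks);
  fprintf(stderr, "mmapped bytes     : %d\n", mi.hblkhd);
  fprintf(stderr, "trimmable topmost : %d\n", mi.keepcost);
}
#endif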
4151 struct mallinfo mALLINFo()
4153 struct mallinfo mi;
4154 Void_t *vptr = NULL;
4156 #ifndef NO_THREADS
4157 tsd_getspecific(arena_key, vptr);
4158 #endif
4159 malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
4160 return mi;
4167 mallopt:
4169 mallopt is the general SVID/XPG interface to tunable parameters.
4170 The format is to provide a (parameter-number, parameter-value) pair.
4171 mallopt then sets the corresponding parameter to the argument
4172 value if it can (i.e., so long as the value is meaningful),
4173 and returns 1 if successful else 0.
4175 See descriptions of tunable parameters above.
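/* Illustrative sketch, not part of the allocator itself: tuning a few
   parameters with mallopt. The particular values are examples only; see
   the parameter descriptions above for meanings and defaults. */
#if 0
void example_mallopt(void)
{
  mallopt(M_TRIM_THRESHOLD, 256 * 1024); /* trim when >256kB is free on top */
  mallopt(M_TOP_PAD,        64 * 1024);  /* extra padding on each sbrk request */
  mallopt(M_MMAP_THRESHOLD, 256 * 1024); /* use mmap() for requests above 256kB */
  mallopt(M_MMAP_MAX,       1024);       /* at most 1024 mmapped regions */
}
#endif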
4179 #if __STD_C
4180 int mALLOPt(int param_number, int value)
4181 #else
4182 int mALLOPt(param_number, value) int param_number; int value;
4183 #endif
4185 switch(param_number)
4187 case M_TRIM_THRESHOLD:
4188 trim_threshold = value; return 1;
4189 case M_TOP_PAD:
4190 top_pad = value; return 1;
4191 case M_MMAP_THRESHOLD:
4192 #if USE_ARENAS
4193 /* Forbid setting the threshold too high. */
4194 if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
4195 #endif
4196 mmap_threshold = value; return 1;
4197 case M_MMAP_MAX:
4198 #if HAVE_MMAP
4199 n_mmaps_max = value; return 1;
4200 #else
4201 if (value != 0) return 0; else n_mmaps_max = value; return 1;
4202 #endif
4203 case M_CHECK_ACTION:
4204 check_action = value; return 1;
4206 default:
4207 return 0;
4213 /* Get/set state: malloc_get_state() records the current state of all
4214 malloc variables (_except_ for the actual heap contents and `hook'
4215 function pointers) in a system dependent, opaque data structure.
4216 This data structure is dynamically allocated and can be free()d
4217 after use. malloc_set_state() restores the state of all malloc
4218 variables to the previously obtained state. This is especially
4219 useful when using this malloc as part of a shared library, and when
4220 the heap contents are saved/restored via some other method. The
4221 primary example for this is GNU Emacs with its `dumping' procedure.
4222 `Hook' function pointers are never saved or restored by these
4223 functions, with two exceptions: If malloc checking was in use when
4224 malloc_get_state() was called, then malloc_set_state() calls
4225 __malloc_check_init() if possible; if malloc checking was not in
4226 use in the recorded state but the user requested malloc checking,
4227 then the hooks are reset to 0. */
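/* Illustrative sketch, not part of the allocator itself: the save/restore
   pattern described above. How the opaque state blob and the heap
   contents themselves are written out and read back is entirely up to
   the application and is only hinted at here. */
#if 0
void example_dump(void)
{
  void *state = malloc_get_state();     /* malloc()ed, opaque */
  if (state != NULL) {
    /* ... save `state' and the heap contents to the dump file ... */
    free(state);
  }
}

void example_restore(void *state_from_dump)
{
  /* Called after the dumped heap contents have been mapped back in. */
  if (malloc_set_state(state_from_dump) != 0)
    fprintf(stderr, "incompatible or corrupt malloc state\n");
}
#endif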
4229 #define MALLOC_STATE_MAGIC 0x444c4541l
4230 #define MALLOC_STATE_VERSION (0*0x100l + 1l) /* major*0x100 + minor */
4232 struct malloc_state {
4233 long magic;
4234 long version;
4235 mbinptr av[NAV * 2 + 2];
4236 char* sbrk_base;
4237 int sbrked_mem_bytes;
4238 unsigned long trim_threshold;
4239 unsigned long top_pad;
4240 unsigned int n_mmaps_max;
4241 unsigned long mmap_threshold;
4242 int check_action;
4243 unsigned long max_sbrked_mem;
4244 unsigned long max_total_mem;
4245 unsigned int n_mmaps;
4246 unsigned int max_n_mmaps;
4247 unsigned long mmapped_mem;
4248 unsigned long max_mmapped_mem;
4249 int using_malloc_checking;
4252 Void_t*
4253 mALLOC_GET_STATe()
4255 struct malloc_state* ms;
4256 int i;
4257 mbinptr b;
4259 ms = (struct malloc_state*)mALLOc(sizeof(*ms));
4260 if (!ms)
4261 return 0;
4262 (void)mutex_lock(&main_arena.mutex);
4263 ms->magic = MALLOC_STATE_MAGIC;
4264 ms->version = MALLOC_STATE_VERSION;
4265 ms->av[0] = main_arena.av[0];
4266 ms->av[1] = main_arena.av[1];
4267 for(i=0; i<NAV; i++) {
4268 b = bin_at(&main_arena, i);
4269 if(first(b) == b)
4270 ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */
4271 else {
4272 ms->av[2*i+2] = first(b);
4273 ms->av[2*i+3] = last(b);
4276 ms->sbrk_base = sbrk_base;
4277 ms->sbrked_mem_bytes = sbrked_mem;
4278 ms->trim_threshold = trim_threshold;
4279 ms->top_pad = top_pad;
4280 ms->n_mmaps_max = n_mmaps_max;
4281 ms->mmap_threshold = mmap_threshold;
4282 ms->check_action = check_action;
4283 ms->max_sbrked_mem = max_sbrked_mem;
4284 #ifdef NO_THREADS
4285 ms->max_total_mem = max_total_mem;
4286 #else
4287 ms->max_total_mem = 0;
4288 #endif
4289 ms->n_mmaps = n_mmaps;
4290 ms->max_n_mmaps = max_n_mmaps;
4291 ms->mmapped_mem = mmapped_mem;
4292 ms->max_mmapped_mem = max_mmapped_mem;
4293 #if defined _LIBC || defined MALLOC_HOOKS
4294 ms->using_malloc_checking = using_malloc_checking;
4295 #else
4296 ms->using_malloc_checking = 0;
4297 #endif
4298 (void)mutex_unlock(&main_arena.mutex);
4299 return (Void_t*)ms;
4303 #if __STD_C
4304 mALLOC_SET_STATe(Void_t* msptr)
4305 #else
4306 mALLOC_SET_STATe(msptr) Void_t* msptr;
4307 #endif
4309 struct malloc_state* ms = (struct malloc_state*)msptr;
4310 int i;
4311 mbinptr b;
4313 #if defined _LIBC || defined MALLOC_HOOKS
4314 disallow_malloc_check = 1;
4315 #endif
4316 ptmalloc_init();
4317 if(ms->magic != MALLOC_STATE_MAGIC) return -1;
4318 /* Must fail if the major version is too high. */
4319 if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2;
4320 (void)mutex_lock(&main_arena.mutex);
4321 main_arena.av[0] = ms->av[0];
4322 main_arena.av[1] = ms->av[1];
4323 for(i=0; i<NAV; i++) {
4324 b = bin_at(&main_arena, i);
4325 if(ms->av[2*i+2] == 0)
4326 first(b) = last(b) = b;
4327 else {
4328 first(b) = ms->av[2*i+2];
4329 last(b) = ms->av[2*i+3];
4330 if(i > 0) {
4331 /* Make sure the links to the `av'-bins in the heap are correct. */
4332 first(b)->bk = b;
4333 last(b)->fd = b;
4337 sbrk_base = ms->sbrk_base;
4338 sbrked_mem = ms->sbrked_mem_bytes;
4339 trim_threshold = ms->trim_threshold;
4340 top_pad = ms->top_pad;
4341 n_mmaps_max = ms->n_mmaps_max;
4342 mmap_threshold = ms->mmap_threshold;
4343 check_action = ms->check_action;
4344 max_sbrked_mem = ms->max_sbrked_mem;
4345 #ifdef NO_THREADS
4346 max_total_mem = ms->max_total_mem;
4347 #endif
4348 n_mmaps = ms->n_mmaps;
4349 max_n_mmaps = ms->max_n_mmaps;
4350 mmapped_mem = ms->mmapped_mem;
4351 max_mmapped_mem = ms->max_mmapped_mem;
4352 /* add version-dependent code here */
4353 if (ms->version >= 1) {
4354 #if defined _LIBC || defined MALLOC_HOOKS
4355 /* Check whether it is safe to enable malloc checking, or whether
4356 it is necessary to disable it. */
4357 if (ms->using_malloc_checking && !using_malloc_checking &&
4358 !disallow_malloc_check)
4359 __malloc_check_init ();
4360 else if (!ms->using_malloc_checking && using_malloc_checking) {
4361 __malloc_hook = 0;
4362 __free_hook = 0;
4363 __realloc_hook = 0;
4364 __memalign_hook = 0;
4365 using_malloc_checking = 0;
4367 #endif
4370 (void)mutex_unlock(&main_arena.mutex);
4371 return 0;
4376 #if defined _LIBC || defined MALLOC_HOOKS
4378 /* A simple, standard set of debugging hooks. Overhead is `only' one
4379 byte per chunk; even so, this will catch most cases of double frees or
4380 overruns. The goal here is to avoid obscure crashes due to invalid
4381 usage, unlike in the MALLOC_DEBUG code. */
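/* Added explanatory note (a sketch of the mechanism implemented below):
   for a chunk with user pointer `mem' and requested size `sz', a magic
   byte derived from the chunk address is stored at mem[sz]. Above it,
   working down from the last usable byte of the chunk in steps of 255,
   one marker byte is written per step: 0xFF for each full step, and the
   lowest marker holds the remaining distance (< 0x100) down towards
   mem[sz]. mem2chunk_check() starts at the top of the chunk, follows
   these distance bytes downwards until it reads the expected magic byte,
   and then flips that byte so that a second free() of the same pointer
   no longer verifies (catching double frees). A write past mem[sz]
   clobbers the magic byte and is normally caught the same way. */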
4383 #define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )
4385 /* Instrument a chunk with overrun detector byte(s) and convert it
4386 into a user pointer with requested size sz. */
4388 static Void_t*
4389 internal_function
4390 #if __STD_C
4391 chunk2mem_check(mchunkptr p, size_t sz)
4392 #else
4393 chunk2mem_check(p, sz) mchunkptr p; size_t sz;
4394 #endif
4396 unsigned char* m_ptr = (unsigned char*)BOUNDED_N(chunk2mem(p), sz);
4397 size_t i;
4399 for(i = chunksize(p) - (chunk_is_mmapped(p) ? 2*SIZE_SZ+1 : SIZE_SZ+1);
4400 i > sz;
4401 i -= 0xFF) {
4402 if(i-sz < 0x100) {
4403 m_ptr[i] = (unsigned char)(i-sz);
4404 break;
4406 m_ptr[i] = 0xFF;
4408 m_ptr[sz] = MAGICBYTE(p);
4409 return (Void_t*)m_ptr;
4412 /* Convert a pointer to be free()d or realloc()ed to a valid chunk
4413 pointer. If the provided pointer is not valid, return NULL. */
4415 static mchunkptr
4416 internal_function
4417 #if __STD_C
4418 mem2chunk_check(Void_t* mem)
4419 #else
4420 mem2chunk_check(mem) Void_t* mem;
4421 #endif
4423 mchunkptr p;
4424 INTERNAL_SIZE_T sz, c;
4425 unsigned char magic;
4427 p = mem2chunk(mem);
4428 if(!aligned_OK(p)) return NULL;
4429 if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
4430 /* Must be a chunk in conventional heap memory. */
4431 if(chunk_is_mmapped(p) ||
4432 ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
4433 sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
4434 ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
4435 (long)prev_chunk(p)<(long)sbrk_base ||
4436 next_chunk(prev_chunk(p))!=p) ))
4437 return NULL;
4438 magic = MAGICBYTE(p);
4439 for(sz += SIZE_SZ-1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4440 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4442 ((unsigned char*)p)[sz] ^= 0xFF;
4443 } else {
4444 unsigned long offset, page_mask = malloc_getpagesize-1;
4446 /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
4447 alignment relative to the beginning of a page. Check this
4448 first. */
4449 offset = (unsigned long)mem & page_mask;
4450 if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
4451 offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
4452 offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 &&
4453 offset<0x2000) ||
4454 !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
4455 ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
4456 ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
4457 return NULL;
4458 magic = MAGICBYTE(p);
4459 for(sz -= 1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4460 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4462 ((unsigned char*)p)[sz] ^= 0xFF;
4464 return p;
4467 /* Check for corruption of the top chunk, and try to recover if
4468 necessary. */
4470 static int
4471 internal_function
4472 #if __STD_C
4473 top_check(void)
4474 #else
4475 top_check()
4476 #endif
4478 mchunkptr t = top(&main_arena);
4479 char* brk, * new_brk;
4480 INTERNAL_SIZE_T front_misalign, sbrk_size;
4481 unsigned long pagesz = malloc_getpagesize;
4483 if((char*)t + chunksize(t) == sbrk_base + sbrked_mem ||
4484 t == initial_top(&main_arena)) return 0;
4486 if(check_action & 1)
4487 fprintf(stderr, "malloc: top chunk is corrupt\n");
4488 if(check_action & 2)
4489 abort();
4491 /* Try to set up a new top chunk. */
4492 brk = MORECORE(0);
4493 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
4494 if (front_misalign > 0)
4495 front_misalign = MALLOC_ALIGNMENT - front_misalign;
4496 sbrk_size = front_misalign + top_pad + MINSIZE;
4497 sbrk_size += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
4498 new_brk = (char*)(MORECORE (sbrk_size));
4499 if (new_brk == (char*)(MORECORE_FAILURE)) return -1;
4500 sbrked_mem = (new_brk - sbrk_base) + sbrk_size;
4502 top(&main_arena) = (mchunkptr)(brk + front_misalign);
4503 set_head(top(&main_arena), (sbrk_size - front_misalign) | PREV_INUSE);
4505 return 0;
4508 static Void_t*
4509 #if __STD_C
4510 malloc_check(size_t sz, const Void_t *caller)
4511 #else
4512 malloc_check(sz, caller) size_t sz; const Void_t *caller;
4513 #endif
4515 mchunkptr victim;
4516 INTERNAL_SIZE_T nb;
4518 if(request2size(sz+1, nb))
4519 return 0;
4520 (void)mutex_lock(&main_arena.mutex);
4521 victim = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4522 (void)mutex_unlock(&main_arena.mutex);
4523 if(!victim) return NULL;
4524 return chunk2mem_check(victim, sz);
4527 static void
4528 #if __STD_C
4529 free_check(Void_t* mem, const Void_t *caller)
4530 #else
4531 free_check(mem, caller) Void_t* mem; const Void_t *caller;
4532 #endif
4534 mchunkptr p;
4536 if(!mem) return;
4537 (void)mutex_lock(&main_arena.mutex);
4538 p = mem2chunk_check(mem);
4539 if(!p) {
4540 (void)mutex_unlock(&main_arena.mutex);
4541 if(check_action & 1)
4542 fprintf(stderr, "free(): invalid pointer %p!\n", mem);
4543 if(check_action & 2)
4544 abort();
4545 return;
4547 #if HAVE_MMAP
4548 if (chunk_is_mmapped(p)) {
4549 (void)mutex_unlock(&main_arena.mutex);
4550 munmap_chunk(p);
4551 return;
4553 #endif
4554 #if 0 /* Erase freed memory. */
4555 memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
4556 #endif
4557 chunk_free(&main_arena, p);
4558 (void)mutex_unlock(&main_arena.mutex);
4561 static Void_t*
4562 #if __STD_C
4563 realloc_check(Void_t* oldmem, size_t bytes, const Void_t *caller)
4564 #else
4565 realloc_check(oldmem, bytes, caller)
4566 Void_t* oldmem; size_t bytes; const Void_t *caller;
4567 #endif
4569 mchunkptr oldp, newp;
4570 INTERNAL_SIZE_T nb, oldsize;
4572 if (oldmem == 0) return malloc_check(bytes, NULL);
4573 (void)mutex_lock(&main_arena.mutex);
4574 oldp = mem2chunk_check(oldmem);
4575 if(!oldp) {
4576 (void)mutex_unlock(&main_arena.mutex);
4577 if(check_action & 1)
4578 fprintf(stderr, "realloc(): invalid pointer %p!\n", oldmem);
4579 if(check_action & 2)
4580 abort();
4581 return malloc_check(bytes, NULL);
4583 oldsize = chunksize(oldp);
4585 if(request2size(bytes+1, nb)) {
4586 (void)mutex_unlock(&main_arena.mutex);
4587 return 0;
4590 #if HAVE_MMAP
4591 if (chunk_is_mmapped(oldp)) {
4592 #if HAVE_MREMAP
4593 newp = mremap_chunk(oldp, nb);
4594 if(!newp) {
4595 #endif
4596 /* Note the extra SIZE_SZ overhead. */
4597 if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
4598 else {
4599 /* Must alloc, copy, free. */
4600 newp = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4601 if (newp) {
4602 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), nb),
4603 oldmem, oldsize - 2*SIZE_SZ);
4604 munmap_chunk(oldp);
4607 #if HAVE_MREMAP
4609 #endif
4610 } else {
4611 #endif /* HAVE_MMAP */
4612 newp = (top_check() >= 0) ?
4613 chunk_realloc(&main_arena, oldp, oldsize, nb) : NULL;
4614 #if 0 /* Erase freed memory. */
4615 nb = chunksize(newp);
4616 if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
4617 memset((char*)oldmem + 2*sizeof(mbinptr), 0,
4618 oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
4619 } else if(nb > oldsize+SIZE_SZ) {
4620 memset((char*)BOUNDED_N(chunk2mem(newp), bytes) + oldsize,
4621 0, nb - (oldsize+SIZE_SZ));
4623 #endif
4624 #if HAVE_MMAP
4626 #endif
4627 (void)mutex_unlock(&main_arena.mutex);
4629 if(!newp) return NULL;
4630 return chunk2mem_check(newp, bytes);
4633 static Void_t*
4634 #if __STD_C
4635 memalign_check(size_t alignment, size_t bytes, const Void_t *caller)
4636 #else
4637 memalign_check(alignment, bytes, caller)
4638 size_t alignment; size_t bytes; const Void_t *caller;
4639 #endif
4641 INTERNAL_SIZE_T nb;
4642 mchunkptr p;
4644 if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes, NULL);
4645 if (alignment < MINSIZE) alignment = MINSIZE;
4647 if(request2size(bytes+1, nb))
4648 return 0;
4649 (void)mutex_lock(&main_arena.mutex);
4650 p = (top_check() >= 0) ? chunk_align(&main_arena, nb, alignment) : NULL;
4651 (void)mutex_unlock(&main_arena.mutex);
4652 if(!p) return NULL;
4653 return chunk2mem_check(p, bytes);
4656 #ifndef NO_THREADS
4658 /* The following hooks are used when the global initialization in
4659 ptmalloc_init() hasn't completed yet. */
4661 static Void_t*
4662 #if __STD_C
4663 malloc_starter(size_t sz, const Void_t *caller)
4664 #else
4665 malloc_starter(sz, caller) size_t sz; const Void_t *caller;
4666 #endif
4668 INTERNAL_SIZE_T nb;
4669 mchunkptr victim;
4671 if(request2size(sz, nb))
4672 return 0;
4673 victim = chunk_alloc(&main_arena, nb);
4675 return victim ? BOUNDED_N(chunk2mem(victim), sz) : 0;
4678 static void
4679 #if __STD_C
4680 free_starter(Void_t* mem, const Void_t *caller)
4681 #else
4682 free_starter(mem, caller) Void_t* mem; const Void_t *caller;
4683 #endif
4685 mchunkptr p;
4687 if(!mem) return;
4688 p = mem2chunk(mem);
4689 #if HAVE_MMAP
4690 if (chunk_is_mmapped(p)) {
4691 munmap_chunk(p);
4692 return;
4694 #endif
4695 chunk_free(&main_arena, p);
4698 /* The following hooks are used while the `atfork' handling mechanism
4699 is active. */
4701 static Void_t*
4702 #if __STD_C
4703 malloc_atfork (size_t sz, const Void_t *caller)
4704 #else
4705 malloc_atfork(sz, caller) size_t sz; const Void_t *caller;
4706 #endif
4708 Void_t *vptr = NULL;
4709 INTERNAL_SIZE_T nb;
4710 mchunkptr victim;
4712 tsd_getspecific(arena_key, vptr);
4713 if(!vptr) {
4714 if(save_malloc_hook != malloc_check) {
4715 if(request2size(sz, nb))
4716 return 0;
4717 victim = chunk_alloc(&main_arena, nb);
4718 return victim ? BOUNDED_N(chunk2mem(victim), sz) : 0;
4719 } else {
4720 if(top_check()<0 || request2size(sz+1, nb))
4721 return 0;
4722 victim = chunk_alloc(&main_arena, nb);
4723 return victim ? chunk2mem_check(victim, sz) : 0;
4725 } else {
4726 /* Suspend the thread until the `atfork' handlers have completed.
4727 By that time, the hooks will have been reset as well, so that
4728 mALLOc() can be used again. */
4729 (void)mutex_lock(&list_lock);
4730 (void)mutex_unlock(&list_lock);
4731 return mALLOc(sz);
4735 static void
4736 #if __STD_C
4737 free_atfork(Void_t* mem, const Void_t *caller)
4738 #else
4739 free_atfork(mem, caller) Void_t* mem; const Void_t *caller;
4740 #endif
4742 Void_t *vptr = NULL;
4743 arena *ar_ptr;
4744 mchunkptr p; /* chunk corresponding to mem */
4746 if (mem == 0) /* free(0) has no effect */
4747 return;
4749 p = mem2chunk(mem); /* do not bother to replicate free_check here */
4751 #if HAVE_MMAP
4752 if (chunk_is_mmapped(p)) /* release mmapped memory. */
4754 munmap_chunk(p);
4755 return;
4757 #endif
4759 ar_ptr = arena_for_ptr(p);
4760 tsd_getspecific(arena_key, vptr);
4761 if(vptr)
4762 (void)mutex_lock(&ar_ptr->mutex);
4763 chunk_free(ar_ptr, p);
4764 if(vptr)
4765 (void)mutex_unlock(&ar_ptr->mutex);
4768 #endif /* !defined NO_THREADS */
4770 #endif /* defined _LIBC || defined MALLOC_HOOKS */
4774 #ifdef _LIBC
4775 /* We need a wrapper function for posix_memalign, one of the additions of POSIX. */
4776 int
4777 __posix_memalign (void **memptr, size_t alignment, size_t size)
4779 void *mem;
4781 /* Test whether the ALIGNMENT argument is valid. It must be a power of
4782 two multiple of sizeof (void *). */
4783 if (alignment % sizeof (void *) != 0 || (alignment & (alignment - 1)) != 0)
4784 return EINVAL;
4786 mem = __libc_memalign (alignment, size);
4788 if (mem != NULL)
4790 *memptr = mem;
4791 return 0;
4794 return ENOMEM;
4796 weak_alias (__posix_memalign, posix_memalign)
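/* Illustrative sketch, not part of the library itself: calling
   posix_memalign from an application. The 64-byte alignment and 4096-byte
   size are arbitrary; the alignment must be a power of two multiple of
   sizeof (void *). */
#if 0
void example_posix_memalign(void)
{
  void *buf;
  int err = posix_memalign(&buf, 64, 4096);
  if (err != 0) {
    /* err is EINVAL (bad alignment) or ENOMEM; errno is not set. */
    fprintf(stderr, "posix_memalign failed: %d\n", err);
    return;
  }
  free(buf);
}
#endif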
4798 weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
4799 weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
4800 weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
4801 weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
4802 weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
4803 weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
4804 weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
4805 weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
4806 weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
4807 weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
4809 weak_alias (__malloc_stats, malloc_stats)
4810 weak_alias (__malloc_usable_size, malloc_usable_size)
4811 weak_alias (__malloc_trim, malloc_trim)
4812 weak_alias (__malloc_get_state, malloc_get_state)
4813 weak_alias (__malloc_set_state, malloc_set_state)
4814 #endif
4818 History:
4820 V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4821 * Added malloc_get/set_state() (mainly for use in GNU emacs),
4822 using interface from Marcus Daniels
4823 * All parameters are now adjustable via environment variables
4825 V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4826 * Added debugging hooks
4827 * Fixed possible deadlock in realloc() when out of memory
4828 * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
4830 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4831 * Very minor updates from the released 2.6.4 version.
4832 * Trimmed include file down to exported data structures.
4833 * Changes from H.J. Lu for glibc-2.0.
4835 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4836 * Many changes for multiple threads
4837 * Introduced arenas and heaps
4839 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
4840 * Added pvalloc, as recommended by H.J. Liu
4841 * Added 64bit pointer support mainly from Wolfram Gloger
4842 * Added anonymously donated WIN32 sbrk emulation
4843 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4844 * malloc_extend_top: fix mask error that caused wastage after
4845 foreign sbrks
4846 * Add linux mremap support code from HJ Liu
4848 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4849 * Integrated most documentation with the code.
4850 * Add support for mmap, with help from
4851 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4852 * Use last_remainder in more cases.
4853 * Pack bins using idea from colin@nyx10.cs.du.edu
4854 * Use ordered bins instead of best-fit threshold
4855 * Eliminate block-local decls to simplify tracing and debugging.
4856 * Support another case of realloc via move into top
4857 * Fix error occurring when initial sbrk_base not word-aligned.
4858 * Rely on page size for units instead of SBRK_UNIT to
4859 avoid surprises about sbrk alignment conventions.
4860 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4861 (raymond@es.ele.tue.nl) for the suggestion.
4862 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4863 * More precautions for cases where other routines call sbrk,
4864 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4865 * Added macros etc., allowing use in linux libc from
4866 H.J. Lu (hjl@gnu.ai.mit.edu)
4867 * Inverted this history list
4869 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
4870 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4871 * Removed all preallocation code since under current scheme
4872 the work required to undo bad preallocations exceeds
4873 the work saved in good cases for most test programs.
4874 * No longer use return list or unconsolidated bins since
4875 no scheme using them consistently outperforms those that don't
4876 given above changes.
4877 * Use best fit for very large chunks to prevent some worst-cases.
4878 * Added some support for debugging
4880 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
4881 * Removed footers when chunks are in use. Thanks to
4882 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4884 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
4885 * Added malloc_trim, with help from Wolfram Gloger
4886 (wmglo@Dent.MED.Uni-Muenchen.DE).
4888 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
4890 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
4891 * realloc: try to expand in both directions
4892 * malloc: swap order of clean-bin strategy;
4893 * realloc: only conditionally expand backwards
4894 * Try not to scavenge used bins
4895 * Use bin counts as a guide to preallocation
4896 * Occasionally bin return list chunks in first scan
4897 * Add a few optimizations from colin@nyx10.cs.du.edu
4899 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
4900 * faster bin computation & slightly different binning
4901 * merged all consolidations to one part of malloc proper
4902 (eliminating old malloc_find_space & malloc_clean_bin)
4903 * Scan 2 returns chunks (not just 1)
4904 * Propagate failure in realloc if malloc returns 0
4905 * Add stuff to allow compilation on non-ANSI compilers
4906 from kpv@research.att.com
4908 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
4909 * removed potential for odd address access in prev_chunk
4910 * removed dependency on getpagesize.h
4911 * misc cosmetics and a bit more internal documentation
4912 * anticosmetics: mangled names in macros to evade debugger strangeness
4913 * tested on sparc, hp-700, dec-mips, rs6000
4914 with gcc & native cc (hp, dec only) allowing
4915 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4917 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
4918 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4919 structure of old version, but most details differ.)