1 /* Malloc implementation for multiple threads without lock contention.
2 Copyright (C) 1996, 1997, 1998, 1999, 2000 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
22 /* $Id$
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
27 ftp://g.oswego.edu/pub/misc/malloc.c
29 Most of the original comments are reproduced in the code below.
31 * Why use this malloc?
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
48 * Synopsis of public routines
50 (Much fuller descriptions are contained in the program documentation below.)
52 ptmalloc_init();
53 Initialize global configuration. When compiled for multiple threads,
54 this function must be called once before any other function in the
55 package. It is not required otherwise. It is called automatically
56 in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
57 malloc(size_t n);
58 Return a pointer to a newly allocated chunk of at least n bytes, or null
59 if no space is available.
60 free(Void_t* p);
61 Release the chunk of memory pointed to by p, or no effect if p is null.
62 realloc(Void_t* p, size_t n);
63 Return a pointer to a chunk of size n that contains the same data
64 as does chunk p up to the minimum of (n, p's size) bytes, or null
65 if no space is available. The returned pointer may or may not be
66 the same as p. If p is null, equivalent to malloc. Unless the
67 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
68 size argument of zero (re)allocates a minimum-sized chunk.
69 memalign(size_t alignment, size_t n);
70 Return a pointer to a newly allocated chunk of n bytes, aligned
71 in accord with the alignment argument, which must be a power of
72 two.
73 valloc(size_t n);
74 Equivalent to memalign(pagesize, n), where pagesize is the page
75 size of the system (or as near to this as can be figured out from
76 all the includes/defines below.)
77 pvalloc(size_t n);
78 Equivalent to valloc(minimum-page-that-holds(n)), that is,
79 round up n to nearest pagesize.
80 calloc(size_t unit, size_t quantity);
81 Returns a pointer to quantity * unit bytes, with all locations
82 set to zero.
83 cfree(Void_t* p);
84 Equivalent to free(p).
85 malloc_trim(size_t pad);
86 Release all but pad bytes of freed top-most memory back
87 to the system. Return 1 if successful, else 0.
88 malloc_usable_size(Void_t* p);
89 Report the number of usable allocated bytes associated with allocated
90 chunk p. This may or may not report more bytes than were requested,
91 due to alignment and minimum size constraints.
92 malloc_stats();
93 Prints brief summary statistics on stderr.
94 mallinfo()
95 Returns (by copy) a struct containing various summary statistics.
96 mallopt(int parameter_number, int parameter_value)
97 Changes one of the tunable parameters described below. Returns
98 1 if successful in changing the parameter, else 0.
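    For illustration only, a minimal single-threaded use of these routines
    might look like the following (when not building as part of GNU libc):

      ptmalloc_init();                      initialize the package
      char* p = (char*) malloc(100);        at least 100 usable bytes
      p = (char*) realloc(p, 200);          may or may not move the block
      free(p);                              release the chunk again
      malloc_stats();                       brief statistics on stderr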
100 * Vital statistics:
102 Alignment: 8-byte
103 8 byte alignment is currently hardwired into the design. This
104 seems to suffice for all current machines and C compilers.
106 Assumed pointer representation: 4 or 8 bytes
107 Code for 8-byte pointers is untested by me but has worked
108 reliably by Wolfram Gloger, who contributed most of the
109 changes supporting this.
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
119 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
121 When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
122 ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
138 Requests for sizes with a negative sign bit will return a
139 minimum-sized chunk.
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
143 Alignment demands, plus the minimum allocatable size restriction
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
145 more bytes will be allocated than were requested in malloc), with
146 two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
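    (For example, with 4-byte size_t, malloc(1) needs 1 byte plus 4 bytes of
    size/status overhead, which is then padded up to the 16-byte minimum
    chunk: 15 bytes more than were requested.)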
153 * Limitations
155 Here are some features that are NOT currently supported
157 * No automated mechanism for fully checking that all accesses
158 to malloced memory stay within their bounds.
159 * No support for compaction.
161 * Synopsis of compile-time options:
163 People have reported using previous versions of this malloc on all
164 versions of Unix, sometimes by tweaking some of the defines
165 below. It has been tested most extensively on Solaris and
166 Linux. People have also reported adapting this malloc for use in
167 stand-alone embedded systems.
169 The implementation is in straight, hand-tuned ANSI C. Among other
170 consequences, it uses a lot of macros. Because of this, to be at
171 all usable, this code should be compiled using an optimizing compiler
172 (for example gcc -O2) that can simplify expressions and control
173 paths.
175 __STD_C (default: derived from C compiler defines)
176 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
177 a C compiler sufficiently close to ANSI to get away with it.
178 MALLOC_DEBUG (default: NOT defined)
179 Define to enable debugging. Adds fairly extensive assertion-based
180 checking to help track down memory errors, but noticeably slows down
181 execution.
182 MALLOC_HOOKS (default: NOT defined)
183 Define to enable support for run-time replacement of the allocation
184 functions through user-defined `hooks'.
185 REALLOC_ZERO_BYTES_FREES (default: defined)
186 Define this if you think that realloc(p, 0) should be equivalent
187 to free(p). (The C standard requires this behaviour, therefore
188 it is the default.) Otherwise, since malloc returns a unique
189 pointer for malloc(0), so does realloc(p, 0).
190 HAVE_MEMCPY (default: defined)
191 Define if you are not otherwise using ANSI STD C, but still
192 have memcpy and memset in your C library and want to use them.
193 Otherwise, simple internal versions are supplied.
194 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
195 Define as 1 if you want the C library versions of memset and
196 memcpy called in realloc and calloc (otherwise macro versions are used).
197 At least on some platforms, the simple macro versions usually
198 outperform libc versions.
199 HAVE_MMAP (default: defined as 1)
200 Define to non-zero to optionally make malloc() use mmap() to
201 allocate very large blocks.
202 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
203 Define to non-zero to optionally make realloc() use mremap() to
204 reallocate very large blocks.
205 USE_ARENAS (default: the same as HAVE_MMAP)
206 Enable support for multiple arenas, allocated using mmap().
207 malloc_getpagesize (default: derived from system #includes)
208 Either a constant or routine call returning the system page size.
209 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
210 Optionally define if you are on a system with a /usr/include/malloc.h
211 that declares struct mallinfo. It is not at all necessary to
212 define this even if you do, but doing so will ensure consistency.
213 INTERNAL_SIZE_T (default: size_t)
214 Define to a 32-bit type (probably `unsigned int') if you are on a
215 64-bit machine, yet do not want or need to allow malloc requests of
216 greater than 2^31 to be handled. This saves space, especially for
217 very small chunks.
218 _LIBC (default: NOT defined)
219 Defined only when compiled as part of the Linux libc/glibc.
220 Also note that there is some odd internal name-mangling via defines
221 (for example, internally, `malloc' is named `mALLOc') needed
222 when compiling in this case. These look funny but don't otherwise
223 affect anything.
224 LACKS_UNISTD_H (default: undefined)
225 Define this if your system does not have a <unistd.h>.
226 MORECORE (default: sbrk)
227 The name of the routine to call to obtain more memory from the system.
228 MORECORE_FAILURE (default: -1)
229 The value returned upon failure of MORECORE.
230 MORECORE_CLEARS (default 1)
231 The degree to which the routine mapped to MORECORE zeroes out
232 memory: never (0), only for newly allocated space (1) or always
233 (2). The distinction between (1) and (2) is necessary because on
234 some systems, if the application first decrements and then
235 increments the break value, the contents of the reallocated space
236 are unspecified.
237 DEFAULT_TRIM_THRESHOLD
238 DEFAULT_TOP_PAD
239 DEFAULT_MMAP_THRESHOLD
240 DEFAULT_MMAP_MAX
241 Default values of tunable parameters (described in detail below)
242 controlling interaction with host system routines (sbrk, mmap, etc).
243 These values may also be changed dynamically via mallopt(). The
244 preset defaults are those that give best performance for typical
245 programs/systems.
246 DEFAULT_CHECK_ACTION
247 When the standard debugging hooks are in place, and a pointer is
248 detected as corrupt, do nothing (0), print an error message (1),
249 or call abort() (2).
256 * Compile-time options for multiple threads:
258 USE_PTHREADS, USE_THR, USE_SPROC
259 Define one of these as 1 to select the thread interface:
260 POSIX threads, Solaris threads or SGI sproc's, respectively.
261 If none of these is defined as non-zero, you get a `normal'
262 malloc implementation which is not thread-safe. Support for
263 multiple threads requires HAVE_MMAP=1. As an exception, when
264 compiling for GNU libc, i.e. when _LIBC is defined, then none of
265 the USE_... symbols have to be defined.
267 HEAP_MIN_SIZE
268 HEAP_MAX_SIZE
269 When thread support is enabled, additional `heap's are created
270 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
271 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
272 of two for alignment reasons. HEAP_MAX_SIZE should be at least
273 twice as large as the mmap threshold.
274 THREAD_STATS
275 When this is defined as non-zero, some statistics on mutex locking
276 are computed.
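    For illustration only, a stand-alone multi-threaded build might combine
    settings along these lines (example values, not recommendations):

      #define USE_PTHREADS 1
      #define HAVE_MMAP    1
      #define THREAD_STATS 1

    compiled with an optimizing compiler such as gcc -O2.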
283 /* Preliminaries */
285 #ifndef __STD_C
286 #if defined (__STDC__)
287 #define __STD_C 1
288 #else
289 #if __cplusplus
290 #define __STD_C 1
291 #else
292 #define __STD_C 0
293 #endif /*__cplusplus*/
294 #endif /*__STDC__*/
295 #endif /*__STD_C*/
297 #ifndef Void_t
298 #if __STD_C
299 #define Void_t void
300 #else
301 #define Void_t char
302 #endif
303 #endif /*Void_t*/
305 #if __STD_C
306 # include <stddef.h> /* for size_t */
307 # if defined _LIBC || defined MALLOC_HOOKS
308 # include <stdlib.h> /* for getenv(), abort() */
309 # endif
310 #else
311 # include <sys/types.h>
312 # if defined _LIBC || defined MALLOC_HOOKS
313 extern char* getenv();
314 # endif
315 #endif
317 #ifndef _LIBC
318 # define __secure_getenv(Str) getenv (Str)
319 #endif
321 /* Macros for handling mutexes and thread-specific data. This is
322 included early, because some thread-related header files (such as
323 pthread.h) should be included before any others. */
324 #include "thread-m.h"
326 #ifdef __cplusplus
327 extern "C" {
328 #endif
330 #include <errno.h>
331 #include <stdio.h> /* needed for malloc_stats */
335 Compile-time options
340 Debugging:
342 Because freed chunks may be overwritten with link fields, this
343 malloc will often die when freed memory is overwritten by user
344 programs. This can be very effective (albeit in an annoying way)
345 in helping track down dangling pointers.
347 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
348 enabled that will catch more memory errors. You probably won't be
349 able to make much sense of the actual assertion errors, but they
350 should help you locate incorrectly overwritten memory. The
351 checking is fairly extensive, and will slow down execution
352 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
353 attempt to check every non-mmapped allocated and free chunk in the
354 course of computing the summaries. (By nature, mmapped regions
355 cannot be checked very much automatically.)
357 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
358 this code. The assertions in the check routines spell out in more
359 detail the assumptions and invariants underlying the algorithms.
363 #if MALLOC_DEBUG
364 #include <assert.h>
365 #else
366 #define assert(x) ((void)0)
367 #endif
371 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
372 of chunk sizes. On a 64-bit machine, you can reduce malloc
373 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
374 at the expense of not being able to handle requests greater than
375 2^31. This limitation is hardly ever a concern; you are encouraged
376 to set this. However, the default version is the same as size_t.
379 #ifndef INTERNAL_SIZE_T
380 #define INTERNAL_SIZE_T size_t
381 #endif
384 REALLOC_ZERO_BYTES_FREES should be set if a call to realloc with
385 zero bytes should be the same as a call to free. The C standard
386 requires this. Otherwise, since this malloc returns a unique pointer
387 for malloc(0), so does realloc(p, 0).
391 #define REALLOC_ZERO_BYTES_FREES
395 HAVE_MEMCPY should be defined if you are not otherwise using
396 ANSI STD C, but still have memcpy and memset in your C library
397 and want to use them in calloc and realloc. Otherwise simple
398 macro versions are defined here.
400 USE_MEMCPY should be defined as 1 if you actually want to
401 have memset and memcpy called. People report that the macro
402 versions are often enough faster than libc versions on many
403 systems that it is better to use them.
407 #define HAVE_MEMCPY 1
409 #ifndef USE_MEMCPY
410 #ifdef HAVE_MEMCPY
411 #define USE_MEMCPY 1
412 #else
413 #define USE_MEMCPY 0
414 #endif
415 #endif
417 #if (__STD_C || defined(HAVE_MEMCPY))
419 #if __STD_C
420 void* memset(void*, int, size_t);
421 void* memcpy(void*, const void*, size_t);
422 #else
423 Void_t* memset();
424 Void_t* memcpy();
425 #endif
426 #endif
428 #if USE_MEMCPY
430 /* The following macros are only invoked with (2n+1)-multiples of
431 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
432 for fast inline execution when n is small. */
434 #define MALLOC_ZERO(charp, nbytes) \
435 do { \
436 INTERNAL_SIZE_T mzsz = (nbytes); \
437 if(mzsz <= 9*sizeof(mzsz)) { \
438 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
439 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
440 *mz++ = 0; \
441 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
442 *mz++ = 0; \
443 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
444 *mz++ = 0; }}} \
445 *mz++ = 0; \
446 *mz++ = 0; \
447 *mz = 0; \
448 } else memset((charp), 0, mzsz); \
449 } while(0)
451 #define MALLOC_COPY(dest,src,nbytes) \
452 do { \
453 INTERNAL_SIZE_T mcsz = (nbytes); \
454 if(mcsz <= 9*sizeof(mcsz)) { \
455 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
456 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
457 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
458 *mcdst++ = *mcsrc++; \
459 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
460 *mcdst++ = *mcsrc++; \
461 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
462 *mcdst++ = *mcsrc++; }}} \
463 *mcdst++ = *mcsrc++; \
464 *mcdst++ = *mcsrc++; \
465 *mcdst = *mcsrc ; \
466 } else memcpy(dest, src, mcsz); \
467 } while(0)
469 #else /* !USE_MEMCPY */
471 /* Use Duff's device for good zeroing/copying performance. */
473 #define MALLOC_ZERO(charp, nbytes) \
474 do { \
475 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
476 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
477 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
478 switch (mctmp) { \
479 case 0: for(;;) { *mzp++ = 0; \
480 case 7: *mzp++ = 0; \
481 case 6: *mzp++ = 0; \
482 case 5: *mzp++ = 0; \
483 case 4: *mzp++ = 0; \
484 case 3: *mzp++ = 0; \
485 case 2: *mzp++ = 0; \
486 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
488 } while(0)
490 #define MALLOC_COPY(dest,src,nbytes) \
491 do { \
492 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
493 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
494 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
495 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
496 switch (mctmp) { \
497 case 0: for(;;) { *mcdst++ = *mcsrc++; \
498 case 7: *mcdst++ = *mcsrc++; \
499 case 6: *mcdst++ = *mcsrc++; \
500 case 5: *mcdst++ = *mcsrc++; \
501 case 4: *mcdst++ = *mcsrc++; \
502 case 3: *mcdst++ = *mcsrc++; \
503 case 2: *mcdst++ = *mcsrc++; \
504 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
506 } while(0)
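/* For example, copying 80 bytes with a 4-byte INTERNAL_SIZE_T (20 words):
   mctmp becomes 20 % 8 == 4 and mcn becomes (20-1)/8 == 2, so the switch
   enters at `case 4' and copies 4 words, then the surrounding loop runs two
   more full passes of 8 words each, for 4 + 8 + 8 == 20 words in total. */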
508 #endif
511 #ifndef LACKS_UNISTD_H
512 # include <unistd.h>
513 #endif
516 Define HAVE_MMAP to optionally make malloc() use mmap() to allocate
517 very large blocks. These will be returned to the operating system
518 immediately after a free(). HAVE_MMAP is also a prerequisite to
519 support multiple `arenas' (see USE_ARENAS below).
522 #ifndef HAVE_MMAP
523 # ifdef _POSIX_MAPPED_FILES
524 # define HAVE_MMAP 1
525 # endif
526 #endif
529 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
530 large blocks. This is currently only possible on Linux with
531 kernel versions newer than 1.3.77.
534 #ifndef HAVE_MREMAP
535 #define HAVE_MREMAP defined(__linux__)
536 #endif
538 /* Define USE_ARENAS to enable support for multiple `arenas'. These
539 are allocated using mmap(), are necessary for threads and
540 occasionally useful to overcome address space limitations affecting
541 sbrk(). */
543 #ifndef USE_ARENAS
544 #define USE_ARENAS HAVE_MMAP
545 #endif
547 #if HAVE_MMAP
549 #include <unistd.h>
550 #include <fcntl.h>
551 #include <sys/mman.h>
553 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
554 #define MAP_ANONYMOUS MAP_ANON
555 #endif
556 #if !defined(MAP_FAILED)
557 #define MAP_FAILED ((char*)-1)
558 #endif
560 #ifndef MAP_NORESERVE
561 # ifdef MAP_AUTORESRV
562 # define MAP_NORESERVE MAP_AUTORESRV
563 # else
564 # define MAP_NORESERVE 0
565 # endif
566 #endif
568 #endif /* HAVE_MMAP */
571 Access to system page size. To the extent possible, this malloc
572 manages memory from the system in page-size units.
574 The following mechanics for getpagesize were adapted from
575 bsd/gnu getpagesize.h
578 #ifndef malloc_getpagesize
579 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
580 # ifndef _SC_PAGE_SIZE
581 # define _SC_PAGE_SIZE _SC_PAGESIZE
582 # endif
583 # endif
584 # ifdef _SC_PAGE_SIZE
585 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
586 # else
587 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
588 extern size_t getpagesize();
589 # define malloc_getpagesize getpagesize()
590 # else
591 # include <sys/param.h>
592 # ifdef EXEC_PAGESIZE
593 # define malloc_getpagesize EXEC_PAGESIZE
594 # else
595 # ifdef NBPG
596 # ifndef CLSIZE
597 # define malloc_getpagesize NBPG
598 # else
599 # define malloc_getpagesize (NBPG * CLSIZE)
600 # endif
601 # else
602 # ifdef NBPC
603 # define malloc_getpagesize NBPC
604 # else
605 # ifdef PAGESIZE
606 # define malloc_getpagesize PAGESIZE
607 # else
608 # define malloc_getpagesize (4096) /* just guess */
609 # endif
610 # endif
611 # endif
612 # endif
613 # endif
614 # endif
615 #endif
621 This version of malloc supports the standard SVID/XPG mallinfo
622 routine that returns a struct containing the same kind of
623 information you can get from malloc_stats. It should work on
624 any SVID/XPG compliant system that has a /usr/include/malloc.h
625 defining struct mallinfo. (If you'd like to install such a thing
626 yourself, cut out the preliminary declarations as described above
627 and below and save them in a malloc.h file. But there's no
628 compelling reason to bother to do this.)
630 The main declaration needed is the mallinfo struct that is returned
631 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
632 bunch of fields, most of which are not even meaningful in this
633 version of malloc. Some of these fields are instead filled by
634 mallinfo() with other numbers that might possibly be of interest.
636 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
637 /usr/include/malloc.h file that includes a declaration of struct
638 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
639 version is declared below. These must be precisely the same for
640 mallinfo() to work.
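    For illustration, a program can inspect the statistics like this, using
    the uordblks (total allocated space) and fordblks (total free space)
    fields of the returned struct:

      struct mallinfo mi = mallinfo();
      fprintf(stderr, "in use: %d bytes, free: %d bytes\n",
              mi.uordblks, mi.fordblks);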
644 /* #define HAVE_USR_INCLUDE_MALLOC_H */
646 #if HAVE_USR_INCLUDE_MALLOC_H
647 # include "/usr/include/malloc.h"
648 #else
649 # ifdef _LIBC
650 # include "malloc.h"
651 # else
652 # include "ptmalloc.h"
653 # endif
654 #endif
656 #include <bp-checks.h>
658 #ifndef DEFAULT_TRIM_THRESHOLD
659 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
660 #endif
663 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
664 to keep before releasing via malloc_trim in free().
666 Automatic trimming is mainly useful in long-lived programs.
667 Because trimming via sbrk can be slow on some systems, and can
668 sometimes be wasteful (in cases where programs immediately
669 afterward allocate more large chunks) the value should be high
670 enough so that your overall system performance would improve by
671 releasing.
673 The trim threshold and the mmap control parameters (see below)
674 can be traded off with one another. Trimming and mmapping are
675 two different ways of releasing unused memory back to the
676 system. Between these two, it is often possible to keep
677 system-level demands of a long-lived program down to a bare
678 minimum. For example, in one test suite of sessions measuring
679 the XF86 X server on Linux, using a trim threshold of 128K and a
680 mmap threshold of 192K led to near-minimal long term resource
681 consumption.
683 If you are using this malloc in a long-lived program, it should
684 pay to experiment with these values. As a rough guide, you
685 might set it to a value close to the average size of a process
686 (program) running on your system. Releasing this much memory
687 would allow such a process to run in memory. Generally, it's
688 worth it to tune for trimming rather than memory mapping when a
689 program undergoes phases where several large chunks are
690 allocated and released in ways that can reuse each other's
691 storage, perhaps mixed with phases where there are no such
692 chunks at all. And in well-behaved long-lived programs,
693 controlling release of large blocks via trimming versus mapping
694 is usually faster.
696 However, in most programs, these parameters serve mainly as
697 protection against the system-level effects of carrying around
698 massive amounts of unneeded memory. Since frequent calls to
699 sbrk, mmap, and munmap otherwise degrade performance, the default
700 parameters are set to relatively high values that serve only as
701 safeguards.
703 The default trim value is high enough to cause trimming only in
704 fairly extreme (by current memory consumption standards) cases.
705 It must be greater than page size to have any useful effect. To
706 disable trimming completely, you can set it to (unsigned long)(-1);
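    For example, the threshold can be changed at run time via mallopt
    (the values below are only illustrative):

      mallopt(M_TRIM_THRESHOLD, 256*1024);   trim only above 256K of slack
      mallopt(M_TRIM_THRESHOLD, -1);         effectively disable trimming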
712 #ifndef DEFAULT_TOP_PAD
713 #define DEFAULT_TOP_PAD (0)
714 #endif
717 M_TOP_PAD is the amount of extra `padding' space to allocate or
718 retain whenever sbrk is called. It is used in two ways internally:
720 * When sbrk is called to extend the top of the arena to satisfy
721 a new malloc request, this much padding is added to the sbrk
722 request.
724 * When malloc_trim is called automatically from free(),
725 it is used as the `pad' argument.
727 In both cases, the actual amount of padding is rounded
728 so that the end of the arena is always a system page boundary.
730 The main reason for using padding is to avoid calling sbrk so
731 often. Having even a small pad greatly reduces the likelihood
732 that nearly every malloc request during program start-up (or
733 after trimming) will invoke sbrk, which needlessly wastes
734 time.
736 Automatic rounding-up to page-size units is normally sufficient
737 to avoid measurable overhead, so the default is 0. However, in
738 systems where sbrk is relatively slow, it can pay to increase
739 this value, at the expense of carrying around more memory than
740 the program needs.
745 #ifndef DEFAULT_MMAP_THRESHOLD
746 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
747 #endif
751 M_MMAP_THRESHOLD is the request size threshold for using mmap()
752 to service a request. Requests of at least this size that cannot
753 be allocated using already-existing space will be serviced via mmap.
754 (If enough normal freed space already exists it is used instead.)
756 Using mmap segregates relatively large chunks of memory so that
757 they can be individually obtained and released from the host
758 system. A request serviced through mmap is never reused by any
759 other request (at least not directly; the system may just so
760 happen to remap successive requests to the same locations).
762 Segregating space in this way has the benefit that mmapped space
763 can ALWAYS be individually released back to the system, which
764 helps keep the system level memory demands of a long-lived
765 program low. Mapped memory can never become `locked' between
766 other chunks, as can happen with normally allocated chunks, which
767 means that even trimming via malloc_trim would not release them.
769 However, it has the disadvantages that:
771 1. The space cannot be reclaimed, consolidated, and then
772 used to service later requests, as happens with normal chunks.
773 2. It can lead to more wastage because of mmap page alignment
774 requirements
775 3. It causes malloc performance to be more dependent on host
776 system memory management support routines which may vary in
777 implementation quality and may impose arbitrary
778 limitations. Generally, servicing a request via normal
779 malloc steps is faster than going through a system's mmap.
781 Altogether, these considerations should lead you to use mmap
782 only for relatively large requests.
789 #ifndef DEFAULT_MMAP_MAX
790 #if HAVE_MMAP
791 #define DEFAULT_MMAP_MAX (1024)
792 #else
793 #define DEFAULT_MMAP_MAX (0)
794 #endif
795 #endif
798 M_MMAP_MAX is the maximum number of requests to simultaneously
799 service using mmap. This parameter exists because:
801 1. Some systems have a limited number of internal tables for
802 use by mmap.
803 2. In most systems, overreliance on mmap can degrade overall
804 performance.
805 3. If a program allocates many large regions, it is probably
806 better off using normal sbrk-based allocation routines that
807 can reclaim and reallocate normal heap memory. Using a
808 small value allows transition into this mode after the
809 first few allocations.
811 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
812 the default value is 0, and attempts to set it to non-zero values
813 in mallopt will fail.
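    For example (illustrative values only):

      mallopt(M_MMAP_THRESHOLD, 256*1024);   mmap only requests >= 256K
      mallopt(M_MMAP_MAX, 0);                disable use of mmap entirely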
818 #ifndef DEFAULT_CHECK_ACTION
819 #define DEFAULT_CHECK_ACTION 1
820 #endif
822 /* What to do if the standard debugging hooks are in place and a
823 corrupt pointer is detected: do nothing (0), print an error message
824 (1), or call abort() (2). */
828 #define HEAP_MIN_SIZE (32*1024)
829 #define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
831 /* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
832 that are dynamically created for multi-threaded programs. The
833 maximum size must be a power of two, for fast determination of
834 which heap belongs to a chunk. It should be much larger than
835 the mmap threshold, so that requests with a size just below that
836 threshold can be fulfilled without creating too many heaps.
841 #ifndef THREAD_STATS
842 #define THREAD_STATS 0
843 #endif
845 /* If THREAD_STATS is non-zero, some statistics on mutex locking are
846 computed. */
849 /* Macro to set errno. */
850 #ifndef __set_errno
851 # define __set_errno(val) errno = (val)
852 #endif
854 /* On some platforms we can compile internal, non-exported functions better.
855 Let the environment provide a macro and define it to be empty if it
856 is not available. */
857 #ifndef internal_function
858 # define internal_function
859 #endif
864 Special defines for the Linux/GNU C library.
869 #ifdef _LIBC
871 #if __STD_C
873 Void_t * __default_morecore (ptrdiff_t);
874 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
876 #else
878 Void_t * __default_morecore ();
879 Void_t *(*__morecore)() = __default_morecore;
881 #endif
883 #define MORECORE (*__morecore)
884 #define MORECORE_FAILURE 0
886 #ifndef MORECORE_CLEARS
887 #define MORECORE_CLEARS 1
888 #endif
890 static size_t __libc_pagesize;
892 #define mmap __mmap
893 #define munmap __munmap
894 #define mremap __mremap
895 #define mprotect __mprotect
896 #undef malloc_getpagesize
897 #define malloc_getpagesize __libc_pagesize
899 #else /* _LIBC */
901 #if __STD_C
902 extern Void_t* sbrk(ptrdiff_t);
903 #else
904 extern Void_t* sbrk();
905 #endif
907 #ifndef MORECORE
908 #define MORECORE sbrk
909 #endif
911 #ifndef MORECORE_FAILURE
912 #define MORECORE_FAILURE -1
913 #endif
915 #ifndef MORECORE_CLEARS
916 #define MORECORE_CLEARS 1
917 #endif
919 #endif /* _LIBC */
921 #ifdef _LIBC
923 #define cALLOc __libc_calloc
924 #define fREe __libc_free
925 #define mALLOc __libc_malloc
926 #define mEMALIGn __libc_memalign
927 #define rEALLOc __libc_realloc
928 #define vALLOc __libc_valloc
929 #define pvALLOc __libc_pvalloc
930 #define mALLINFo __libc_mallinfo
931 #define mALLOPt __libc_mallopt
932 #define mALLOC_STATs __malloc_stats
933 #define mALLOC_USABLE_SIZe __malloc_usable_size
934 #define mALLOC_TRIm __malloc_trim
935 #define mALLOC_GET_STATe __malloc_get_state
936 #define mALLOC_SET_STATe __malloc_set_state
938 #else
940 #define cALLOc calloc
941 #define fREe free
942 #define mALLOc malloc
943 #define mEMALIGn memalign
944 #define rEALLOc realloc
945 #define vALLOc valloc
946 #define pvALLOc pvalloc
947 #define mALLINFo mallinfo
948 #define mALLOPt mallopt
949 #define mALLOC_STATs malloc_stats
950 #define mALLOC_USABLE_SIZe malloc_usable_size
951 #define mALLOC_TRIm malloc_trim
952 #define mALLOC_GET_STATe malloc_get_state
953 #define mALLOC_SET_STATe malloc_set_state
955 #endif
957 /* Public routines */
959 #if __STD_C
961 #ifndef _LIBC
962 void ptmalloc_init(void);
963 #endif
964 Void_t* mALLOc(size_t);
965 void fREe(Void_t*);
966 Void_t* rEALLOc(Void_t*, size_t);
967 Void_t* mEMALIGn(size_t, size_t);
968 Void_t* vALLOc(size_t);
969 Void_t* pvALLOc(size_t);
970 Void_t* cALLOc(size_t, size_t);
971 void cfree(Void_t*);
972 int mALLOC_TRIm(size_t);
973 size_t mALLOC_USABLE_SIZe(Void_t*);
974 void mALLOC_STATs(void);
975 int mALLOPt(int, int);
976 struct mallinfo mALLINFo(void);
977 Void_t* mALLOC_GET_STATe(void);
978 int mALLOC_SET_STATe(Void_t*);
980 #else /* !__STD_C */
982 #ifndef _LIBC
983 void ptmalloc_init();
984 #endif
985 Void_t* mALLOc();
986 void fREe();
987 Void_t* rEALLOc();
988 Void_t* mEMALIGn();
989 Void_t* vALLOc();
990 Void_t* pvALLOc();
991 Void_t* cALLOc();
992 void cfree();
993 int mALLOC_TRIm();
994 size_t mALLOC_USABLE_SIZe();
995 void mALLOC_STATs();
996 int mALLOPt();
997 struct mallinfo mALLINFo();
998 Void_t* mALLOC_GET_STATe();
999 int mALLOC_SET_STATe();
1001 #endif /* __STD_C */
1004 #ifdef __cplusplus
1005 } /* end of extern "C" */
1006 #endif
1008 #if !defined(NO_THREADS) && !HAVE_MMAP
1009 "Can't have threads support without mmap"
1010 #endif
1011 #if USE_ARENAS && !HAVE_MMAP
1012 "Can't have multiple arenas without mmap"
1013 #endif
1017 Type declarations
1021 struct malloc_chunk
1023 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1024 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1025 struct malloc_chunk* fd; /* double links -- used only if free. */
1026 struct malloc_chunk* bk;
1029 typedef struct malloc_chunk* mchunkptr;
1033 malloc_chunk details:
1035 (The following includes lightly edited explanations by Colin Plumb.)
1037 Chunks of memory are maintained using a `boundary tag' method as
1038 described in e.g., Knuth or Standish. (See the paper by Paul
1039 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1040 survey of such techniques.) Sizes of free chunks are stored both
1041 in the front of each chunk and at the end. This makes
1042 consolidating fragmented chunks into bigger chunks very fast. The
1043 size fields also hold bits representing whether chunks are free or
1044 in use.
1046 An allocated chunk looks like this:
1049 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1050 | Size of previous chunk, if allocated | |
1051 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1052 | Size of chunk, in bytes |P|
1053 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1054 | User data starts here... .
1056 . (malloc_usable_space() bytes) .
1058 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1059 | Size of chunk |
1060 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1063 Where "chunk" is the front of the chunk for the purpose of most of
1064 the malloc code, but "mem" is the pointer that is returned to the
1065 user. "Nextchunk" is the beginning of the next contiguous chunk.
1067 Chunks always begin on even word boundaries, so the mem portion
1068 (which is returned to the user) is also on an even word boundary, and
1069 thus double-word aligned.
1071 Free chunks are stored in circular doubly-linked lists, and look like this:
1073 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1074 | Size of previous chunk |
1075 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1076 `head:' | Size of chunk, in bytes |P|
1077 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1078 | Forward pointer to next chunk in list |
1079 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1080 | Back pointer to previous chunk in list |
1081 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1082 | Unused space (may be 0 bytes long) .
1085 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1086 `foot:' | Size of chunk, in bytes |
1087 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1089 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1090 chunk size (which is always a multiple of two words), is an in-use
1091 bit for the *previous* chunk. If that bit is *clear*, then the
1092 word before the current chunk size contains the previous chunk
1093 size, and can be used to find the front of the previous chunk.
1094 (The very first chunk allocated always has this bit set,
1095 preventing access to non-existent (or non-owned) memory.)
1097 Note that the `foot' of the current chunk is actually represented
1098 as the prev_size of the NEXT chunk. (This makes it easier to
1099 deal with alignments etc).
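    For example (with 4-byte sizes), suppose a free 48-byte chunk at address
    A is immediately followed by an in-use chunk at A+48. The word at A+48
    (the in-use chunk's prev_size, i.e. the free chunk's foot) holds 48, and
    the in-use chunk's size word has its P bit clear. When that chunk is
    later freed, the clear P bit shows that the predecessor is free and
    prev_size gives its start, so the two can be consolidated in constant
    time.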
1101 The two exceptions to all this are
1103 1. The special chunk `top', which doesn't bother using the
1104 trailing size field since there is no
1105 next contiguous chunk that would have to index off it. (After
1106 initialization, `top' is forced to always exist. If it would
1107 become less than MINSIZE bytes long, it is replenished via
1108 malloc_extend_top.)
1110 2. Chunks allocated via mmap, which have the second-lowest-order
1111 bit (IS_MMAPPED) set in their size fields. Because they are
1112 never merged or traversed from any other chunk, they have no
1113 foot size or inuse information.
1115 Available chunks are kept in any of several places (all declared below):
1117 * `av': An array of chunks serving as bin headers for consolidated
1118 chunks. Each bin is doubly linked. The bins are approximately
1119 proportionally (log) spaced. There are a lot of these bins
1120 (128). This may look excessive, but works very well in
1121 practice. All procedures maintain the invariant that no
1122 consolidated chunk physically borders another one. Chunks in
1123 bins are kept in size order, with ties going to the
1124 approximately least recently used chunk.
1126 The chunks in each bin are maintained in decreasing sorted order by
1127 size. This is irrelevant for the small bins, which all contain
1128 the same-sized chunks, but facilitates best-fit allocation for
1129 larger chunks. (These lists are just sequential. Keeping them in
1130 order almost never requires enough traversal to warrant using
1131 fancier ordered data structures.) Chunks of the same size are
1132 linked with the most recently freed at the front, and allocations
1133 are taken from the back. This results in LRU or FIFO allocation
1134 order, which tends to give each chunk an equal opportunity to be
1135 consolidated with adjacent freed chunks, resulting in larger free
1136 chunks and less fragmentation.
1138 * `top': The top-most available chunk (i.e., the one bordering the
1139 end of available memory) is treated specially. It is never
1140 included in any bin, is used only if no other chunk is
1141 available, and is released back to the system if it is very
1142 large (see M_TRIM_THRESHOLD).
1144 * `last_remainder': A bin holding only the remainder of the
1145 most recently split (non-top) chunk. This bin is checked
1146 before other non-fitting chunks, so as to provide better
1147 locality for runs of sequentially allocated chunks.
1149 * Implicitly, through the host system's memory mapping tables.
1150 If supported, requests greater than a threshold are usually
1151 serviced via calls to mmap, and then later released via munmap.
1156 Bins
1158 The bins are an array of pairs of pointers serving as the
1159 heads of (initially empty) doubly-linked lists of chunks, laid out
1160 in a way so that each pair can be treated as if it were in a
1161 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1162 and chunks are the same).
1164 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1165 8 bytes apart. Larger bins are approximately logarithmically
1166 spaced. (See the table below.)
1168 Bin layout:
1170 64 bins of size 8
1171 32 bins of size 64
1172 16 bins of size 512
1173 8 bins of size 4096
1174 4 bins of size 32768
1175 2 bins of size 262144
1176 1 bin of size what's left
1178 There is actually a little bit of slop in the numbers in bin_index
1179 for the sake of speed. This makes no difference elsewhere.
1181 The special chunks `top' and `last_remainder' get their own bins,
1182 (this is implemented via yet more trickery with the av array),
1183 although `top' is never properly linked to its bin since it is
1184 always handled specially.
1188 #define NAV 128 /* number of bins */
1190 typedef struct malloc_chunk* mbinptr;
1192 /* An arena is a configuration of malloc_chunks together with an array
1193 of bins. With multiple threads, it must be locked via a mutex
1194 before changing its data structures. One or more `heaps' are
1195 associated with each arena, except for the main_arena, which is
1196 associated only with the `main heap', i.e. the conventional free
1197 store obtained with calls to MORECORE() (usually sbrk). The `av'
1198 array is never mentioned directly in the code, but instead used via
1199 bin access macros. */
1201 typedef struct _arena {
1202 mbinptr av[2*NAV + 2];
1203 struct _arena *next;
1204 size_t size;
1205 #if THREAD_STATS
1206 long stat_lock_direct, stat_lock_loop, stat_lock_wait;
1207 #endif
1208 mutex_t mutex;
1209 } arena;
1212 /* A heap is a single contiguous memory region holding (coalesceable)
1213 malloc_chunks. It is allocated with mmap() and always starts at an
1214 address aligned to HEAP_MAX_SIZE. Not used unless compiling with
1215 USE_ARENAS. */
1217 typedef struct _heap_info {
1218 arena *ar_ptr; /* Arena for this heap. */
1219 struct _heap_info *prev; /* Previous heap. */
1220 size_t size; /* Current size in bytes. */
1221 size_t pad; /* Make sure the following data is properly aligned. */
1222 } heap_info;
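/* Because each heap starts at an address that is a multiple of
   HEAP_MAX_SIZE, the heap_info descriptor owning a given chunk can be
   recovered simply by masking the low-order address bits, along the lines
   of (sketch only):
     heap_info *h = (heap_info *)((unsigned long)ptr & ~(HEAP_MAX_SIZE-1));
   This is why HEAP_MAX_SIZE must be a power of two. */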
1226 Static functions (forward declarations)
1229 #if __STD_C
1231 static void chunk_free(arena *ar_ptr, mchunkptr p) internal_function;
1232 static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size)
1233 internal_function;
1234 static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
1235 INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb)
1236 internal_function;
1237 static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
1238 size_t alignment) internal_function;
1239 static int main_trim(size_t pad) internal_function;
1240 #if USE_ARENAS
1241 static int heap_trim(heap_info *heap, size_t pad) internal_function;
1242 #endif
1243 #if defined _LIBC || defined MALLOC_HOOKS
1244 static Void_t* malloc_check(size_t sz, const Void_t *caller);
1245 static void free_check(Void_t* mem, const Void_t *caller);
1246 static Void_t* realloc_check(Void_t* oldmem, size_t bytes,
1247 const Void_t *caller);
1248 static Void_t* memalign_check(size_t alignment, size_t bytes,
1249 const Void_t *caller);
1250 #ifndef NO_THREADS
1251 static Void_t* malloc_starter(size_t sz, const Void_t *caller);
1252 static void free_starter(Void_t* mem, const Void_t *caller);
1253 static Void_t* malloc_atfork(size_t sz, const Void_t *caller);
1254 static void free_atfork(Void_t* mem, const Void_t *caller);
1255 #endif
1256 #endif
1258 #else
1260 static void chunk_free();
1261 static mchunkptr chunk_alloc();
1262 static mchunkptr chunk_realloc();
1263 static mchunkptr chunk_align();
1264 static int main_trim();
1265 #if USE_ARENAS
1266 static int heap_trim();
1267 #endif
1268 #if defined _LIBC || defined MALLOC_HOOKS
1269 static Void_t* malloc_check();
1270 static void free_check();
1271 static Void_t* realloc_check();
1272 static Void_t* memalign_check();
1273 #ifndef NO_THREADS
1274 static Void_t* malloc_starter();
1275 static void free_starter();
1276 static Void_t* malloc_atfork();
1277 static void free_atfork();
1278 #endif
1279 #endif
1281 #endif
1285 /* sizes, alignments */
1287 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1288 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1289 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1290 #define MINSIZE (sizeof(struct malloc_chunk))
1292 /* conversion from malloc headers to user pointers, and back */
1294 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1295 #define mem2chunk(mem) BOUNDED_1((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1297 /* pad request bytes into a usable size, return non-zero on overflow */
1299 #define request2size(req, nb) \
1300 ((nb = (req) + (SIZE_SZ + MALLOC_ALIGN_MASK)),\
1301 ((long)nb <= 0 || nb < (INTERNAL_SIZE_T) (req) \
1302 ? (__set_errno (ENOMEM), 1) \
1303 : ((nb < (MINSIZE + MALLOC_ALIGN_MASK) \
1304 ? (nb = MINSIZE) : (nb &= ~MALLOC_ALIGN_MASK)), 0)))
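/* For example, with 4-byte INTERNAL_SIZE_T (SIZE_SZ == 4, 8-byte alignment):
   a request of 0 or 1 bytes yields the 16-byte MINSIZE chunk; a request of
   13 bytes yields 13 + 4 + 7 == 24, already a multiple of 8; a request of
   20 bytes yields 20 + 4 + 7 == 31, rounded down to 24, which still holds
   the 20 bytes plus the 4-byte size field. */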
1306 /* Check if m has acceptable alignment */
1308 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1314 Physical chunk operations
1318 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1320 #define PREV_INUSE 0x1UL
1322 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1324 #define IS_MMAPPED 0x2UL
1326 /* Bits to mask off when extracting size */
1328 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1331 /* Ptr to next physical malloc_chunk. */
1333 #define next_chunk(p) chunk_at_offset((p), (p)->size & ~PREV_INUSE)
1335 /* Ptr to previous physical malloc_chunk */
1337 #define prev_chunk(p) chunk_at_offset((p), -(p)->prev_size)
1340 /* Treat space at ptr + offset as a chunk */
1342 #define chunk_at_offset(p, s) BOUNDED_1((mchunkptr)(((char*)(p)) + (s)))
1348 Dealing with use bits
1351 /* extract p's inuse bit */
1353 #define inuse(p) (next_chunk(p)->size & PREV_INUSE)
1355 /* extract inuse bit of previous chunk */
1357 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1359 /* check for mmap()'ed chunk */
1361 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1363 /* set/clear chunk as in use without otherwise disturbing */
1365 #define set_inuse(p) (next_chunk(p)->size |= PREV_INUSE)
1367 #define clear_inuse(p) (next_chunk(p)->size &= ~PREV_INUSE)
1369 /* check/set/clear inuse bits in known places */
1371 #define inuse_bit_at_offset(p, s) \
1372 (chunk_at_offset((p), (s))->size & PREV_INUSE)
1374 #define set_inuse_bit_at_offset(p, s) \
1375 (chunk_at_offset((p), (s))->size |= PREV_INUSE)
1377 #define clear_inuse_bit_at_offset(p, s) \
1378 (chunk_at_offset((p), (s))->size &= ~(PREV_INUSE))
1384 Dealing with size fields
1387 /* Get size, ignoring use bits */
1389 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1391 /* Set size at head, without disturbing its use bit */
1393 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1395 /* Set size/use ignoring previous bits in header */
1397 #define set_head(p, s) ((p)->size = (s))
1399 /* Set size at footer (only when chunk is not in use) */
1401 #define set_foot(p, s) (chunk_at_offset(p, s)->prev_size = (s))
1407 /* access macros */
1409 #define bin_at(a, i) BOUNDED_1(_bin_at(a, i))
1410 #define _bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i)+2]) - 2*SIZE_SZ))
1411 #define init_bin(a, i) ((a)->av[2*(i)+2] = (a)->av[2*(i)+3] = bin_at((a), (i)))
1412 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(((arena*)0)->av[0])))
1413 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(((arena*)0)->av[0])))
1416 The first 2 bins are never indexed. The corresponding av cells are instead
1417 used for bookkeeping. This is not to save space, but to simplify
1418 indexing, maintain locality, and avoid some initialization tests.
1421 #define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1422 #define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1423 #define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1426 Because top initially points to its own bin with initial
1427 zero size, thus forcing extension on the first malloc request,
1428 we avoid having any special code in malloc to check whether
1429 it even exists yet. But we still need to check in malloc_extend_top.
1432 #define initial_top(a) ((mchunkptr)bin_at(a, 0))
1436 /* field-extraction macros */
1438 #define first(b) ((b)->fd)
1439 #define last(b) ((b)->bk)
1442 Indexing into bins
1445 #define bin_index(sz) \
1446 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3):\
1447 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6):\
1448 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9):\
1449 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12):\
1450 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15):\
1451 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
1452 126)
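/* For example, a 56-byte chunk has (56 >> 9) == 0 and thus lands in small
   bin 56 >> 3 == 7, while a 600-byte chunk has (600 >> 9) == 1 <= 4 and
   lands in bin 56 + (600 >> 6) == 65, among the 32 bins spaced 64 bytes
   apart. */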
1454 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1455 identically sized chunks. This is exploited in malloc.
1458 #define MAX_SMALLBIN 63
1459 #define MAX_SMALLBIN_SIZE 512
1460 #define SMALLBIN_WIDTH 8
1462 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1465 Requests are `small' if both the corresponding and the next bin are small
1468 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1473 To help compensate for the large number of bins, a one-level index
1474 structure is used for bin-by-bin searching. `binblocks' is a
1475 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1476 have any (possibly) non-empty bins, so they can be skipped over
1477 all at once during traversals. The bits are NOT always
1478 cleared as soon as all bins in a block are empty, but instead only
1479 when all are noticed to be empty during traversal in malloc.
1482 #define BINBLOCKWIDTH 4 /* bins per block */
1484 /* bin<->block macros */
1486 #define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1487 #define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1488 #define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
1493 /* Static bookkeeping data */
1495 /* Helper macro to initialize bins */
1496 #define IAV(i) _bin_at(&main_arena, i), _bin_at(&main_arena, i)
1498 static arena main_arena = {
1500 0, 0,
1501 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1502 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1503 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1504 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1505 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1506 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1507 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1508 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1509 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1510 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1511 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1512 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1513 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1514 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1515 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1516 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1518 &main_arena, /* next */
1519 0, /* size */
1520 #if THREAD_STATS
1521 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
1522 #endif
1523 MUTEX_INITIALIZER /* mutex */
1526 #undef IAV
1528 /* Thread specific data */
1530 static tsd_key_t arena_key;
1531 static mutex_t list_lock = MUTEX_INITIALIZER;
1533 #if THREAD_STATS
1534 static int stat_n_heaps = 0;
1535 #define THREAD_STAT(x) x
1536 #else
1537 #define THREAD_STAT(x) do ; while(0)
1538 #endif
1540 /* variables holding tunable values */
1542 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1543 static unsigned long top_pad = DEFAULT_TOP_PAD;
1544 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1545 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1546 static int check_action = DEFAULT_CHECK_ACTION;
1548 /* The first value returned from sbrk */
1549 static char* sbrk_base = (char*)(-1);
1551 /* The maximum memory obtained from system via sbrk */
1552 static unsigned long max_sbrked_mem = 0;
1554 /* The maximum via either sbrk or mmap (too difficult to track with threads) */
1555 #ifdef NO_THREADS
1556 static unsigned long max_total_mem = 0;
1557 #endif
1559 /* The total memory obtained from system via sbrk */
1560 #define sbrked_mem (main_arena.size)
1562 /* Tracking mmaps */
1564 static unsigned int n_mmaps = 0;
1565 static unsigned int max_n_mmaps = 0;
1566 static unsigned long mmapped_mem = 0;
1567 static unsigned long max_mmapped_mem = 0;
1569 /* Mapped memory in non-main arenas (reliable only for NO_THREADS). */
1570 static unsigned long arena_mem = 0;
1574 #ifndef _LIBC
1575 #define weak_variable
1576 #else
1577 /* In GNU libc we want the hook variables to be weak definitions to
1578 avoid a problem with Emacs. */
1579 #define weak_variable weak_function
1580 #endif
1582 /* Already initialized? */
1583 int __malloc_initialized = -1;
1586 #ifndef NO_THREADS
1588 /* The following two functions are registered via thread_atfork() to
1589 make sure that the mutexes remain in a consistent state in the
1590 fork()ed version of a thread. Also adapt the malloc and free hooks
1591 temporarily, because the `atfork' handler mechanism may use
1592 malloc/free internally (e.g. in LinuxThreads). */
1594 #if defined _LIBC || defined MALLOC_HOOKS
1595 static __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size,
1596 const __malloc_ptr_t));
1597 static void (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1598 const __malloc_ptr_t));
1599 static Void_t* save_arena;
1600 #endif
1602 static void
1603 ptmalloc_lock_all __MALLOC_P((void))
1605 arena *ar_ptr;
1607 (void)mutex_lock(&list_lock);
1608 for(ar_ptr = &main_arena;;) {
1609 (void)mutex_lock(&ar_ptr->mutex);
1610 ar_ptr = ar_ptr->next;
1611 if(ar_ptr == &main_arena) break;
1613 #if defined _LIBC || defined MALLOC_HOOKS
1614 save_malloc_hook = __malloc_hook;
1615 save_free_hook = __free_hook;
1616 __malloc_hook = malloc_atfork;
1617 __free_hook = free_atfork;
1618 /* Only the current thread may perform malloc/free calls now. */
1619 tsd_getspecific(arena_key, save_arena);
1620 tsd_setspecific(arena_key, (Void_t*)0);
1621 #endif
1624 static void
1625 ptmalloc_unlock_all __MALLOC_P((void))
1627 arena *ar_ptr;
1629 #if defined _LIBC || defined MALLOC_HOOKS
1630 tsd_setspecific(arena_key, save_arena);
1631 __malloc_hook = save_malloc_hook;
1632 __free_hook = save_free_hook;
1633 #endif
1634 for(ar_ptr = &main_arena;;) {
1635 (void)mutex_unlock(&ar_ptr->mutex);
1636 ar_ptr = ar_ptr->next;
1637 if(ar_ptr == &main_arena) break;
1639 (void)mutex_unlock(&list_lock);
1642 static void
1643 ptmalloc_init_all __MALLOC_P((void))
1645 arena *ar_ptr;
1647 #if defined _LIBC || defined MALLOC_HOOKS
1648 tsd_setspecific(arena_key, save_arena);
1649 __malloc_hook = save_malloc_hook;
1650 __free_hook = save_free_hook;
1651 #endif
1652 for(ar_ptr = &main_arena;;) {
1653 (void)mutex_init(&ar_ptr->mutex);
1654 ar_ptr = ar_ptr->next;
1655 if(ar_ptr == &main_arena) break;
1657 (void)mutex_init(&list_lock);
1660 #endif /* !defined NO_THREADS */
1662 /* Initialization routine. */
1663 #if defined(_LIBC)
1664 #if 0
1665 static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
1666 #endif
1668 static void
1669 ptmalloc_init __MALLOC_P((void))
1670 #else
1671 void
1672 ptmalloc_init __MALLOC_P((void))
1673 #endif
1675 #if defined _LIBC || defined MALLOC_HOOKS
1676 # if __STD_C
1677 const char* s;
1678 # else
1679 char* s;
1680 # endif
1681 #endif
1683 if(__malloc_initialized >= 0) return;
1684 __malloc_initialized = 0;
1685 #ifdef _LIBC
1686 __libc_pagesize = __getpagesize();
1687 #endif
1688 #ifndef NO_THREADS
1689 #if defined _LIBC || defined MALLOC_HOOKS
1690 /* With some threads implementations, creating thread-specific data
1691 or initializing a mutex may call malloc() itself. Provide a
1692 simple starter version (realloc() won't work). */
1693 save_malloc_hook = __malloc_hook;
1694 save_free_hook = __free_hook;
1695 __malloc_hook = malloc_starter;
1696 __free_hook = free_starter;
1697 #endif
1698 #ifdef _LIBC
1699 /* Initialize the pthreads interface. */
1700 if (__pthread_initialize != NULL)
1701 __pthread_initialize();
1702 #endif
1703 #endif /* !defined NO_THREADS */
1704 mutex_init(&main_arena.mutex);
1705 mutex_init(&list_lock);
1706 tsd_key_create(&arena_key, NULL);
1707 tsd_setspecific(arena_key, (Void_t *)&main_arena);
1708 thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_init_all);
1709 #if defined _LIBC || defined MALLOC_HOOKS
1710 if((s = __secure_getenv("MALLOC_TRIM_THRESHOLD_")))
1711 mALLOPt(M_TRIM_THRESHOLD, atoi(s));
1712 if((s = __secure_getenv("MALLOC_TOP_PAD_")))
1713 mALLOPt(M_TOP_PAD, atoi(s));
1714 if((s = __secure_getenv("MALLOC_MMAP_THRESHOLD_")))
1715 mALLOPt(M_MMAP_THRESHOLD, atoi(s));
1716 if((s = __secure_getenv("MALLOC_MMAP_MAX_")))
1717 mALLOPt(M_MMAP_MAX, atoi(s));
1718 s = getenv("MALLOC_CHECK_");
1719 #ifndef NO_THREADS
1720 __malloc_hook = save_malloc_hook;
1721 __free_hook = save_free_hook;
1722 #endif
1723 if(s && (! __libc_enable_secure || access ("/etc/suid-debug", F_OK) == 0)) {
1724 if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
1725 __malloc_check_init();
1727 if(__malloc_initialize_hook != NULL)
1728 (*__malloc_initialize_hook)();
1729 #endif
1730 __malloc_initialized = 1;
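/* [Usage note, not part of the original source.]  The environment variables
   read above allow tuning the allocator without recompilation, e.g.:
     MALLOC_TRIM_THRESHOLD_=131072 MALLOC_MMAP_MAX_=0 MALLOC_CHECK_=1 ./prog
   For secure (set-uid) programs, MALLOC_CHECK_ is only honoured when
   /etc/suid-debug exists, as tested above. */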
1733 /* There are platforms (e.g. Hurd) with a link-time hook mechanism. */
1734 #ifdef thread_atfork_static
1735 thread_atfork_static(ptmalloc_lock_all, ptmalloc_unlock_all, \
1736 ptmalloc_init_all)
1737 #endif
1739 #if defined _LIBC || defined MALLOC_HOOKS
1741 /* Hooks for debugging versions. The initial hooks just call the
1742 initialization routine, then do the normal work. */
1744 static Void_t*
1745 #if __STD_C
1746 malloc_hook_ini(size_t sz, const __malloc_ptr_t caller)
1747 #else
1748 malloc_hook_ini(sz, caller)
1749 size_t sz; const __malloc_ptr_t caller;
1750 #endif
1752 __malloc_hook = NULL;
1753 ptmalloc_init();
1754 return mALLOc(sz);
1757 static Void_t*
1758 #if __STD_C
1759 realloc_hook_ini(Void_t* ptr, size_t sz, const __malloc_ptr_t caller)
1760 #else
1761 realloc_hook_ini(ptr, sz, caller)
1762 Void_t* ptr; size_t sz; const __malloc_ptr_t caller;
1763 #endif
1765 __malloc_hook = NULL;
1766 __realloc_hook = NULL;
1767 ptmalloc_init();
1768 return rEALLOc(ptr, sz);
1771 static Void_t*
1772 #if __STD_C
1773 memalign_hook_ini(size_t sz, size_t alignment, const __malloc_ptr_t caller)
1774 #else
1775 memalign_hook_ini(sz, alignment, caller)
1776 size_t sz; size_t alignment; const __malloc_ptr_t caller;
1777 #endif
1779 __memalign_hook = NULL;
1780 ptmalloc_init();
1781 return mEMALIGn(sz, alignment);
1784 void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
1785 void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1786 const __malloc_ptr_t)) = NULL;
1787 __malloc_ptr_t weak_variable (*__malloc_hook)
1788 __MALLOC_P ((size_t __size, const __malloc_ptr_t)) = malloc_hook_ini;
1789 __malloc_ptr_t weak_variable (*__realloc_hook)
1790 __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size, const __malloc_ptr_t))
1791 = realloc_hook_ini;
1792 __malloc_ptr_t weak_variable (*__memalign_hook)
1793 __MALLOC_P ((size_t __size, size_t __alignment, const __malloc_ptr_t))
1794 = memalign_hook_ini;
1795 void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
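/* [Illustrative sketch, not part of the original source.]  An application can
   interpose on the allocator through the hook variables defined above; the
   usual pattern is to save the current hook, install a wrapper, and restore
   the saved value around the nested malloc() call so the wrapper does not
   recurse into itself: */
#if 0
#include <stdio.h>

static __malloc_ptr_t (*old_malloc_hook) __MALLOC_P ((size_t, const __malloc_ptr_t));

static __malloc_ptr_t
my_malloc_hook (size_t size, const __malloc_ptr_t caller)
{
  __malloc_ptr_t result;
  __malloc_hook = old_malloc_hook;       /* let the real allocator run */
  result = malloc (size);
  fprintf (stderr, "malloc(%u) returns %p\n", (unsigned int) size, result);
  old_malloc_hook = __malloc_hook;       /* it may have been changed meanwhile */
  __malloc_hook = my_malloc_hook;
  return result;
}

static void
install_hook (void)
{
  old_malloc_hook = __malloc_hook;
  __malloc_hook = my_malloc_hook;
}
#endif /* 0 */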
1797 /* Whether we are using malloc checking. */
1798 static int using_malloc_checking;
1800 /* A flag that is set by malloc_set_state, to signal that malloc checking
1801 must not be enabled on the request from the user (via the MALLOC_CHECK_
1802 environment variable). It is reset by __malloc_check_init to tell
1803 malloc_set_state that the user has requested malloc checking.
1805 The purpose of this flag is to make sure that malloc checking is not
1806 enabled when the heap to be restored was constructed without malloc
1807 checking, and thus does not contain the required magic bytes.
1808 Otherwise the heap would be corrupted by calls to free and realloc. If
1809 it turns out that the heap was created with malloc checking and the
1810 user has requested it, malloc_set_state just calls __malloc_check_init
1811 again to enable it. On the other hand, reusing such a heap without
1812 further malloc checking is safe. */
1813 static int disallow_malloc_check;
1815 /* Activate a standard set of debugging hooks. */
1816 void
1817 __malloc_check_init()
1819 if (disallow_malloc_check) {
1820 disallow_malloc_check = 0;
1821 return;
1823 using_malloc_checking = 1;
1824 __malloc_hook = malloc_check;
1825 __free_hook = free_check;
1826 __realloc_hook = realloc_check;
1827 __memalign_hook = memalign_check;
1828 if(check_action & 1)
1829 fprintf(stderr, "malloc: using debugging hooks\n");
1832 #endif
1838 /* Routines dealing with mmap(). */
1840 #if HAVE_MMAP
1842 #ifndef MAP_ANONYMOUS
1844 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1846 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1847 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1848 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1849 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1851 #else
1853 #define MMAP(addr, size, prot, flags) \
1854 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1856 #endif
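/* [Illustrative sketch, not part of the original source; the helper name
   anon_mmap is invented.]  The MMAP macro above hides the classic fallback for
   systems without MAP_ANONYMOUS: map /dev/zero to obtain zero-filled private
   pages, caching the file descriptor after the first use.  As a standalone
   function the same idea reads: */
#if 0
#include <sys/mman.h>
#include <fcntl.h>

static void *
anon_mmap (size_t size)
{
#ifdef MAP_ANONYMOUS
  return mmap (0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
#else
  static int dev_zero_fd = -1;           /* cached across calls, as above */
  if (dev_zero_fd < 0)
    dev_zero_fd = open ("/dev/zero", O_RDWR);
  return mmap (0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, dev_zero_fd, 0);
#endif
}
#endif /* 0 */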
1858 #if defined __GNUC__ && __GNUC__ >= 2
1859 /* This function is called from only one place, so inline it. */
1860 __inline__
1861 #endif
1862 static mchunkptr
1863 internal_function
1864 #if __STD_C
1865 mmap_chunk(size_t size)
1866 #else
1867 mmap_chunk(size) size_t size;
1868 #endif
1870 size_t page_mask = malloc_getpagesize - 1;
1871 mchunkptr p;
1873 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1874 * there is no following chunk whose prev_size field could be used.
1876 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1878 p = (mchunkptr)MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE);
1879 if(p == (mchunkptr) MAP_FAILED) return 0;
1881 n_mmaps++;
1882 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1884 /* We demand that the address eight bytes into a page be 8-byte aligned. */
1885 assert(aligned_OK(chunk2mem(p)));
1887 /* The offset to the start of the mmapped region is stored
1888 * in the prev_size field of the chunk; normally it is zero,
1889 * but that can be changed in memalign().
1891 p->prev_size = 0;
1892 set_head(p, size|IS_MMAPPED);
1894 mmapped_mem += size;
1895 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1896 max_mmapped_mem = mmapped_mem;
1897 #ifdef NO_THREADS
1898 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
1899 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
1900 #endif
1901 return p;
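/* [Worked example, not part of the original source; a 4096-byte page and
   SIZE_SZ == 4 are assumed.]  The rounding above turns the request plus one
   SIZE_SZ of extra overhead into a whole number of pages:
     size = 10000  ->  (10000 + 4 + 4095) & ~4095  ==  12288   (3 pages)
     size =  4093  ->  ( 4093 + 4 + 4095) & ~4095  ==   8192   (2 pages)
     size =  4092  ->  ( 4092 + 4 + 4095) & ~4095  ==   4096   (1 page)
   because page_mask == pagesize - 1 and pagesize is a power of two. */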
1904 static void
1905 internal_function
1906 #if __STD_C
1907 munmap_chunk(mchunkptr p)
1908 #else
1909 munmap_chunk(p) mchunkptr p;
1910 #endif
1912 INTERNAL_SIZE_T size = chunksize(p);
1913 int ret;
1915 assert (chunk_is_mmapped(p));
1916 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1917 assert((n_mmaps > 0));
1918 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1920 n_mmaps--;
1921 mmapped_mem -= (size + p->prev_size);
1923 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1925 /* munmap returns non-zero on failure */
1926 assert(ret == 0);
1929 #if HAVE_MREMAP
1931 static mchunkptr
1932 internal_function
1933 #if __STD_C
1934 mremap_chunk(mchunkptr p, size_t new_size)
1935 #else
1936 mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1937 #endif
1939 size_t page_mask = malloc_getpagesize - 1;
1940 INTERNAL_SIZE_T offset = p->prev_size;
1941 INTERNAL_SIZE_T size = chunksize(p);
1942 char *cp;
1944 assert (chunk_is_mmapped(p));
1945 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1946 assert((n_mmaps > 0));
1947 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1949 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1950 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1952 cp = (char *)mremap((char *)p - offset, size + offset, new_size,
1953 MREMAP_MAYMOVE);
1955 if (cp == MAP_FAILED) return 0;
1957 p = (mchunkptr)(cp + offset);
1959 assert(aligned_OK(chunk2mem(p)));
1961 assert((p->prev_size == offset));
1962 set_head(p, (new_size - offset)|IS_MMAPPED);
1964 mmapped_mem -= size + offset;
1965 mmapped_mem += new_size;
1966 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1967 max_mmapped_mem = mmapped_mem;
1968 #ifdef NO_THREADS
1969 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
1970 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
1971 #endif
1972 return p;
1975 #endif /* HAVE_MREMAP */
1977 #endif /* HAVE_MMAP */
1981 /* Managing heaps and arenas (for concurrent threads) */
1983 #if USE_ARENAS
1985 /* Create a new heap. size is automatically rounded up to a multiple
1986 of the page size. */
1988 static heap_info *
1989 internal_function
1990 #if __STD_C
1991 new_heap(size_t size)
1992 #else
1993 new_heap(size) size_t size;
1994 #endif
1996 size_t page_mask = malloc_getpagesize - 1;
1997 char *p1, *p2;
1998 unsigned long ul;
1999 heap_info *h;
2001 if(size+top_pad < HEAP_MIN_SIZE)
2002 size = HEAP_MIN_SIZE;
2003 else if(size+top_pad <= HEAP_MAX_SIZE)
2004 size += top_pad;
2005 else if(size > HEAP_MAX_SIZE)
2006 return 0;
2007 else
2008 size = HEAP_MAX_SIZE;
2009 size = (size + page_mask) & ~page_mask;
2011 /* A memory region aligned to a multiple of HEAP_MAX_SIZE is needed.
2012 No swap space needs to be reserved for the following large
2013 mapping (on Linux, this is the case for all non-writable mappings
2014 anyway). */
2015 p1 = (char *)MMAP(0, HEAP_MAX_SIZE<<1, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE);
2016 if(p1 == MAP_FAILED)
2017 return 0;
2018 p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
2019 ul = p2 - p1;
2020 munmap(p1, ul);
2021 munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
2022 if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
2023 munmap(p2, HEAP_MAX_SIZE);
2024 return 0;
2026 h = (heap_info *)p2;
2027 h->size = size;
2028 THREAD_STAT(stat_n_heaps++);
2029 return h;
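/* [Illustrative sketch, not part of the original source; the helper name
   aligned_reserve is invented and MAP_ANONYMOUS is assumed available.]
   new_heap() obtains a HEAP_MAX_SIZE-aligned region by over-reserving twice
   that much address space with PROT_NONE, unmapping the unaligned edges, and
   committing (mprotect'ing) only the prefix it needs.  The same trick in
   isolation: */
#if 0
#include <sys/mman.h>

static void *
aligned_reserve (size_t align, size_t commit)  /* align: power of two, commit <= align */
{
  char *p1, *p2;
  unsigned long lead;

  p1 = (char *) mmap (0, align << 1, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
  if (p1 == (char *) MAP_FAILED)
    return 0;
  p2 = (char *) (((unsigned long) p1 + align) & ~(align - 1));
  lead = p2 - p1;
  if (lead > 0)
    munmap (p1, lead);                    /* drop the unaligned lead-in ...      */
  if (align - lead > 0)
    munmap (p2 + align, align - lead);    /* ... and the surplus past the window */
  if (mprotect (p2, commit, PROT_READ|PROT_WRITE) != 0) {
    munmap (p2, align);
    return 0;
  }
  return p2;               /* align-aligned; the first `commit' bytes are usable */
}
#endif /* 0 */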
2032 /* Grow or shrink a heap. size is automatically rounded up to a
2033 multiple of the page size if it is positive. */
2035 static int
2036 #if __STD_C
2037 grow_heap(heap_info *h, long diff)
2038 #else
2039 grow_heap(h, diff) heap_info *h; long diff;
2040 #endif
2042 size_t page_mask = malloc_getpagesize - 1;
2043 long new_size;
2045 if(diff >= 0) {
2046 diff = (diff + page_mask) & ~page_mask;
2047 new_size = (long)h->size + diff;
2048 if(new_size > HEAP_MAX_SIZE)
2049 return -1;
2050 if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
2051 return -2;
2052 } else {
2053 new_size = (long)h->size + diff;
2054 if(new_size < (long)sizeof(*h))
2055 return -1;
2056 /* Try to re-map the extra heap space freshly to save memory, and
2057 make it inaccessible. */
2058 if((char *)MMAP((char *)h + new_size, -diff, PROT_NONE,
2059 MAP_PRIVATE|MAP_FIXED) == (char *) MAP_FAILED)
2060 return -2;
2062 h->size = new_size;
2063 return 0;
2066 /* Delete a heap. */
2068 #define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
2070 /* arena_get() acquires an arena and locks the corresponding mutex.
2071 First, try the one last locked successfully by this thread. (This
2072 is the common case and handled with a macro for speed.) Then, loop
2073 once over the circularly linked list of arenas. If no arena is
2074 readily available, create a new one. In this latter case, `size'
2075 is just a hint as to how much memory will be required immediately
2076 in the new arena. */
2078 #define arena_get(ptr, size) do { \
2079 Void_t *vptr = NULL; \
2080 ptr = (arena *)tsd_getspecific(arena_key, vptr); \
2081 if(ptr && !mutex_trylock(&ptr->mutex)) { \
2082 THREAD_STAT(++(ptr->stat_lock_direct)); \
2083 } else \
2084 ptr = arena_get2(ptr, (size)); \
2085 } while(0)
2087 static arena *
2088 internal_function
2089 #if __STD_C
2090 arena_get2(arena *a_tsd, size_t size)
2091 #else
2092 arena_get2(a_tsd, size) arena *a_tsd; size_t size;
2093 #endif
2095 arena *a;
2096 heap_info *h;
2097 char *ptr;
2098 int i;
2099 unsigned long misalign;
2101 if(!a_tsd)
2102 a = a_tsd = &main_arena;
2103 else {
2104 a = a_tsd->next;
2105 if(!a) {
2106 /* This can only happen while initializing the new arena. */
2107 (void)mutex_lock(&main_arena.mutex);
2108 THREAD_STAT(++(main_arena.stat_lock_wait));
2109 return &main_arena;
2113 /* Check the global, circularly linked list for available arenas. */
2114 repeat:
2115 do {
2116 if(!mutex_trylock(&a->mutex)) {
2117 THREAD_STAT(++(a->stat_lock_loop));
2118 tsd_setspecific(arena_key, (Void_t *)a);
2119 return a;
2121 a = a->next;
2122 } while(a != a_tsd);
2124 /* If not even the list_lock can be obtained, try again. This can
2125 happen during `atfork', or for example on systems where thread
2126 creation makes it temporarily impossible to obtain _any_
2127 locks. */
2128 if(mutex_trylock(&list_lock)) {
2129 a = a_tsd;
2130 goto repeat;
2132 (void)mutex_unlock(&list_lock);
2134 /* Nothing immediately available, so generate a new arena. */
2135 h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
2136 if(!h) {
2137 /* Maybe size is too large to fit in a single heap. So, just try
2138 to create a minimally-sized arena and let chunk_alloc() attempt
2139 to deal with the large request via mmap_chunk(). */
2140 h = new_heap(sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT);
2141 if(!h)
2142 return 0;
2144 a = h->ar_ptr = (arena *)(h+1);
2145 for(i=0; i<NAV; i++)
2146 init_bin(a, i);
2147 a->next = NULL;
2148 a->size = h->size;
2149 arena_mem += h->size;
2150 #ifdef NO_THREADS
2151 if((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2152 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2153 #endif
2154 tsd_setspecific(arena_key, (Void_t *)a);
2155 mutex_init(&a->mutex);
2156 i = mutex_lock(&a->mutex); /* remember result */
2158 /* Set up the top chunk, with proper alignment. */
2159 ptr = (char *)(a + 1);
2160 misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
2161 if (misalign > 0)
2162 ptr += MALLOC_ALIGNMENT - misalign;
2163 top(a) = (mchunkptr)ptr;
2164 set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
2166 /* Add the new arena to the list. */
2167 (void)mutex_lock(&list_lock);
2168 a->next = main_arena.next;
2169 main_arena.next = a;
2170 (void)mutex_unlock(&list_lock);
2172 if(i) /* locking failed; keep arena for further attempts later */
2173 return 0;
2175 THREAD_STAT(++(a->stat_lock_loop));
2176 return a;
2179 /* find the heap and corresponding arena for a given ptr */
2181 #define heap_for_ptr(ptr) \
2182 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
2183 #define arena_for_ptr(ptr) \
2184 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
2185 &main_arena : heap_for_ptr(ptr)->ar_ptr)
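/* [Worked example, not part of the original source; a HEAP_MAX_SIZE of 1 MB
   (0x100000) and an invented address are assumed.]  heap_for_ptr() works
   because new_heap() above guarantees that every non-main heap starts on a
   HEAP_MAX_SIZE boundary, so masking off the low bits of any pointer into the
   heap lands on its heap_info header:
     ptr                      = 0x40235a8
     ptr & ~(HEAP_MAX_SIZE-1) = 0x4000000   -> heap_info of the containing heap
   Pointers into the main arena never take this path, as the range test in
   arena_for_ptr() shows. */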
2187 #else /* !USE_ARENAS */
2189 /* There is only one arena, main_arena. */
2191 #define arena_get(ptr, sz) (ptr = &main_arena)
2192 #define arena_for_ptr(ptr) (&main_arena)
2194 #endif /* USE_ARENAS */
2199 Debugging support
2202 #if MALLOC_DEBUG
2206 These routines make a number of assertions about the states
2207 of data structures that should be true at all times. If any
2208 are not true, it's very likely that a user program has somehow
2209 trashed memory. (It's also possible that there is a coding error
2210 in malloc, in which case please report it!)
2213 #if __STD_C
2214 static void do_check_chunk(arena *ar_ptr, mchunkptr p)
2215 #else
2216 static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2217 #endif
2219 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2221 /* No checkable chunk is mmapped */
2222 assert(!chunk_is_mmapped(p));
2224 #if USE_ARENAS
2225 if(ar_ptr != &main_arena) {
2226 heap_info *heap = heap_for_ptr(p);
2227 assert(heap->ar_ptr == ar_ptr);
2228 if(p != top(ar_ptr))
2229 assert((char *)p + sz <= (char *)heap + heap->size);
2230 else
2231 assert((char *)p + sz == (char *)heap + heap->size);
2232 return;
2234 #endif
2236 /* Check for legal address ... */
2237 assert((char*)p >= sbrk_base);
2238 if (p != top(ar_ptr))
2239 assert((char*)p + sz <= (char*)top(ar_ptr));
2240 else
2241 assert((char*)p + sz <= sbrk_base + sbrked_mem);
2246 #if __STD_C
2247 static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
2248 #else
2249 static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2250 #endif
2252 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2253 mchunkptr next = chunk_at_offset(p, sz);
2255 do_check_chunk(ar_ptr, p);
2257 /* Check whether it claims to be free ... */
2258 assert(!inuse(p));
2260 /* Must have OK size and fields */
2261 assert((long)sz >= (long)MINSIZE);
2262 assert((sz & MALLOC_ALIGN_MASK) == 0);
2263 assert(aligned_OK(chunk2mem(p)));
2264 /* ... matching footer field */
2265 assert(next->prev_size == sz);
2266 /* ... and is fully consolidated */
2267 assert(prev_inuse(p));
2268 assert (next == top(ar_ptr) || inuse(next));
2270 /* ... and has minimally sane links */
2271 assert(p->fd->bk == p);
2272 assert(p->bk->fd == p);
2275 #if __STD_C
2276 static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
2277 #else
2278 static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2279 #endif
2281 mchunkptr next = next_chunk(p);
2282 do_check_chunk(ar_ptr, p);
2284 /* Check whether it claims to be in use ... */
2285 assert(inuse(p));
2287 /* ... whether its size is OK (it might be a fencepost) ... */
2288 assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
2290 /* ... and is surrounded by OK chunks.
2291 Since more things can be checked with free chunks than inuse ones,
2292 if an inuse chunk borders them and debug is on, it's worth checking them.
2294 if (!prev_inuse(p))
2296 mchunkptr prv = prev_chunk(p);
2297 assert(next_chunk(prv) == p);
2298 do_check_free_chunk(ar_ptr, prv);
2300 if (next == top(ar_ptr))
2302 assert(prev_inuse(next));
2303 assert(chunksize(next) >= MINSIZE);
2305 else if (!inuse(next))
2306 do_check_free_chunk(ar_ptr, next);
2310 #if __STD_C
2311 static void do_check_malloced_chunk(arena *ar_ptr,
2312 mchunkptr p, INTERNAL_SIZE_T s)
2313 #else
2314 static void do_check_malloced_chunk(ar_ptr, p, s)
2315 arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
2316 #endif
2318 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2319 long room = sz - s;
2321 do_check_inuse_chunk(ar_ptr, p);
2323 /* Legal size ... */
2324 assert((long)sz >= (long)MINSIZE);
2325 assert((sz & MALLOC_ALIGN_MASK) == 0);
2326 assert(room >= 0);
2327 assert(room < (long)MINSIZE);
2329 /* ... and alignment */
2330 assert(aligned_OK(chunk2mem(p)));
2333 /* ... and was allocated at front of an available chunk */
2334 assert(prev_inuse(p));
2339 #define check_free_chunk(A,P) do_check_free_chunk(A,P)
2340 #define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
2341 #define check_chunk(A,P) do_check_chunk(A,P)
2342 #define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
2343 #else
2344 #define check_free_chunk(A,P)
2345 #define check_inuse_chunk(A,P)
2346 #define check_chunk(A,P)
2347 #define check_malloced_chunk(A,P,N)
2348 #endif
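/* [Usage note, not part of the original source.]  Compiling this file with
   MALLOC_DEBUG defined non-zero (e.g. -DMALLOC_DEBUG=1) makes the check_*
   macros above expand to their do_check_* counterparts, so the allocation and
   free paths assert the chunk invariants; otherwise the macros expand to
   nothing and add no run-time cost. */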
2353 Macro-based internal utilities
2358 Linking chunks in bin lists.
2359 Call these only with variables, not arbitrary expressions, as arguments.
2363 Place chunk p of size s in its bin, in size order,
2364 putting it ahead of others of the same size.
2368 #define frontlink(A, P, S, IDX, BK, FD) \
2370 if (S < MAX_SMALLBIN_SIZE) \
2372 IDX = smallbin_index(S); \
2373 mark_binblock(A, IDX); \
2374 BK = bin_at(A, IDX); \
2375 FD = BK->fd; \
2376 P->bk = BK; \
2377 P->fd = FD; \
2378 FD->bk = BK->fd = P; \
2380 else \
2382 IDX = bin_index(S); \
2383 BK = bin_at(A, IDX); \
2384 FD = BK->fd; \
2385 if (FD == BK) mark_binblock(A, IDX); \
2386 else \
2388 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2389 BK = FD->bk; \
2391 P->bk = BK; \
2392 P->fd = FD; \
2393 FD->bk = BK->fd = P; \
2398 /* take a chunk off a list */
2400 #define unlink(P, BK, FD) \
2402 BK = P->bk; \
2403 FD = P->fd; \
2404 FD->bk = BK; \
2405 BK->fd = FD; \
2408 /* Place p as the last remainder */
2410 #define link_last_remainder(A, P) \
2412 last_remainder(A)->fd = last_remainder(A)->bk = P; \
2413 P->fd = P->bk = last_remainder(A); \
2416 /* Clear the last_remainder bin */
2418 #define clear_last_remainder(A) \
2419 (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
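/* [Illustrative sketch, not part of the original source; the toy node type is
   invented.]  The bins manipulated by frontlink()/unlink() above are circular
   doubly-linked lists threaded through the fd/bk fields of free chunks.
   Restated on a plain struct, unlink() is the textbook removal: */
#if 0
struct node { struct node *fd, *bk; };   /* forward / backward links */

static void
toy_unlink (struct node *p)
{
  struct node *bk = p->bk;
  struct node *fd = p->fd;
  fd->bk = bk;               /* successor now skips p going backwards */
  bk->fd = fd;               /* predecessor now skips p going forwards */
}
#endif /* 0 */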
2426 Extend the top-most chunk by obtaining memory from system.
2427 Main interface to sbrk (but see also malloc_trim).
2430 #if defined __GNUC__ && __GNUC__ >= 2
2431 /* This function is called from only one place, so inline it. */
2432 __inline__
2433 #endif
2434 static void
2435 internal_function
2436 #if __STD_C
2437 malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
2438 #else
2439 malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2440 #endif
2442 unsigned long pagesz = malloc_getpagesize;
2443 mchunkptr old_top = top(ar_ptr); /* Record state of old top */
2444 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2445 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2447 #if USE_ARENAS
2448 if(ar_ptr == &main_arena) {
2449 #endif
2451 char* brk; /* return value from sbrk */
2452 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2453 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2454 char* new_brk; /* return of 2nd sbrk call */
2455 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2457 /* Pad request with top_pad plus minimal overhead */
2458 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2460 /* If not the first time through, round to preserve page boundary */
2461 /* Otherwise, we need to correct to a page boundary below anyway. */
2462 /* (We also correct below in case of an intervening foreign sbrk call.) */
2464 if (sbrk_base != (char*)(-1))
2465 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2467 brk = (char*)(MORECORE (sbrk_size));
2469 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2470 if (brk == (char*)(MORECORE_FAILURE) ||
2471 (brk < old_end && old_top != initial_top(&main_arena)))
2472 return;
2474 #if defined _LIBC || defined MALLOC_HOOKS
2475 /* Call the `morecore' hook if necessary. */
2476 if (__after_morecore_hook)
2477 (*__after_morecore_hook) ();
2478 #endif
2480 sbrked_mem += sbrk_size;
2482 if (brk == old_end) { /* can just add bytes to current top */
2483 top_size = sbrk_size + old_top_size;
2484 set_head(old_top, top_size | PREV_INUSE);
2485 old_top = 0; /* don't free below */
2486 } else {
2487 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2488 sbrk_base = brk;
2489 else
2490 /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2491 sbrked_mem += brk - (char*)old_end;
2493 /* Guarantee alignment of first new chunk made from this space */
2494 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2495 if (front_misalign > 0) {
2496 correction = (MALLOC_ALIGNMENT) - front_misalign;
2497 brk += correction;
2498 } else
2499 correction = 0;
2501 /* Guarantee the next brk will be at a page boundary */
2502 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
2504 /* Allocate correction */
2505 new_brk = (char*)(MORECORE (correction));
2506 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2508 #if defined _LIBC || defined MALLOC_HOOKS
2509 /* Call the `morecore' hook if necessary. */
2510 if (__after_morecore_hook)
2511 (*__after_morecore_hook) ();
2512 #endif
2514 sbrked_mem += correction;
2516 top(&main_arena) = (mchunkptr)brk;
2517 top_size = new_brk - brk + correction;
2518 set_head(top(&main_arena), top_size | PREV_INUSE);
2520 if (old_top == initial_top(&main_arena))
2521 old_top = 0; /* don't free below */
2524 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2525 max_sbrked_mem = sbrked_mem;
2526 #ifdef NO_THREADS
2527 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2528 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2529 #endif
2531 #if USE_ARENAS
2532 } else { /* ar_ptr != &main_arena */
2533 heap_info *old_heap, *heap;
2534 size_t old_heap_size;
2536 if(old_top_size < MINSIZE) /* this should never happen */
2537 return;
2539 /* First try to extend the current heap. */
2540 if(MINSIZE + nb <= old_top_size)
2541 return;
2542 old_heap = heap_for_ptr(old_top);
2543 old_heap_size = old_heap->size;
2544 if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
2545 ar_ptr->size += old_heap->size - old_heap_size;
2546 arena_mem += old_heap->size - old_heap_size;
2547 #ifdef NO_THREADS
2548 if(mmapped_mem + arena_mem + sbrked_mem > max_total_mem)
2549 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2550 #endif
2551 top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
2552 set_head(old_top, top_size | PREV_INUSE);
2553 return;
2556 /* A new heap must be created. */
2557 heap = new_heap(nb + (MINSIZE + sizeof(*heap)));
2558 if(!heap)
2559 return;
2560 heap->ar_ptr = ar_ptr;
2561 heap->prev = old_heap;
2562 ar_ptr->size += heap->size;
2563 arena_mem += heap->size;
2564 #ifdef NO_THREADS
2565 if((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2566 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2567 #endif
2569 /* Set up the new top, so we can safely use chunk_free() below. */
2570 top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
2571 top_size = heap->size - sizeof(*heap);
2572 set_head(top(ar_ptr), top_size | PREV_INUSE);
2574 #endif /* USE_ARENAS */
2576 /* We always land on a page boundary */
2577 assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
2579 /* Setup fencepost and free the old top chunk. */
2580 if(old_top) {
2581 /* The fencepost takes at least MINSIZE bytes, because it might
2582 become the top chunk again later. Note that a footer is set
2583 up, too, although the chunk is marked in use. */
2584 old_top_size -= MINSIZE;
2585 set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
2586 if(old_top_size >= MINSIZE) {
2587 set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
2588 set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
2589 set_head_size(old_top, old_top_size);
2590 chunk_free(ar_ptr, old_top);
2591 } else {
2592 set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
2593 set_foot(old_top, (old_top_size + 2*SIZE_SZ));
2601 /* Main public routines */
2605 Malloc Algorithm:
2607 The requested size is first converted into a usable form, `nb'.
2608 This currently means to add 4 bytes overhead plus possibly more to
2609 obtain 8-byte alignment and/or to obtain a size of at least
2610 MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
2611 size. (All fits are considered `exact' if they are within MINSIZE
2612 bytes.)
2614 From there, the first of the following steps to succeed is taken:
2616 1. The bin corresponding to the request size is scanned, and if
2617 a chunk of exactly the right size is found, it is taken.
2619 2. The most recently remaindered chunk is used if it is big
2620 enough. This is a form of (roving) first fit, used only in
2621 the absence of exact fits. Runs of consecutive requests use
2622 the remainder of the chunk used for the previous such request
2623 whenever possible. This limited use of a first-fit style
2624 allocation strategy tends to give contiguous chunks
2625 coextensive lifetimes, which improves locality and can reduce
2626 fragmentation in the long run.
2628 3. Other bins are scanned in increasing size order, using a
2629 chunk big enough to fulfill the request, and splitting off
2630 any remainder. This search is strictly by best-fit; i.e.,
2631 the smallest (with ties going to approximately the least
2632 recently used) chunk that fits is selected.
2634 4. If large enough, the chunk bordering the end of memory
2635 (`top') is split off. (This use of `top' is in accord with
2636 the best-fit search rule. In effect, `top' is treated as
2637 larger (and thus less well fitting) than any other available
2638 chunk since it can be extended to be as large as necessary
2639 (up to system limitations).)
2641 5. If the request size meets the mmap threshold and the
2642 system supports mmap, and there are few enough currently
2643 allocated mmapped regions, and a call to mmap succeeds,
2644 the request is allocated via direct memory mapping.
2646 6. Otherwise, the top of memory is extended by
2647 obtaining more space from the system (normally using sbrk,
2648 but definable to anything else via the MORECORE macro).
2649 Memory is gathered from the system (in system page-sized
2650 units) in a way that allows chunks obtained across different
2651 sbrk calls to be consolidated, but does not require
2652 contiguous memory. Thus, it should be safe to intersperse
2653 mallocs with other sbrk calls.
2656 All allocations are made from the `lowest' part of any found
2657 chunk. (The implementation invariant is that prev_inuse is
2658 always true of any allocated chunk; i.e., that each allocated
2659 chunk borders either a previously allocated and still in-use chunk,
2660 or the base of its memory arena.)
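/* [Worked example, not part of the original source; it assumes the common
   32-bit layout with SIZE_SZ == 4, 8-byte alignment and MINSIZE == 16.]
   The conversion of a request into the internal size `nb' described at the
   top of this summary comes out as roughly
     nb = MAX(MINSIZE, (request + SIZE_SZ + 7) & ~7)
   so, for example:
     request  1 .. 12   ->  nb = 16   (MINSIZE)
     request 13 .. 20   ->  nb = 24
     request 21 .. 28   ->  nb = 32   */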
2664 #if __STD_C
2665 Void_t* mALLOc(size_t bytes)
2666 #else
2667 Void_t* mALLOc(bytes) size_t bytes;
2668 #endif
2670 arena *ar_ptr;
2671 INTERNAL_SIZE_T nb; /* padded request size */
2672 mchunkptr victim;
2674 #if defined _LIBC || defined MALLOC_HOOKS
2675 if (__malloc_hook != NULL) {
2676 Void_t* result;
2678 #if defined __GNUC__ && __GNUC__ >= 2
2679 result = (*__malloc_hook)(bytes, RETURN_ADDRESS (0));
2680 #else
2681 result = (*__malloc_hook)(bytes, NULL);
2682 #endif
2683 return result;
2685 #endif
2687 if(request2size(bytes, nb))
2688 return 0;
2689 arena_get(ar_ptr, nb);
2690 if(!ar_ptr)
2691 return 0;
2692 victim = chunk_alloc(ar_ptr, nb);
2693 if(!victim) {
2694 /* Maybe the failure is due to running out of mmapped areas. */
2695 if(ar_ptr != &main_arena) {
2696 (void)mutex_unlock(&ar_ptr->mutex);
2697 (void)mutex_lock(&main_arena.mutex);
2698 victim = chunk_alloc(&main_arena, nb);
2699 (void)mutex_unlock(&main_arena.mutex);
2700 } else {
2701 #if USE_ARENAS
2702 /* ... or sbrk() has failed and there is still a chance to mmap() */
2703 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
2704 (void)mutex_unlock(&main_arena.mutex);
2705 if(ar_ptr) {
2706 victim = chunk_alloc(ar_ptr, nb);
2707 (void)mutex_unlock(&ar_ptr->mutex);
2709 #endif
2711 if(!victim) return 0;
2712 } else
2713 (void)mutex_unlock(&ar_ptr->mutex);
2714 return BOUNDED_N(chunk2mem(victim), bytes);
2717 static mchunkptr
2718 internal_function
2719 #if __STD_C
2720 chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
2721 #else
2722 chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2723 #endif
2725 mchunkptr victim; /* inspected/selected chunk */
2726 INTERNAL_SIZE_T victim_size; /* its size */
2727 int idx; /* index for bin traversal */
2728 mbinptr bin; /* associated bin */
2729 mchunkptr remainder; /* remainder from a split */
2730 long remainder_size; /* its size */
2731 int remainder_index; /* its bin index */
2732 unsigned long block; /* block traverser bit */
2733 int startidx; /* first bin of a traversed block */
2734 mchunkptr fwd; /* misc temp for linking */
2735 mchunkptr bck; /* misc temp for linking */
2736 mbinptr q; /* misc temp */
2739 /* Check for exact match in a bin */
2741 if (is_small_request(nb)) /* Faster version for small requests */
2743 idx = smallbin_index(nb);
2745 /* No traversal or size check necessary for small bins. */
2747 q = _bin_at(ar_ptr, idx);
2748 victim = last(q);
2750 /* Also scan the next bin, since using a chunk from it would leave a remainder < MINSIZE */
2751 if (victim == q)
2753 q = next_bin(q);
2754 victim = last(q);
2756 if (victim != q)
2758 victim_size = chunksize(victim);
2759 unlink(victim, bck, fwd);
2760 set_inuse_bit_at_offset(victim, victim_size);
2761 check_malloced_chunk(ar_ptr, victim, nb);
2762 return victim;
2765 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2768 else
2770 idx = bin_index(nb);
2771 bin = bin_at(ar_ptr, idx);
2773 for (victim = last(bin); victim != bin; victim = victim->bk)
2775 victim_size = chunksize(victim);
2776 remainder_size = victim_size - nb;
2778 if (remainder_size >= (long)MINSIZE) /* too big */
2780 --idx; /* adjust to rescan below after checking last remainder */
2781 break;
2784 else if (remainder_size >= 0) /* exact fit */
2786 unlink(victim, bck, fwd);
2787 set_inuse_bit_at_offset(victim, victim_size);
2788 check_malloced_chunk(ar_ptr, victim, nb);
2789 return victim;
2793 ++idx;
2797 /* Try to use the last split-off remainder */
2799 if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
2801 victim_size = chunksize(victim);
2802 remainder_size = victim_size - nb;
2804 if (remainder_size >= (long)MINSIZE) /* re-split */
2806 remainder = chunk_at_offset(victim, nb);
2807 set_head(victim, nb | PREV_INUSE);
2808 link_last_remainder(ar_ptr, remainder);
2809 set_head(remainder, remainder_size | PREV_INUSE);
2810 set_foot(remainder, remainder_size);
2811 check_malloced_chunk(ar_ptr, victim, nb);
2812 return victim;
2815 clear_last_remainder(ar_ptr);
2817 if (remainder_size >= 0) /* exhaust */
2819 set_inuse_bit_at_offset(victim, victim_size);
2820 check_malloced_chunk(ar_ptr, victim, nb);
2821 return victim;
2824 /* Else place in bin */
2826 frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
2830 If there are any possibly nonempty big-enough blocks,
2831 search for best fitting chunk by scanning bins in blockwidth units.
2834 if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
2837 /* Get to the first marked block */
2839 if ( (block & binblocks(ar_ptr)) == 0)
2841 /* force to an even block boundary */
2842 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2843 block <<= 1;
2844 while ((block & binblocks(ar_ptr)) == 0)
2846 idx += BINBLOCKWIDTH;
2847 block <<= 1;
2851 /* For each possibly nonempty block ... */
2852 for (;;)
2854 startidx = idx; /* (track incomplete blocks) */
2855 q = bin = _bin_at(ar_ptr, idx);
2857 /* For each bin in this block ... */
2860 /* Find and use first big enough chunk ... */
2862 for (victim = last(bin); victim != bin; victim = victim->bk)
2864 victim_size = chunksize(victim);
2865 remainder_size = victim_size - nb;
2867 if (remainder_size >= (long)MINSIZE) /* split */
2869 remainder = chunk_at_offset(victim, nb);
2870 set_head(victim, nb | PREV_INUSE);
2871 unlink(victim, bck, fwd);
2872 link_last_remainder(ar_ptr, remainder);
2873 set_head(remainder, remainder_size | PREV_INUSE);
2874 set_foot(remainder, remainder_size);
2875 check_malloced_chunk(ar_ptr, victim, nb);
2876 return victim;
2879 else if (remainder_size >= 0) /* take */
2881 set_inuse_bit_at_offset(victim, victim_size);
2882 unlink(victim, bck, fwd);
2883 check_malloced_chunk(ar_ptr, victim, nb);
2884 return victim;
2889 bin = next_bin(bin);
2891 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2893 /* Clear out the block bit. */
2895 do /* Possibly backtrack to try to clear a partial block */
2897 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2899 binblocks(ar_ptr) &= ~block;
2900 break;
2902 --startidx;
2903 q = prev_bin(q);
2904 } while (first(q) == q);
2906 /* Get to the next possibly nonempty block */
2908 if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
2910 while ((block & binblocks(ar_ptr)) == 0)
2912 idx += BINBLOCKWIDTH;
2913 block <<= 1;
2916 else
2917 break;
2922 /* Try to use top chunk */
2924 /* Require that there be a remainder, ensuring top always exists */
2925 if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2928 #if HAVE_MMAP
2929 /* If the request is big and there are not yet too many regions,
2930 and we would otherwise need to extend, try to use mmap instead. */
2931 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2932 n_mmaps < n_mmaps_max &&
2933 (victim = mmap_chunk(nb)) != 0)
2934 return victim;
2935 #endif
2937 /* Try to extend */
2938 malloc_extend_top(ar_ptr, nb);
2939 if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2941 #if HAVE_MMAP
2942 /* A last attempt: when we are out of address space in a
2943 non-main arena, try mmap anyway, as long as it is allowed at
2944 all. */
2945 if (ar_ptr != &main_arena &&
2946 n_mmaps_max > 0 &&
2947 (victim = mmap_chunk(nb)) != 0)
2948 return victim;
2949 #endif
2950 return 0; /* propagate failure */
2954 victim = top(ar_ptr);
2955 set_head(victim, nb | PREV_INUSE);
2956 top(ar_ptr) = chunk_at_offset(victim, nb);
2957 set_head(top(ar_ptr), remainder_size | PREV_INUSE);
2958 check_malloced_chunk(ar_ptr, victim, nb);
2959 return victim;
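/* [Illustrative sketch, not part of the original source; BINBLOCKWIDTH == 4 is
   assumed and the helper simplifies the block scan in chunk_alloc() above.]
   binblocks is a bitmap with one bit per group of BINBLOCKWIDTH bins; a set
   bit means "this group may contain a free chunk", so the best-fit search can
   skip whole groups of empty bins: */
#if 0
static int
first_marked_block (unsigned long binblocks, int idx)
{
  unsigned long block = 1UL << (idx / 4);    /* bit covering idx's group      */
  if ((block & binblocks) == 0) {            /* idx's own group is empty      */
    do {
      idx = (idx & ~3) + 4;                  /* first bin of the next group   */
      block <<= 1;
    } while (block != 0 && (block & binblocks) == 0);
  }
  return block != 0 ? idx : -1;              /* -1: no marked group remains   */
}
#endif /* 0 */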
2968 free() algorithm :
2970 cases:
2972 1. free(0) has no effect.
2974 2. If the chunk was allocated via mmap, it is released via munmap().
2976 3. If a returned chunk borders the current high end of memory,
2977 it is consolidated into the top, and if the total unused
2978 topmost memory exceeds the trim threshold, malloc_trim is
2979 called.
2981 4. Other chunks are consolidated as they arrive, and
2982 placed in corresponding bins. (This includes the case of
2983 consolidating with the current `last_remainder').
2988 #if __STD_C
2989 void fREe(Void_t* mem)
2990 #else
2991 void fREe(mem) Void_t* mem;
2992 #endif
2994 arena *ar_ptr;
2995 mchunkptr p; /* chunk corresponding to mem */
2997 #if defined _LIBC || defined MALLOC_HOOKS
2998 if (__free_hook != NULL) {
2999 #if defined __GNUC__ && __GNUC__ >= 2
3000 (*__free_hook)(mem, RETURN_ADDRESS (0));
3001 #else
3002 (*__free_hook)(mem, NULL);
3003 #endif
3004 return;
3006 #endif
3008 if (mem == 0) /* free(0) has no effect */
3009 return;
3011 p = mem2chunk(mem);
3013 #if HAVE_MMAP
3014 if (chunk_is_mmapped(p)) /* release mmapped memory. */
3016 munmap_chunk(p);
3017 return;
3019 #endif
3021 ar_ptr = arena_for_ptr(p);
3022 #if THREAD_STATS
3023 if(!mutex_trylock(&ar_ptr->mutex))
3024 ++(ar_ptr->stat_lock_direct);
3025 else {
3026 (void)mutex_lock(&ar_ptr->mutex);
3027 ++(ar_ptr->stat_lock_wait);
3029 #else
3030 (void)mutex_lock(&ar_ptr->mutex);
3031 #endif
3032 chunk_free(ar_ptr, p);
3033 (void)mutex_unlock(&ar_ptr->mutex);
3036 static void
3037 internal_function
3038 #if __STD_C
3039 chunk_free(arena *ar_ptr, mchunkptr p)
3040 #else
3041 chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
3042 #endif
3044 INTERNAL_SIZE_T hd = p->size; /* its head field */
3045 INTERNAL_SIZE_T sz; /* its size */
3046 int idx; /* its bin index */
3047 mchunkptr next; /* next contiguous chunk */
3048 INTERNAL_SIZE_T nextsz; /* its size */
3049 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
3050 mchunkptr bck; /* misc temp for linking */
3051 mchunkptr fwd; /* misc temp for linking */
3052 int islr; /* track whether merging with last_remainder */
3054 check_inuse_chunk(ar_ptr, p);
3056 sz = hd & ~PREV_INUSE;
3057 next = chunk_at_offset(p, sz);
3058 nextsz = chunksize(next);
3060 if (next == top(ar_ptr)) /* merge with top */
3062 sz += nextsz;
3064 if (!(hd & PREV_INUSE)) /* consolidate backward */
3066 prevsz = p->prev_size;
3067 p = chunk_at_offset(p, -(long)prevsz);
3068 sz += prevsz;
3069 unlink(p, bck, fwd);
3072 set_head(p, sz | PREV_INUSE);
3073 top(ar_ptr) = p;
3075 #if USE_ARENAS
3076 if(ar_ptr == &main_arena) {
3077 #endif
3078 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
3079 main_trim(top_pad);
3080 #if USE_ARENAS
3081 } else {
3082 heap_info *heap = heap_for_ptr(p);
3084 assert(heap->ar_ptr == ar_ptr);
3086 /* Try to get rid of completely empty heaps, if possible. */
3087 if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
3088 p == chunk_at_offset(heap, sizeof(*heap)))
3089 heap_trim(heap, top_pad);
3091 #endif
3092 return;
3095 islr = 0;
3097 if (!(hd & PREV_INUSE)) /* consolidate backward */
3099 prevsz = p->prev_size;
3100 p = chunk_at_offset(p, -(long)prevsz);
3101 sz += prevsz;
3103 if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */
3104 islr = 1;
3105 else
3106 unlink(p, bck, fwd);
3109 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
3111 sz += nextsz;
3113 if (!islr && next->fd == last_remainder(ar_ptr))
3114 /* re-insert last_remainder */
3116 islr = 1;
3117 link_last_remainder(ar_ptr, p);
3119 else
3120 unlink(next, bck, fwd);
3122 next = chunk_at_offset(p, sz);
3124 else
3125 set_head(next, nextsz); /* clear inuse bit */
3127 set_head(p, sz | PREV_INUSE);
3128 next->prev_size = sz;
3129 if (!islr)
3130 frontlink(ar_ptr, p, sz, idx, bck, fwd);
3132 #if USE_ARENAS
3133 /* Check whether the heap containing top can go away now. */
3134 if(next->size < MINSIZE &&
3135 (unsigned long)sz > trim_threshold &&
3136 ar_ptr != &main_arena) { /* fencepost */
3137 heap_info *heap = heap_for_ptr(top(ar_ptr));
3139 if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) &&
3140 heap->prev == heap_for_ptr(p))
3141 heap_trim(heap, top_pad);
3143 #endif
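/* [Worked example, not part of the original source; the chunk sizes are
   invented.]  Freeing a chunk with free neighbours on both sides merges all
   three before binning, e.g.:
     [ free 48 ][ inuse 32 being freed ][ free 64 ][ inuse ... ]
   backward step: p moves back 48 bytes, sz becomes 48+32 = 80 and the 48-byte
   chunk is unlink()ed from its bin; forward step: the 64-byte chunk is
   unlink()ed and sz becomes 144; finally frontlink() files one 144-byte free
   chunk, and the following in-use chunk's prev_size footer is set to 144. */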
3152 Realloc algorithm:
3154 Chunks that were obtained via mmap cannot be extended or shrunk
3155 unless HAVE_MREMAP is defined, in which case mremap is used.
3156 Otherwise, if their reallocation is for additional space, they are
3157 copied. If for less, they are just left alone.
3159 Otherwise, if the reallocation is for additional space, and the
3160 chunk can be extended, it is, else a malloc-copy-free sequence is
3161 taken. There are several different ways that a chunk could be
3162 extended. All are tried:
3164 * Extending forward into following adjacent free chunk.
3165 * Shifting backwards, joining preceding adjacent space.
3166 * Both shifting backwards and extending forward.
3167 * Extending into newly sbrked space.
3169 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
3170 size argument of zero (re)allocates a minimum-sized chunk.
3172 If the reallocation is for less space, and the new request is for
3173 a `small' (<512 bytes) size, then the newly unused space is lopped
3174 off and freed.
3176 The old unix realloc convention of allowing the last-free'd chunk
3177 to be used as an argument to realloc is no longer supported.
3178 I don't know of any programs still relying on this feature,
3179 and allowing it would also allow too many other incorrect
3180 usages of realloc to be sensible.
3186 #if __STD_C
3187 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3188 #else
3189 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3190 #endif
3192 arena *ar_ptr;
3193 INTERNAL_SIZE_T nb; /* padded request size */
3195 mchunkptr oldp; /* chunk corresponding to oldmem */
3196 INTERNAL_SIZE_T oldsize; /* its size */
3198 mchunkptr newp; /* chunk to return */
3200 #if defined _LIBC || defined MALLOC_HOOKS
3201 if (__realloc_hook != NULL) {
3202 Void_t* result;
3204 #if defined __GNUC__ && __GNUC__ >= 2
3205 result = (*__realloc_hook)(oldmem, bytes, RETURN_ADDRESS (0));
3206 #else
3207 result = (*__realloc_hook)(oldmem, bytes, NULL);
3208 #endif
3209 return result;
3211 #endif
3213 #ifdef REALLOC_ZERO_BYTES_FREES
3214 if (bytes == 0 && oldmem != NULL) { fREe(oldmem); return 0; }
3215 #endif
3217 /* realloc of null is supposed to be same as malloc */
3218 if (oldmem == 0) return mALLOc(bytes);
3220 oldp = mem2chunk(oldmem);
3221 oldsize = chunksize(oldp);
3223 if(request2size(bytes, nb))
3224 return 0;
3226 #if HAVE_MMAP
3227 if (chunk_is_mmapped(oldp))
3229 Void_t* newmem;
3231 #if HAVE_MREMAP
3232 newp = mremap_chunk(oldp, nb);
3233 if(newp)
3234 return BOUNDED_N(chunk2mem(newp), bytes);
3235 #endif
3236 /* Note the extra SIZE_SZ overhead. */
3237 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
3238 /* Must alloc, copy, free. */
3239 newmem = mALLOc(bytes);
3240 if (newmem == 0) return 0; /* propagate failure */
3241 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
3242 munmap_chunk(oldp);
3243 return newmem;
3245 #endif
3247 ar_ptr = arena_for_ptr(oldp);
3248 #if THREAD_STATS
3249 if(!mutex_trylock(&ar_ptr->mutex))
3250 ++(ar_ptr->stat_lock_direct);
3251 else {
3252 (void)mutex_lock(&ar_ptr->mutex);
3253 ++(ar_ptr->stat_lock_wait);
3255 #else
3256 (void)mutex_lock(&ar_ptr->mutex);
3257 #endif
3259 #ifndef NO_THREADS
3260 /* As in malloc(), remember this arena for the next allocation. */
3261 tsd_setspecific(arena_key, (Void_t *)ar_ptr);
3262 #endif
3264 newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);
3266 (void)mutex_unlock(&ar_ptr->mutex);
3267 return newp ? BOUNDED_N(chunk2mem(newp), bytes) : NULL;
3270 static mchunkptr
3271 internal_function
3272 #if __STD_C
3273 chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
3274 INTERNAL_SIZE_T nb)
3275 #else
3276 chunk_realloc(ar_ptr, oldp, oldsize, nb)
3277 arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
3278 #endif
3280 mchunkptr newp = oldp; /* chunk to return */
3281 INTERNAL_SIZE_T newsize = oldsize; /* its size */
3283 mchunkptr next; /* next contiguous chunk after oldp */
3284 INTERNAL_SIZE_T nextsize; /* its size */
3286 mchunkptr prev; /* previous contiguous chunk before oldp */
3287 INTERNAL_SIZE_T prevsize; /* its size */
3289 mchunkptr remainder; /* holds split off extra space from newp */
3290 INTERNAL_SIZE_T remainder_size; /* its size */
3292 mchunkptr bck; /* misc temp for linking */
3293 mchunkptr fwd; /* misc temp for linking */
3295 check_inuse_chunk(ar_ptr, oldp);
3297 if ((long)(oldsize) < (long)(nb))
3299 Void_t* oldmem = BOUNDED_N(chunk2mem(oldp), oldsize);
3301 /* Try expanding forward */
3303 next = chunk_at_offset(oldp, oldsize);
3304 if (next == top(ar_ptr) || !inuse(next))
3306 nextsize = chunksize(next);
3308 /* Forward into top only if a remainder */
3309 if (next == top(ar_ptr))
3311 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
3313 newsize += nextsize;
3314 top(ar_ptr) = chunk_at_offset(oldp, nb);
3315 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3316 set_head_size(oldp, nb);
3317 return oldp;
3321 /* Forward into next chunk */
3322 else if (((long)(nextsize + newsize) >= (long)(nb)))
3324 unlink(next, bck, fwd);
3325 newsize += nextsize;
3326 goto split;
3329 else
3331 next = 0;
3332 nextsize = 0;
3335 oldsize -= SIZE_SZ;
3337 /* Try shifting backwards. */
3339 if (!prev_inuse(oldp))
3341 prev = prev_chunk(oldp);
3342 prevsize = chunksize(prev);
3344 /* try forward + backward first to save a later consolidation */
3346 if (next != 0)
3348 /* into top */
3349 if (next == top(ar_ptr))
3351 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
3353 unlink(prev, bck, fwd);
3354 newp = prev;
3355 newsize += prevsize + nextsize;
3356 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), oldsize), oldmem, oldsize);
3357 top(ar_ptr) = chunk_at_offset(newp, nb);
3358 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3359 set_head_size(newp, nb);
3360 return newp;
3364 /* into next chunk */
3365 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
3367 unlink(next, bck, fwd);
3368 unlink(prev, bck, fwd);
3369 newp = prev;
3370 newsize += nextsize + prevsize;
3371 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), oldsize), oldmem, oldsize);
3372 goto split;
3376 /* backward only */
3377 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
3379 unlink(prev, bck, fwd);
3380 newp = prev;
3381 newsize += prevsize;
3382 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), oldsize), oldmem, oldsize);
3383 goto split;
3387 /* Must allocate */
3389 newp = chunk_alloc (ar_ptr, nb);
3391 if (newp == 0) {
3392 /* Maybe the failure is due to running out of mmapped areas. */
3393 if (ar_ptr != &main_arena) {
3394 (void)mutex_lock(&main_arena.mutex);
3395 newp = chunk_alloc(&main_arena, nb);
3396 (void)mutex_unlock(&main_arena.mutex);
3397 } else {
3398 #if USE_ARENAS
3399 /* ... or sbrk() has failed and there is still a chance to mmap() */
3400 arena* ar_ptr2 = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
3401 if(ar_ptr2) {
3402 newp = chunk_alloc(ar_ptr2, nb);
3403 (void)mutex_unlock(&ar_ptr2->mutex);
3405 #endif
3407 if (newp == 0) /* propagate failure */
3408 return 0;
3411 /* Avoid copy if newp is next chunk after oldp. */
3412 /* (This can only happen when the new chunk is sbrk'ed.) */
3414 if ( newp == next_chunk(oldp))
3416 newsize += chunksize(newp);
3417 newp = oldp;
3418 goto split;
3421 /* Otherwise copy, free, and exit */
3422 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), oldsize), oldmem, oldsize);
3423 chunk_free(ar_ptr, oldp);
3424 return newp;
3428 split: /* split off extra room in old or expanded chunk */
3430 if (newsize - nb >= MINSIZE) /* split off remainder */
3432 remainder = chunk_at_offset(newp, nb);
3433 remainder_size = newsize - nb;
3434 set_head_size(newp, nb);
3435 set_head(remainder, remainder_size | PREV_INUSE);
3436 set_inuse_bit_at_offset(remainder, remainder_size);
3437 chunk_free(ar_ptr, remainder);
3439 else
3441 set_head_size(newp, newsize);
3442 set_inuse_bit_at_offset(newp, newsize);
3445 check_inuse_chunk(ar_ptr, newp);
3446 return newp;
3454 memalign algorithm:
3456 memalign requests more than enough space from malloc, finds a spot
3457 within that chunk that meets the alignment request, and then
3458 possibly frees the leading and trailing space.
3460 The alignment argument must be a power of two. This property is not
3461 checked by memalign, so misuse may result in random runtime errors.
3463 8-byte alignment is guaranteed by normal malloc calls, so don't
3464 bother calling memalign with an argument of 8 or less.
3466 Overreliance on memalign is a sure way to fragment space.
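/* [Usage note, not part of the original source; the sizes are examples.]
   A power-of-two alignment is required but not verified, e.g.
     p = memalign(64, 1000);        -- (unsigned long)p % 64 == 0 on success
   whereas memalign(100, 1000) is misuse (100 is not a power of two) and its
   behaviour is unspecified. */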
3471 #if __STD_C
3472 Void_t* mEMALIGn(size_t alignment, size_t bytes)
3473 #else
3474 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
3475 #endif
3477 arena *ar_ptr;
3478 INTERNAL_SIZE_T nb; /* padded request size */
3479 mchunkptr p;
3481 #if defined _LIBC || defined MALLOC_HOOKS
3482 if (__memalign_hook != NULL) {
3483 Void_t* result;
3485 #if defined __GNUC__ && __GNUC__ >= 2
3486 result = (*__memalign_hook)(alignment, bytes, RETURN_ADDRESS (0));
3487 #else
3488 result = (*__memalign_hook)(alignment, bytes, NULL);
3489 #endif
3490 return result;
3492 #endif
3494 /* If need less alignment than we give anyway, just relay to malloc */
3496 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
3498 /* Otherwise, ensure that it is at least a minimum chunk size */
3500 if (alignment < MINSIZE) alignment = MINSIZE;
3502 if(request2size(bytes, nb))
3503 return 0;
3504 arena_get(ar_ptr, nb + alignment + MINSIZE);
3505 if(!ar_ptr)
3506 return 0;
3507 p = chunk_align(ar_ptr, nb, alignment);
3508 (void)mutex_unlock(&ar_ptr->mutex);
3509 if(!p) {
3510 /* Maybe the failure is due to running out of mmapped areas. */
3511 if(ar_ptr != &main_arena) {
3512 (void)mutex_lock(&main_arena.mutex);
3513 p = chunk_align(&main_arena, nb, alignment);
3514 (void)mutex_unlock(&main_arena.mutex);
3515 } else {
3516 #if USE_ARENAS
3517 /* ... or sbrk() has failed and there is still a chance to mmap() */
3518 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
3519 if(ar_ptr) {
3520 p = chunk_align(ar_ptr, nb, alignment);
3521 (void)mutex_unlock(&ar_ptr->mutex);
3523 #endif
3525 if(!p) return 0;
3527 return BOUNDED_N(chunk2mem(p), bytes);
3530 static mchunkptr
3531 internal_function
3532 #if __STD_C
3533 chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
3534 #else
3535 chunk_align(ar_ptr, nb, alignment)
3536 arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
3537 #endif
3539 unsigned long m; /* memory returned by malloc call */
3540 mchunkptr p; /* corresponding chunk */
3541 char* brk; /* alignment point within p */
3542 mchunkptr newp; /* chunk to return */
3543 INTERNAL_SIZE_T newsize; /* its size */
3544 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3545 mchunkptr remainder; /* spare room at end to split off */
3546 long remainder_size; /* its size */
3548 /* Call chunk_alloc with worst case padding to hit alignment. */
3549 p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
3550 if (p == 0)
3551 return 0; /* propagate failure */
3553 m = (unsigned long)chunk2mem(p);
3555 if ((m % alignment) == 0) /* aligned */
3557 #if HAVE_MMAP
3558 if(chunk_is_mmapped(p)) {
3559 return p; /* nothing more to do */
3561 #endif
3563 else /* misaligned */
3566 Find an aligned spot inside chunk.
3567 Since we need to give back leading space in a chunk of at
3568 least MINSIZE, if the first calculation places us at
3569 a spot with less than MINSIZE leader, we can move to the
3570 next aligned spot -- we've allocated enough total room so that
3571 this is always possible.
3574 brk = (char*)mem2chunk(((m + alignment - 1)) & -(long)alignment);
3575 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;
3577 newp = (mchunkptr)brk;
3578 leadsize = brk - (char*)(p);
3579 newsize = chunksize(p) - leadsize;
3581 #if HAVE_MMAP
3582 if(chunk_is_mmapped(p))
3584 newp->prev_size = p->prev_size + leadsize;
3585 set_head(newp, newsize|IS_MMAPPED);
3586 return newp;
3588 #endif
3590 /* give back leader, use the rest */
3592 set_head(newp, newsize | PREV_INUSE);
3593 set_inuse_bit_at_offset(newp, newsize);
3594 set_head_size(p, leadsize);
3595 chunk_free(ar_ptr, p);
3596 p = newp;
3598 assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3601 /* Also give back spare room at the end */
3603 remainder_size = chunksize(p) - nb;
3605 if (remainder_size >= (long)MINSIZE)
3607 remainder = chunk_at_offset(p, nb);
3608 set_head(remainder, remainder_size | PREV_INUSE);
3609 set_head_size(p, nb);
3610 chunk_free(ar_ptr, remainder);
3613 check_inuse_chunk(ar_ptr, p);
3614 return p;
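/* [Worked example, not part of the original source; the address is invented.]
   The aligned spot above is found by rounding the returned mem pointer m up
   to the next multiple of `alignment' (mem2chunk() then steps back to the
   enclosing chunk header):
     m = 0x804a00c, alignment = 0x100
     (m + 0xff) & -0x100  ==  0x804a100
   If this leaves fewer than MINSIZE leading bytes, one more `alignment' step
   is taken; the nb + alignment + MINSIZE over-allocation guarantees the
   result still fits inside the chunk. */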
3621 valloc just invokes memalign with alignment argument equal
3622 to the page size of the system (or as near to this as can
3623 be figured out from all the includes/defines above.)
3626 #if __STD_C
3627 Void_t* vALLOc(size_t bytes)
3628 #else
3629 Void_t* vALLOc(bytes) size_t bytes;
3630 #endif
3632 if(__malloc_initialized < 0)
3633 ptmalloc_init ();
3634 return mEMALIGn (malloc_getpagesize, bytes);
3638 pvalloc just invokes memalign with page alignment, rounding the
3639 request up to the nearest multiple of the page size.
3643 #if __STD_C
3644 Void_t* pvALLOc(size_t bytes)
3645 #else
3646 Void_t* pvALLOc(bytes) size_t bytes;
3647 #endif
3649 size_t pagesize;
3650 if(__malloc_initialized < 0)
3651 ptmalloc_init ();
3652 pagesize = malloc_getpagesize;
3653 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
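/* [Worked example, not part of the original source; a 4096-byte page is
   assumed.]  The rounding above gives the smallest whole number of pages
   covering the request:
     bytes =    1  ->  (   1 + 4095) & ~4095  ==  4096
     bytes = 4096  ->  (4096 + 4095) & ~4095  ==  4096
     bytes = 4097  ->  (4097 + 4095) & ~4095  ==  8192  */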
3658 calloc calls chunk_alloc, then zeroes out the allocated chunk.
3662 #if __STD_C
3663 Void_t* cALLOc(size_t n, size_t elem_size)
3664 #else
3665 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3666 #endif
3668 arena *ar_ptr;
3669 mchunkptr p, oldtop;
3670 INTERNAL_SIZE_T sz, csz, oldtopsize;
3671 Void_t* mem;
3673 #if defined _LIBC || defined MALLOC_HOOKS
3674 if (__malloc_hook != NULL) {
3675 sz = n * elem_size;
3676 #if defined __GNUC__ && __GNUC__ >= 2
3677 mem = (*__malloc_hook)(sz, RETURN_ADDRESS (0));
3678 #else
3679 mem = (*__malloc_hook)(sz, NULL);
3680 #endif
3681 if(mem == 0)
3682 return 0;
3683 #ifdef HAVE_MEMSET
3684 return memset(mem, 0, sz);
3685 #else
3686 while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
3687 return mem;
3688 #endif
3690 #endif
3692 if(request2size(n * elem_size, sz))
3693 return 0;
3694 arena_get(ar_ptr, sz);
3695 if(!ar_ptr)
3696 return 0;
3698 /* Check if malloc_extend_top was called, in which case there may be
3699 no need to clear. */
3700 #if MORECORE_CLEARS
3701 oldtop = top(ar_ptr);
3702 oldtopsize = chunksize(top(ar_ptr));
3703 #if MORECORE_CLEARS < 2
3704 /* Only newly allocated memory is guaranteed to be cleared. */
3705 if (ar_ptr == &main_arena &&
3706 oldtopsize < sbrk_base + max_sbrked_mem - (char *)oldtop)
3707 oldtopsize = (sbrk_base + max_sbrked_mem - (char *)oldtop);
3708 #endif
3709 #endif
3710 p = chunk_alloc (ar_ptr, sz);
3712 /* Only clearing follows, so we can unlock early. */
3713 (void)mutex_unlock(&ar_ptr->mutex);
3715 if (p == 0) {
3716 /* Maybe the failure is due to running out of mmapped areas. */
3717 if(ar_ptr != &main_arena) {
3718 (void)mutex_lock(&main_arena.mutex);
3719 p = chunk_alloc(&main_arena, sz);
3720 (void)mutex_unlock(&main_arena.mutex);
3721 } else {
3722 #if USE_ARENAS
3723 /* ... or sbrk() has failed and there is still a chance to mmap() */
3724 (void)mutex_lock(&main_arena.mutex);
3725 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, sz);
3726 (void)mutex_unlock(&main_arena.mutex);
3727 if(ar_ptr) {
3728 p = chunk_alloc(ar_ptr, sz);
3729 (void)mutex_unlock(&ar_ptr->mutex);
3731 #endif
3733 if (p == 0) return 0;
3735 mem = BOUNDED_N(chunk2mem(p), n * elem_size);
3737 /* Two optional cases in which clearing is not necessary */
3739 #if HAVE_MMAP
3740 if (chunk_is_mmapped(p)) return mem;
3741 #endif
3743 csz = chunksize(p);
3745 #if MORECORE_CLEARS
3746 if (p == oldtop && csz > oldtopsize) {
3747 /* clear only the bytes from non-freshly-sbrked memory */
3748 csz = oldtopsize;
3750 #endif
3752 csz -= SIZE_SZ;
3753 MALLOC_ZERO(BOUNDED_N(chunk2mem(p), csz), csz);
3754 return mem;
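/* Illustrative usage sketch (not part of the original file, kept out
   of the build with #if 0): to the caller, calloc(n, size) is
   malloc(n * size) followed by clearing, except that the code above
   skips the clearing for memory known to be zero already (fresh
   mmap()ed or sbrk()ed space).  Note that n * elem_size is computed in
   plain size_t arithmetic above, so callers should make sure the
   product cannot wrap around.  */
#if 0
#include <stdlib.h>
#include <string.h>

static int *
zeroed_ints (size_t n)
{
  int *a = calloc (n, sizeof (int));     /* all n ints read as 0 */

  /* Roughly equivalent, provided n * sizeof (int) does not overflow: */
  int *b = malloc (n * sizeof (int));
  if (b != NULL)
    memset (b, 0, n * sizeof (int));
  free (b);

  return a;
}
#endif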
3759 cfree just calls free. It is needed/defined on some systems
3760 that pair it with calloc, presumably for odd historical reasons.
3764 #if !defined(_LIBC)
3765 #if __STD_C
3766 void cfree(Void_t *mem)
3767 #else
3768 void cfree(mem) Void_t *mem;
3769 #endif
3771 fREe(mem);
3773 #endif
3779 Malloc_trim gives memory back to the system (via negative
3780 arguments to sbrk) if there is unused memory at the `high' end of
3781 the malloc pool. You can call this after freeing large blocks of
3782 memory to potentially reduce the system-level memory requirements
3783 of a program. However, it cannot guarantee to reduce memory use. Under
3784 some allocation patterns, some large free blocks of memory will be
3785 locked between two used chunks, so they cannot be given back to
3786 the system.
3788 The `pad' argument to malloc_trim represents the amount of free
3789 trailing space to leave untrimmed. If this argument is zero,
3790 only the minimum amount of memory to maintain internal data
3791 structures will be left (one page or less). Non-zero arguments
3792 can be supplied to maintain enough trailing space to service
3793 future expected allocations without having to re-obtain memory
3794 from the system.
3796 Malloc_trim returns 1 if it actually released any memory, else 0.
3800 #if __STD_C
3801 int mALLOC_TRIm(size_t pad)
3802 #else
3803 int mALLOC_TRIm(pad) size_t pad;
3804 #endif
3806 int res;
3808 (void)mutex_lock(&main_arena.mutex);
3809 res = main_trim(pad);
3810 (void)mutex_unlock(&main_arena.mutex);
3811 return res;
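/* Illustrative usage sketch (not part of the original file, kept out
   of the build with #if 0): after freeing large blocks, ask the
   allocator to return unused top-of-heap memory to the system while
   keeping some slack for future requests.  The 128kB pad value is an
   arbitrary example.  */
#if 0
#include <malloc.h>

static void
release_unused_memory (void)
{
  if (malloc_trim (128 * 1024))
    ;   /* returned 1: some memory was given back via a negative sbrk() */
}
#endif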
3814 /* Trim the main arena. */
3816 static int
3817 internal_function
3818 #if __STD_C
3819 main_trim(size_t pad)
3820 #else
3821 main_trim(pad) size_t pad;
3822 #endif
3824 mchunkptr top_chunk; /* The current top chunk */
3825 long top_size; /* Amount of top-most memory */
3826 long extra; /* Amount to release */
3827 char* current_brk; /* address returned by pre-check sbrk call */
3828 char* new_brk; /* address returned by negative sbrk call */
3830 unsigned long pagesz = malloc_getpagesize;
3832 top_chunk = top(&main_arena);
3833 top_size = chunksize(top_chunk);
3834 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
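  /* Worked example (illustrative, assuming a 4096-byte page and a
     MINSIZE of 16): with pad == 0 and top_size == 20000, extra becomes
     ((20000 - 0 - 16 + 4095) / 4096 - 1) * 4096 == 16384, i.e. whole
     pages are released while pad + MINSIZE bytes plus part of a page
     always remain in the top chunk.  */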
3836 if (extra < (long)pagesz) /* Not enough memory to release */
3837 return 0;
3839 /* Test to make sure no one else called sbrk */
3840 current_brk = (char*)(MORECORE (0));
3841 if (current_brk != (char*)(top_chunk) + top_size)
3842 return 0; /* Apparently we don't own memory; must fail */
3844 new_brk = (char*)(MORECORE (-extra));
3846 #if defined _LIBC || defined MALLOC_HOOKS
3847 /* Call the `morecore' hook if necessary. */
3848 if (__after_morecore_hook)
3849 (*__after_morecore_hook) ();
3850 #endif
3852 if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
3853 /* Try to figure out what we have */
3854 current_brk = (char*)(MORECORE (0));
3855 top_size = current_brk - (char*)top_chunk;
3856 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3858 sbrked_mem = current_brk - sbrk_base;
3859 set_head(top_chunk, top_size | PREV_INUSE);
3861 check_chunk(&main_arena, top_chunk);
3862 return 0;
3864 sbrked_mem -= extra;
3866 /* Success. Adjust top accordingly. */
3867 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3868 check_chunk(&main_arena, top_chunk);
3869 return 1;
3872 #if USE_ARENAS
3874 static int
3875 internal_function
3876 #if __STD_C
3877 heap_trim(heap_info *heap, size_t pad)
3878 #else
3879 heap_trim(heap, pad) heap_info *heap; size_t pad;
3880 #endif
3882 unsigned long pagesz = malloc_getpagesize;
3883 arena *ar_ptr = heap->ar_ptr;
3884 mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
3885 heap_info *prev_heap;
3886 long new_size, top_size, extra;
3888 /* Can this heap go away completely? */
3889 while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
3890 prev_heap = heap->prev;
3891 p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
3892 assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
3893 p = prev_chunk(p);
3894 new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
3895 assert(new_size>0 && new_size<(long)(2*MINSIZE));
3896 if(!prev_inuse(p))
3897 new_size += p->prev_size;
3898 assert(new_size>0 && new_size<HEAP_MAX_SIZE);
3899 if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
3900 break;
3901 ar_ptr->size -= heap->size;
3902 arena_mem -= heap->size;
3903 delete_heap(heap);
3904 heap = prev_heap;
3905 if(!prev_inuse(p)) { /* consolidate backward */
3906 p = prev_chunk(p);
3907 unlink(p, bck, fwd);
3909 assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
3910 assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
3911 top(ar_ptr) = top_chunk = p;
3912 set_head(top_chunk, new_size | PREV_INUSE);
3913 check_chunk(ar_ptr, top_chunk);
3915 top_size = chunksize(top_chunk);
3916 extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
3917 if(extra < (long)pagesz)
3918 return 0;
3919 /* Try to shrink. */
3920 if(grow_heap(heap, -extra) != 0)
3921 return 0;
3922 ar_ptr->size -= extra;
3923 arena_mem -= extra;
3925 /* Success. Adjust top accordingly. */
3926 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3927 check_chunk(ar_ptr, top_chunk);
3928 return 1;
3931 #endif /* USE_ARENAS */
3936 malloc_usable_size:
3938 This routine tells you how many bytes you can actually use in an
3939 allocated chunk, which may be more than you requested (although
3940 often not). You can use this many bytes without worrying about
3941 overwriting other allocated objects. Not a particularly great
3942 programming practice, but still sometimes useful.
3946 #if __STD_C
3947 size_t mALLOC_USABLE_SIZe(Void_t* mem)
3948 #else
3949 size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
3950 #endif
3952 mchunkptr p;
3954 if (mem == 0)
3955 return 0;
3956 else
3958 p = mem2chunk(mem);
3959 if(!chunk_is_mmapped(p))
3961 if (!inuse(p)) return 0;
3962 check_inuse_chunk(arena_for_ptr(mem), p);
3963 return chunksize(p) - SIZE_SZ;
3965 return chunksize(p) - 2*SIZE_SZ;
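/* Illustrative usage sketch (not part of the original file, kept out
   of the build with #if 0): the reported size may exceed the request,
   and all of it may safely be written.  */
#if 0
#include <malloc.h>
#include <stdlib.h>

static void
usable_size_example (void)
{
  char *p = malloc (100);
  if (p != NULL) {
    size_t real = malloc_usable_size (p);   /* >= 100 */
    p[real - 1] = 0;    /* still inside the chunk backing p */
    free (p);
  }
}
#endif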
3972 /* Utility to update mallinfo for malloc_stats() and mallinfo() */
3974 static void
3975 #if __STD_C
3976 malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
3977 #else
3978 malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
3979 #endif
3981 int i, navail;
3982 mbinptr b;
3983 mchunkptr p;
3984 #if MALLOC_DEBUG
3985 mchunkptr q;
3986 #endif
3987 INTERNAL_SIZE_T avail;
3989 (void)mutex_lock(&ar_ptr->mutex);
3990 avail = chunksize(top(ar_ptr));
3991 navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3993 for (i = 1; i < NAV; ++i)
3995 b = bin_at(ar_ptr, i);
3996 for (p = last(b); p != b; p = p->bk)
3998 #if MALLOC_DEBUG
3999 check_free_chunk(ar_ptr, p);
4000 for (q = next_chunk(p);
4001 q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
4002 q = next_chunk(q))
4003 check_inuse_chunk(ar_ptr, q);
4004 #endif
4005 avail += chunksize(p);
4006 navail++;
4010 mi->arena = ar_ptr->size;
4011 mi->ordblks = navail;
4012 mi->smblks = mi->usmblks = mi->fsmblks = 0; /* clear unused fields */
4013 mi->uordblks = ar_ptr->size - avail;
4014 mi->fordblks = avail;
4015 mi->hblks = n_mmaps;
4016 mi->hblkhd = mmapped_mem;
4017 mi->keepcost = chunksize(top(ar_ptr));
4019 (void)mutex_unlock(&ar_ptr->mutex);
4022 #if USE_ARENAS && MALLOC_DEBUG > 1
4024 /* Print the complete contents of a single heap to stderr. */
4026 static void
4027 #if __STD_C
4028 dump_heap(heap_info *heap)
4029 #else
4030 dump_heap(heap) heap_info *heap;
4031 #endif
4033 char *ptr;
4034 mchunkptr p;
4036 fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
4037 ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
4038 (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
4039 p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
4040 ~MALLOC_ALIGN_MASK);
4041 for(;;) {
4042 fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
4043 if(p == top(heap->ar_ptr)) {
4044 fprintf(stderr, " (top)\n");
4045 break;
4046 } else if(p->size == (0|PREV_INUSE)) {
4047 fprintf(stderr, " (fence)\n");
4048 break;
4050 fprintf(stderr, "\n");
4051 p = next_chunk(p);
4055 #endif
4061 malloc_stats:
4063 For all arenas separately and in total, prints on stderr the
4064 amount of space obtained from the system, and the current number
4065 of bytes allocated via malloc (or realloc, etc) but not yet
4066 freed. (Note that this is the number of bytes allocated, not the
4067 number requested. It will be larger than the number requested
4068 because of alignment and bookkeeping overhead.) When not compiled
4069 for multiple threads, the maximum amount of allocated memory
4070 (which may be more than current if malloc_trim and/or munmap got
4071 called) is also reported. When using mmap(), prints the maximum
4072 number of simultaneous mmap regions used, too.
4076 void mALLOC_STATs()
4078 int i;
4079 arena *ar_ptr;
4080 struct mallinfo mi;
4081 unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
4082 #if THREAD_STATS
4083 long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
4084 #endif
4086 for(i=0, ar_ptr = &main_arena;; i++) {
4087 malloc_update_mallinfo(ar_ptr, &mi);
4088 fprintf(stderr, "Arena %d:\n", i);
4089 fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena);
4090 fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks);
4091 system_b += mi.arena;
4092 in_use_b += mi.uordblks;
4093 #if THREAD_STATS
4094 stat_lock_direct += ar_ptr->stat_lock_direct;
4095 stat_lock_loop += ar_ptr->stat_lock_loop;
4096 stat_lock_wait += ar_ptr->stat_lock_wait;
4097 #endif
4098 #if USE_ARENAS && MALLOC_DEBUG > 1
4099 if(ar_ptr != &main_arena) {
4100 heap_info *heap;
4101 (void)mutex_lock(&ar_ptr->mutex);
4102 heap = heap_for_ptr(top(ar_ptr));
4103 while(heap) { dump_heap(heap); heap = heap->prev; }
4104 (void)mutex_unlock(&ar_ptr->mutex);
4106 #endif
4107 ar_ptr = ar_ptr->next;
4108 if(ar_ptr == &main_arena) break;
4110 #if HAVE_MMAP
4111 fprintf(stderr, "Total (incl. mmap):\n");
4112 #else
4113 fprintf(stderr, "Total:\n");
4114 #endif
4115 fprintf(stderr, "system bytes = %10u\n", system_b);
4116 fprintf(stderr, "in use bytes = %10u\n", in_use_b);
4117 #ifdef NO_THREADS
4118 fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
4119 #endif
4120 #if HAVE_MMAP
4121 fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
4122 fprintf(stderr, "max mmap bytes = %10lu\n", max_mmapped_mem);
4123 #endif
4124 #if THREAD_STATS
4125 fprintf(stderr, "heaps created = %10d\n", stat_n_heaps);
4126 fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct);
4127 fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop);
4128 fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait);
4129 fprintf(stderr, "locked total = %10ld\n",
4130 stat_lock_direct + stat_lock_loop + stat_lock_wait);
4131 #endif
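/* Illustrative usage sketch (not part of the original file, kept out
   of the build with #if 0): printing the per-arena statistics
   described above, e.g. just before program exit.  */
#if 0
#include <malloc.h>

static void
report_allocator_usage (void)
{
  malloc_stats ();   /* "system bytes" / "in use bytes" per arena + totals */
}
#endif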
4135 mallinfo returns a copy of updated current mallinfo.
4136 The information reported is for the arena last used by the thread.
4139 struct mallinfo mALLINFo()
4141 struct mallinfo mi;
4142 Void_t *vptr = NULL;
4144 #ifndef NO_THREADS
4145 tsd_getspecific(arena_key, vptr);
4146 #endif
4147 malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
4148 return mi;
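/* Illustrative usage sketch (not part of the original file, kept out
   of the build with #if 0): inspecting the fields filled in by
   malloc_update_mallinfo() above for the calling thread's arena.  */
#if 0
#include <malloc.h>
#include <stdio.h>

static void
mallinfo_example (void)
{
  struct mallinfo mi = mallinfo ();
  printf ("arena size:   %d bytes\n", mi.arena);
  printf ("in use:       %d bytes\n", mi.uordblks);
  printf ("free:         %d bytes\n", mi.fordblks);
  printf ("mmap regions: %d (%d bytes)\n", mi.hblks, mi.hblkhd);
}
#endif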
4155 mallopt:
4157 mallopt is the general SVID/XPG interface to tunable parameters.
4158 The format is to provide a (parameter-number, parameter-value) pair.
4159 mallopt then sets the corresponding parameter to the argument
4160 value if it can (i.e., so long as the value is meaningful),
4161 and returns 1 if successful else 0.
4163 See descriptions of tunable parameters above.
4167 #if __STD_C
4168 int mALLOPt(int param_number, int value)
4169 #else
4170 int mALLOPt(param_number, value) int param_number; int value;
4171 #endif
4173 switch(param_number)
4175 case M_TRIM_THRESHOLD:
4176 trim_threshold = value; return 1;
4177 case M_TOP_PAD:
4178 top_pad = value; return 1;
4179 case M_MMAP_THRESHOLD:
4180 #if USE_ARENAS
4181 /* Forbid setting the threshold too high. */
4182 if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
4183 #endif
4184 mmap_threshold = value; return 1;
4185 case M_MMAP_MAX:
4186 #if HAVE_MMAP
4187 n_mmaps_max = value; return 1;
4188 #else
4189 if (value != 0) return 0; else n_mmaps_max = value; return 1;
4190 #endif
4191 case M_CHECK_ACTION:
4192 check_action = value; return 1;
4194 default:
4195 return 0;
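/* Illustrative usage sketch (not part of the original file, kept out
   of the build with #if 0): tuning two of the parameters handled
   above.  The values are arbitrary examples, not recommendations.  */
#if 0
#include <malloc.h>

static int
tune_allocator (void)
{
  /* Keep up to 256kB of trailing free space before trimming, and send
     requests of 512kB or more straight to mmap().  mallopt() returns
     1 on success, 0 if the value is rejected.  */
  if (!mallopt (M_TRIM_THRESHOLD, 256 * 1024))
    return 0;
  return mallopt (M_MMAP_THRESHOLD, 512 * 1024);
}
#endif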
4201 /* Get/set state: malloc_get_state() records the current state of all
4202 malloc variables (_except_ for the actual heap contents and `hook'
4203 function pointers) in a system dependent, opaque data structure.
4204 This data structure is dynamically allocated and can be free()d
4205 after use. malloc_set_state() restores the state of all malloc
4206 variables to the previously obtained state. This is especially
4207 useful when using this malloc as part of a shared library, and when
4208 the heap contents are saved/restored via some other method. The
4209 primary example for this is GNU Emacs with its `dumping' procedure.
4210 `Hook' function pointers are never saved or restored by these
4211 functions, with two exceptions: If malloc checking was in use when
4212 malloc_get_state() was called, then malloc_set_state() calls
4213 __malloc_check_init() if possible; if malloc checking was not in
4214 use in the recorded state but the user requested malloc checking,
4215 then the hooks are reset to 0. */
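/* Illustrative usage sketch (not part of the original file, kept out
   of the build with #if 0): taking and restoring a state snapshot.
   In real use (e.g. the Emacs dumping procedure) the heap contents
   would be saved and reloaded by the application between the two
   calls.  */
#if 0
#include <malloc.h>
#include <stdlib.h>

static void
snapshot_example (void)
{
  void *state = malloc_get_state ();   /* opaque, malloc()ed snapshot */
  if (state == NULL)
    return;
  /* ... application-specific dump/restore of the heap goes here ... */
  if (malloc_set_state (state) != 0)   /* nonzero: bad magic or version */
    abort ();
  free (state);
}
#endif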
4217 #define MALLOC_STATE_MAGIC 0x444c4541l
4218 #define MALLOC_STATE_VERSION (0*0x100l + 1l) /* major*0x100 + minor */
4220 struct malloc_state {
4221 long magic;
4222 long version;
4223 mbinptr av[NAV * 2 + 2];
4224 char* sbrk_base;
4225 int sbrked_mem_bytes;
4226 unsigned long trim_threshold;
4227 unsigned long top_pad;
4228 unsigned int n_mmaps_max;
4229 unsigned long mmap_threshold;
4230 int check_action;
4231 unsigned long max_sbrked_mem;
4232 unsigned long max_total_mem;
4233 unsigned int n_mmaps;
4234 unsigned int max_n_mmaps;
4235 unsigned long mmapped_mem;
4236 unsigned long max_mmapped_mem;
4237 int using_malloc_checking;
4240 Void_t*
4241 mALLOC_GET_STATe()
4243 struct malloc_state* ms;
4244 int i;
4245 mbinptr b;
4247 ms = (struct malloc_state*)mALLOc(sizeof(*ms));
4248 if (!ms)
4249 return 0;
4250 (void)mutex_lock(&main_arena.mutex);
4251 ms->magic = MALLOC_STATE_MAGIC;
4252 ms->version = MALLOC_STATE_VERSION;
4253 ms->av[0] = main_arena.av[0];
4254 ms->av[1] = main_arena.av[1];
4255 for(i=0; i<NAV; i++) {
4256 b = bin_at(&main_arena, i);
4257 if(first(b) == b)
4258 ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */
4259 else {
4260 ms->av[2*i+2] = first(b);
4261 ms->av[2*i+3] = last(b);
4264 ms->sbrk_base = sbrk_base;
4265 ms->sbrked_mem_bytes = sbrked_mem;
4266 ms->trim_threshold = trim_threshold;
4267 ms->top_pad = top_pad;
4268 ms->n_mmaps_max = n_mmaps_max;
4269 ms->mmap_threshold = mmap_threshold;
4270 ms->check_action = check_action;
4271 ms->max_sbrked_mem = max_sbrked_mem;
4272 #ifdef NO_THREADS
4273 ms->max_total_mem = max_total_mem;
4274 #else
4275 ms->max_total_mem = 0;
4276 #endif
4277 ms->n_mmaps = n_mmaps;
4278 ms->max_n_mmaps = max_n_mmaps;
4279 ms->mmapped_mem = mmapped_mem;
4280 ms->max_mmapped_mem = max_mmapped_mem;
4281 #if defined _LIBC || defined MALLOC_HOOKS
4282 ms->using_malloc_checking = using_malloc_checking;
4283 #else
4284 ms->using_malloc_checking = 0;
4285 #endif
4286 (void)mutex_unlock(&main_arena.mutex);
4287 return (Void_t*)ms;
4291 #if __STD_C
4292 mALLOC_SET_STATe(Void_t* msptr)
4293 #else
4294 mALLOC_SET_STATe(msptr) Void_t* msptr;
4295 #endif
4297 struct malloc_state* ms = (struct malloc_state*)msptr;
4298 int i;
4299 mbinptr b;
4301 #if defined _LIBC || defined MALLOC_HOOKS
4302 disallow_malloc_check = 1;
4303 #endif
4304 ptmalloc_init();
4305 if(ms->magic != MALLOC_STATE_MAGIC) return -1;
4306 /* Must fail if the major version is too high. */
4307 if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2;
4308 (void)mutex_lock(&main_arena.mutex);
4309 main_arena.av[0] = ms->av[0];
4310 main_arena.av[1] = ms->av[1];
4311 for(i=0; i<NAV; i++) {
4312 b = bin_at(&main_arena, i);
4313 if(ms->av[2*i+2] == 0)
4314 first(b) = last(b) = b;
4315 else {
4316 first(b) = ms->av[2*i+2];
4317 last(b) = ms->av[2*i+3];
4318 if(i > 0) {
4319 /* Make sure the links to the `av'-bins in the heap are correct. */
4320 first(b)->bk = b;
4321 last(b)->fd = b;
4325 sbrk_base = ms->sbrk_base;
4326 sbrked_mem = ms->sbrked_mem_bytes;
4327 trim_threshold = ms->trim_threshold;
4328 top_pad = ms->top_pad;
4329 n_mmaps_max = ms->n_mmaps_max;
4330 mmap_threshold = ms->mmap_threshold;
4331 check_action = ms->check_action;
4332 max_sbrked_mem = ms->max_sbrked_mem;
4333 #ifdef NO_THREADS
4334 max_total_mem = ms->max_total_mem;
4335 #endif
4336 n_mmaps = ms->n_mmaps;
4337 max_n_mmaps = ms->max_n_mmaps;
4338 mmapped_mem = ms->mmapped_mem;
4339 max_mmapped_mem = ms->max_mmapped_mem;
4340 /* add version-dependent code here */
4341 if (ms->version >= 1) {
4342 #if defined _LIBC || defined MALLOC_HOOKS
4343 /* Check whether it is safe to enable malloc checking, or whether
4344 it is necessary to disable it. */
4345 if (ms->using_malloc_checking && !using_malloc_checking &&
4346 !disallow_malloc_check)
4347 __malloc_check_init ();
4348 else if (!ms->using_malloc_checking && using_malloc_checking) {
4349 __malloc_hook = 0;
4350 __free_hook = 0;
4351 __realloc_hook = 0;
4352 __memalign_hook = 0;
4353 using_malloc_checking = 0;
4355 #endif
4358 (void)mutex_unlock(&main_arena.mutex);
4359 return 0;
4364 #if defined _LIBC || defined MALLOC_HOOKS
4366 /* A simple, standard set of debugging hooks. Overhead is `only' one
4367 byte per chunk; still this will catch most cases of double frees or
4368 overruns. The goal here is to avoid obscure crashes due to invalid
4369 usage, unlike in the MALLOC_DEBUG code. */
4371 #define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )
4373 /* Instrument a chunk with overrun detector byte(s) and convert it
4374 into a user pointer with requested size sz. */
4376 static Void_t*
4377 internal_function
4378 #if __STD_C
4379 chunk2mem_check(mchunkptr p, size_t sz)
4380 #else
4381 chunk2mem_check(p, sz) mchunkptr p; size_t sz;
4382 #endif
4384 unsigned char* m_ptr = (unsigned char*)BOUNDED_N(chunk2mem(p), sz);
4385 size_t i;
4387 for(i = chunksize(p) - (chunk_is_mmapped(p) ? 2*SIZE_SZ+1 : SIZE_SZ+1);
4388 i > sz;
4389 i -= 0xFF) {
4390 if(i-sz < 0x100) {
4391 m_ptr[i] = (unsigned char)(i-sz);
4392 break;
4394 m_ptr[i] = 0xFF;
4396 m_ptr[sz] = MAGICBYTE(p);
4397 return (Void_t*)m_ptr;
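/* Illustrative sketch of the idea behind the bytes planted above (not
   part of the original file, kept out of the build with #if 0, and
   simplified: the real code also encodes the distance to the end of
   the chunk).  A one-byte value derived from the chunk address is
   stored just past the user's data, so an overrun or a bogus free()
   is likely to disturb or miss it.  */
#if 0
#include <assert.h>
#include <stddef.h>

static unsigned char
magic_for (void *chunk)
{
  /* Same mixing as MAGICBYTE() above: fold two shifted copies of the
     address into one byte.  */
  return (unsigned char) ((((size_t) chunk >> 3) ^ ((size_t) chunk >> 11))
                          & 0xFF);
}

static void
guard_example (unsigned char *user_mem, size_t requested, void *chunk)
{
  user_mem[requested] = magic_for (chunk);            /* plant the guard */
  /* ... later, when the block is freed: ... */
  assert (user_mem[requested] == magic_for (chunk));  /* detect overruns */
}
#endif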
4400 /* Convert a pointer to be free()d or realloc()ed to a valid chunk
4401 pointer. If the provided pointer is not valid, return NULL. */
4403 static mchunkptr
4404 internal_function
4405 #if __STD_C
4406 mem2chunk_check(Void_t* mem)
4407 #else
4408 mem2chunk_check(mem) Void_t* mem;
4409 #endif
4411 mchunkptr p;
4412 INTERNAL_SIZE_T sz, c;
4413 unsigned char magic;
4415 p = mem2chunk(mem);
4416 if(!aligned_OK(p)) return NULL;
4417 if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
4418 /* Must be a chunk in conventional heap memory. */
4419 if(chunk_is_mmapped(p) ||
4420 ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
4421 sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
4422 ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
4423 (long)prev_chunk(p)<(long)sbrk_base ||
4424 next_chunk(prev_chunk(p))!=p) ))
4425 return NULL;
4426 magic = MAGICBYTE(p);
4427 for(sz += SIZE_SZ-1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4428 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4430 ((unsigned char*)p)[sz] ^= 0xFF;
4431 } else {
4432 unsigned long offset, page_mask = malloc_getpagesize-1;
4434 /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
4435 alignment relative to the beginning of a page. Check this
4436 first. */
4437 offset = (unsigned long)mem & page_mask;
4438 if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
4439 offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
4440 offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 &&
4441 offset<0x2000) ||
4442 !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
4443 ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
4444 ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
4445 return NULL;
4446 magic = MAGICBYTE(p);
4447 for(sz -= 1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4448 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4450 ((unsigned char*)p)[sz] ^= 0xFF;
4452 return p;
4455 /* Check for corruption of the top chunk, and try to recover if
4456 necessary. */
4458 static int
4459 internal_function
4460 #if __STD_C
4461 top_check(void)
4462 #else
4463 top_check()
4464 #endif
4466 mchunkptr t = top(&main_arena);
4467 char* brk, * new_brk;
4468 INTERNAL_SIZE_T front_misalign, sbrk_size;
4469 unsigned long pagesz = malloc_getpagesize;
4471 if((char*)t + chunksize(t) == sbrk_base + sbrked_mem ||
4472 t == initial_top(&main_arena)) return 0;
4474 if(check_action & 1)
4475 fprintf(stderr, "malloc: top chunk is corrupt\n");
4476 if(check_action & 2)
4477 abort();
4479 /* Try to set up a new top chunk. */
4480 brk = MORECORE(0);
4481 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
4482 if (front_misalign > 0)
4483 front_misalign = MALLOC_ALIGNMENT - front_misalign;
4484 sbrk_size = front_misalign + top_pad + MINSIZE;
4485 sbrk_size += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
4486 new_brk = (char*)(MORECORE (sbrk_size));
4487 if (new_brk == (char*)(MORECORE_FAILURE)) return -1;
4488 sbrked_mem = (new_brk - sbrk_base) + sbrk_size;
4490 top(&main_arena) = (mchunkptr)(brk + front_misalign);
4491 set_head(top(&main_arena), (sbrk_size - front_misalign) | PREV_INUSE);
4493 return 0;
4496 static Void_t*
4497 #if __STD_C
4498 malloc_check(size_t sz, const Void_t *caller)
4499 #else
4500 malloc_check(sz, caller) size_t sz; const Void_t *caller;
4501 #endif
4503 mchunkptr victim;
4504 INTERNAL_SIZE_T nb;
4506 if(request2size(sz+1, nb))
4507 return 0;
4508 (void)mutex_lock(&main_arena.mutex);
4509 victim = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4510 (void)mutex_unlock(&main_arena.mutex);
4511 if(!victim) return NULL;
4512 return chunk2mem_check(victim, sz);
4515 static void
4516 #if __STD_C
4517 free_check(Void_t* mem, const Void_t *caller)
4518 #else
4519 free_check(mem, caller) Void_t* mem; const Void_t *caller;
4520 #endif
4522 mchunkptr p;
4524 if(!mem) return;
4525 (void)mutex_lock(&main_arena.mutex);
4526 p = mem2chunk_check(mem);
4527 if(!p) {
4528 (void)mutex_unlock(&main_arena.mutex);
4529 if(check_action & 1)
4530 fprintf(stderr, "free(): invalid pointer %p!\n", mem);
4531 if(check_action & 2)
4532 abort();
4533 return;
4535 #if HAVE_MMAP
4536 if (chunk_is_mmapped(p)) {
4537 (void)mutex_unlock(&main_arena.mutex);
4538 munmap_chunk(p);
4539 return;
4541 #endif
4542 #if 0 /* Erase freed memory. */
4543 memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
4544 #endif
4545 chunk_free(&main_arena, p);
4546 (void)mutex_unlock(&main_arena.mutex);
4549 static Void_t*
4550 #if __STD_C
4551 realloc_check(Void_t* oldmem, size_t bytes, const Void_t *caller)
4552 #else
4553 realloc_check(oldmem, bytes, caller)
4554 Void_t* oldmem; size_t bytes; const Void_t *caller;
4555 #endif
4557 mchunkptr oldp, newp;
4558 INTERNAL_SIZE_T nb, oldsize;
4560 if (oldmem == 0) return malloc_check(bytes, NULL);
4561 (void)mutex_lock(&main_arena.mutex);
4562 oldp = mem2chunk_check(oldmem);
4563 if(!oldp) {
4564 (void)mutex_unlock(&main_arena.mutex);
4565 if(check_action & 1)
4566 fprintf(stderr, "realloc(): invalid pointer %p!\n", oldmem);
4567 if(check_action & 2)
4568 abort();
4569 return malloc_check(bytes, NULL);
4571 oldsize = chunksize(oldp);
4573 if(request2size(bytes+1, nb)) {
4574 (void)mutex_unlock(&main_arena.mutex);
4575 return 0;
4578 #if HAVE_MMAP
4579 if (chunk_is_mmapped(oldp)) {
4580 #if HAVE_MREMAP
4581 newp = mremap_chunk(oldp, nb);
4582 if(!newp) {
4583 #endif
4584 /* Note the extra SIZE_SZ overhead. */
4585 if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
4586 else {
4587 /* Must alloc, copy, free. */
4588 newp = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4589 if (newp) {
4590 MALLOC_COPY(BOUNDED_N(chunk2mem(newp), nb),
4591 oldmem, oldsize - 2*SIZE_SZ);
4592 munmap_chunk(oldp);
4595 #if HAVE_MREMAP
4597 #endif
4598 } else {
4599 #endif /* HAVE_MMAP */
4600 newp = (top_check() >= 0) ?
4601 chunk_realloc(&main_arena, oldp, oldsize, nb) : NULL;
4602 #if 0 /* Erase freed memory. */
4603 nb = chunksize(newp);
4604 if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
4605 memset((char*)oldmem + 2*sizeof(mbinptr), 0,
4606 oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
4607 } else if(nb > oldsize+SIZE_SZ) {
4608 memset((char*)BOUNDED_N(chunk2mem(newp), bytes) + oldsize,
4609 0, nb - (oldsize+SIZE_SZ));
4611 #endif
4612 #if HAVE_MMAP
4614 #endif
4615 (void)mutex_unlock(&main_arena.mutex);
4617 if(!newp) return NULL;
4618 return chunk2mem_check(newp, bytes);
4621 static Void_t*
4622 #if __STD_C
4623 memalign_check(size_t alignment, size_t bytes, const Void_t *caller)
4624 #else
4625 memalign_check(alignment, bytes, caller)
4626 size_t alignment; size_t bytes; const Void_t *caller;
4627 #endif
4629 INTERNAL_SIZE_T nb;
4630 mchunkptr p;
4632 if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes, NULL);
4633 if (alignment < MINSIZE) alignment = MINSIZE;
4635 if(request2size(bytes+1, nb))
4636 return 0;
4637 (void)mutex_lock(&main_arena.mutex);
4638 p = (top_check() >= 0) ? chunk_align(&main_arena, nb, alignment) : NULL;
4639 (void)mutex_unlock(&main_arena.mutex);
4640 if(!p) return NULL;
4641 return chunk2mem_check(p, bytes);
4644 #ifndef NO_THREADS
4646 /* The following hooks are used when the global initialization in
4647 ptmalloc_init() hasn't completed yet. */
4649 static Void_t*
4650 #if __STD_C
4651 malloc_starter(size_t sz, const Void_t *caller)
4652 #else
4653 malloc_starter(sz, caller) size_t sz; const Void_t *caller;
4654 #endif
4656 INTERNAL_SIZE_T nb;
4657 mchunkptr victim;
4659 if(request2size(sz, nb))
4660 return 0;
4661 victim = chunk_alloc(&main_arena, nb);
4663 return victim ? BOUNDED_N(chunk2mem(victim), sz) : 0;
4666 static void
4667 #if __STD_C
4668 free_starter(Void_t* mem, const Void_t *caller)
4669 #else
4670 free_starter(mem, caller) Void_t* mem; const Void_t *caller;
4671 #endif
4673 mchunkptr p;
4675 if(!mem) return;
4676 p = mem2chunk(mem);
4677 #if HAVE_MMAP
4678 if (chunk_is_mmapped(p)) {
4679 munmap_chunk(p);
4680 return;
4682 #endif
4683 chunk_free(&main_arena, p);
4686 /* The following hooks are used while the `atfork' handling mechanism
4687 is active. */
4689 static Void_t*
4690 #if __STD_C
4691 malloc_atfork (size_t sz, const Void_t *caller)
4692 #else
4693 malloc_atfork(sz, caller) size_t sz; const Void_t *caller;
4694 #endif
4696 Void_t *vptr = NULL;
4697 INTERNAL_SIZE_T nb;
4698 mchunkptr victim;
4700 tsd_getspecific(arena_key, vptr);
4701 if(!vptr) {
4702 if(save_malloc_hook != malloc_check) {
4703 if(request2size(sz, nb))
4704 return 0;
4705 victim = chunk_alloc(&main_arena, nb);
4706 return victim ? BOUNDED_N(chunk2mem(victim), sz) : 0;
4707 } else {
4708 if(top_check()<0 || request2size(sz+1, nb))
4709 return 0;
4710 victim = chunk_alloc(&main_arena, nb);
4711 return victim ? chunk2mem_check(victim, sz) : 0;
4713 } else {
4714 /* Suspend the thread until the `atfork' handlers have completed.
4715 By that time, the hooks will have been reset as well, so that
4716 mALLOc() can be used again. */
4717 (void)mutex_lock(&list_lock);
4718 (void)mutex_unlock(&list_lock);
4719 return mALLOc(sz);
4723 static void
4724 #if __STD_C
4725 free_atfork(Void_t* mem, const Void_t *caller)
4726 #else
4727 free_atfork(mem, caller) Void_t* mem; const Void_t *caller;
4728 #endif
4730 Void_t *vptr = NULL;
4731 arena *ar_ptr;
4732 mchunkptr p; /* chunk corresponding to mem */
4734 if (mem == 0) /* free(0) has no effect */
4735 return;
4737 p = mem2chunk(mem); /* do not bother to replicate free_check here */
4739 #if HAVE_MMAP
4740 if (chunk_is_mmapped(p)) /* release mmapped memory. */
4742 munmap_chunk(p);
4743 return;
4745 #endif
4747 ar_ptr = arena_for_ptr(p);
4748 tsd_getspecific(arena_key, vptr);
4749 if(vptr)
4750 (void)mutex_lock(&ar_ptr->mutex);
4751 chunk_free(ar_ptr, p);
4752 if(vptr)
4753 (void)mutex_unlock(&ar_ptr->mutex);
4756 #endif /* !defined NO_THREADS */
4758 #endif /* defined _LIBC || defined MALLOC_HOOKS */
4762 #ifdef _LIBC
4763 /* We need a wrapper function for one of the additions of POSIX. */
4765 __posix_memalign (void **memptr, size_t alignment, size_t size)
4767 void *mem;
4769 /* Test whether the ALIGNMENT argument is valid. It must be a power of
4770 two multiple of sizeof (void *). */
4771 if (alignment % sizeof (void *) != 0 || (alignment & (alignment - 1)) != 0 || alignment == 0)
4772 return EINVAL;
4774 mem = __libc_memalign (alignment, size);
4776 if (mem != NULL)
4778 *memptr = mem;
4779 return 0;
4782 return ENOMEM;
4784 weak_alias (__posix_memalign, posix_memalign)
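/* Illustrative usage sketch (not part of the original file, kept out
   of the build with #if 0): the POSIX interface wrapped above stores
   the block through its first argument and returns 0 or an error
   number instead of setting errno.  */
#if 0
#include <stdlib.h>

static double *
aligned_doubles (size_t n)
{
  void *p = NULL;
  /* 64 is a power-of-two multiple of sizeof (void *), as required.  */
  if (posix_memalign (&p, 64, n * sizeof (double)) != 0)
    return NULL;               /* EINVAL or ENOMEM */
  return (double *) p;         /* release with free() when done */
}
#endif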
4786 weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
4787 weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
4788 weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
4789 weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
4790 weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
4791 weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
4792 weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
4793 weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
4794 weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
4795 weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
4797 weak_alias (__malloc_stats, malloc_stats)
4798 weak_alias (__malloc_usable_size, malloc_usable_size)
4799 weak_alias (__malloc_trim, malloc_trim)
4800 weak_alias (__malloc_get_state, malloc_get_state)
4801 weak_alias (__malloc_set_state, malloc_set_state)
4802 #endif
4806 History:
4808 V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4809 * Added malloc_get/set_state() (mainly for use in GNU emacs),
4810 using interface from Marcus Daniels
4811 * All parameters are now adjustable via environment variables
4813 V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4814 * Added debugging hooks
4815 * Fixed possible deadlock in realloc() when out of memory
4816 * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
4818 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4819 * Very minor updates from the released 2.6.4 version.
4820 * Trimmed include file down to exported data structures.
4821 * Changes from H.J. Lu for glibc-2.0.
4823 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4824 * Many changes for multiple threads
4825 * Introduced arenas and heaps
4827 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
4828 * Added pvalloc, as recommended by H.J. Liu
4829 * Added 64bit pointer support mainly from Wolfram Gloger
4830 * Added anonymously donated WIN32 sbrk emulation
4831 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4832 * malloc_extend_top: fix mask error that caused wastage after
4833 foreign sbrks
4834 * Add linux mremap support code from HJ Liu
4836 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4837 * Integrated most documentation with the code.
4838 * Add support for mmap, with help from
4839 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4840 * Use last_remainder in more cases.
4841 * Pack bins using idea from colin@nyx10.cs.du.edu
4842 * Use ordered bins instead of best-fit threshold
4843 * Eliminate block-local decls to simplify tracing and debugging.
4844 * Support another case of realloc via move into top
4845 * Fix error occurring when initial sbrk_base not word-aligned.
4846 * Rely on page size for units instead of SBRK_UNIT to
4847 avoid surprises about sbrk alignment conventions.
4848 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4849 (raymond@es.ele.tue.nl) for the suggestion.
4850 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4851 * More precautions for cases where other routines call sbrk,
4852 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4853 * Added macros etc., allowing use in linux libc from
4854 H.J. Lu (hjl@gnu.ai.mit.edu)
4855 * Inverted this history list
4857 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
4858 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4859 * Removed all preallocation code since under current scheme
4860 the work required to undo bad preallocations exceeds
4861 the work saved in good cases for most test programs.
4862 * No longer use return list or unconsolidated bins since
4863 no scheme using them consistently outperforms those that don't
4864 given above changes.
4865 * Use best fit for very large chunks to prevent some worst-cases.
4866 * Added some support for debugging
4868 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
4869 * Removed footers when chunks are in use. Thanks to
4870 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4872 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
4873 * Added malloc_trim, with help from Wolfram Gloger
4874 (wmglo@Dent.MED.Uni-Muenchen.DE).
4876 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
4878 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
4879 * realloc: try to expand in both directions
4880 * malloc: swap order of clean-bin strategy;
4881 * realloc: only conditionally expand backwards
4882 * Try not to scavenge used bins
4883 * Use bin counts as a guide to preallocation
4884 * Occasionally bin return list chunks in first scan
4885 * Add a few optimizations from colin@nyx10.cs.du.edu
4887 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
4888 * faster bin computation & slightly different binning
4889 * merged all consolidations to one part of malloc proper
4890 (eliminating old malloc_find_space & malloc_clean_bin)
4891 * Scan 2 returns chunks (not just 1)
4892 * Propagate failure in realloc if malloc returns 0
4893 * Add stuff to allow compilation on non-ANSI compilers
4894 from kpv@research.att.com
4896 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
4897 * removed potential for odd address access in prev_chunk
4898 * removed dependency on getpagesize.h
4899 * misc cosmetics and a bit more internal documentation
4900 * anticosmetics: mangled names in macros to evade debugger strangeness
4901 * tested on sparc, hp-700, dec-mips, rs6000
4902 with gcc & native cc (hp, dec only) allowing
4903 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4905 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
4906 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4907 structure of old version, but most details differ.)