Update.
[glibc.git] / malloc / malloc.c
blob5fd2dfba76c5ddae524fb57964ad3d7b8bfcb03e
1 /* Malloc implementation for multiple threads without lock contention.
2 Copyright (C) 1996, 1997, 1998, 1999 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
22 /* V2.6.4-pt3 Thu Feb 20 1997
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
27 ftp://g.oswego.edu/pub/misc/malloc.c
29 Most of the original comments are reproduced in the code below.
31 * Why use this malloc?
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
48 * Synopsis of public routines
50 (Much fuller descriptions are contained in the program documentation below.)
52 ptmalloc_init();
53 Initialize global configuration. When compiled for multiple threads,
54 this function must be called once before any other function in the
55 package. It is not required otherwise. It is called automatically
56 in the Linux/GNU C libray or when compiling with MALLOC_HOOKS.
57 malloc(size_t n);
58 Return a pointer to a newly allocated chunk of at least n bytes, or null
59 if no space is available.
60 free(Void_t* p);
61 Release the chunk of memory pointed to by p, or no effect if p is null.
62 realloc(Void_t* p, size_t n);
63 Return a pointer to a chunk of size n that contains the same data
64 as does chunk p up to the minimum of (n, p's size) bytes, or null
65 if no space is available. The returned pointer may or may not be
66 the same as p. If p is null, equivalent to malloc. Unless the
67 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
68 size argument of zero (re)allocates a minimum-sized chunk.
69 memalign(size_t alignment, size_t n);
70 Return a pointer to a newly allocated chunk of n bytes, aligned
71 in accord with the alignment argument, which must be a power of
72 two.
73 valloc(size_t n);
74 Equivalent to memalign(pagesize, n), where pagesize is the page
75 size of the system (or as near to this as can be figured out from
76 all the includes/defines below.)
77 pvalloc(size_t n);
78 Equivalent to valloc(minimum-page-that-holds(n)), that is,
79 round up n to nearest pagesize.
80 calloc(size_t unit, size_t quantity);
81 Returns a pointer to quantity * unit bytes, with all locations
82 set to zero.
83 cfree(Void_t* p);
84 Equivalent to free(p).
85 malloc_trim(size_t pad);
86 Release all but pad bytes of freed top-most memory back
87 to the system. Return 1 if successful, else 0.
88 malloc_usable_size(Void_t* p);
89 Report the number usable allocated bytes associated with allocated
90 chunk p. This may or may not report more bytes than were requested,
91 due to alignment and minimum size constraints.
92 malloc_stats();
93 Prints brief summary statistics on stderr.
94 mallinfo()
95 Returns (by copy) a struct containing various summary statistics.
96 mallopt(int parameter_number, int parameter_value)
97 Changes one of the tunable parameters described below. Returns
98 1 if successful in changing the parameter, else 0.
100 * Vital statistics:
102 Alignment: 8-byte
103 8 byte alignment is currently hardwired into the design. This
104 seems to suffice for all current machines and C compilers.
106 Assumed pointer representation: 4 or 8 bytes
107 Code for 8-byte pointers is untested by me but has worked
108 reliably by Wolfram Gloger, who contributed most of the
109 changes supporting this.
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
119 8-byte ptrs: 24/32 bytes (including, 4/8 overhead)
121 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
122 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
138 Requests for sizes with a negative sign bit will return a
139 minimum-sized chunk.
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
143 Alignment demands, plus the minimum allocatable size restriction
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
145 more bytes will be allocated than were requested in malloc), with
146 two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
153 * Limitations
155 Here are some features that are NOT currently supported
157 * No automated mechanism for fully checking that all accesses
158 to malloced memory stay within their bounds.
159 * No support for compaction.
161 * Synopsis of compile-time options:
163 People have reported using previous versions of this malloc on all
164 versions of Unix, sometimes by tweaking some of the defines
165 below. It has been tested most extensively on Solaris and
166 Linux. People have also reported adapting this malloc for use in
167 stand-alone embedded systems.
169 The implementation is in straight, hand-tuned ANSI C. Among other
170 consequences, it uses a lot of macros. Because of this, to be at
171 all usable, this code should be compiled using an optimizing compiler
172 (for example gcc -O2) that can simplify expressions and control
173 paths.
175 __STD_C (default: derived from C compiler defines)
176 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
177 a C compiler sufficiently close to ANSI to get away with it.
178 MALLOC_DEBUG (default: NOT defined)
179 Define to enable debugging. Adds fairly extensive assertion-based
180 checking to help track down memory errors, but noticeably slows down
181 execution.
182 MALLOC_HOOKS (default: NOT defined)
183 Define to enable support run-time replacement of the allocation
184 functions through user-defined `hooks'.
185 REALLOC_ZERO_BYTES_FREES (default: defined)
186 Define this if you think that realloc(p, 0) should be equivalent
187 to free(p). (The C standard requires this behaviour, therefore
188 it is the default.) Otherwise, since malloc returns a unique
189 pointer for malloc(0), so does realloc(p, 0).
190 HAVE_MEMCPY (default: defined)
191 Define if you are not otherwise using ANSI STD C, but still
192 have memcpy and memset in your C library and want to use them.
193 Otherwise, simple internal versions are supplied.
194 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
195 Define as 1 if you want the C library versions of memset and
196 memcpy called in realloc and calloc (otherwise macro versions are used).
197 At least on some platforms, the simple macro versions usually
198 outperform libc versions.
199 HAVE_MMAP (default: defined as 1)
200 Define to non-zero to optionally make malloc() use mmap() to
201 allocate very large blocks.
202 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
203 Define to non-zero to optionally make realloc() use mremap() to
204 reallocate very large blocks.
205 malloc_getpagesize (default: derived from system #includes)
206 Either a constant or routine call returning the system page size.
207 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
208 Optionally define if you are on a system with a /usr/include/malloc.h
209 that declares struct mallinfo. It is not at all necessary to
210 define this even if you do, but will ensure consistency.
211 INTERNAL_SIZE_T (default: size_t)
212 Define to a 32-bit type (probably `unsigned int') if you are on a
213 64-bit machine, yet do not want or need to allow malloc requests of
214 greater than 2^31 to be handled. This saves space, especially for
215 very small chunks.
216 _LIBC (default: NOT defined)
217 Defined only when compiled as part of the Linux libc/glibc.
218 Also note that there is some odd internal name-mangling via defines
219 (for example, internally, `malloc' is named `mALLOc') needed
220 when compiling in this case. These look funny but don't otherwise
221 affect anything.
222 LACKS_UNISTD_H (default: undefined)
223 Define this if your system does not have a <unistd.h>.
224 MORECORE (default: sbrk)
225 The name of the routine to call to obtain more memory from the system.
226 MORECORE_FAILURE (default: -1)
227 The value returned upon failure of MORECORE.
228 MORECORE_CLEARS (default 1)
229 The degree to which the routine mapped to MORECORE zeroes out
230 memory: never (0), only for newly allocated space (1) or always
231 (2). The distinction between (1) and (2) is necessary because on
232 some systems, if the application first decrements and then
233 increments the break value, the contents of the reallocated space
234 are unspecified.
235 DEFAULT_TRIM_THRESHOLD
236 DEFAULT_TOP_PAD
237 DEFAULT_MMAP_THRESHOLD
238 DEFAULT_MMAP_MAX
239 Default values of tunable parameters (described in detail below)
240 controlling interaction with host system routines (sbrk, mmap, etc).
241 These values may also be changed dynamically via mallopt(). The
242 preset defaults are those that give best performance for typical
243 programs/systems.
244 DEFAULT_CHECK_ACTION
245 When the standard debugging hooks are in place, and a pointer is
246 detected as corrupt, do nothing (0), print an error message (1),
247 or call abort() (2).
254 * Compile-time options for multiple threads:
256 USE_PTHREADS, USE_THR, USE_SPROC
257 Define one of these as 1 to select the thread interface:
258 POSIX threads, Solaris threads or SGI sproc's, respectively.
259 If none of these is defined as non-zero, you get a `normal'
260 malloc implementation which is not thread-safe. Support for
261 multiple threads requires HAVE_MMAP=1. As an exception, when
262 compiling for GNU libc, i.e. when _LIBC is defined, then none of
263 the USE_... symbols have to be defined.
265 HEAP_MIN_SIZE
266 HEAP_MAX_SIZE
267 When thread support is enabled, additional `heap's are created
268 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
269 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
270 of two for alignment reasons. HEAP_MAX_SIZE should be at least
271 twice as large as the mmap threshold.
272 THREAD_STATS
273 When this is defined as non-zero, some statistics on mutex locking
274 are computed.
281 /* Preliminaries */
283 #ifndef __STD_C
284 #if defined (__STDC__)
285 #define __STD_C 1
286 #else
287 #if __cplusplus
288 #define __STD_C 1
289 #else
290 #define __STD_C 0
291 #endif /*__cplusplus*/
292 #endif /*__STDC__*/
293 #endif /*__STD_C*/
295 #ifndef Void_t
296 #if __STD_C
297 #define Void_t void
298 #else
299 #define Void_t char
300 #endif
301 #endif /*Void_t*/
303 #if __STD_C
304 # include <stddef.h> /* for size_t */
305 # if defined _LIBC || defined MALLOC_HOOKS
306 # include <stdlib.h> /* for getenv(), abort() */
307 # endif
308 #else
309 # include <sys/types.h>
310 #endif
312 #ifndef _LIBC
313 # define __secure_getenv(Str) getenv (Str)
314 #endif
316 /* Macros for handling mutexes and thread-specific data. This is
317 included early, because some thread-related header files (such as
318 pthread.h) should be included before any others. */
319 #include "thread-m.h"
321 #ifdef __cplusplus
322 extern "C" {
323 #endif
325 #include <errno.h>
326 #include <stdio.h> /* needed for malloc_stats */
330 Compile-time options
335 Debugging:
337 Because freed chunks may be overwritten with link fields, this
338 malloc will often die when freed memory is overwritten by user
339 programs. This can be very effective (albeit in an annoying way)
340 in helping track down dangling pointers.
342 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
343 enabled that will catch more memory errors. You probably won't be
344 able to make much sense of the actual assertion errors, but they
345 should help you locate incorrectly overwritten memory. The
346 checking is fairly extensive, and will slow down execution
347 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
348 attempt to check every non-mmapped allocated and free chunk in the
349 course of computing the summaries. (By nature, mmapped regions
350 cannot be checked very much automatically.)
352 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
353 this code. The assertions in the check routines spell out in more
354 detail the assumptions and invariants underlying the algorithms.
358 #if MALLOC_DEBUG
359 #include <assert.h>
360 #else
361 #define assert(x) ((void)0)
362 #endif
366 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
367 of chunk sizes. On a 64-bit machine, you can reduce malloc
368 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
369 at the expense of not being able to handle requests greater than
370 2^31. This limitation is hardly ever a concern; you are encouraged
371 to set this. However, the default version is the same as size_t.
374 #ifndef INTERNAL_SIZE_T
375 #define INTERNAL_SIZE_T size_t
376 #endif
379 REALLOC_ZERO_BYTES_FREES should be set if a call to realloc with
380 zero bytes should be the same as a call to free. The C standard
381 requires this. Otherwise, since this malloc returns a unique pointer
382 for malloc(0), so does realloc(p, 0).
386 #define REALLOC_ZERO_BYTES_FREES
390 HAVE_MEMCPY should be defined if you are not otherwise using
391 ANSI STD C, but still have memcpy and memset in your C library
392 and want to use them in calloc and realloc. Otherwise simple
393 macro versions are defined here.
395 USE_MEMCPY should be defined as 1 if you actually want to
396 have memset and memcpy called. People report that the macro
397 versions are often enough faster than libc versions on many
398 systems that it is better to use them.
402 #define HAVE_MEMCPY 1
404 #ifndef USE_MEMCPY
405 #ifdef HAVE_MEMCPY
406 #define USE_MEMCPY 1
407 #else
408 #define USE_MEMCPY 0
409 #endif
410 #endif
412 #if (__STD_C || defined(HAVE_MEMCPY))
414 #if __STD_C
415 void* memset(void*, int, size_t);
416 void* memcpy(void*, const void*, size_t);
417 #else
418 Void_t* memset();
419 Void_t* memcpy();
420 #endif
421 #endif
423 #if USE_MEMCPY
425 /* The following macros are only invoked with (2n+1)-multiples of
426 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
427 for fast inline execution when n is small. */
429 #define MALLOC_ZERO(charp, nbytes) \
430 do { \
431 INTERNAL_SIZE_T mzsz = (nbytes); \
432 if(mzsz <= 9*sizeof(mzsz)) { \
433 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
434 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
435 *mz++ = 0; \
436 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
437 *mz++ = 0; \
438 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
439 *mz++ = 0; }}} \
440 *mz++ = 0; \
441 *mz++ = 0; \
442 *mz = 0; \
443 } else memset((charp), 0, mzsz); \
444 } while(0)
446 #define MALLOC_COPY(dest,src,nbytes) \
447 do { \
448 INTERNAL_SIZE_T mcsz = (nbytes); \
449 if(mcsz <= 9*sizeof(mcsz)) { \
450 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
451 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
452 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
453 *mcdst++ = *mcsrc++; \
454 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
455 *mcdst++ = *mcsrc++; \
456 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
457 *mcdst++ = *mcsrc++; }}} \
458 *mcdst++ = *mcsrc++; \
459 *mcdst++ = *mcsrc++; \
460 *mcdst = *mcsrc ; \
461 } else memcpy(dest, src, mcsz); \
462 } while(0)
464 #else /* !USE_MEMCPY */
466 /* Use Duff's device for good zeroing/copying performance. */
468 #define MALLOC_ZERO(charp, nbytes) \
469 do { \
470 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
471 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
472 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
473 switch (mctmp) { \
474 case 0: for(;;) { *mzp++ = 0; \
475 case 7: *mzp++ = 0; \
476 case 6: *mzp++ = 0; \
477 case 5: *mzp++ = 0; \
478 case 4: *mzp++ = 0; \
479 case 3: *mzp++ = 0; \
480 case 2: *mzp++ = 0; \
481 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
483 } while(0)
485 #define MALLOC_COPY(dest,src,nbytes) \
486 do { \
487 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
488 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
489 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
490 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
491 switch (mctmp) { \
492 case 0: for(;;) { *mcdst++ = *mcsrc++; \
493 case 7: *mcdst++ = *mcsrc++; \
494 case 6: *mcdst++ = *mcsrc++; \
495 case 5: *mcdst++ = *mcsrc++; \
496 case 4: *mcdst++ = *mcsrc++; \
497 case 3: *mcdst++ = *mcsrc++; \
498 case 2: *mcdst++ = *mcsrc++; \
499 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
501 } while(0)
503 #endif
506 #ifndef LACKS_UNISTD_H
507 # include <unistd.h>
508 #endif
511 Define HAVE_MMAP to optionally make malloc() use mmap() to
512 allocate very large blocks. These will be returned to the
513 operating system immediately after a free().
516 #ifndef HAVE_MMAP
517 # ifdef _POSIX_MAPPED_FILES
518 # define HAVE_MMAP 1
519 # endif
520 #endif
523 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
524 large blocks. This is currently only possible on Linux with
525 kernel versions newer than 1.3.77.
528 #ifndef HAVE_MREMAP
529 #define HAVE_MREMAP defined(__linux__) && !defined(__arm__)
530 #endif
532 #if HAVE_MMAP
534 #include <unistd.h>
535 #include <fcntl.h>
536 #include <sys/mman.h>
538 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
539 #define MAP_ANONYMOUS MAP_ANON
540 #endif
541 #if !defined(MAP_FAILED)
542 #define MAP_FAILED ((char*)-1)
543 #endif
545 #ifndef MAP_NORESERVE
546 # ifdef MAP_AUTORESRV
547 # define MAP_NORESERVE MAP_AUTORESRV
548 # else
549 # define MAP_NORESERVE 0
550 # endif
551 #endif
553 #endif /* HAVE_MMAP */
556 Access to system page size. To the extent possible, this malloc
557 manages memory from the system in page-size units.
559 The following mechanics for getpagesize were adapted from
560 bsd/gnu getpagesize.h
563 #ifndef malloc_getpagesize
564 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
565 # ifndef _SC_PAGE_SIZE
566 # define _SC_PAGE_SIZE _SC_PAGESIZE
567 # endif
568 # endif
569 # ifdef _SC_PAGE_SIZE
570 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
571 # else
572 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
573 extern size_t getpagesize();
574 # define malloc_getpagesize getpagesize()
575 # else
576 # include <sys/param.h>
577 # ifdef EXEC_PAGESIZE
578 # define malloc_getpagesize EXEC_PAGESIZE
579 # else
580 # ifdef NBPG
581 # ifndef CLSIZE
582 # define malloc_getpagesize NBPG
583 # else
584 # define malloc_getpagesize (NBPG * CLSIZE)
585 # endif
586 # else
587 # ifdef NBPC
588 # define malloc_getpagesize NBPC
589 # else
590 # ifdef PAGESIZE
591 # define malloc_getpagesize PAGESIZE
592 # else
593 # define malloc_getpagesize (4096) /* just guess */
594 # endif
595 # endif
596 # endif
597 # endif
598 # endif
599 # endif
600 #endif
606 This version of malloc supports the standard SVID/XPG mallinfo
607 routine that returns a struct containing the same kind of
608 information you can get from malloc_stats. It should work on
609 any SVID/XPG compliant system that has a /usr/include/malloc.h
610 defining struct mallinfo. (If you'd like to install such a thing
611 yourself, cut out the preliminary declarations as described above
612 and below and save them in a malloc.h file. But there's no
613 compelling reason to bother to do this.)
615 The main declaration needed is the mallinfo struct that is returned
616 (by-copy) by mallinfo(). The SVID/XPG malloinfo struct contains a
617 bunch of fields, most of which are not even meaningful in this
618 version of malloc. Some of these fields are are instead filled by
619 mallinfo() with other numbers that might possibly be of interest.
621 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
622 /usr/include/malloc.h file that includes a declaration of struct
623 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
624 version is declared below. These must be precisely the same for
625 mallinfo() to work.
629 /* #define HAVE_USR_INCLUDE_MALLOC_H */
631 #if HAVE_USR_INCLUDE_MALLOC_H
632 # include "/usr/include/malloc.h"
633 #else
634 # ifdef _LIBC
635 # include "malloc.h"
636 # else
637 # include "ptmalloc.h"
638 # endif
639 #endif
643 #ifndef DEFAULT_TRIM_THRESHOLD
644 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
645 #endif
648 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
649 to keep before releasing via malloc_trim in free().
651 Automatic trimming is mainly useful in long-lived programs.
652 Because trimming via sbrk can be slow on some systems, and can
653 sometimes be wasteful (in cases where programs immediately
654 afterward allocate more large chunks) the value should be high
655 enough so that your overall system performance would improve by
656 releasing.
658 The trim threshold and the mmap control parameters (see below)
659 can be traded off with one another. Trimming and mmapping are
660 two different ways of releasing unused memory back to the
661 system. Between these two, it is often possible to keep
662 system-level demands of a long-lived program down to a bare
663 minimum. For example, in one test suite of sessions measuring
664 the XF86 X server on Linux, using a trim threshold of 128K and a
665 mmap threshold of 192K led to near-minimal long term resource
666 consumption.
668 If you are using this malloc in a long-lived program, it should
669 pay to experiment with these values. As a rough guide, you
670 might set to a value close to the average size of a process
671 (program) running on your system. Releasing this much memory
672 would allow such a process to run in memory. Generally, it's
673 worth it to tune for trimming rather than memory mapping when a
674 program undergoes phases where several large chunks are
675 allocated and released in ways that can reuse each other's
676 storage, perhaps mixed with phases where there are no such
677 chunks at all. And in well-behaved long-lived programs,
678 controlling release of large blocks via trimming versus mapping
679 is usually faster.
681 However, in most programs, these parameters serve mainly as
682 protection against the system-level effects of carrying around
683 massive amounts of unneeded memory. Since frequent calls to
684 sbrk, mmap, and munmap otherwise degrade performance, the default
685 parameters are set to relatively high values that serve only as
686 safeguards.
688 The default trim value is high enough to cause trimming only in
689 fairly extreme (by current memory consumption standards) cases.
690 It must be greater than page size to have any useful effect. To
691 disable trimming completely, you can set to (unsigned long)(-1);
697 #ifndef DEFAULT_TOP_PAD
698 #define DEFAULT_TOP_PAD (0)
699 #endif
702 M_TOP_PAD is the amount of extra `padding' space to allocate or
703 retain whenever sbrk is called. It is used in two ways internally:
705 * When sbrk is called to extend the top of the arena to satisfy
706 a new malloc request, this much padding is added to the sbrk
707 request.
709 * When malloc_trim is called automatically from free(),
710 it is used as the `pad' argument.
712 In both cases, the actual amount of padding is rounded
713 so that the end of the arena is always a system page boundary.
715 The main reason for using padding is to avoid calling sbrk so
716 often. Having even a small pad greatly reduces the likelihood
717 that nearly every malloc request during program start-up (or
718 after trimming) will invoke sbrk, which needlessly wastes
719 time.
721 Automatic rounding-up to page-size units is normally sufficient
722 to avoid measurable overhead, so the default is 0. However, in
723 systems where sbrk is relatively slow, it can pay to increase
724 this value, at the expense of carrying around more memory than
725 the program needs.
730 #ifndef DEFAULT_MMAP_THRESHOLD
731 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
732 #endif
736 M_MMAP_THRESHOLD is the request size threshold for using mmap()
737 to service a request. Requests of at least this size that cannot
738 be allocated using already-existing space will be serviced via mmap.
739 (If enough normal freed space already exists it is used instead.)
741 Using mmap segregates relatively large chunks of memory so that
742 they can be individually obtained and released from the host
743 system. A request serviced through mmap is never reused by any
744 other request (at least not directly; the system may just so
745 happen to remap successive requests to the same locations).
747 Segregating space in this way has the benefit that mmapped space
748 can ALWAYS be individually released back to the system, which
749 helps keep the system level memory demands of a long-lived
750 program low. Mapped memory can never become `locked' between
751 other chunks, as can happen with normally allocated chunks, which
752 menas that even trimming via malloc_trim would not release them.
754 However, it has the disadvantages that:
756 1. The space cannot be reclaimed, consolidated, and then
757 used to service later requests, as happens with normal chunks.
758 2. It can lead to more wastage because of mmap page alignment
759 requirements
760 3. It causes malloc performance to be more dependent on host
761 system memory management support routines which may vary in
762 implementation quality and may impose arbitrary
763 limitations. Generally, servicing a request via normal
764 malloc steps is faster than going through a system's mmap.
766 All together, these considerations should lead you to use mmap
767 only for relatively large requests.
774 #ifndef DEFAULT_MMAP_MAX
775 #if HAVE_MMAP
776 #define DEFAULT_MMAP_MAX (1024)
777 #else
778 #define DEFAULT_MMAP_MAX (0)
779 #endif
780 #endif
783 M_MMAP_MAX is the maximum number of requests to simultaneously
784 service using mmap. This parameter exists because:
786 1. Some systems have a limited number of internal tables for
787 use by mmap.
788 2. In most systems, overreliance on mmap can degrade overall
789 performance.
790 3. If a program allocates many large regions, it is probably
791 better off using normal sbrk-based allocation routines that
792 can reclaim and reallocate normal heap memory. Using a
793 small value allows transition into this mode after the
794 first few allocations.
796 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
797 the default value is 0, and attempts to set it to non-zero values
798 in mallopt will fail.
803 #ifndef DEFAULT_CHECK_ACTION
804 #define DEFAULT_CHECK_ACTION 1
805 #endif
807 /* What to do if the standard debugging hooks are in place and a
808 corrupt pointer is detected: do nothing (0), print an error message
809 (1), or call abort() (2). */
813 #define HEAP_MIN_SIZE (32*1024)
814 #define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
816 /* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
817 that are dynamically created for multi-threaded programs. The
818 maximum size must be a power of two, for fast determination of
819 which heap belongs to a chunk. It should be much larger than
820 the mmap threshold, so that requests with a size just below that
821 threshold can be fulfilled without creating too many heaps.
826 #ifndef THREAD_STATS
827 #define THREAD_STATS 0
828 #endif
830 /* If THREAD_STATS is non-zero, some statistics on mutex locking are
831 computed. */
834 /* Macro to set errno. */
835 #ifndef __set_errno
836 # define __set_errno(val) errno = (val)
837 #endif
839 /* On some platforms we can compile internal, not exported functions better.
840 Let the environment provide a macro and define it to be empty if it
841 is not available. */
842 #ifndef internal_function
843 # define internal_function
844 #endif
849 Special defines for the Linux/GNU C library.
854 #ifdef _LIBC
856 #if __STD_C
858 Void_t * __default_morecore (ptrdiff_t);
859 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
861 #else
863 Void_t * __default_morecore ();
864 Void_t *(*__morecore)() = __default_morecore;
866 #endif
868 #define MORECORE (*__morecore)
869 #define MORECORE_FAILURE 0
871 #ifndef MORECORE_CLEARS
872 #define MORECORE_CLEARS 1
873 #endif
875 static size_t __libc_pagesize;
877 #define mmap __mmap
878 #define munmap __munmap
879 #define mremap __mremap
880 #define mprotect __mprotect
881 #undef malloc_getpagesize
882 #define malloc_getpagesize __libc_pagesize
884 #else /* _LIBC */
886 #if __STD_C
887 extern Void_t* sbrk(ptrdiff_t);
888 #else
889 extern Void_t* sbrk();
890 #endif
892 #ifndef MORECORE
893 #define MORECORE sbrk
894 #endif
896 #ifndef MORECORE_FAILURE
897 #define MORECORE_FAILURE -1
898 #endif
900 #ifndef MORECORE_CLEARS
901 #define MORECORE_CLEARS 1
902 #endif
904 #endif /* _LIBC */
906 #ifdef _LIBC
908 #define cALLOc __libc_calloc
909 #define fREe __libc_free
910 #define mALLOc __libc_malloc
911 #define mEMALIGn __libc_memalign
912 #define rEALLOc __libc_realloc
913 #define vALLOc __libc_valloc
914 #define pvALLOc __libc_pvalloc
915 #define mALLINFo __libc_mallinfo
916 #define mALLOPt __libc_mallopt
917 #define mALLOC_STATs __malloc_stats
918 #define mALLOC_USABLE_SIZe __malloc_usable_size
919 #define mALLOC_TRIm __malloc_trim
920 #define mALLOC_GET_STATe __malloc_get_state
921 #define mALLOC_SET_STATe __malloc_set_state
923 #else
925 #define cALLOc calloc
926 #define fREe free
927 #define mALLOc malloc
928 #define mEMALIGn memalign
929 #define rEALLOc realloc
930 #define vALLOc valloc
931 #define pvALLOc pvalloc
932 #define mALLINFo mallinfo
933 #define mALLOPt mallopt
934 #define mALLOC_STATs malloc_stats
935 #define mALLOC_USABLE_SIZe malloc_usable_size
936 #define mALLOC_TRIm malloc_trim
937 #define mALLOC_GET_STATe malloc_get_state
938 #define mALLOC_SET_STATe malloc_set_state
940 #endif
942 /* Public routines */
944 #if __STD_C
946 #ifndef _LIBC
947 void ptmalloc_init(void);
948 #endif
949 Void_t* mALLOc(size_t);
950 void fREe(Void_t*);
951 Void_t* rEALLOc(Void_t*, size_t);
952 Void_t* mEMALIGn(size_t, size_t);
953 Void_t* vALLOc(size_t);
954 Void_t* pvALLOc(size_t);
955 Void_t* cALLOc(size_t, size_t);
956 void cfree(Void_t*);
957 int mALLOC_TRIm(size_t);
958 size_t mALLOC_USABLE_SIZe(Void_t*);
959 void mALLOC_STATs(void);
960 int mALLOPt(int, int);
961 struct mallinfo mALLINFo(void);
962 Void_t* mALLOC_GET_STATe(void);
963 int mALLOC_SET_STATe(Void_t*);
965 #else /* !__STD_C */
967 #ifndef _LIBC
968 void ptmalloc_init();
969 #endif
970 Void_t* mALLOc();
971 void fREe();
972 Void_t* rEALLOc();
973 Void_t* mEMALIGn();
974 Void_t* vALLOc();
975 Void_t* pvALLOc();
976 Void_t* cALLOc();
977 void cfree();
978 int mALLOC_TRIm();
979 size_t mALLOC_USABLE_SIZe();
980 void mALLOC_STATs();
981 int mALLOPt();
982 struct mallinfo mALLINFo();
983 Void_t* mALLOC_GET_STATe();
984 int mALLOC_SET_STATe();
986 #endif /* __STD_C */
989 #ifdef __cplusplus
990 }; /* end of extern "C" */
991 #endif
993 #if !defined(NO_THREADS) && !HAVE_MMAP
994 "Can't have threads support without mmap"
995 #endif
999 Type declarations
1003 struct malloc_chunk
1005 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1006 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1007 struct malloc_chunk* fd; /* double links -- used only if free. */
1008 struct malloc_chunk* bk;
1011 typedef struct malloc_chunk* mchunkptr;
1015 malloc_chunk details:
1017 (The following includes lightly edited explanations by Colin Plumb.)
1019 Chunks of memory are maintained using a `boundary tag' method as
1020 described in e.g., Knuth or Standish. (See the paper by Paul
1021 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1022 survey of such techniques.) Sizes of free chunks are stored both
1023 in the front of each chunk and at the end. This makes
1024 consolidating fragmented chunks into bigger chunks very fast. The
1025 size fields also hold bits representing whether chunks are free or
1026 in use.
1028 An allocated chunk looks like this:
1031 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1032 | Size of previous chunk, if allocated | |
1033 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1034 | Size of chunk, in bytes |P|
1035 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1036 | User data starts here... .
1038 . (malloc_usable_space() bytes) .
1040 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1041 | Size of chunk |
1042 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1045 Where "chunk" is the front of the chunk for the purpose of most of
1046 the malloc code, but "mem" is the pointer that is returned to the
1047 user. "Nextchunk" is the beginning of the next contiguous chunk.
1049 Chunks always begin on even word boundaries, so the mem portion
1050 (which is returned to the user) is also on an even word boundary, and
1051 thus double-word aligned.
1053 Free chunks are stored in circular doubly-linked lists, and look like this:
1055 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1056 | Size of previous chunk |
1057 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1058 `head:' | Size of chunk, in bytes |P|
1059 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1060 | Forward pointer to next chunk in list |
1061 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1062 | Back pointer to previous chunk in list |
1063 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1064 | Unused space (may be 0 bytes long) .
1067 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1068 `foot:' | Size of chunk, in bytes |
1069 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1071 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1072 chunk size (which is always a multiple of two words), is an in-use
1073 bit for the *previous* chunk. If that bit is *clear*, then the
1074 word before the current chunk size contains the previous chunk
1075 size, and can be used to find the front of the previous chunk.
1076 (The very first chunk allocated always has this bit set,
1077 preventing access to non-existent (or non-owned) memory.)
1079 Note that the `foot' of the current chunk is actually represented
1080 as the prev_size of the NEXT chunk. (This makes it easier to
1081 deal with alignments etc).
1083 The two exceptions to all this are
1085 1. The special chunk `top', which doesn't bother using the
1086 trailing size field since there is no
1087 next contiguous chunk that would have to index off it. (After
1088 initialization, `top' is forced to always exist. If it would
1089 become less than MINSIZE bytes long, it is replenished via
1090 malloc_extend_top.)
1092 2. Chunks allocated via mmap, which have the second-lowest-order
1093 bit (IS_MMAPPED) set in their size fields. Because they are
1094 never merged or traversed from any other chunk, they have no
1095 foot size or inuse information.
1097 Available chunks are kept in any of several places (all declared below):
1099 * `av': An array of chunks serving as bin headers for consolidated
1100 chunks. Each bin is doubly linked. The bins are approximately
1101 proportionally (log) spaced. There are a lot of these bins
1102 (128). This may look excessive, but works very well in
1103 practice. All procedures maintain the invariant that no
1104 consolidated chunk physically borders another one. Chunks in
1105 bins are kept in size order, with ties going to the
1106 approximately least recently used chunk.
1108 The chunks in each bin are maintained in decreasing sorted order by
1109 size. This is irrelevant for the small bins, which all contain
1110 the same-sized chunks, but facilitates best-fit allocation for
1111 larger chunks. (These lists are just sequential. Keeping them in
1112 order almost never requires enough traversal to warrant using
1113 fancier ordered data structures.) Chunks of the same size are
1114 linked with the most recently freed at the front, and allocations
1115 are taken from the back. This results in LRU or FIFO allocation
1116 order, which tends to give each chunk an equal opportunity to be
1117 consolidated with adjacent freed chunks, resulting in larger free
1118 chunks and less fragmentation.
1120 * `top': The top-most available chunk (i.e., the one bordering the
1121 end of available memory) is treated specially. It is never
1122 included in any bin, is used only if no other chunk is
1123 available, and is released back to the system if it is very
1124 large (see M_TRIM_THRESHOLD).
1126 * `last_remainder': A bin holding only the remainder of the
1127 most recently split (non-top) chunk. This bin is checked
1128 before other non-fitting chunks, so as to provide better
1129 locality for runs of sequentially allocated chunks.
1131 * Implicitly, through the host system's memory mapping tables.
1132 If supported, requests greater than a threshold are usually
1133 serviced via calls to mmap, and then later released via munmap.
1138 Bins
1140 The bins are an array of pairs of pointers serving as the
1141 heads of (initially empty) doubly-linked lists of chunks, laid out
1142 in a way so that each pair can be treated as if it were in a
1143 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1144 and chunks are the same).
1146 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1147 8 bytes apart. Larger bins are approximately logarithmically
1148 spaced. (See the table below.)
1150 Bin layout:
1152 64 bins of size 8
1153 32 bins of size 64
1154 16 bins of size 512
1155 8 bins of size 4096
1156 4 bins of size 32768
1157 2 bins of size 262144
1158 1 bin of size what's left
1160 There is actually a little bit of slop in the numbers in bin_index
1161 for the sake of speed. This makes no difference elsewhere.
1163 The special chunks `top' and `last_remainder' get their own bins,
1164 (this is implemented via yet more trickery with the av array),
1165 although `top' is never properly linked to its bin since it is
1166 always handled specially.
1170 #define NAV 128 /* number of bins */
1172 typedef struct malloc_chunk* mbinptr;
1174 /* An arena is a configuration of malloc_chunks together with an array
1175 of bins. With multiple threads, it must be locked via a mutex
1176 before changing its data structures. One or more `heaps' are
1177 associated with each arena, except for the main_arena, which is
1178 associated only with the `main heap', i.e. the conventional free
1179 store obtained with calls to MORECORE() (usually sbrk). The `av'
1180 array is never mentioned directly in the code, but instead used via
1181 bin access macros. */
1183 typedef struct _arena {
1184 mbinptr av[2*NAV + 2];
1185 struct _arena *next;
1186 size_t size;
1187 #if THREAD_STATS
1188 long stat_lock_direct, stat_lock_loop, stat_lock_wait;
1189 #endif
1190 mutex_t mutex;
1191 } arena;
1194 /* A heap is a single contiguous memory region holding (coalesceable)
1195 malloc_chunks. It is allocated with mmap() and always starts at an
1196 address aligned to HEAP_MAX_SIZE. Not used unless compiling for
1197 multiple threads. */
1199 typedef struct _heap_info {
1200 arena *ar_ptr; /* Arena for this heap. */
1201 struct _heap_info *prev; /* Previous heap. */
1202 size_t size; /* Current size in bytes. */
1203 size_t pad; /* Make sure the following data is properly aligned. */
1204 } heap_info;
1208 Static functions (forward declarations)
1211 #if __STD_C
1213 static void chunk_free(arena *ar_ptr, mchunkptr p) internal_function;
1214 static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size)
1215 internal_function;
1216 static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
1217 INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb)
1218 internal_function;
1219 static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
1220 size_t alignment) internal_function;
1221 static int main_trim(size_t pad) internal_function;
1222 #ifndef NO_THREADS
1223 static int heap_trim(heap_info *heap, size_t pad) internal_function;
1224 #endif
1225 #if defined _LIBC || defined MALLOC_HOOKS
1226 static Void_t* malloc_check(size_t sz, const Void_t *caller);
1227 static void free_check(Void_t* mem, const Void_t *caller);
1228 static Void_t* realloc_check(Void_t* oldmem, size_t bytes,
1229 const Void_t *caller);
1230 static Void_t* memalign_check(size_t alignment, size_t bytes,
1231 const Void_t *caller);
1232 #ifndef NO_THREADS
1233 static Void_t* malloc_starter(size_t sz, const Void_t *caller);
1234 static void free_starter(Void_t* mem, const Void_t *caller);
1235 static Void_t* malloc_atfork(size_t sz, const Void_t *caller);
1236 static void free_atfork(Void_t* mem, const Void_t *caller);
1237 #endif
1238 #endif
1240 #else
1242 static void chunk_free();
1243 static mchunkptr chunk_alloc();
1244 static mchunkptr chunk_realloc();
1245 static mchunkptr chunk_align();
1246 static int main_trim();
1247 #ifndef NO_THREADS
1248 static int heap_trim();
1249 #endif
1250 #if defined _LIBC || defined MALLOC_HOOKS
1251 static Void_t* malloc_check();
1252 static void free_check();
1253 static Void_t* realloc_check();
1254 static Void_t* memalign_check();
1255 #ifndef NO_THREADS
1256 static Void_t* malloc_starter();
1257 static void free_starter();
1258 static Void_t* malloc_atfork();
1259 static void free_atfork();
1260 #endif
1261 #endif
1263 #endif
1267 /* sizes, alignments */
1269 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1270 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1271 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1272 #define MINSIZE (sizeof(struct malloc_chunk))
1274 /* conversion from malloc headers to user pointers, and back */
1276 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1277 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1279 /* pad request bytes into a usable size, return non-zero on overflow */
1281 #define request2size(req, nb) \
1282 ((nb = (req) + (SIZE_SZ + MALLOC_ALIGN_MASK)),\
1283 ((long)nb <= 0 || nb < (INTERNAL_SIZE_T) (req) \
1284 ? (__set_errno (ENOMEM), 1) \
1285 : ((nb < (MINSIZE + MALLOC_ALIGN_MASK) \
1286 ? (nb = MINSIZE) : (nb &= ~MALLOC_ALIGN_MASK)), 0)))
1288 /* Check if m has acceptable alignment */
1290 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1296 Physical chunk operations
1300 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1302 #define PREV_INUSE 0x1
1304 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1306 #define IS_MMAPPED 0x2
1308 /* Bits to mask off when extracting size */
1310 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1313 /* Ptr to next physical malloc_chunk. */
1315 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1317 /* Ptr to previous physical malloc_chunk */
1319 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1322 /* Treat space at ptr + offset as a chunk */
1324 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1330 Dealing with use bits
1333 /* extract p's inuse bit */
1335 #define inuse(p) \
1336 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1338 /* extract inuse bit of previous chunk */
1340 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1342 /* check for mmap()'ed chunk */
1344 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1346 /* set/clear chunk as in use without otherwise disturbing */
1348 #define set_inuse(p) \
1349 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1351 #define clear_inuse(p) \
1352 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1354 /* check/set/clear inuse bits in known places */
1356 #define inuse_bit_at_offset(p, s)\
1357 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1359 #define set_inuse_bit_at_offset(p, s)\
1360 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1362 #define clear_inuse_bit_at_offset(p, s)\
1363 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1369 Dealing with size fields
1372 /* Get size, ignoring use bits */
1374 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1376 /* Set size at head, without disturbing its use bit */
1378 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1380 /* Set size/use ignoring previous bits in header */
1382 #define set_head(p, s) ((p)->size = (s))
1384 /* Set size at footer (only when chunk is not in use) */
1386 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
1392 /* access macros */
1394 #define bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ))
1395 #define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i))
1396 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1397 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1400 The first 2 bins are never indexed. The corresponding av cells are instead
1401 used for bookkeeping. This is not to save space, but to simplify
1402 indexing, maintain locality, and avoid some initialization tests.
1405 #define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1406 #define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1407 #define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1410 Because top initially points to its own bin with initial
1411 zero size, thus forcing extension on the first malloc request,
1412 we avoid having any special code in malloc to check whether
1413 it even exists yet. But we still need to in malloc_extend_top.
1416 #define initial_top(a) ((mchunkptr)bin_at(a, 0))
1420 /* field-extraction macros */
1422 #define first(b) ((b)->fd)
1423 #define last(b) ((b)->bk)
1426 Indexing into bins
1429 #define bin_index(sz) \
1430 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3):\
1431 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6):\
1432 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9):\
1433 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12):\
1434 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15):\
1435 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
1436 126)
1438 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1439 identically sized chunks. This is exploited in malloc.
1442 #define MAX_SMALLBIN 63
1443 #define MAX_SMALLBIN_SIZE 512
1444 #define SMALLBIN_WIDTH 8
1446 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1449 Requests are `small' if both the corresponding and the next bin are small
1452 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1457 To help compensate for the large number of bins, a one-level index
1458 structure is used for bin-by-bin searching. `binblocks' is a
1459 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1460 have any (possibly) non-empty bins, so they can be skipped over
1461 all at once during during traversals. The bits are NOT always
1462 cleared as soon as all bins in a block are empty, but instead only
1463 when all are noticed to be empty during traversal in malloc.
1466 #define BINBLOCKWIDTH 4 /* bins per block */
1468 /* bin<->block macros */
1470 #define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1471 #define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1472 #define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
1477 /* Static bookkeeping data */
1479 /* Helper macro to initialize bins */
1480 #define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i)
1482 static arena main_arena = {
1484 0, 0,
1485 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1486 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1487 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1488 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1489 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1490 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1491 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1492 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1493 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1494 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1495 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1496 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1497 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1498 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1499 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1500 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1502 &main_arena, /* next */
1503 0, /* size */
1504 #if THREAD_STATS
1505 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
1506 #endif
1507 MUTEX_INITIALIZER /* mutex */
1510 #undef IAV
1512 /* Thread specific data */
1514 #ifndef NO_THREADS
1515 static tsd_key_t arena_key;
1516 static mutex_t list_lock = MUTEX_INITIALIZER;
1517 #endif
1519 #if THREAD_STATS
1520 static int stat_n_heaps = 0;
1521 #define THREAD_STAT(x) x
1522 #else
1523 #define THREAD_STAT(x) do ; while(0)
1524 #endif
1526 /* variables holding tunable values */
1528 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1529 static unsigned long top_pad = DEFAULT_TOP_PAD;
1530 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1531 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1532 static int check_action = DEFAULT_CHECK_ACTION;
1534 /* The first value returned from sbrk */
1535 static char* sbrk_base = (char*)(-1);
1537 /* The maximum memory obtained from system via sbrk */
1538 static unsigned long max_sbrked_mem = 0;
1540 /* The maximum via either sbrk or mmap (too difficult to track with threads) */
1541 #ifdef NO_THREADS
1542 static unsigned long max_total_mem = 0;
1543 #endif
1545 /* The total memory obtained from system via sbrk */
1546 #define sbrked_mem (main_arena.size)
1548 /* Tracking mmaps */
1550 static unsigned int n_mmaps = 0;
1551 static unsigned int max_n_mmaps = 0;
1552 static unsigned long mmapped_mem = 0;
1553 static unsigned long max_mmapped_mem = 0;
1557 #ifndef _LIBC
1558 #define weak_variable
1559 #else
1560 /* In GNU libc we want the hook variables to be weak definitions to
1561 avoid a problem with Emacs. */
1562 #define weak_variable weak_function
1563 #endif
1565 /* Already initialized? */
1566 int __malloc_initialized = -1;
1569 #ifndef NO_THREADS
1571 /* The following two functions are registered via thread_atfork() to
1572 make sure that the mutexes remain in a consistent state in the
1573 fork()ed version of a thread. Also adapt the malloc and free hooks
1574 temporarily, because the `atfork' handler mechanism may use
1575 malloc/free internally (e.g. in LinuxThreads). */
1577 #if defined _LIBC || defined MALLOC_HOOKS
1578 static __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size,
1579 const __malloc_ptr_t));
1580 static void (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1581 const __malloc_ptr_t));
1582 static Void_t* save_arena;
1583 #endif
1585 static void
1586 ptmalloc_lock_all __MALLOC_P((void))
1588 arena *ar_ptr;
1590 (void)mutex_lock(&list_lock);
1591 for(ar_ptr = &main_arena;;) {
1592 (void)mutex_lock(&ar_ptr->mutex);
1593 ar_ptr = ar_ptr->next;
1594 if(ar_ptr == &main_arena) break;
1596 #if defined _LIBC || defined MALLOC_HOOKS
1597 save_malloc_hook = __malloc_hook;
1598 save_free_hook = __free_hook;
1599 __malloc_hook = malloc_atfork;
1600 __free_hook = free_atfork;
1601 /* Only the current thread may perform malloc/free calls now. */
1602 tsd_getspecific(arena_key, save_arena);
1603 tsd_setspecific(arena_key, (Void_t*)0);
1604 #endif
1607 static void
1608 ptmalloc_unlock_all __MALLOC_P((void))
1610 arena *ar_ptr;
1612 #if defined _LIBC || defined MALLOC_HOOKS
1613 tsd_setspecific(arena_key, save_arena);
1614 __malloc_hook = save_malloc_hook;
1615 __free_hook = save_free_hook;
1616 #endif
1617 for(ar_ptr = &main_arena;;) {
1618 (void)mutex_unlock(&ar_ptr->mutex);
1619 ar_ptr = ar_ptr->next;
1620 if(ar_ptr == &main_arena) break;
1622 (void)mutex_unlock(&list_lock);
1625 static void
1626 ptmalloc_init_all __MALLOC_P((void))
1628 arena *ar_ptr;
1630 #if defined _LIBC || defined MALLOC_HOOKS
1631 tsd_setspecific(arena_key, save_arena);
1632 __malloc_hook = save_malloc_hook;
1633 __free_hook = save_free_hook;
1634 #endif
1635 for(ar_ptr = &main_arena;;) {
1636 (void)mutex_init(&ar_ptr->mutex);
1637 ar_ptr = ar_ptr->next;
1638 if(ar_ptr == &main_arena) break;
1640 (void)mutex_init(&list_lock);
1643 #endif /* !defined NO_THREADS */
1645 /* Initialization routine. */
1646 #if defined(_LIBC)
1647 #if 0
1648 static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
1649 #endif
1651 static void
1652 ptmalloc_init __MALLOC_P((void))
1653 #else
1654 void
1655 ptmalloc_init __MALLOC_P((void))
1656 #endif
1658 #if defined _LIBC || defined MALLOC_HOOKS
1659 const char* s;
1660 #endif
1662 if(__malloc_initialized >= 0) return;
1663 __malloc_initialized = 0;
1664 #ifdef _LIBC
1665 __libc_pagesize = __getpagesize();
1666 #endif
1667 #ifndef NO_THREADS
1668 #if defined _LIBC || defined MALLOC_HOOKS
1669 /* With some threads implementations, creating thread-specific data
1670 or initializing a mutex may call malloc() itself. Provide a
1671 simple starter version (realloc() won't work). */
1672 save_malloc_hook = __malloc_hook;
1673 save_free_hook = __free_hook;
1674 __malloc_hook = malloc_starter;
1675 __free_hook = free_starter;
1676 #endif
1677 #ifdef _LIBC
1678 /* Initialize the pthreads interface. */
1679 if (__pthread_initialize != NULL)
1680 __pthread_initialize();
1681 #endif
1682 mutex_init(&main_arena.mutex);
1683 mutex_init(&list_lock);
1684 tsd_key_create(&arena_key, NULL);
1685 tsd_setspecific(arena_key, (Void_t *)&main_arena);
1686 thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_init_all);
1687 #endif /* !defined NO_THREADS */
1688 #if defined _LIBC || defined MALLOC_HOOKS
1689 if((s = __secure_getenv("MALLOC_TRIM_THRESHOLD_")))
1690 mALLOPt(M_TRIM_THRESHOLD, atoi(s));
1691 if((s = __secure_getenv("MALLOC_TOP_PAD_")))
1692 mALLOPt(M_TOP_PAD, atoi(s));
1693 if((s = __secure_getenv("MALLOC_MMAP_THRESHOLD_")))
1694 mALLOPt(M_MMAP_THRESHOLD, atoi(s));
1695 if((s = __secure_getenv("MALLOC_MMAP_MAX_")))
1696 mALLOPt(M_MMAP_MAX, atoi(s));
1697 s = getenv("MALLOC_CHECK_");
1698 #ifndef NO_THREADS
1699 __malloc_hook = save_malloc_hook;
1700 __free_hook = save_free_hook;
1701 #endif
1702 if(s) {
1703 if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
1704 __malloc_check_init();
1706 if(__malloc_initialize_hook != NULL)
1707 (*__malloc_initialize_hook)();
1708 #endif
1709 __malloc_initialized = 1;
1712 /* There are platforms (e.g. Hurd) with a link-time hook mechanism. */
1713 #ifdef thread_atfork_static
1714 thread_atfork_static(ptmalloc_lock_all, ptmalloc_unlock_all, \
1715 ptmalloc_init_all)
1716 #endif
1718 #if defined _LIBC || defined MALLOC_HOOKS
1720 /* Hooks for debugging versions. The initial hooks just call the
1721 initialization routine, then do the normal work. */
1723 static Void_t*
1724 #if __STD_C
1725 malloc_hook_ini(size_t sz, const __malloc_ptr_t caller)
1726 #else
1727 malloc_hook_ini(sz, caller)
1728 size_t sz; const __malloc_ptr_t caller;
1729 #endif
1731 __malloc_hook = NULL;
1732 ptmalloc_init();
1733 return mALLOc(sz);
1736 static Void_t*
1737 #if __STD_C
1738 realloc_hook_ini(Void_t* ptr, size_t sz, const __malloc_ptr_t caller)
1739 #else
1740 realloc_hook_ini(ptr, sz, caller)
1741 Void_t* ptr; size_t sz; const __malloc_ptr_t caller;
1742 #endif
1744 __malloc_hook = NULL;
1745 __realloc_hook = NULL;
1746 ptmalloc_init();
1747 return rEALLOc(ptr, sz);
1750 static Void_t*
1751 #if __STD_C
1752 memalign_hook_ini(size_t sz, size_t alignment, const __malloc_ptr_t caller)
1753 #else
1754 memalign_hook_ini(sz, alignment, caller)
1755 size_t sz; size_t alignment; const __malloc_ptr_t caller;
1756 #endif
1758 __memalign_hook = NULL;
1759 ptmalloc_init();
1760 return mEMALIGn(sz, alignment);
1763 void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
1764 void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1765 const __malloc_ptr_t)) = NULL;
1766 __malloc_ptr_t weak_variable (*__malloc_hook)
1767 __MALLOC_P ((size_t __size, const __malloc_ptr_t)) = malloc_hook_ini;
1768 __malloc_ptr_t weak_variable (*__realloc_hook)
1769 __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size, const __malloc_ptr_t))
1770 = realloc_hook_ini;
1771 __malloc_ptr_t weak_variable (*__memalign_hook)
1772 __MALLOC_P ((size_t __size, size_t __alignment, const __malloc_ptr_t))
1773 = memalign_hook_ini;
1774 void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
1776 /* Whether we are using malloc checking. */
1777 static int using_malloc_checking;
1779 /* A flag that is set by malloc_set_state, to signal that malloc checking
1780 must not be enabled on the request from the user (via the MALLOC_CHECK_
1781 environment variable). It is reset by __malloc_check_init to tell
1782 malloc_set_state that the user has requested malloc checking.
1784 The purpose of this flag is to make sure that malloc checking is not
1785 enabled when the heap to be restored was constructed without malloc
1786 checking, and thus does not contain the required magic bytes.
1787 Otherwise the heap would be corrupted by calls to free and realloc. If
1788 it turns out that the heap was created with malloc checking and the
1789 user has requested it malloc_set_state just calls __malloc_check_init
1790 again to enable it. On the other hand, reusing such a heap without
1791 further malloc checking is safe. */
1792 static int disallow_malloc_check;
1794 /* Activate a standard set of debugging hooks. */
1795 void
1796 __malloc_check_init()
1798 if (disallow_malloc_check) {
1799 disallow_malloc_check = 0;
1800 return;
1802 using_malloc_checking = 1;
1803 __malloc_hook = malloc_check;
1804 __free_hook = free_check;
1805 __realloc_hook = realloc_check;
1806 __memalign_hook = memalign_check;
1807 if(check_action & 1)
1808 fprintf(stderr, "malloc: using debugging hooks\n");
1811 #endif
1817 /* Routines dealing with mmap(). */
1819 #if HAVE_MMAP
1821 #ifndef MAP_ANONYMOUS
1823 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1825 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1826 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1827 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1828 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1830 #else
1832 #define MMAP(addr, size, prot, flags) \
1833 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1835 #endif
1837 #if defined __GNUC__ && __GNUC__ >= 2
1838 /* This function is only called from one place, inline it. */
1839 inline
1840 #endif
1841 static mchunkptr
1842 internal_function
1843 #if __STD_C
1844 mmap_chunk(size_t size)
1845 #else
1846 mmap_chunk(size) size_t size;
1847 #endif
1849 size_t page_mask = malloc_getpagesize - 1;
1850 mchunkptr p;
1852 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1854 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1855 * there is no following chunk whose prev_size field could be used.
1857 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1859 p = (mchunkptr)MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE);
1860 if(p == (mchunkptr) MAP_FAILED) return 0;
1862 n_mmaps++;
1863 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1865 /* We demand that eight bytes into a page must be 8-byte aligned. */
1866 assert(aligned_OK(chunk2mem(p)));
1868 /* The offset to the start of the mmapped region is stored
1869 * in the prev_size field of the chunk; normally it is zero,
1870 * but that can be changed in memalign().
1872 p->prev_size = 0;
1873 set_head(p, size|IS_MMAPPED);
1875 mmapped_mem += size;
1876 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1877 max_mmapped_mem = mmapped_mem;
1878 #ifdef NO_THREADS
1879 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1880 max_total_mem = mmapped_mem + sbrked_mem;
1881 #endif
1882 return p;
1885 static void
1886 internal_function
1887 #if __STD_C
1888 munmap_chunk(mchunkptr p)
1889 #else
1890 munmap_chunk(p) mchunkptr p;
1891 #endif
1893 INTERNAL_SIZE_T size = chunksize(p);
1894 int ret;
1896 assert (chunk_is_mmapped(p));
1897 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1898 assert((n_mmaps > 0));
1899 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1901 n_mmaps--;
1902 mmapped_mem -= (size + p->prev_size);
1904 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1906 /* munmap returns non-zero on failure */
1907 assert(ret == 0);
1910 #if HAVE_MREMAP
1912 static mchunkptr
1913 internal_function
1914 #if __STD_C
1915 mremap_chunk(mchunkptr p, size_t new_size)
1916 #else
1917 mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1918 #endif
1920 size_t page_mask = malloc_getpagesize - 1;
1921 INTERNAL_SIZE_T offset = p->prev_size;
1922 INTERNAL_SIZE_T size = chunksize(p);
1923 char *cp;
1925 assert (chunk_is_mmapped(p));
1926 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1927 assert((n_mmaps > 0));
1928 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1930 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1931 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1933 cp = (char *)mremap((char *)p - offset, size + offset, new_size,
1934 MREMAP_MAYMOVE);
1936 if (cp == MAP_FAILED) return 0;
1938 p = (mchunkptr)(cp + offset);
1940 assert(aligned_OK(chunk2mem(p)));
1942 assert((p->prev_size == offset));
1943 set_head(p, (new_size - offset)|IS_MMAPPED);
1945 mmapped_mem -= size + offset;
1946 mmapped_mem += new_size;
1947 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1948 max_mmapped_mem = mmapped_mem;
1949 #ifdef NO_THREADS
1950 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1951 max_total_mem = mmapped_mem + sbrked_mem;
1952 #endif
1953 return p;
1956 #endif /* HAVE_MREMAP */
1958 #endif /* HAVE_MMAP */
1962 /* Managing heaps and arenas (for concurrent threads) */
1964 #ifndef NO_THREADS
1966 /* Create a new heap. size is automatically rounded up to a multiple
1967 of the page size. */
1969 static heap_info *
1970 internal_function
1971 #if __STD_C
1972 new_heap(size_t size)
1973 #else
1974 new_heap(size) size_t size;
1975 #endif
1977 size_t page_mask = malloc_getpagesize - 1;
1978 char *p1, *p2;
1979 unsigned long ul;
1980 heap_info *h;
1982 if(size+top_pad < HEAP_MIN_SIZE)
1983 size = HEAP_MIN_SIZE;
1984 else if(size+top_pad <= HEAP_MAX_SIZE)
1985 size += top_pad;
1986 else if(size > HEAP_MAX_SIZE)
1987 return 0;
1988 else
1989 size = HEAP_MAX_SIZE;
1990 size = (size + page_mask) & ~page_mask;
1992 /* A memory region aligned to a multiple of HEAP_MAX_SIZE is needed.
1993 No swap space needs to be reserved for the following large
1994 mapping (on Linux, this is the case for all non-writable mappings
1995 anyway). */
1996 p1 = (char *)MMAP(0, HEAP_MAX_SIZE<<1, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE);
1997 if(p1 == MAP_FAILED)
1998 return 0;
1999 p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
2000 ul = p2 - p1;
2001 munmap(p1, ul);
2002 munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
2003 if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
2004 munmap(p2, HEAP_MAX_SIZE);
2005 return 0;
2007 h = (heap_info *)p2;
2008 h->size = size;
2009 THREAD_STAT(stat_n_heaps++);
2010 return h;
2013 /* Grow or shrink a heap. size is automatically rounded up to a
2014 multiple of the page size if it is positive. */
2016 static int
2017 #if __STD_C
2018 grow_heap(heap_info *h, long diff)
2019 #else
2020 grow_heap(h, diff) heap_info *h; long diff;
2021 #endif
2023 size_t page_mask = malloc_getpagesize - 1;
2024 long new_size;
2026 if(diff >= 0) {
2027 diff = (diff + page_mask) & ~page_mask;
2028 new_size = (long)h->size + diff;
2029 if(new_size > HEAP_MAX_SIZE)
2030 return -1;
2031 if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
2032 return -2;
2033 } else {
2034 new_size = (long)h->size + diff;
2035 if(new_size < (long)sizeof(*h))
2036 return -1;
2037 /* Try to re-map the extra heap space freshly to save memory, and
2038 make it inaccessible. */
2039 if((char *)MMAP((char *)h + new_size, -diff, PROT_NONE,
2040 MAP_PRIVATE|MAP_FIXED) == (char *) MAP_FAILED)
2041 return -2;
2043 h->size = new_size;
2044 return 0;
2047 /* Delete a heap. */
2049 #define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
2051 /* arena_get() acquires an arena and locks the corresponding mutex.
2052 First, try the one last locked successfully by this thread. (This
2053 is the common case and handled with a macro for speed.) Then, loop
2054 once over the circularly linked list of arenas. If no arena is
2055 readily available, create a new one. */
2057 #define arena_get(ptr, size) do { \
2058 Void_t *vptr = NULL; \
2059 ptr = (arena *)tsd_getspecific(arena_key, vptr); \
2060 if(ptr && !mutex_trylock(&ptr->mutex)) { \
2061 THREAD_STAT(++(ptr->stat_lock_direct)); \
2062 } else \
2063 ptr = arena_get2(ptr, (size)); \
2064 } while(0)
2066 static arena *
2067 internal_function
2068 #if __STD_C
2069 arena_get2(arena *a_tsd, size_t size)
2070 #else
2071 arena_get2(a_tsd, size) arena *a_tsd; size_t size;
2072 #endif
2074 arena *a;
2075 heap_info *h;
2076 char *ptr;
2077 int i;
2078 unsigned long misalign;
2080 if(!a_tsd)
2081 a = a_tsd = &main_arena;
2082 else {
2083 a = a_tsd->next;
2084 if(!a) {
2085 /* This can only happen while initializing the new arena. */
2086 (void)mutex_lock(&main_arena.mutex);
2087 THREAD_STAT(++(main_arena.stat_lock_wait));
2088 return &main_arena;
2092 /* Check the global, circularly linked list for available arenas. */
2093 repeat:
2094 do {
2095 if(!mutex_trylock(&a->mutex)) {
2096 THREAD_STAT(++(a->stat_lock_loop));
2097 tsd_setspecific(arena_key, (Void_t *)a);
2098 return a;
2100 a = a->next;
2101 } while(a != a_tsd);
2103 /* If not even the list_lock can be obtained, try again. This can
2104 happen during `atfork', or for example on systems where thread
2105 creation makes it temporarily impossible to obtain _any_
2106 locks. */
2107 if(mutex_trylock(&list_lock)) {
2108 a = a_tsd;
2109 goto repeat;
2111 (void)mutex_unlock(&list_lock);
2113 /* Nothing immediately available, so generate a new arena. */
2114 h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
2115 if(!h)
2116 return 0;
2117 a = h->ar_ptr = (arena *)(h+1);
2118 for(i=0; i<NAV; i++)
2119 init_bin(a, i);
2120 a->next = NULL;
2121 a->size = h->size;
2122 tsd_setspecific(arena_key, (Void_t *)a);
2123 mutex_init(&a->mutex);
2124 i = mutex_lock(&a->mutex); /* remember result */
2126 /* Set up the top chunk, with proper alignment. */
2127 ptr = (char *)(a + 1);
2128 misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
2129 if (misalign > 0)
2130 ptr += MALLOC_ALIGNMENT - misalign;
2131 top(a) = (mchunkptr)ptr;
2132 set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
2134 /* Add the new arena to the list. */
2135 (void)mutex_lock(&list_lock);
2136 a->next = main_arena.next;
2137 main_arena.next = a;
2138 (void)mutex_unlock(&list_lock);
2140 if(i) /* locking failed; keep arena for further attempts later */
2141 return 0;
2143 THREAD_STAT(++(a->stat_lock_loop));
2144 return a;
2147 /* find the heap and corresponding arena for a given ptr */
2149 #define heap_for_ptr(ptr) \
2150 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
2151 #define arena_for_ptr(ptr) \
2152 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
2153 &main_arena : heap_for_ptr(ptr)->ar_ptr)
2155 #else /* defined(NO_THREADS) */
2157 /* Without concurrent threads, there is only one arena. */
2159 #define arena_get(ptr, sz) (ptr = &main_arena)
2160 #define arena_for_ptr(ptr) (&main_arena)
2162 #endif /* !defined(NO_THREADS) */
2167 Debugging support
2170 #if MALLOC_DEBUG
2174 These routines make a number of assertions about the states
2175 of data structures that should be true at all times. If any
2176 are not true, it's very likely that a user program has somehow
2177 trashed memory. (It's also possible that there is a coding error
2178 in malloc. In which case, please report it!)
2181 #if __STD_C
2182 static void do_check_chunk(arena *ar_ptr, mchunkptr p)
2183 #else
2184 static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2185 #endif
2187 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2189 /* No checkable chunk is mmapped */
2190 assert(!chunk_is_mmapped(p));
2192 #ifndef NO_THREADS
2193 if(ar_ptr != &main_arena) {
2194 heap_info *heap = heap_for_ptr(p);
2195 assert(heap->ar_ptr == ar_ptr);
2196 if(p != top(ar_ptr))
2197 assert((char *)p + sz <= (char *)heap + heap->size);
2198 else
2199 assert((char *)p + sz == (char *)heap + heap->size);
2200 return;
2202 #endif
2204 /* Check for legal address ... */
2205 assert((char*)p >= sbrk_base);
2206 if (p != top(ar_ptr))
2207 assert((char*)p + sz <= (char*)top(ar_ptr));
2208 else
2209 assert((char*)p + sz <= sbrk_base + sbrked_mem);
2214 #if __STD_C
2215 static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
2216 #else
2217 static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2218 #endif
2220 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2221 mchunkptr next = chunk_at_offset(p, sz);
2223 do_check_chunk(ar_ptr, p);
2225 /* Check whether it claims to be free ... */
2226 assert(!inuse(p));
2228 /* Must have OK size and fields */
2229 assert((long)sz >= (long)MINSIZE);
2230 assert((sz & MALLOC_ALIGN_MASK) == 0);
2231 assert(aligned_OK(chunk2mem(p)));
2232 /* ... matching footer field */
2233 assert(next->prev_size == sz);
2234 /* ... and is fully consolidated */
2235 assert(prev_inuse(p));
2236 assert (next == top(ar_ptr) || inuse(next));
2238 /* ... and has minimally sane links */
2239 assert(p->fd->bk == p);
2240 assert(p->bk->fd == p);
2243 #if __STD_C
2244 static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
2245 #else
2246 static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2247 #endif
2249 mchunkptr next = next_chunk(p);
2250 do_check_chunk(ar_ptr, p);
2252 /* Check whether it claims to be in use ... */
2253 assert(inuse(p));
2255 /* ... whether its size is OK (it might be a fencepost) ... */
2256 assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
2258 /* ... and is surrounded by OK chunks.
2259 Since more things can be checked with free chunks than inuse ones,
2260 if an inuse chunk borders them and debug is on, it's worth doing them.
2262 if (!prev_inuse(p))
2264 mchunkptr prv = prev_chunk(p);
2265 assert(next_chunk(prv) == p);
2266 do_check_free_chunk(ar_ptr, prv);
2268 if (next == top(ar_ptr))
2270 assert(prev_inuse(next));
2271 assert(chunksize(next) >= MINSIZE);
2273 else if (!inuse(next))
2274 do_check_free_chunk(ar_ptr, next);
2278 #if __STD_C
2279 static void do_check_malloced_chunk(arena *ar_ptr,
2280 mchunkptr p, INTERNAL_SIZE_T s)
2281 #else
2282 static void do_check_malloced_chunk(ar_ptr, p, s)
2283 arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
2284 #endif
2286 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2287 long room = sz - s;
2289 do_check_inuse_chunk(ar_ptr, p);
2291 /* Legal size ... */
2292 assert((long)sz >= (long)MINSIZE);
2293 assert((sz & MALLOC_ALIGN_MASK) == 0);
2294 assert(room >= 0);
2295 assert(room < (long)MINSIZE);
2297 /* ... and alignment */
2298 assert(aligned_OK(chunk2mem(p)));
2301 /* ... and was allocated at front of an available chunk */
2302 assert(prev_inuse(p));
2307 #define check_free_chunk(A,P) do_check_free_chunk(A,P)
2308 #define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
2309 #define check_chunk(A,P) do_check_chunk(A,P)
2310 #define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
2311 #else
2312 #define check_free_chunk(A,P)
2313 #define check_inuse_chunk(A,P)
2314 #define check_chunk(A,P)
2315 #define check_malloced_chunk(A,P,N)
2316 #endif
2321 Macro-based internal utilities
2326 Linking chunks in bin lists.
2327 Call these only with variables, not arbitrary expressions, as arguments.
2331 Place chunk p of size s in its bin, in size order,
2332 putting it ahead of others of same size.
2336 #define frontlink(A, P, S, IDX, BK, FD) \
2338 if (S < MAX_SMALLBIN_SIZE) \
2340 IDX = smallbin_index(S); \
2341 mark_binblock(A, IDX); \
2342 BK = bin_at(A, IDX); \
2343 FD = BK->fd; \
2344 P->bk = BK; \
2345 P->fd = FD; \
2346 FD->bk = BK->fd = P; \
2348 else \
2350 IDX = bin_index(S); \
2351 BK = bin_at(A, IDX); \
2352 FD = BK->fd; \
2353 if (FD == BK) mark_binblock(A, IDX); \
2354 else \
2356 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2357 BK = FD->bk; \
2359 P->bk = BK; \
2360 P->fd = FD; \
2361 FD->bk = BK->fd = P; \
2366 /* take a chunk off a list */
2368 #define unlink(P, BK, FD) \
2370 BK = P->bk; \
2371 FD = P->fd; \
2372 FD->bk = BK; \
2373 BK->fd = FD; \
2376 /* Place p as the last remainder */
2378 #define link_last_remainder(A, P) \
2380 last_remainder(A)->fd = last_remainder(A)->bk = P; \
2381 P->fd = P->bk = last_remainder(A); \
2384 /* Clear the last_remainder bin */
2386 #define clear_last_remainder(A) \
2387 (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
2394 Extend the top-most chunk by obtaining memory from system.
2395 Main interface to sbrk (but see also malloc_trim).
2398 #if defined __GNUC__ && __GNUC__ >= 2
2399 /* This function is called only from one place, inline it. */
2400 inline
2401 #endif
2402 static void
2403 internal_function
2404 #if __STD_C
2405 malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
2406 #else
2407 malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2408 #endif
2410 unsigned long pagesz = malloc_getpagesize;
2411 mchunkptr old_top = top(ar_ptr); /* Record state of old top */
2412 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2413 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2415 #ifndef NO_THREADS
2416 if(ar_ptr == &main_arena) {
2417 #endif
2419 char* brk; /* return value from sbrk */
2420 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2421 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2422 char* new_brk; /* return of 2nd sbrk call */
2423 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2425 /* Pad request with top_pad plus minimal overhead */
2426 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2428 /* If not the first time through, round to preserve page boundary */
2429 /* Otherwise, we need to correct to a page size below anyway. */
2430 /* (We also correct below if an intervening foreign sbrk call.) */
2432 if (sbrk_base != (char*)(-1))
2433 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2435 brk = (char*)(MORECORE (sbrk_size));
2437 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2438 if (brk == (char*)(MORECORE_FAILURE) ||
2439 (brk < old_end && old_top != initial_top(&main_arena)))
2440 return;
2442 #if defined _LIBC || defined MALLOC_HOOKS
2443 /* Call the `morecore' hook if necessary. */
2444 if (__after_morecore_hook)
2445 (*__after_morecore_hook) ();
2446 #endif
2448 sbrked_mem += sbrk_size;
2450 if (brk == old_end) { /* can just add bytes to current top */
2451 top_size = sbrk_size + old_top_size;
2452 set_head(old_top, top_size | PREV_INUSE);
2453 old_top = 0; /* don't free below */
2454 } else {
2455 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2456 sbrk_base = brk;
2457 else
2458 /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2459 sbrked_mem += brk - (char*)old_end;
2461 /* Guarantee alignment of first new chunk made from this space */
2462 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2463 if (front_misalign > 0) {
2464 correction = (MALLOC_ALIGNMENT) - front_misalign;
2465 brk += correction;
2466 } else
2467 correction = 0;
2469 /* Guarantee the next brk will be at a page boundary */
2470 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
2472 /* Allocate correction */
2473 new_brk = (char*)(MORECORE (correction));
2474 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2476 #if defined _LIBC || defined MALLOC_HOOKS
2477 /* Call the `morecore' hook if necessary. */
2478 if (__after_morecore_hook)
2479 (*__after_morecore_hook) ();
2480 #endif
2482 sbrked_mem += correction;
2484 top(&main_arena) = (mchunkptr)brk;
2485 top_size = new_brk - brk + correction;
2486 set_head(top(&main_arena), top_size | PREV_INUSE);
2488 if (old_top == initial_top(&main_arena))
2489 old_top = 0; /* don't free below */
2492 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2493 max_sbrked_mem = sbrked_mem;
2494 #ifdef NO_THREADS
2495 if ((unsigned long)(mmapped_mem + sbrked_mem) >
2496 (unsigned long)max_total_mem)
2497 max_total_mem = mmapped_mem + sbrked_mem;
2498 #endif
2500 #ifndef NO_THREADS
2501 } else { /* ar_ptr != &main_arena */
2502 heap_info *old_heap, *heap;
2503 size_t old_heap_size;
2505 if(old_top_size < MINSIZE) /* this should never happen */
2506 return;
2508 /* First try to extend the current heap. */
2509 if(MINSIZE + nb <= old_top_size)
2510 return;
2511 old_heap = heap_for_ptr(old_top);
2512 old_heap_size = old_heap->size;
2513 if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
2514 ar_ptr->size += old_heap->size - old_heap_size;
2515 top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
2516 set_head(old_top, top_size | PREV_INUSE);
2517 return;
2520 /* A new heap must be created. */
2521 heap = new_heap(nb + (MINSIZE + sizeof(*heap)));
2522 if(!heap)
2523 return;
2524 heap->ar_ptr = ar_ptr;
2525 heap->prev = old_heap;
2526 ar_ptr->size += heap->size;
2528 /* Set up the new top, so we can safely use chunk_free() below. */
2529 top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
2530 top_size = heap->size - sizeof(*heap);
2531 set_head(top(ar_ptr), top_size | PREV_INUSE);
2533 #endif /* !defined(NO_THREADS) */
2535 /* We always land on a page boundary */
2536 assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
2538 /* Setup fencepost and free the old top chunk. */
2539 if(old_top) {
2540 /* The fencepost takes at least MINSIZE bytes, because it might
2541 become the top chunk again later. Note that a footer is set
2542 up, too, although the chunk is marked in use. */
2543 old_top_size -= MINSIZE;
2544 set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
2545 if(old_top_size >= MINSIZE) {
2546 set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
2547 set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
2548 set_head_size(old_top, old_top_size);
2549 chunk_free(ar_ptr, old_top);
2550 } else {
2551 set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
2552 set_foot(old_top, (old_top_size + 2*SIZE_SZ));
2560 /* Main public routines */
2564 Malloc Algorithm:
2566 The requested size is first converted into a usable form, `nb'.
2567 This currently means to add 4 bytes overhead plus possibly more to
2568 obtain 8-byte alignment and/or to obtain a size of at least
2569 MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
2570 size. (All fits are considered `exact' if they are within MINSIZE
2571 bytes.)
2573 From there, the first successful of the following steps is taken:
2575 1. The bin corresponding to the request size is scanned, and if
2576 a chunk of exactly the right size is found, it is taken.
2578 2. The most recently remaindered chunk is used if it is big
2579 enough. This is a form of (roving) first fit, used only in
2580 the absence of exact fits. Runs of consecutive requests use
2581 the remainder of the chunk used for the previous such request
2582 whenever possible. This limited use of a first-fit style
2583 allocation strategy tends to give contiguous chunks
2584 coextensive lifetimes, which improves locality and can reduce
2585 fragmentation in the long run.
2587 3. Other bins are scanned in increasing size order, using a
2588 chunk big enough to fulfill the request, and splitting off
2589 any remainder. This search is strictly by best-fit; i.e.,
2590 the smallest (with ties going to approximately the least
2591 recently used) chunk that fits is selected.
2593 4. If large enough, the chunk bordering the end of memory
2594 (`top') is split off. (This use of `top' is in accord with
2595 the best-fit search rule. In effect, `top' is treated as
2596 larger (and thus less well fitting) than any other available
2597 chunk since it can be extended to be as large as necessary
2598 (up to system limitations).
2600 5. If the request size meets the mmap threshold and the
2601 system supports mmap, and there are few enough currently
2602 allocated mmapped regions, and a call to mmap succeeds,
2603 the request is allocated via direct memory mapping.
2605 6. Otherwise, the top of memory is extended by
2606 obtaining more space from the system (normally using sbrk,
2607 but definable to anything else via the MORECORE macro).
2608 Memory is gathered from the system (in system page-sized
2609 units) in a way that allows chunks obtained across different
2610 sbrk calls to be consolidated, but does not require
2611 contiguous memory. Thus, it should be safe to intersperse
2612 mallocs with other sbrk calls.
2615 All allocations are made from the `lowest' part of any found
2616 chunk. (The implementation invariant is that prev_inuse is
2617 always true of any allocated chunk; i.e., that each allocated
2618 chunk borders either a previously allocated and still in-use chunk,
2619 or the base of its memory arena.)
2623 #if __STD_C
2624 Void_t* mALLOc(size_t bytes)
2625 #else
2626 Void_t* mALLOc(bytes) size_t bytes;
2627 #endif
2629 arena *ar_ptr;
2630 INTERNAL_SIZE_T nb; /* padded request size */
2631 mchunkptr victim;
2633 #if defined _LIBC || defined MALLOC_HOOKS
2634 if (__malloc_hook != NULL) {
2635 Void_t* result;
2637 #if defined __GNUC__ && __GNUC__ >= 2
2638 result = (*__malloc_hook)(bytes, __builtin_return_address (0));
2639 #else
2640 result = (*__malloc_hook)(bytes, NULL);
2641 #endif
2642 return result;
2644 #endif
2646 if(request2size(bytes, nb))
2647 return 0;
2648 arena_get(ar_ptr, nb);
2649 if(!ar_ptr)
2650 return 0;
2651 victim = chunk_alloc(ar_ptr, nb);
2652 (void)mutex_unlock(&ar_ptr->mutex);
2653 if(!victim) {
2654 /* Maybe the failure is due to running out of mmapped areas. */
2655 if(ar_ptr != &main_arena) {
2656 (void)mutex_lock(&main_arena.mutex);
2657 victim = chunk_alloc(&main_arena, nb);
2658 (void)mutex_unlock(&main_arena.mutex);
2660 if(!victim) return 0;
2662 return chunk2mem(victim);
2665 static mchunkptr
2666 internal_function
2667 #if __STD_C
2668 chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
2669 #else
2670 chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2671 #endif
2673 mchunkptr victim; /* inspected/selected chunk */
2674 INTERNAL_SIZE_T victim_size; /* its size */
2675 int idx; /* index for bin traversal */
2676 mbinptr bin; /* associated bin */
2677 mchunkptr remainder; /* remainder from a split */
2678 long remainder_size; /* its size */
2679 int remainder_index; /* its bin index */
2680 unsigned long block; /* block traverser bit */
2681 int startidx; /* first bin of a traversed block */
2682 mchunkptr fwd; /* misc temp for linking */
2683 mchunkptr bck; /* misc temp for linking */
2684 mbinptr q; /* misc temp */
2687 /* Check for exact match in a bin */
2689 if (is_small_request(nb)) /* Faster version for small requests */
2691 idx = smallbin_index(nb);
2693 /* No traversal or size check necessary for small bins. */
2695 q = bin_at(ar_ptr, idx);
2696 victim = last(q);
2698 /* Also scan the next one, since it would have a remainder < MINSIZE */
2699 if (victim == q)
2701 q = next_bin(q);
2702 victim = last(q);
2704 if (victim != q)
2706 victim_size = chunksize(victim);
2707 unlink(victim, bck, fwd);
2708 set_inuse_bit_at_offset(victim, victim_size);
2709 check_malloced_chunk(ar_ptr, victim, nb);
2710 return victim;
2713 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2716 else
2718 idx = bin_index(nb);
2719 bin = bin_at(ar_ptr, idx);
2721 for (victim = last(bin); victim != bin; victim = victim->bk)
2723 victim_size = chunksize(victim);
2724 remainder_size = victim_size - nb;
2726 if (remainder_size >= (long)MINSIZE) /* too big */
2728 --idx; /* adjust to rescan below after checking last remainder */
2729 break;
2732 else if (remainder_size >= 0) /* exact fit */
2734 unlink(victim, bck, fwd);
2735 set_inuse_bit_at_offset(victim, victim_size);
2736 check_malloced_chunk(ar_ptr, victim, nb);
2737 return victim;
2741 ++idx;
2745 /* Try to use the last split-off remainder */
2747 if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
2749 victim_size = chunksize(victim);
2750 remainder_size = victim_size - nb;
2752 if (remainder_size >= (long)MINSIZE) /* re-split */
2754 remainder = chunk_at_offset(victim, nb);
2755 set_head(victim, nb | PREV_INUSE);
2756 link_last_remainder(ar_ptr, remainder);
2757 set_head(remainder, remainder_size | PREV_INUSE);
2758 set_foot(remainder, remainder_size);
2759 check_malloced_chunk(ar_ptr, victim, nb);
2760 return victim;
2763 clear_last_remainder(ar_ptr);
2765 if (remainder_size >= 0) /* exhaust */
2767 set_inuse_bit_at_offset(victim, victim_size);
2768 check_malloced_chunk(ar_ptr, victim, nb);
2769 return victim;
2772 /* Else place in bin */
2774 frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
2778 If there are any possibly nonempty big-enough blocks,
2779 search for best fitting chunk by scanning bins in blockwidth units.
2782 if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
2785 /* Get to the first marked block */
2787 if ( (block & binblocks(ar_ptr)) == 0)
2789 /* force to an even block boundary */
2790 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2791 block <<= 1;
2792 while ((block & binblocks(ar_ptr)) == 0)
2794 idx += BINBLOCKWIDTH;
2795 block <<= 1;
2799 /* For each possibly nonempty block ... */
2800 for (;;)
2802 startidx = idx; /* (track incomplete blocks) */
2803 q = bin = bin_at(ar_ptr, idx);
2805 /* For each bin in this block ... */
2808 /* Find and use first big enough chunk ... */
2810 for (victim = last(bin); victim != bin; victim = victim->bk)
2812 victim_size = chunksize(victim);
2813 remainder_size = victim_size - nb;
2815 if (remainder_size >= (long)MINSIZE) /* split */
2817 remainder = chunk_at_offset(victim, nb);
2818 set_head(victim, nb | PREV_INUSE);
2819 unlink(victim, bck, fwd);
2820 link_last_remainder(ar_ptr, remainder);
2821 set_head(remainder, remainder_size | PREV_INUSE);
2822 set_foot(remainder, remainder_size);
2823 check_malloced_chunk(ar_ptr, victim, nb);
2824 return victim;
2827 else if (remainder_size >= 0) /* take */
2829 set_inuse_bit_at_offset(victim, victim_size);
2830 unlink(victim, bck, fwd);
2831 check_malloced_chunk(ar_ptr, victim, nb);
2832 return victim;
2837 bin = next_bin(bin);
2839 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2841 /* Clear out the block bit. */
2843 do /* Possibly backtrack to try to clear a partial block */
2845 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2847 binblocks(ar_ptr) &= ~block;
2848 break;
2850 --startidx;
2851 q = prev_bin(q);
2852 } while (first(q) == q);
2854 /* Get to the next possibly nonempty block */
2856 if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
2858 while ((block & binblocks(ar_ptr)) == 0)
2860 idx += BINBLOCKWIDTH;
2861 block <<= 1;
2864 else
2865 break;
2870 /* Try to use top chunk */
2872 /* Require that there be a remainder, ensuring top always exists */
2873 if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2876 #if HAVE_MMAP
2877 /* If big and would otherwise need to extend, try to use mmap instead */
2878 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2879 (victim = mmap_chunk(nb)) != 0)
2880 return victim;
2881 #endif
2883 /* Try to extend */
2884 malloc_extend_top(ar_ptr, nb);
2885 if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2886 return 0; /* propagate failure */
2889 victim = top(ar_ptr);
2890 set_head(victim, nb | PREV_INUSE);
2891 top(ar_ptr) = chunk_at_offset(victim, nb);
2892 set_head(top(ar_ptr), remainder_size | PREV_INUSE);
2893 check_malloced_chunk(ar_ptr, victim, nb);
2894 return victim;
2903 free() algorithm :
2905 cases:
2907 1. free(0) has no effect.
2909 2. If the chunk was allocated via mmap, it is released via munmap().
2911 3. If a returned chunk borders the current high end of memory,
2912 it is consolidated into the top, and if the total unused
2913 topmost memory exceeds the trim threshold, malloc_trim is
2914 called.
2916 4. Other chunks are consolidated as they arrive, and
2917 placed in corresponding bins. (This includes the case of
2918 consolidating with the current `last_remainder').
2923 #if __STD_C
2924 void fREe(Void_t* mem)
2925 #else
2926 void fREe(mem) Void_t* mem;
2927 #endif
2929 arena *ar_ptr;
2930 mchunkptr p; /* chunk corresponding to mem */
2932 #if defined _LIBC || defined MALLOC_HOOKS
2933 if (__free_hook != NULL) {
2934 #if defined __GNUC__ && __GNUC__ >= 2
2935 (*__free_hook)(mem, __builtin_return_address (0));
2936 #else
2937 (*__free_hook)(mem, NULL);
2938 #endif
2939 return;
2941 #endif
2943 if (mem == 0) /* free(0) has no effect */
2944 return;
2946 p = mem2chunk(mem);
2948 #if HAVE_MMAP
2949 if (chunk_is_mmapped(p)) /* release mmapped memory. */
2951 munmap_chunk(p);
2952 return;
2954 #endif
2956 ar_ptr = arena_for_ptr(p);
2957 #if THREAD_STATS
2958 if(!mutex_trylock(&ar_ptr->mutex))
2959 ++(ar_ptr->stat_lock_direct);
2960 else {
2961 (void)mutex_lock(&ar_ptr->mutex);
2962 ++(ar_ptr->stat_lock_wait);
2964 #else
2965 (void)mutex_lock(&ar_ptr->mutex);
2966 #endif
2967 chunk_free(ar_ptr, p);
2968 (void)mutex_unlock(&ar_ptr->mutex);
2971 static void
2972 internal_function
2973 #if __STD_C
2974 chunk_free(arena *ar_ptr, mchunkptr p)
2975 #else
2976 chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2977 #endif
2979 INTERNAL_SIZE_T hd = p->size; /* its head field */
2980 INTERNAL_SIZE_T sz; /* its size */
2981 int idx; /* its bin index */
2982 mchunkptr next; /* next contiguous chunk */
2983 INTERNAL_SIZE_T nextsz; /* its size */
2984 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2985 mchunkptr bck; /* misc temp for linking */
2986 mchunkptr fwd; /* misc temp for linking */
2987 int islr; /* track whether merging with last_remainder */
2989 check_inuse_chunk(ar_ptr, p);
2991 sz = hd & ~PREV_INUSE;
2992 next = chunk_at_offset(p, sz);
2993 nextsz = chunksize(next);
2995 if (next == top(ar_ptr)) /* merge with top */
2997 sz += nextsz;
2999 if (!(hd & PREV_INUSE)) /* consolidate backward */
3001 prevsz = p->prev_size;
3002 p = chunk_at_offset(p, -prevsz);
3003 sz += prevsz;
3004 unlink(p, bck, fwd);
3007 set_head(p, sz | PREV_INUSE);
3008 top(ar_ptr) = p;
3010 #ifndef NO_THREADS
3011 if(ar_ptr == &main_arena) {
3012 #endif
3013 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
3014 main_trim(top_pad);
3015 #ifndef NO_THREADS
3016 } else {
3017 heap_info *heap = heap_for_ptr(p);
3019 assert(heap->ar_ptr == ar_ptr);
3021 /* Try to get rid of completely empty heaps, if possible. */
3022 if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
3023 p == chunk_at_offset(heap, sizeof(*heap)))
3024 heap_trim(heap, top_pad);
3026 #endif
3027 return;
3030 islr = 0;
3032 if (!(hd & PREV_INUSE)) /* consolidate backward */
3034 prevsz = p->prev_size;
3035 p = chunk_at_offset(p, -prevsz);
3036 sz += prevsz;
3038 if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */
3039 islr = 1;
3040 else
3041 unlink(p, bck, fwd);
3044 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
3046 sz += nextsz;
3048 if (!islr && next->fd == last_remainder(ar_ptr))
3049 /* re-insert last_remainder */
3051 islr = 1;
3052 link_last_remainder(ar_ptr, p);
3054 else
3055 unlink(next, bck, fwd);
3057 next = chunk_at_offset(p, sz);
3059 else
3060 set_head(next, nextsz); /* clear inuse bit */
3062 set_head(p, sz | PREV_INUSE);
3063 next->prev_size = sz;
3064 if (!islr)
3065 frontlink(ar_ptr, p, sz, idx, bck, fwd);
3067 #ifndef NO_THREADS
3068 /* Check whether the heap containing top can go away now. */
3069 if(next->size < MINSIZE &&
3070 (unsigned long)sz > trim_threshold &&
3071 ar_ptr != &main_arena) { /* fencepost */
3072 heap_info *heap = heap_for_ptr(top(ar_ptr));
3074 if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) &&
3075 heap->prev == heap_for_ptr(p))
3076 heap_trim(heap, top_pad);
3078 #endif
3087 Realloc algorithm:
3089 Chunks that were obtained via mmap cannot be extended or shrunk
3090 unless HAVE_MREMAP is defined, in which case mremap is used.
3091 Otherwise, if their reallocation is for additional space, they are
3092 copied. If for less, they are just left alone.
3094 Otherwise, if the reallocation is for additional space, and the
3095 chunk can be extended, it is, else a malloc-copy-free sequence is
3096 taken. There are several different ways that a chunk could be
3097 extended. All are tried:
3099 * Extending forward into following adjacent free chunk.
3100 * Shifting backwards, joining preceding adjacent space
3101 * Both shifting backwards and extending forward.
3102 * Extending into newly sbrked space
3104 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
3105 size argument of zero (re)allocates a minimum-sized chunk.
3107 If the reallocation is for less space, and the new request is for
3108 a `small' (<512 bytes) size, then the newly unused space is lopped
3109 off and freed.
3111 The old unix realloc convention of allowing the last-free'd chunk
3112 to be used as an argument to realloc is no longer supported.
3113 I don't know of any programs still relying on this feature,
3114 and allowing it would also allow too many other incorrect
3115 usages of realloc to be sensible.
3121 #if __STD_C
3122 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3123 #else
3124 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3125 #endif
3127 arena *ar_ptr;
3128 INTERNAL_SIZE_T nb; /* padded request size */
3130 mchunkptr oldp; /* chunk corresponding to oldmem */
3131 INTERNAL_SIZE_T oldsize; /* its size */
3133 mchunkptr newp; /* chunk to return */
3135 #if defined _LIBC || defined MALLOC_HOOKS
3136 if (__realloc_hook != NULL) {
3137 Void_t* result;
3139 #if defined __GNUC__ && __GNUC__ >= 2
3140 result = (*__realloc_hook)(oldmem, bytes, __builtin_return_address (0));
3141 #else
3142 result = (*__realloc_hook)(oldmem, bytes, NULL);
3143 #endif
3144 return result;
3146 #endif
3148 #ifdef REALLOC_ZERO_BYTES_FREES
3149 if (bytes == 0 && oldmem != NULL) { fREe(oldmem); return 0; }
3150 #endif
3152 /* realloc of null is supposed to be same as malloc */
3153 if (oldmem == 0) return mALLOc(bytes);
3155 oldp = mem2chunk(oldmem);
3156 oldsize = chunksize(oldp);
3158 if(request2size(bytes, nb))
3159 return 0;
3161 #if HAVE_MMAP
3162 if (chunk_is_mmapped(oldp))
3164 Void_t* newmem;
3166 #if HAVE_MREMAP
3167 newp = mremap_chunk(oldp, nb);
3168 if(newp) return chunk2mem(newp);
3169 #endif
3170 /* Note the extra SIZE_SZ overhead. */
3171 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
3172 /* Must alloc, copy, free. */
3173 newmem = mALLOc(bytes);
3174 if (newmem == 0) return 0; /* propagate failure */
3175 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
3176 munmap_chunk(oldp);
3177 return newmem;
3179 #endif
3181 ar_ptr = arena_for_ptr(oldp);
3182 #if THREAD_STATS
3183 if(!mutex_trylock(&ar_ptr->mutex))
3184 ++(ar_ptr->stat_lock_direct);
3185 else {
3186 (void)mutex_lock(&ar_ptr->mutex);
3187 ++(ar_ptr->stat_lock_wait);
3189 #else
3190 (void)mutex_lock(&ar_ptr->mutex);
3191 #endif
3193 #ifndef NO_THREADS
3194 /* As in malloc(), remember this arena for the next allocation. */
3195 tsd_setspecific(arena_key, (Void_t *)ar_ptr);
3196 #endif
3198 newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);
3200 (void)mutex_unlock(&ar_ptr->mutex);
3201 return newp ? chunk2mem(newp) : NULL;
3204 static mchunkptr
3205 internal_function
3206 #if __STD_C
3207 chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
3208 INTERNAL_SIZE_T nb)
3209 #else
3210 chunk_realloc(ar_ptr, oldp, oldsize, nb)
3211 arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
3212 #endif
3214 mchunkptr newp = oldp; /* chunk to return */
3215 INTERNAL_SIZE_T newsize = oldsize; /* its size */
3217 mchunkptr next; /* next contiguous chunk after oldp */
3218 INTERNAL_SIZE_T nextsize; /* its size */
3220 mchunkptr prev; /* previous contiguous chunk before oldp */
3221 INTERNAL_SIZE_T prevsize; /* its size */
3223 mchunkptr remainder; /* holds split off extra space from newp */
3224 INTERNAL_SIZE_T remainder_size; /* its size */
3226 mchunkptr bck; /* misc temp for linking */
3227 mchunkptr fwd; /* misc temp for linking */
3229 check_inuse_chunk(ar_ptr, oldp);
3231 if ((long)(oldsize) < (long)(nb))
3234 /* Try expanding forward */
3236 next = chunk_at_offset(oldp, oldsize);
3237 if (next == top(ar_ptr) || !inuse(next))
3239 nextsize = chunksize(next);
3241 /* Forward into top only if a remainder */
3242 if (next == top(ar_ptr))
3244 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
3246 newsize += nextsize;
3247 top(ar_ptr) = chunk_at_offset(oldp, nb);
3248 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3249 set_head_size(oldp, nb);
3250 return oldp;
3254 /* Forward into next chunk */
3255 else if (((long)(nextsize + newsize) >= (long)(nb)))
3257 unlink(next, bck, fwd);
3258 newsize += nextsize;
3259 goto split;
3262 else
3264 next = 0;
3265 nextsize = 0;
3268 /* Try shifting backwards. */
3270 if (!prev_inuse(oldp))
3272 prev = prev_chunk(oldp);
3273 prevsize = chunksize(prev);
3275 /* try forward + backward first to save a later consolidation */
3277 if (next != 0)
3279 /* into top */
3280 if (next == top(ar_ptr))
3282 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
3284 unlink(prev, bck, fwd);
3285 newp = prev;
3286 newsize += prevsize + nextsize;
3287 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3288 top(ar_ptr) = chunk_at_offset(newp, nb);
3289 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3290 set_head_size(newp, nb);
3291 return newp;
3295 /* into next chunk */
3296 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
3298 unlink(next, bck, fwd);
3299 unlink(prev, bck, fwd);
3300 newp = prev;
3301 newsize += nextsize + prevsize;
3302 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3303 goto split;
3307 /* backward only */
3308 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
3310 unlink(prev, bck, fwd);
3311 newp = prev;
3312 newsize += prevsize;
3313 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3314 goto split;
3318 /* Must allocate */
3320 newp = chunk_alloc (ar_ptr, nb);
3322 if (newp == 0) {
3323 /* Maybe the failure is due to running out of mmapped areas. */
3324 if (ar_ptr != &main_arena) {
3325 (void)mutex_lock(&main_arena.mutex);
3326 newp = chunk_alloc(&main_arena, nb);
3327 (void)mutex_unlock(&main_arena.mutex);
3329 if (newp == 0) /* propagate failure */
3330 return 0;
3333 /* Avoid copy if newp is next chunk after oldp. */
3334 /* (This can only happen when new chunk is sbrk'ed.) */
3336 if ( newp == next_chunk(oldp))
3338 newsize += chunksize(newp);
3339 newp = oldp;
3340 goto split;
3343 /* Otherwise copy, free, and exit */
3344 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3345 chunk_free(ar_ptr, oldp);
3346 return newp;
3350 split: /* split off extra room in old or expanded chunk */
3352 if (newsize - nb >= MINSIZE) /* split off remainder */
3354 remainder = chunk_at_offset(newp, nb);
3355 remainder_size = newsize - nb;
3356 set_head_size(newp, nb);
3357 set_head(remainder, remainder_size | PREV_INUSE);
3358 set_inuse_bit_at_offset(remainder, remainder_size);
3359 chunk_free(ar_ptr, remainder);
3361 else
3363 set_head_size(newp, newsize);
3364 set_inuse_bit_at_offset(newp, newsize);
3367 check_inuse_chunk(ar_ptr, newp);
3368 return newp;
3376 memalign algorithm:
3378 memalign requests more than enough space from malloc, finds a spot
3379 within that chunk that meets the alignment request, and then
3380 possibly frees the leading and trailing space.
3382 The alignment argument must be a power of two. This property is not
3383 checked by memalign, so misuse may result in random runtime errors.
3385 8-byte alignment is guaranteed by normal malloc calls, so don't
3386 bother calling memalign with an argument of 8 or less.
3388 Overreliance on memalign is a sure way to fragment space.
3393 #if __STD_C
3394 Void_t* mEMALIGn(size_t alignment, size_t bytes)
3395 #else
3396 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
3397 #endif
3399 arena *ar_ptr;
3400 INTERNAL_SIZE_T nb; /* padded request size */
3401 mchunkptr p;
3403 #if defined _LIBC || defined MALLOC_HOOKS
3404 if (__memalign_hook != NULL) {
3405 Void_t* result;
3407 #if defined __GNUC__ && __GNUC__ >= 2
3408 result = (*__memalign_hook)(alignment, bytes,
3409 __builtin_return_address (0));
3410 #else
3411 result = (*__memalign_hook)(alignment, bytes, NULL);
3412 #endif
3413 return result;
3415 #endif
3417 /* If need less alignment than we give anyway, just relay to malloc */
3419 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
3421 /* Otherwise, ensure that it is at least a minimum chunk size */
3423 if (alignment < MINSIZE) alignment = MINSIZE;
3425 if(request2size(bytes, nb))
3426 return 0;
3427 arena_get(ar_ptr, nb + alignment + MINSIZE);
3428 if(!ar_ptr)
3429 return 0;
3430 p = chunk_align(ar_ptr, nb, alignment);
3431 (void)mutex_unlock(&ar_ptr->mutex);
3432 if(!p) {
3433 /* Maybe the failure is due to running out of mmapped areas. */
3434 if(ar_ptr != &main_arena) {
3435 (void)mutex_lock(&main_arena.mutex);
3436 p = chunk_align(&main_arena, nb, alignment);
3437 (void)mutex_unlock(&main_arena.mutex);
3439 if(!p) return 0;
3441 return chunk2mem(p);
3444 static mchunkptr
3445 internal_function
3446 #if __STD_C
3447 chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
3448 #else
3449 chunk_align(ar_ptr, nb, alignment)
3450 arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
3451 #endif
3453 char* m; /* memory returned by malloc call */
3454 mchunkptr p; /* corresponding chunk */
3455 char* brk; /* alignment point within p */
3456 mchunkptr newp; /* chunk to return */
3457 INTERNAL_SIZE_T newsize; /* its size */
3458 INTERNAL_SIZE_T leadsize; /* leading space befor alignment point */
3459 mchunkptr remainder; /* spare room at end to split off */
3460 long remainder_size; /* its size */
3462 /* Call chunk_alloc with worst case padding to hit alignment. */
3463 p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
3464 if (p == 0)
3465 return 0; /* propagate failure */
3467 m = chunk2mem(p);
3469 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3471 #if HAVE_MMAP
3472 if(chunk_is_mmapped(p)) {
3473 return p; /* nothing more to do */
3475 #endif
3477 else /* misaligned */
3480 Find an aligned spot inside chunk.
3481 Since we need to give back leading space in a chunk of at
3482 least MINSIZE, if the first calculation places us at
3483 a spot with less than MINSIZE leader, we can move to the
3484 next aligned spot -- we've allocated enough total room so that
3485 this is always possible.
3488 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
3489 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;
3491 newp = (mchunkptr)brk;
3492 leadsize = brk - (char*)(p);
3493 newsize = chunksize(p) - leadsize;
3495 #if HAVE_MMAP
3496 if(chunk_is_mmapped(p))
3498 newp->prev_size = p->prev_size + leadsize;
3499 set_head(newp, newsize|IS_MMAPPED);
3500 return newp;
3502 #endif
3504 /* give back leader, use the rest */
3506 set_head(newp, newsize | PREV_INUSE);
3507 set_inuse_bit_at_offset(newp, newsize);
3508 set_head_size(p, leadsize);
3509 chunk_free(ar_ptr, p);
3510 p = newp;
3512 assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3515 /* Also give back spare room at the end */
3517 remainder_size = chunksize(p) - nb;
3519 if (remainder_size >= (long)MINSIZE)
3521 remainder = chunk_at_offset(p, nb);
3522 set_head(remainder, remainder_size | PREV_INUSE);
3523 set_head_size(p, nb);
3524 chunk_free(ar_ptr, remainder);
3527 check_inuse_chunk(ar_ptr, p);
3528 return p;
3535 valloc just invokes memalign with alignment argument equal
3536 to the page size of the system (or as near to this as can
3537 be figured out from all the includes/defines above.)
3540 #if __STD_C
3541 Void_t* vALLOc(size_t bytes)
3542 #else
3543 Void_t* vALLOc(bytes) size_t bytes;
3544 #endif
3546 return mEMALIGn (malloc_getpagesize, bytes);
3550 pvalloc just invokes valloc for the nearest pagesize
3551 that will accommodate request
3555 #if __STD_C
3556 Void_t* pvALLOc(size_t bytes)
3557 #else
3558 Void_t* pvALLOc(bytes) size_t bytes;
3559 #endif
3561 size_t pagesize = malloc_getpagesize;
3562 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3567 calloc calls chunk_alloc, then zeroes out the allocated chunk.
3571 #if __STD_C
3572 Void_t* cALLOc(size_t n, size_t elem_size)
3573 #else
3574 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3575 #endif
3577 arena *ar_ptr;
3578 mchunkptr p, oldtop;
3579 INTERNAL_SIZE_T sz, csz, oldtopsize;
3580 Void_t* mem;
3582 #if defined _LIBC || defined MALLOC_HOOKS
3583 if (__malloc_hook != NULL) {
3584 sz = n * elem_size;
3585 #if defined __GNUC__ && __GNUC__ >= 2
3586 mem = (*__malloc_hook)(sz, __builtin_return_address (0));
3587 #else
3588 mem = (*__malloc_hook)(sz, NULL);
3589 #endif
3590 if(mem == 0)
3591 return 0;
3592 #ifdef HAVE_MEMSET
3593 return memset(mem, 0, sz);
3594 #else
3595 while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
3596 return mem;
3597 #endif
3599 #endif
3601 if(request2size(n * elem_size, sz))
3602 return 0;
3603 arena_get(ar_ptr, sz);
3604 if(!ar_ptr)
3605 return 0;
3607 /* Check if expand_top called, in which case there may be
3608 no need to clear. */
3609 #if MORECORE_CLEARS
3610 oldtop = top(ar_ptr);
3611 oldtopsize = chunksize(top(ar_ptr));
3612 #if MORECORE_CLEARS < 2
3613 /* Only newly allocated memory is guaranteed to be cleared. */
3614 if (oldtopsize < sbrk_base + max_sbrked_mem - (char *)oldtop)
3615 oldtopsize = (sbrk_base + max_sbrked_mem - (char *)oldtop);
3616 #endif
3617 #endif
3618 p = chunk_alloc (ar_ptr, sz);
3620 /* Only clearing follows, so we can unlock early. */
3621 (void)mutex_unlock(&ar_ptr->mutex);
3623 if (p == 0) {
3624 /* Maybe the failure is due to running out of mmapped areas. */
3625 if(ar_ptr != &main_arena) {
3626 (void)mutex_lock(&main_arena.mutex);
3627 p = chunk_alloc(&main_arena, sz);
3628 (void)mutex_unlock(&main_arena.mutex);
3630 if (p == 0) return 0;
3632 mem = chunk2mem(p);
3634 /* Two optional cases in which clearing not necessary */
3636 #if HAVE_MMAP
3637 if (chunk_is_mmapped(p)) return mem;
3638 #endif
3640 csz = chunksize(p);
3642 #if MORECORE_CLEARS
3643 if (p == oldtop && csz > oldtopsize) {
3644 /* clear only the bytes from non-freshly-sbrked memory */
3645 csz = oldtopsize;
3647 #endif
3649 MALLOC_ZERO(mem, csz - SIZE_SZ);
3650 return mem;
3655 cfree just calls free. It is needed/defined on some systems
3656 that pair it with calloc, presumably for odd historical reasons.
3660 #if !defined(_LIBC)
3661 #if __STD_C
3662 void cfree(Void_t *mem)
3663 #else
3664 void cfree(mem) Void_t *mem;
3665 #endif
3667 free(mem);
3669 #endif
3675 Malloc_trim gives memory back to the system (via negative
3676 arguments to sbrk) if there is unused memory at the `high' end of
3677 the malloc pool. You can call this after freeing large blocks of
3678 memory to potentially reduce the system-level memory requirements
3679 of a program. However, it cannot guarantee to reduce memory. Under
3680 some allocation patterns, some large free blocks of memory will be
3681 locked between two used chunks, so they cannot be given back to
3682 the system.
3684 The `pad' argument to malloc_trim represents the amount of free
3685 trailing space to leave untrimmed. If this argument is zero,
3686 only the minimum amount of memory to maintain internal data
3687 structures will be left (one page or less). Non-zero arguments
3688 can be supplied to maintain enough trailing space to service
3689 future expected allocations without having to re-obtain memory
3690 from the system.
3692 Malloc_trim returns 1 if it actually released any memory, else 0.
3696 #if __STD_C
3697 int mALLOC_TRIm(size_t pad)
3698 #else
3699 int mALLOC_TRIm(pad) size_t pad;
3700 #endif
3702 int res;
3704 (void)mutex_lock(&main_arena.mutex);
3705 res = main_trim(pad);
3706 (void)mutex_unlock(&main_arena.mutex);
3707 return res;
3710 /* Trim the main arena. */
3712 static int
3713 internal_function
3714 #if __STD_C
3715 main_trim(size_t pad)
3716 #else
3717 main_trim(pad) size_t pad;
3718 #endif
3720 mchunkptr top_chunk; /* The current top chunk */
3721 long top_size; /* Amount of top-most memory */
3722 long extra; /* Amount to release */
3723 char* current_brk; /* address returned by pre-check sbrk call */
3724 char* new_brk; /* address returned by negative sbrk call */
3726 unsigned long pagesz = malloc_getpagesize;
3728 top_chunk = top(&main_arena);
3729 top_size = chunksize(top_chunk);
3730 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3732 if (extra < (long)pagesz) /* Not enough memory to release */
3733 return 0;
3735 /* Test to make sure no one else called sbrk */
3736 current_brk = (char*)(MORECORE (0));
3737 if (current_brk != (char*)(top_chunk) + top_size)
3738 return 0; /* Apparently we don't own memory; must fail */
3740 new_brk = (char*)(MORECORE (-extra));
3742 #if defined _LIBC || defined MALLOC_HOOKS
3743 /* Call the `morecore' hook if necessary. */
3744 if (__after_morecore_hook)
3745 (*__after_morecore_hook) ();
3746 #endif
3748 if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
3749 /* Try to figure out what we have */
3750 current_brk = (char*)(MORECORE (0));
3751 top_size = current_brk - (char*)top_chunk;
3752 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3754 sbrked_mem = current_brk - sbrk_base;
3755 set_head(top_chunk, top_size | PREV_INUSE);
3757 check_chunk(&main_arena, top_chunk);
3758 return 0;
3760 sbrked_mem -= extra;
3762 /* Success. Adjust top accordingly. */
3763 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3764 check_chunk(&main_arena, top_chunk);
3765 return 1;
3768 #ifndef NO_THREADS
3770 static int
3771 internal_function
3772 #if __STD_C
3773 heap_trim(heap_info *heap, size_t pad)
3774 #else
3775 heap_trim(heap, pad) heap_info *heap; size_t pad;
3776 #endif
3778 unsigned long pagesz = malloc_getpagesize;
3779 arena *ar_ptr = heap->ar_ptr;
3780 mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
3781 heap_info *prev_heap;
3782 long new_size, top_size, extra;
3784 /* Can this heap go away completely ? */
3785 while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
3786 prev_heap = heap->prev;
3787 p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
3788 assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
3789 p = prev_chunk(p);
3790 new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
3791 assert(new_size>0 && new_size<(long)(2*MINSIZE));
3792 if(!prev_inuse(p))
3793 new_size += p->prev_size;
3794 assert(new_size>0 && new_size<HEAP_MAX_SIZE);
3795 if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
3796 break;
3797 ar_ptr->size -= heap->size;
3798 delete_heap(heap);
3799 heap = prev_heap;
3800 if(!prev_inuse(p)) { /* consolidate backward */
3801 p = prev_chunk(p);
3802 unlink(p, bck, fwd);
3804 assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
3805 assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
3806 top(ar_ptr) = top_chunk = p;
3807 set_head(top_chunk, new_size | PREV_INUSE);
3808 check_chunk(ar_ptr, top_chunk);
3810 top_size = chunksize(top_chunk);
3811 extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
3812 if(extra < (long)pagesz)
3813 return 0;
3814 /* Try to shrink. */
3815 if(grow_heap(heap, -extra) != 0)
3816 return 0;
3817 ar_ptr->size -= extra;
3819 /* Success. Adjust top accordingly. */
3820 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3821 check_chunk(ar_ptr, top_chunk);
3822 return 1;
3825 #endif
3830 malloc_usable_size:
3832 This routine tells you how many bytes you can actually use in an
3833 allocated chunk, which may be more than you requested (although
3834 often not). You can use this many bytes without worrying about
3835 overwriting other allocated objects. Not a particularly great
3836 programming practice, but still sometimes useful.
3840 #if __STD_C
3841 size_t mALLOC_USABLE_SIZe(Void_t* mem)
3842 #else
3843 size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
3844 #endif
3846 mchunkptr p;
3848 if (mem == 0)
3849 return 0;
3850 else
3852 p = mem2chunk(mem);
3853 if(!chunk_is_mmapped(p))
3855 if (!inuse(p)) return 0;
3856 check_inuse_chunk(arena_for_ptr(mem), p);
3857 return chunksize(p) - SIZE_SZ;
3859 return chunksize(p) - 2*SIZE_SZ;
3866 /* Utility to update mallinfo for malloc_stats() and mallinfo() */
3868 static void
3869 #if __STD_C
3870 malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
3871 #else
3872 malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
3873 #endif
3875 int i, navail;
3876 mbinptr b;
3877 mchunkptr p;
3878 #if MALLOC_DEBUG
3879 mchunkptr q;
3880 #endif
3881 INTERNAL_SIZE_T avail;
3883 (void)mutex_lock(&ar_ptr->mutex);
3884 avail = chunksize(top(ar_ptr));
3885 navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3887 for (i = 1; i < NAV; ++i)
3889 b = bin_at(ar_ptr, i);
3890 for (p = last(b); p != b; p = p->bk)
3892 #if MALLOC_DEBUG
3893 check_free_chunk(ar_ptr, p);
3894 for (q = next_chunk(p);
3895 q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
3896 q = next_chunk(q))
3897 check_inuse_chunk(ar_ptr, q);
3898 #endif
3899 avail += chunksize(p);
3900 navail++;
3904 mi->arena = ar_ptr->size;
3905 mi->ordblks = navail;
3906 mi->smblks = mi->usmblks = mi->fsmblks = 0; /* clear unused fields */
3907 mi->uordblks = ar_ptr->size - avail;
3908 mi->fordblks = avail;
3909 mi->hblks = n_mmaps;
3910 mi->hblkhd = mmapped_mem;
3911 mi->keepcost = chunksize(top(ar_ptr));
3913 (void)mutex_unlock(&ar_ptr->mutex);
3916 #if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3918 /* Print the complete contents of a single heap to stderr. */
3920 static void
3921 #if __STD_C
3922 dump_heap(heap_info *heap)
3923 #else
3924 dump_heap(heap) heap_info *heap;
3925 #endif
3927 char *ptr;
3928 mchunkptr p;
3930 fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
3931 ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
3932 (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
3933 p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
3934 ~MALLOC_ALIGN_MASK);
3935 for(;;) {
3936 fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
3937 if(p == top(heap->ar_ptr)) {
3938 fprintf(stderr, " (top)\n");
3939 break;
3940 } else if(p->size == (0|PREV_INUSE)) {
3941 fprintf(stderr, " (fence)\n");
3942 break;
3944 fprintf(stderr, "\n");
3945 p = next_chunk(p);
3949 #endif
3955 malloc_stats:
3957 For all arenas separately and in total, prints on stderr the
3958 amount of space obtained from the system, and the current number
3959 of bytes allocated via malloc (or realloc, etc) but not yet
3960 freed. (Note that this is the number of bytes allocated, not the
3961 number requested. It will be larger than the number requested
3962 because of alignment and bookkeeping overhead.) When not compiled
3963 for multiple threads, the maximum amount of allocated memory
3964 (which may be more than current if malloc_trim and/or munmap got
3965 called) is also reported. When using mmap(), prints the maximum
3966 number of simultaneous mmap regions used, too.
3970 void mALLOC_STATs()
3972 int i;
3973 arena *ar_ptr;
3974 struct mallinfo mi;
3975 unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
3976 #if THREAD_STATS
3977 long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
3978 #endif
3980 for(i=0, ar_ptr = &main_arena;; i++) {
3981 malloc_update_mallinfo(ar_ptr, &mi);
3982 fprintf(stderr, "Arena %d:\n", i);
3983 fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena);
3984 fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks);
3985 system_b += mi.arena;
3986 in_use_b += mi.uordblks;
3987 #if THREAD_STATS
3988 stat_lock_direct += ar_ptr->stat_lock_direct;
3989 stat_lock_loop += ar_ptr->stat_lock_loop;
3990 stat_lock_wait += ar_ptr->stat_lock_wait;
3991 #endif
3992 #if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3993 if(ar_ptr != &main_arena) {
3994 heap_info *heap;
3995 (void)mutex_lock(&ar_ptr->mutex);
3996 heap = heap_for_ptr(top(ar_ptr));
3997 while(heap) { dump_heap(heap); heap = heap->prev; }
3998 (void)mutex_unlock(&ar_ptr->mutex);
4000 #endif
4001 ar_ptr = ar_ptr->next;
4002 if(ar_ptr == &main_arena) break;
4004 #if HAVE_MMAP
4005 fprintf(stderr, "Total (incl. mmap):\n");
4006 #else
4007 fprintf(stderr, "Total:\n");
4008 #endif
4009 fprintf(stderr, "system bytes = %10u\n", system_b);
4010 fprintf(stderr, "in use bytes = %10u\n", in_use_b);
4011 #ifdef NO_THREADS
4012 fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
4013 #endif
4014 #if HAVE_MMAP
4015 fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
4016 fprintf(stderr, "max mmap bytes = %10lu\n", max_mmapped_mem);
4017 #endif
4018 #if THREAD_STATS
4019 fprintf(stderr, "heaps created = %10d\n", stat_n_heaps);
4020 fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct);
4021 fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop);
4022 fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait);
4023 fprintf(stderr, "locked total = %10ld\n",
4024 stat_lock_direct + stat_lock_loop + stat_lock_wait);
4025 #endif
4029 mallinfo returns a copy of updated current mallinfo.
4030 The information reported is for the arena last used by the thread.
4033 struct mallinfo mALLINFo()
4035 struct mallinfo mi;
4036 Void_t *vptr = NULL;
4038 #ifndef NO_THREADS
4039 tsd_getspecific(arena_key, vptr);
4040 #endif
4041 malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
4042 return mi;
4049 mallopt:
4051 mallopt is the general SVID/XPG interface to tunable parameters.
4052 The format is to provide a (parameter-number, parameter-value) pair.
4053 mallopt then sets the corresponding parameter to the argument
4054 value if it can (i.e., so long as the value is meaningful),
4055 and returns 1 if successful else 0.
4057 See descriptions of tunable parameters above.
4061 #if __STD_C
4062 int mALLOPt(int param_number, int value)
4063 #else
4064 int mALLOPt(param_number, value) int param_number; int value;
4065 #endif
4067 switch(param_number)
4069 case M_TRIM_THRESHOLD:
4070 trim_threshold = value; return 1;
4071 case M_TOP_PAD:
4072 top_pad = value; return 1;
4073 case M_MMAP_THRESHOLD:
4074 #ifndef NO_THREADS
4075 /* Forbid setting the threshold too high. */
4076 if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
4077 #endif
4078 mmap_threshold = value; return 1;
4079 case M_MMAP_MAX:
4080 #if HAVE_MMAP
4081 n_mmaps_max = value; return 1;
4082 #else
4083 if (value != 0) return 0; else n_mmaps_max = value; return 1;
4084 #endif
4085 case M_CHECK_ACTION:
4086 check_action = value; return 1;
4088 default:
4089 return 0;
4095 /* Get/set state: malloc_get_state() records the current state of all
4096 malloc variables (_except_ for the actual heap contents and `hook'
4097 function pointers) in a system dependent, opaque data structure.
4098 This data structure is dynamically allocated and can be free()d
4099 after use. malloc_set_state() restores the state of all malloc
4100 variables to the previously obtained state. This is especially
4101 useful when using this malloc as part of a shared library, and when
4102 the heap contents are saved/restored via some other method. The
4103 primary example for this is GNU Emacs with its `dumping' procedure.
4104 `Hook' function pointers are never saved or restored by these
4105 functions, with two exceptions: If malloc checking was in use when
4106 malloc_get_state() was called, then malloc_set_state() calls
4107 __malloc_check_init() if possible; if malloc checking was not in
4108 use in the recorded state but the user requested malloc checking,
4109 then the hooks are reset to 0. */
4111 #define MALLOC_STATE_MAGIC 0x444c4541l
4112 #define MALLOC_STATE_VERSION (0*0x100l + 1l) /* major*0x100 + minor */
4114 struct malloc_state {
4115 long magic;
4116 long version;
4117 mbinptr av[NAV * 2 + 2];
4118 char* sbrk_base;
4119 int sbrked_mem_bytes;
4120 unsigned long trim_threshold;
4121 unsigned long top_pad;
4122 unsigned int n_mmaps_max;
4123 unsigned long mmap_threshold;
4124 int check_action;
4125 unsigned long max_sbrked_mem;
4126 unsigned long max_total_mem;
4127 unsigned int n_mmaps;
4128 unsigned int max_n_mmaps;
4129 unsigned long mmapped_mem;
4130 unsigned long max_mmapped_mem;
4131 int using_malloc_checking;
4134 Void_t*
4135 mALLOC_GET_STATe()
4137 struct malloc_state* ms;
4138 int i;
4139 mbinptr b;
4141 ms = (struct malloc_state*)mALLOc(sizeof(*ms));
4142 if (!ms)
4143 return 0;
4144 (void)mutex_lock(&main_arena.mutex);
4145 ms->magic = MALLOC_STATE_MAGIC;
4146 ms->version = MALLOC_STATE_VERSION;
4147 ms->av[0] = main_arena.av[0];
4148 ms->av[1] = main_arena.av[1];
4149 for(i=0; i<NAV; i++) {
4150 b = bin_at(&main_arena, i);
4151 if(first(b) == b)
4152 ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */
4153 else {
4154 ms->av[2*i+2] = first(b);
4155 ms->av[2*i+3] = last(b);
4158 ms->sbrk_base = sbrk_base;
4159 ms->sbrked_mem_bytes = sbrked_mem;
4160 ms->trim_threshold = trim_threshold;
4161 ms->top_pad = top_pad;
4162 ms->n_mmaps_max = n_mmaps_max;
4163 ms->mmap_threshold = mmap_threshold;
4164 ms->check_action = check_action;
4165 ms->max_sbrked_mem = max_sbrked_mem;
4166 #ifdef NO_THREADS
4167 ms->max_total_mem = max_total_mem;
4168 #else
4169 ms->max_total_mem = 0;
4170 #endif
4171 ms->n_mmaps = n_mmaps;
4172 ms->max_n_mmaps = max_n_mmaps;
4173 ms->mmapped_mem = mmapped_mem;
4174 ms->max_mmapped_mem = max_mmapped_mem;
4175 #if defined _LIBC || defined MALLOC_HOOKS
4176 ms->using_malloc_checking = using_malloc_checking;
4177 #else
4178 ms->using_malloc_checking = 0;
4179 #endif
4180 (void)mutex_unlock(&main_arena.mutex);
4181 return (Void_t*)ms;
4185 #if __STD_C
4186 mALLOC_SET_STATe(Void_t* msptr)
4187 #else
4188 mALLOC_SET_STATe(msptr) Void_t* msptr;
4189 #endif
4191 struct malloc_state* ms = (struct malloc_state*)msptr;
4192 int i;
4193 mbinptr b;
4195 #if defined _LIBC || defined MALLOC_HOOKS
4196 disallow_malloc_check = 1;
4197 #endif
4198 ptmalloc_init();
4199 if(ms->magic != MALLOC_STATE_MAGIC) return -1;
4200 /* Must fail if the major version is too high. */
4201 if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2;
4202 (void)mutex_lock(&main_arena.mutex);
4203 main_arena.av[0] = ms->av[0];
4204 main_arena.av[1] = ms->av[1];
4205 for(i=0; i<NAV; i++) {
4206 b = bin_at(&main_arena, i);
4207 if(ms->av[2*i+2] == 0)
4208 first(b) = last(b) = b;
4209 else {
4210 first(b) = ms->av[2*i+2];
4211 last(b) = ms->av[2*i+3];
4212 if(i > 0) {
4213 /* Make sure the links to the `av'-bins in the heap are correct. */
4214 first(b)->bk = b;
4215 last(b)->fd = b;
4219 sbrk_base = ms->sbrk_base;
4220 sbrked_mem = ms->sbrked_mem_bytes;
4221 trim_threshold = ms->trim_threshold;
4222 top_pad = ms->top_pad;
4223 n_mmaps_max = ms->n_mmaps_max;
4224 mmap_threshold = ms->mmap_threshold;
4225 check_action = ms->check_action;
4226 max_sbrked_mem = ms->max_sbrked_mem;
4227 #ifdef NO_THREADS
4228 max_total_mem = ms->max_total_mem;
4229 #endif
4230 n_mmaps = ms->n_mmaps;
4231 max_n_mmaps = ms->max_n_mmaps;
4232 mmapped_mem = ms->mmapped_mem;
4233 max_mmapped_mem = ms->max_mmapped_mem;
4234 /* add version-dependent code here */
4235 if (ms->version >= 1) {
4236 #if defined _LIBC || defined MALLOC_HOOKS
4237 /* Check whether it is safe to enable malloc checking, or whether
4238 it is necessary to disable it. */
4239 if (ms->using_malloc_checking && !using_malloc_checking &&
4240 !disallow_malloc_check)
4241 __malloc_check_init ();
4242 else if (!ms->using_malloc_checking && using_malloc_checking) {
4243 __malloc_hook = 0;
4244 __free_hook = 0;
4245 __realloc_hook = 0;
4246 __memalign_hook = 0;
4247 using_malloc_checking = 0;
4249 #endif
4252 (void)mutex_unlock(&main_arena.mutex);
4253 return 0;
4258 #if defined _LIBC || defined MALLOC_HOOKS
4260 /* A simple, standard set of debugging hooks. Overhead is `only' one
4261 byte per chunk; still this will catch most cases of double frees or
4262 overruns. The goal here is to avoid obscure crashes due to invalid
4263 usage, unlike in the MALLOC_DEBUG code. */
4265 #define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )
4267 /* Instrument a chunk with overrun detector byte(s) and convert it
4268 into a user pointer with requested size sz. */
4270 static Void_t*
4271 internal_function
4272 #if __STD_C
4273 chunk2mem_check(mchunkptr p, size_t sz)
4274 #else
4275 chunk2mem_check(p, sz) mchunkptr p; size_t sz;
4276 #endif
4278 unsigned char* m_ptr = (unsigned char*)chunk2mem(p);
4279 size_t i;
4281 for(i = chunksize(p) - (chunk_is_mmapped(p) ? 2*SIZE_SZ+1 : SIZE_SZ+1);
4282 i > sz;
4283 i -= 0xFF) {
4284 if(i-sz < 0x100) {
4285 m_ptr[i] = (unsigned char)(i-sz);
4286 break;
4288 m_ptr[i] = 0xFF;
4290 m_ptr[sz] = MAGICBYTE(p);
4291 return (Void_t*)m_ptr;
4294 /* Convert a pointer to be free()d or realloc()ed to a valid chunk
4295 pointer. If the provided pointer is not valid, return NULL. */
4297 static mchunkptr
4298 internal_function
4299 #if __STD_C
4300 mem2chunk_check(Void_t* mem)
4301 #else
4302 mem2chunk_check(mem) Void_t* mem;
4303 #endif
4305 mchunkptr p;
4306 INTERNAL_SIZE_T sz, c;
4307 unsigned char magic;
4309 p = mem2chunk(mem);
4310 if(!aligned_OK(p)) return NULL;
4311 if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
4312 /* Must be a chunk in conventional heap memory. */
4313 if(chunk_is_mmapped(p) ||
4314 ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
4315 sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
4316 ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
4317 (long)prev_chunk(p)<(long)sbrk_base ||
4318 next_chunk(prev_chunk(p))!=p) ))
4319 return NULL;
4320 magic = MAGICBYTE(p);
4321 for(sz += SIZE_SZ-1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4322 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4324 ((unsigned char*)p)[sz] ^= 0xFF;
4325 } else {
4326 unsigned long offset, page_mask = malloc_getpagesize-1;
4328 /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
4329 alignment relative to the beginning of a page. Check this
4330 first. */
4331 offset = (unsigned long)mem & page_mask;
4332 if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
4333 offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
4334 offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 &&
4335 offset<0x2000) ||
4336 !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
4337 ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
4338 ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
4339 return NULL;
4340 magic = MAGICBYTE(p);
4341 for(sz -= 1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4342 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4344 ((unsigned char*)p)[sz] ^= 0xFF;
4346 return p;
4349 /* Check for corruption of the top chunk, and try to recover if
4350 necessary. */
4352 static int
4353 internal_function
4354 #if __STD_C
4355 top_check(void)
4356 #else
4357 top_check()
4358 #endif
4360 mchunkptr t = top(&main_arena);
4361 char* brk, * new_brk;
4362 INTERNAL_SIZE_T front_misalign, sbrk_size;
4363 unsigned long pagesz = malloc_getpagesize;
4365 if((char*)t + chunksize(t) == sbrk_base + sbrked_mem ||
4366 t == initial_top(&main_arena)) return 0;
4368 if(check_action & 1)
4369 fprintf(stderr, "malloc: top chunk is corrupt\n");
4370 if(check_action & 2)
4371 abort();
4373 /* Try to set up a new top chunk. */
4374 brk = MORECORE(0);
4375 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
4376 if (front_misalign > 0)
4377 front_misalign = MALLOC_ALIGNMENT - front_misalign;
4378 sbrk_size = front_misalign + top_pad + MINSIZE;
4379 sbrk_size += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
4380 new_brk = (char*)(MORECORE (sbrk_size));
4381 if (new_brk == (char*)(MORECORE_FAILURE)) return -1;
4382 sbrked_mem = (new_brk - sbrk_base) + sbrk_size;
4384 top(&main_arena) = (mchunkptr)(brk + front_misalign);
4385 set_head(top(&main_arena), (sbrk_size - front_misalign) | PREV_INUSE);
4387 return 0;
4390 static Void_t*
4391 #if __STD_C
4392 malloc_check(size_t sz, const Void_t *caller)
4393 #else
4394 malloc_check(sz, caller) size_t sz; const Void_t *caller;
4395 #endif
4397 mchunkptr victim;
4398 INTERNAL_SIZE_T nb;
4400 if(request2size(sz+1, nb))
4401 return 0;
4402 (void)mutex_lock(&main_arena.mutex);
4403 victim = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4404 (void)mutex_unlock(&main_arena.mutex);
4405 if(!victim) return NULL;
4406 return chunk2mem_check(victim, sz);
4409 static void
4410 #if __STD_C
4411 free_check(Void_t* mem, const Void_t *caller)
4412 #else
4413 free_check(mem, caller) Void_t* mem; const Void_t *caller;
4414 #endif
4416 mchunkptr p;
4418 if(!mem) return;
4419 (void)mutex_lock(&main_arena.mutex);
4420 p = mem2chunk_check(mem);
4421 if(!p) {
4422 (void)mutex_unlock(&main_arena.mutex);
4423 if(check_action & 1)
4424 fprintf(stderr, "free(): invalid pointer %p!\n", mem);
4425 if(check_action & 2)
4426 abort();
4427 return;
4429 #if HAVE_MMAP
4430 if (chunk_is_mmapped(p)) {
4431 (void)mutex_unlock(&main_arena.mutex);
4432 munmap_chunk(p);
4433 return;
4435 #endif
4436 #if 0 /* Erase freed memory. */
4437 memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
4438 #endif
4439 chunk_free(&main_arena, p);
4440 (void)mutex_unlock(&main_arena.mutex);
4443 static Void_t*
4444 #if __STD_C
4445 realloc_check(Void_t* oldmem, size_t bytes, const Void_t *caller)
4446 #else
4447 realloc_check(oldmem, bytes, caller)
4448 Void_t* oldmem; size_t bytes; const Void_t *caller;
4449 #endif
4451 mchunkptr oldp, newp;
4452 INTERNAL_SIZE_T nb, oldsize;
4454 if (oldmem == 0) return malloc_check(bytes, NULL);
4455 (void)mutex_lock(&main_arena.mutex);
4456 oldp = mem2chunk_check(oldmem);
4457 if(!oldp) {
4458 (void)mutex_unlock(&main_arena.mutex);
4459 if (check_action & 1)
4460 fprintf(stderr, "realloc(): invalid pointer %p!\n", oldmem);
4461 if (check_action & 2)
4462 abort();
4463 return malloc_check(bytes, NULL);
4465 oldsize = chunksize(oldp);
4467 if(request2size(bytes+1, nb)) {
4468 (void)mutex_unlock(&main_arena.mutex);
4469 return 0;
4472 #if HAVE_MMAP
4473 if (chunk_is_mmapped(oldp)) {
4474 #if HAVE_MREMAP
4475 newp = mremap_chunk(oldp, nb);
4476 if(!newp) {
4477 #endif
4478 /* Note the extra SIZE_SZ overhead. */
4479 if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
4480 else {
4481 /* Must alloc, copy, free. */
4482 newp = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4483 if (newp) {
4484 MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - 2*SIZE_SZ);
4485 munmap_chunk(oldp);
4488 #if HAVE_MREMAP
4490 #endif
4491 } else {
4492 #endif /* HAVE_MMAP */
4493 newp = (top_check() >= 0) ?
4494 chunk_realloc(&main_arena, oldp, oldsize, nb) : NULL;
4495 #if 0 /* Erase freed memory. */
4496 nb = chunksize(newp);
4497 if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
4498 memset((char*)oldmem + 2*sizeof(mbinptr), 0,
4499 oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
4500 } else if(nb > oldsize+SIZE_SZ) {
4501 memset((char*)chunk2mem(newp) + oldsize, 0, nb - (oldsize+SIZE_SZ));
4503 #endif
4504 #if HAVE_MMAP
4506 #endif
4507 (void)mutex_unlock(&main_arena.mutex);
4509 if(!newp) return NULL;
4510 return chunk2mem_check(newp, bytes);
4513 static Void_t*
4514 #if __STD_C
4515 memalign_check(size_t alignment, size_t bytes, const Void_t *caller)
4516 #else
4517 memalign_check(alignment, bytes, caller)
4518 size_t alignment; size_t bytes; const Void_t *caller;
4519 #endif
4521 INTERNAL_SIZE_T nb;
4522 mchunkptr p;
4524 if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes, NULL);
4525 if (alignment < MINSIZE) alignment = MINSIZE;
4527 if(request2size(bytes+1, nb))
4528 return 0;
4529 (void)mutex_lock(&main_arena.mutex);
4530 p = (top_check() >= 0) ? chunk_align(&main_arena, nb, alignment) : NULL;
4531 (void)mutex_unlock(&main_arena.mutex);
4532 if(!p) return NULL;
4533 return chunk2mem_check(p, bytes);
4536 #ifndef NO_THREADS
4538 /* The following hooks are used when the global initialization in
4539 ptmalloc_init() hasn't completed yet. */
4541 static Void_t*
4542 #if __STD_C
4543 malloc_starter(size_t sz, const Void_t *caller)
4544 #else
4545 malloc_starter(sz, caller) size_t sz; const Void_t *caller;
4546 #endif
4548 INTERNAL_SIZE_T nb;
4549 mchunkptr victim;
4551 if(request2size(sz, nb))
4552 return 0;
4553 victim = chunk_alloc(&main_arena, nb);
4555 return victim ? chunk2mem(victim) : 0;
4558 static void
4559 #if __STD_C
4560 free_starter(Void_t* mem, const Void_t *caller)
4561 #else
4562 free_starter(mem, caller) Void_t* mem; const Void_t *caller;
4563 #endif
4565 mchunkptr p;
4567 if(!mem) return;
4568 p = mem2chunk(mem);
4569 #if HAVE_MMAP
4570 if (chunk_is_mmapped(p)) {
4571 munmap_chunk(p);
4572 return;
4574 #endif
4575 chunk_free(&main_arena, p);
4578 /* The following hooks are used while the `atfork' handling mechanism
4579 is active. */
4581 static Void_t*
4582 #if __STD_C
4583 malloc_atfork (size_t sz, const Void_t *caller)
4584 #else
4585 malloc_atfork(sz, caller) size_t sz; const Void_t *caller;
4586 #endif
4588 Void_t *vptr = NULL;
4589 INTERNAL_SIZE_T nb;
4590 mchunkptr victim;
4592 tsd_getspecific(arena_key, vptr);
4593 if(!vptr) {
4594 if(save_malloc_hook != malloc_check) {
4595 if(request2size(sz, nb))
4596 return 0;
4597 victim = chunk_alloc(&main_arena, nb);
4598 return victim ? chunk2mem(victim) : 0;
4599 } else {
4600 if(top_check()<0 || request2size(sz+1, nb))
4601 return 0;
4602 victim = chunk_alloc(&main_arena, nb);
4603 return victim ? chunk2mem_check(victim, sz) : 0;
4605 } else {
4606 /* Suspend the thread until the `atfork' handlers have completed.
4607 By that time, the hooks will have been reset as well, so that
4608 mALLOc() can be used again. */
4609 (void)mutex_lock(&list_lock);
4610 (void)mutex_unlock(&list_lock);
4611 return mALLOc(sz);
4615 static void
4616 #if __STD_C
4617 free_atfork(Void_t* mem, const Void_t *caller)
4618 #else
4619 free_atfork(mem, caller) Void_t* mem; const Void_t *caller;
4620 #endif
4622 Void_t *vptr = NULL;
4623 arena *ar_ptr;
4624 mchunkptr p; /* chunk corresponding to mem */
4626 if (mem == 0) /* free(0) has no effect */
4627 return;
4629 p = mem2chunk(mem); /* do not bother to replicate free_check here */
4631 #if HAVE_MMAP
4632 if (chunk_is_mmapped(p)) /* release mmapped memory. */
4634 munmap_chunk(p);
4635 return;
4637 #endif
4639 ar_ptr = arena_for_ptr(p);
4640 tsd_getspecific(arena_key, vptr);
4641 if(vptr)
4642 (void)mutex_lock(&ar_ptr->mutex);
4643 chunk_free(ar_ptr, p);
4644 if(vptr)
4645 (void)mutex_unlock(&ar_ptr->mutex);
4648 #endif /* !defined NO_THREADS */
4650 #endif /* defined _LIBC || defined MALLOC_HOOKS */
4654 #ifdef _LIBC
4655 weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
4656 weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
4657 weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
4658 weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
4659 weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
4660 weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
4661 weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
4662 weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
4663 weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
4664 weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
4666 weak_alias (__malloc_stats, malloc_stats)
4667 weak_alias (__malloc_usable_size, malloc_usable_size)
4668 weak_alias (__malloc_trim, malloc_trim)
4669 weak_alias (__malloc_get_state, malloc_get_state)
4670 weak_alias (__malloc_set_state, malloc_set_state)
4671 #endif
4675 History:
4677 V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4678 * Added malloc_get/set_state() (mainly for use in GNU emacs),
4679 using interface from Marcus Daniels
4680 * All parameters are now adjustable via environment variables
4682 V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4683 * Added debugging hooks
4684 * Fixed possible deadlock in realloc() when out of memory
4685 * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
4687 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4688 * Very minor updates from the released 2.6.4 version.
4689 * Trimmed include file down to exported data structures.
4690 * Changes from H.J. Lu for glibc-2.0.
4692 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4693 * Many changes for multiple threads
4694 * Introduced arenas and heaps
4696 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
4697 * Added pvalloc, as recommended by H.J. Liu
4698 * Added 64bit pointer support mainly from Wolfram Gloger
4699 * Added anonymously donated WIN32 sbrk emulation
4700 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4701 * malloc_extend_top: fix mask error that caused wastage after
4702 foreign sbrks
4703 * Add linux mremap support code from HJ Liu
4705 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4706 * Integrated most documentation with the code.
4707 * Add support for mmap, with help from
4708 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4709 * Use last_remainder in more cases.
4710 * Pack bins using idea from colin@nyx10.cs.du.edu
4711 * Use ordered bins instead of best-fit threshold
4712 * Eliminate block-local decls to simplify tracing and debugging.
4713 * Support another case of realloc via move into top
4714 * Fix error occurring when initial sbrk_base not word-aligned.
4715 * Rely on page size for units instead of SBRK_UNIT to
4716 avoid surprises about sbrk alignment conventions.
4717 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4718 (raymond@es.ele.tue.nl) for the suggestion.
4719 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4720 * More precautions for cases where other routines call sbrk,
4721 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4722 * Added macros etc., allowing use in linux libc from
4723 H.J. Lu (hjl@gnu.ai.mit.edu)
4724 * Inverted this history list
4726 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
4727 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4728 * Removed all preallocation code since under current scheme
4729 the work required to undo bad preallocations exceeds
4730 the work saved in good cases for most test programs.
4731 * No longer use return list or unconsolidated bins since
4732 no scheme using them consistently outperforms those that don't
4733 given above changes.
4734 * Use best fit for very large chunks to prevent some worst-cases.
4735 * Added some support for debugging
4737 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
4738 * Removed footers when chunks are in use. Thanks to
4739 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4741 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
4742 * Added malloc_trim, with help from Wolfram Gloger
4743 (wmglo@Dent.MED.Uni-Muenchen.DE).
4745 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
4747 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
4748 * realloc: try to expand in both directions
4749 * malloc: swap order of clean-bin strategy;
4750 * realloc: only conditionally expand backwards
4751 * Try not to scavenge used bins
4752 * Use bin counts as a guide to preallocation
4753 * Occasionally bin return list chunks in first scan
4754 * Add a few optimizations from colin@nyx10.cs.du.edu
4756 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
4757 * faster bin computation & slightly different binning
4758 * merged all consolidations to one part of malloc proper
4759 (eliminating old malloc_find_space & malloc_clean_bin)
4760 * Scan 2 returns chunks (not just 1)
4761 * Propagate failure in realloc if malloc returns 0
4762 * Add stuff to allow compilation on non-ANSI compilers
4763 from kpv@research.att.com
4765 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
4766 * removed potential for odd address access in prev_chunk
4767 * removed dependency on getpagesize.h
4768 * misc cosmetics and a bit more internal documentation
4769 * anticosmetics: mangled names in macros to evade debugger strangeness
4770 * tested on sparc, hp-700, dec-mips, rs6000
4771 with gcc & native cc (hp, dec only) allowing
4772 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4774 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
4775 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4776 structure of old version, but most details differ.)