1 /* Malloc implementation for multiple threads without lock contention.
2 Copyright (C) 1996, 1997, 1998, 1999, 2000 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
22 /* $Id$
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
27 ftp://g.oswego.edu/pub/misc/malloc.c
29 Most of the original comments are reproduced in the code below.
31 * Why use this malloc?
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
48 * Synopsis of public routines
50 (Much fuller descriptions are contained in the program documentation below.)
52 ptmalloc_init();
53 Initialize global configuration. When compiled for multiple threads,
54 this function must be called once before any other function in the
55 package. It is not required otherwise. It is called automatically
56 in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
57 malloc(size_t n);
58 Return a pointer to a newly allocated chunk of at least n bytes, or null
59 if no space is available.
60 free(Void_t* p);
61 Release the chunk of memory pointed to by p, or no effect if p is null.
62 realloc(Void_t* p, size_t n);
63 Return a pointer to a chunk of size n that contains the same data
64 as does chunk p up to the minimum of (n, p's size) bytes, or null
65 if no space is available. The returned pointer may or may not be
66 the same as p. If p is null, equivalent to malloc. Unless the
67 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
68 size argument of zero (re)allocates a minimum-sized chunk.
69 memalign(size_t alignment, size_t n);
70 Return a pointer to a newly allocated chunk of n bytes, aligned
71 in accord with the alignment argument, which must be a power of
72 two.
73 valloc(size_t n);
74 Equivalent to memalign(pagesize, n), where pagesize is the page
75 size of the system (or as near to this as can be figured out from
76 all the includes/defines below.)
77 pvalloc(size_t n);
78 Equivalent to valloc(minimum-page-that-holds(n)), that is,
79 round up n to nearest pagesize.
80 calloc(size_t unit, size_t quantity);
81 Returns a pointer to quantity * unit bytes, with all locations
82 set to zero.
83 cfree(Void_t* p);
84 Equivalent to free(p).
85 malloc_trim(size_t pad);
86 Release all but pad bytes of freed top-most memory back
87 to the system. Return 1 if successful, else 0.
88 malloc_usable_size(Void_t* p);
89 Report the number of usable allocated bytes associated with allocated
90 chunk p. This may or may not report more bytes than were requested,
91 due to alignment and minimum size constraints.
92 malloc_stats();
93 Prints brief summary statistics on stderr.
94 mallinfo()
95 Returns (by copy) a struct containing various summary statistics.
96 mallopt(int parameter_number, int parameter_value)
97 Changes one of the tunable parameters described below. Returns
98 1 if successful in changing the parameter, else 0.
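
  A minimal usage sketch of the routines above (an illustrative addition, not
  part of the original text; it assumes the usual <malloc.h> declarations of
  mallopt, malloc_trim, malloc_usable_size and malloc_stats):

    #include <stdio.h>
    #include <stdlib.h>
    #include <malloc.h>

    int main(void)
    {
      char *p = malloc(100);          // at least 100 usable bytes, or NULL
      char *q;
      if (p == NULL) return 1;
      q = realloc(p, 200);            // may move the block, contents preserved
      if (q == NULL) { free(p); return 1; }
      p = q;
      printf("usable bytes: %lu\n", (unsigned long) malloc_usable_size(p));
      free(p);

      mallopt(M_TRIM_THRESHOLD, 64*1024); // tune a parameter; returns 1 on success
      malloc_trim(0);                     // hand unused top-most memory back
      malloc_stats();                     // brief summary statistics on stderr
      return 0;
    }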
100 * Vital statistics:
102 Alignment: 8-byte
103 8 byte alignment is currently hardwired into the design. This
104 seems to suffice for all current machines and C compilers.
106 Assumed pointer representation: 4 or 8 bytes
107 Code for 8-byte pointers is untested by me but is reported to work
108 reliably by Wolfram Gloger, who contributed most of the
109 changes supporting this.
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
119 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
121 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
122 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
138 Requests for sizes with a negative sign bit will return a
139 minimum-sized chunk.
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
143 Alignment demands, plus the minimum allocatable size restriction
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
145 more bytes will be allocated than were requested in malloc), with
146 two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
153 * Limitations
155 Here are some features that are NOT currently supported
157 * No automated mechanism for fully checking that all accesses
158 to malloced memory stay within their bounds.
159 * No support for compaction.
161 * Synopsis of compile-time options:
163 People have reported using previous versions of this malloc on all
164 versions of Unix, sometimes by tweaking some of the defines
165 below. It has been tested most extensively on Solaris and
166 Linux. People have also reported adapting this malloc for use in
167 stand-alone embedded systems.
169 The implementation is in straight, hand-tuned ANSI C. Among other
170 consequences, it uses a lot of macros. Because of this, to be at
171 all usable, this code should be compiled using an optimizing compiler
172 (for example gcc -O2) that can simplify expressions and control
173 paths.
175 __STD_C (default: derived from C compiler defines)
176 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
177 a C compiler sufficiently close to ANSI to get away with it.
178 MALLOC_DEBUG (default: NOT defined)
179 Define to enable debugging. Adds fairly extensive assertion-based
180 checking to help track down memory errors, but noticeably slows down
181 execution.
182 MALLOC_HOOKS (default: NOT defined)
183 Define to enable support for run-time replacement of the allocation
184 functions through user-defined `hooks'.
185 REALLOC_ZERO_BYTES_FREES (default: defined)
186 Define this if you think that realloc(p, 0) should be equivalent
187 to free(p). (The C standard requires this behaviour, therefore
188 it is the default.) Otherwise, since malloc returns a unique
189 pointer for malloc(0), so does realloc(p, 0).
190 HAVE_MEMCPY (default: defined)
191 Define if you are not otherwise using ANSI STD C, but still
192 have memcpy and memset in your C library and want to use them.
193 Otherwise, simple internal versions are supplied.
194 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
195 Define as 1 if you want the C library versions of memset and
196 memcpy called in realloc and calloc (otherwise macro versions are used).
197 At least on some platforms, the simple macro versions usually
198 outperform libc versions.
199 HAVE_MMAP (default: defined as 1)
200 Define to non-zero to optionally make malloc() use mmap() to
201 allocate very large blocks.
202 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
203 Define to non-zero to optionally make realloc() use mremap() to
204 reallocate very large blocks.
205 USE_ARENAS (default: the same as HAVE_MMAP)
206 Enable support for multiple arenas, allocated using mmap().
207 malloc_getpagesize (default: derived from system #includes)
208 Either a constant or routine call returning the system page size.
209 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
210 Optionally define if you are on a system with a /usr/include/malloc.h
211 that declares struct mallinfo. It is not strictly necessary to
212 define this even if you do have such a header, but doing so ensures consistency.
213 INTERNAL_SIZE_T (default: size_t)
214 Define to a 32-bit type (probably `unsigned int') if you are on a
215 64-bit machine, yet do not want or need to allow malloc requests of
216 greater than 2^31 to be handled. This saves space, especially for
217 very small chunks.
218 _LIBC (default: NOT defined)
219 Defined only when compiled as part of the Linux libc/glibc.
220 Also note that there is some odd internal name-mangling via defines
221 (for example, internally, `malloc' is named `mALLOc') needed
222 when compiling in this case. These look funny but don't otherwise
223 affect anything.
224 LACKS_UNISTD_H (default: undefined)
225 Define this if your system does not have a <unistd.h>.
226 MORECORE (default: sbrk)
227 The name of the routine to call to obtain more memory from the system.
228 MORECORE_FAILURE (default: -1)
229 The value returned upon failure of MORECORE.
230 MORECORE_CLEARS (default 1)
231 The degree to which the routine mapped to MORECORE zeroes out
232 memory: never (0), only for newly allocated space (1) or always
233 (2). The distinction between (1) and (2) is necessary because on
234 some systems, if the application first decrements and then
235 increments the break value, the contents of the reallocated space
236 are unspecified.
237 DEFAULT_TRIM_THRESHOLD
238 DEFAULT_TOP_PAD
239 DEFAULT_MMAP_THRESHOLD
240 DEFAULT_MMAP_MAX
241 Default values of tunable parameters (described in detail below)
242 controlling interaction with host system routines (sbrk, mmap, etc).
243 These values may also be changed dynamically via mallopt(). The
244 preset defaults are those that give best performance for typical
245 programs/systems.
246 DEFAULT_CHECK_ACTION
247 When the standard debugging hooks are in place, and a pointer is
248 detected as corrupt, do nothing (0), print an error message (1),
249 or call abort() (2).
256 * Compile-time options for multiple threads:
258 USE_PTHREADS, USE_THR, USE_SPROC
259 Define one of these as 1 to select the thread interface:
260 POSIX threads, Solaris threads or SGI sproc's, respectively.
261 If none of these is defined as non-zero, you get a `normal'
262 malloc implementation which is not thread-safe. Support for
263 multiple threads requires HAVE_MMAP=1. As an exception, when
264 compiling for GNU libc, i.e. when _LIBC is defined, then none of
265 the USE_... symbols have to be defined.
267 HEAP_MIN_SIZE
268 HEAP_MAX_SIZE
269 When thread support is enabled, additional `heap's are created
270 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
271 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
272 of two for alignment reasons. HEAP_MAX_SIZE should be at least
273 twice as large as the mmap threshold.
274 THREAD_STATS
275 When this is defined as non-zero, some statistics on mutex locking
276 are computed.
283 /* Preliminaries */
285 #ifndef __STD_C
286 #if defined (__STDC__)
287 #define __STD_C 1
288 #else
289 #if __cplusplus
290 #define __STD_C 1
291 #else
292 #define __STD_C 0
293 #endif /*__cplusplus*/
294 #endif /*__STDC__*/
295 #endif /*__STD_C*/
297 #ifndef Void_t
298 #if __STD_C
299 #define Void_t void
300 #else
301 #define Void_t char
302 #endif
303 #endif /*Void_t*/
305 #if __STD_C
306 # include <stddef.h> /* for size_t */
307 # if defined _LIBC || defined MALLOC_HOOKS
308 # include <stdlib.h> /* for getenv(), abort() */
309 # endif
310 #else
311 # include <sys/types.h>
312 # if defined _LIBC || defined MALLOC_HOOKS
313 extern char* getenv();
314 # endif
315 #endif
317 #ifndef _LIBC
318 # define __secure_getenv(Str) getenv (Str)
319 #endif
321 /* Macros for handling mutexes and thread-specific data. This is
322 included early, because some thread-related header files (such as
323 pthread.h) should be included before any others. */
324 #include "thread-m.h"
326 #ifdef __cplusplus
327 extern "C" {
328 #endif
330 #include <errno.h>
331 #include <stdio.h> /* needed for malloc_stats */
335 Compile-time options
340 Debugging:
342 Because freed chunks may be overwritten with link fields, this
343 malloc will often die when freed memory is overwritten by user
344 programs. This can be very effective (albeit in an annoying way)
345 in helping track down dangling pointers.
347 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
348 enabled that will catch more memory errors. You probably won't be
349 able to make much sense of the actual assertion errors, but they
350 should help you locate incorrectly overwritten memory. The
351 checking is fairly extensive, and will slow down execution
352 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
353 attempt to check every non-mmapped allocated and free chunk in the
354 course of computing the summaries. (By nature, mmapped regions
355 cannot be checked very much automatically.)
357 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
358 this code. The assertions in the check routines spell out in more
359 detail the assumptions and invariants underlying the algorithms.
363 #if MALLOC_DEBUG
364 #include <assert.h>
365 #else
366 #define assert(x) ((void)0)
367 #endif
371 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
372 of chunk sizes. On a 64-bit machine, you can reduce malloc
373 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
374 at the expense of not being able to handle requests greater than
375 2^31. This limitation is hardly ever a concern; you are encouraged
376 to set this. However, the default version is the same as size_t.
379 #ifndef INTERNAL_SIZE_T
380 #define INTERNAL_SIZE_T size_t
381 #endif
384 REALLOC_ZERO_BYTES_FREES should be set if a call to realloc with
385 zero bytes should be the same as a call to free. The C standard
386 requires this. Otherwise, since this malloc returns a unique pointer
387 for malloc(0), so does realloc(p, 0).
391 #define REALLOC_ZERO_BYTES_FREES
395 HAVE_MEMCPY should be defined if you are not otherwise using
396 ANSI STD C, but still have memcpy and memset in your C library
397 and want to use them in calloc and realloc. Otherwise simple
398 macro versions are defined here.
400 USE_MEMCPY should be defined as 1 if you actually want to
401 have memset and memcpy called. People report that the macro
402 versions are often enough faster than libc versions on many
403 systems that it is better to use them.
407 #define HAVE_MEMCPY 1
409 #ifndef USE_MEMCPY
410 #ifdef HAVE_MEMCPY
411 #define USE_MEMCPY 1
412 #else
413 #define USE_MEMCPY 0
414 #endif
415 #endif
417 #if (__STD_C || defined(HAVE_MEMCPY))
419 #if __STD_C
420 void* memset(void*, int, size_t);
421 void* memcpy(void*, const void*, size_t);
422 #else
423 Void_t* memset();
424 Void_t* memcpy();
425 #endif
426 #endif
428 #if USE_MEMCPY
430 /* The following macros are only invoked with (2n+1)-multiples of
431 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
432 for fast inline execution when n is small. */
434 #define MALLOC_ZERO(charp, nbytes) \
435 do { \
436 INTERNAL_SIZE_T mzsz = (nbytes); \
437 if(mzsz <= 9*sizeof(mzsz)) { \
438 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
439 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
440 *mz++ = 0; \
441 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
442 *mz++ = 0; \
443 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
444 *mz++ = 0; }}} \
445 *mz++ = 0; \
446 *mz++ = 0; \
447 *mz = 0; \
448 } else memset((charp), 0, mzsz); \
449 } while(0)
451 #define MALLOC_COPY(dest,src,nbytes) \
452 do { \
453 INTERNAL_SIZE_T mcsz = (nbytes); \
454 if(mcsz <= 9*sizeof(mcsz)) { \
455 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
456 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
457 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
458 *mcdst++ = *mcsrc++; \
459 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
460 *mcdst++ = *mcsrc++; \
461 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
462 *mcdst++ = *mcsrc++; }}} \
463 *mcdst++ = *mcsrc++; \
464 *mcdst++ = *mcsrc++; \
465 *mcdst = *mcsrc ; \
466 } else memcpy(dest, src, mcsz); \
467 } while(0)
469 #else /* !USE_MEMCPY */
471 /* Use Duff's device for good zeroing/copying performance. */
473 #define MALLOC_ZERO(charp, nbytes) \
474 do { \
475 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
476 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
477 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
478 switch (mctmp) { \
479 case 0: for(;;) { *mzp++ = 0; \
480 case 7: *mzp++ = 0; \
481 case 6: *mzp++ = 0; \
482 case 5: *mzp++ = 0; \
483 case 4: *mzp++ = 0; \
484 case 3: *mzp++ = 0; \
485 case 2: *mzp++ = 0; \
486 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
488 } while(0)
490 #define MALLOC_COPY(dest,src,nbytes) \
491 do { \
492 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
493 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
494 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
495 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
496 switch (mctmp) { \
497 case 0: for(;;) { *mcdst++ = *mcsrc++; \
498 case 7: *mcdst++ = *mcsrc++; \
499 case 6: *mcdst++ = *mcsrc++; \
500 case 5: *mcdst++ = *mcsrc++; \
501 case 4: *mcdst++ = *mcsrc++; \
502 case 3: *mcdst++ = *mcsrc++; \
503 case 2: *mcdst++ = *mcsrc++; \
504 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
506 } while(0)
508 #endif
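/*
  The interleaved switch/loop above is Duff's device: the switch jumps into
  the middle of the unrolled body so that the (nwords % 8) leftover words are
  handled first, after which each further pass handles a full group of eight.
  A standalone sketch of the same idea (illustrative only, not part of the
  original source):

    #include <stddef.h>

    static void copy_words(size_t *dst, const size_t *src, size_t nwords)
    {
      size_t n = (nwords + 7) / 8;     // total number of loop iterations
      if (nwords == 0) return;
      switch (nwords % 8) {
      case 0: do { *dst++ = *src++;
      case 7:      *dst++ = *src++;
      case 6:      *dst++ = *src++;
      case 5:      *dst++ = *src++;
      case 4:      *dst++ = *src++;
      case 3:      *dst++ = *src++;
      case 2:      *dst++ = *src++;
      case 1:      *dst++ = *src++;
              } while (--n > 0);
      }
    }

  The macros above use a for(;;) with an explicit group counter instead of
  the do/while, but the control flow is essentially the same.
*/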
511 #ifndef LACKS_UNISTD_H
512 # include <unistd.h>
513 #endif
516 Define HAVE_MMAP to optionally make malloc() use mmap() to allocate
517 very large blocks. These will be returned to the operating system
518 immediately after a free(). HAVE_MMAP is also a prerequisite to
519 support multiple `arenas' (see USE_ARENAS below).
522 #ifndef HAVE_MMAP
523 # ifdef _POSIX_MAPPED_FILES
524 # define HAVE_MMAP 1
525 # endif
526 #endif
529 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
530 large blocks. This is currently only possible on Linux with
531 kernel versions newer than 1.3.77.
534 #ifndef HAVE_MREMAP
535 #define HAVE_MREMAP defined(__linux__) && !defined(__arm__)
536 #endif
538 /* Define USE_ARENAS to enable support for multiple `arenas'. These
539 are allocated using mmap(), are necessary for threads and
540 occasionally useful to overcome address space limitations affecting
541 sbrk(). */
543 #ifndef USE_ARENAS
544 #define USE_ARENAS HAVE_MMAP
545 #endif
547 #if HAVE_MMAP
549 #include <unistd.h>
550 #include <fcntl.h>
551 #include <sys/mman.h>
553 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
554 #define MAP_ANONYMOUS MAP_ANON
555 #endif
556 #if !defined(MAP_FAILED)
557 #define MAP_FAILED ((char*)-1)
558 #endif
560 #ifndef MAP_NORESERVE
561 # ifdef MAP_AUTORESRV
562 # define MAP_NORESERVE MAP_AUTORESRV
563 # else
564 # define MAP_NORESERVE 0
565 # endif
566 #endif
568 #endif /* HAVE_MMAP */
571 Access to system page size. To the extent possible, this malloc
572 manages memory from the system in page-size units.
574 The following mechanics for getpagesize were adapted from
575 bsd/gnu getpagesize.h
578 #ifndef malloc_getpagesize
579 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
580 # ifndef _SC_PAGE_SIZE
581 # define _SC_PAGE_SIZE _SC_PAGESIZE
582 # endif
583 # endif
584 # ifdef _SC_PAGE_SIZE
585 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
586 # else
587 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
588 extern size_t getpagesize();
589 # define malloc_getpagesize getpagesize()
590 # else
591 # include <sys/param.h>
592 # ifdef EXEC_PAGESIZE
593 # define malloc_getpagesize EXEC_PAGESIZE
594 # else
595 # ifdef NBPG
596 # ifndef CLSIZE
597 # define malloc_getpagesize NBPG
598 # else
599 # define malloc_getpagesize (NBPG * CLSIZE)
600 # endif
601 # else
602 # ifdef NBPC
603 # define malloc_getpagesize NBPC
604 # else
605 # ifdef PAGESIZE
606 # define malloc_getpagesize PAGESIZE
607 # else
608 # define malloc_getpagesize (4096) /* just guess */
609 # endif
610 # endif
611 # endif
612 # endif
613 # endif
614 # endif
615 #endif
621 This version of malloc supports the standard SVID/XPG mallinfo
622 routine that returns a struct containing the same kind of
623 information you can get from malloc_stats. It should work on
624 any SVID/XPG compliant system that has a /usr/include/malloc.h
625 defining struct mallinfo. (If you'd like to install such a thing
626 yourself, cut out the preliminary declarations as described above
627 and below and save them in a malloc.h file. But there's no
628 compelling reason to bother to do this.)
630 The main declaration needed is the mallinfo struct that is returned
631 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
632 bunch of fields, most of which are not even meaningful in this
633 version of malloc. Some of these fields are instead filled by
634 mallinfo() with other numbers that might possibly be of interest.
636 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
637 /usr/include/malloc.h file that includes a declaration of struct
638 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
639 version is declared below. These must be precisely the same for
640 mallinfo() to work.
644 /* #define HAVE_USR_INCLUDE_MALLOC_H */
646 #if HAVE_USR_INCLUDE_MALLOC_H
647 # include "/usr/include/malloc.h"
648 #else
649 # ifdef _LIBC
650 # include "malloc.h"
651 # else
652 # include "ptmalloc.h"
653 # endif
654 #endif
658 #ifndef DEFAULT_TRIM_THRESHOLD
659 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
660 #endif
663 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
664 to keep before releasing via malloc_trim in free().
666 Automatic trimming is mainly useful in long-lived programs.
667 Because trimming via sbrk can be slow on some systems, and can
668 sometimes be wasteful (in cases where programs immediately
669 afterward allocate more large chunks) the value should be high
670 enough so that your overall system performance would improve by
671 releasing.
673 The trim threshold and the mmap control parameters (see below)
674 can be traded off with one another. Trimming and mmapping are
675 two different ways of releasing unused memory back to the
676 system. Between these two, it is often possible to keep
677 system-level demands of a long-lived program down to a bare
678 minimum. For example, in one test suite of sessions measuring
679 the XF86 X server on Linux, using a trim threshold of 128K and a
680 mmap threshold of 192K led to near-minimal long term resource
681 consumption.
683 If you are using this malloc in a long-lived program, it should
684 pay to experiment with these values. As a rough guide, you
685 might set to a value close to the average size of a process
686 (program) running on your system. Releasing this much memory
687 would allow such a process to run in memory. Generally, it's
688 worth it to tune for trimming rather than memory mapping when a
689 program undergoes phases where several large chunks are
690 allocated and released in ways that can reuse each other's
691 storage, perhaps mixed with phases where there are no such
692 chunks at all. And in well-behaved long-lived programs,
693 controlling release of large blocks via trimming versus mapping
694 is usually faster.
696 However, in most programs, these parameters serve mainly as
697 protection against the system-level effects of carrying around
698 massive amounts of unneeded memory. Since frequent calls to
699 sbrk, mmap, and munmap otherwise degrade performance, the default
700 parameters are set to relatively high values that serve only as
701 safeguards.
703 The default trim value is high enough to cause trimming only in
704 fairly extreme (by current memory consumption standards) cases.
705 It must be greater than page size to have any useful effect. To
706 disable trimming completely, you can set to (unsigned long)(-1);
712 #ifndef DEFAULT_TOP_PAD
713 #define DEFAULT_TOP_PAD (0)
714 #endif
717 M_TOP_PAD is the amount of extra `padding' space to allocate or
718 retain whenever sbrk is called. It is used in two ways internally:
720 * When sbrk is called to extend the top of the arena to satisfy
721 a new malloc request, this much padding is added to the sbrk
722 request.
724 * When malloc_trim is called automatically from free(),
725 it is used as the `pad' argument.
727 In both cases, the actual amount of padding is rounded
728 so that the end of the arena is always a system page boundary.
730 The main reason for using padding is to avoid calling sbrk so
731 often. Having even a small pad greatly reduces the likelihood
732 that nearly every malloc request during program start-up (or
733 after trimming) will invoke sbrk, which needlessly wastes
734 time.
736 Automatic rounding-up to page-size units is normally sufficient
737 to avoid measurable overhead, so the default is 0. However, in
738 systems where sbrk is relatively slow, it can pay to increase
739 this value, at the expense of carrying around more memory than
740 the program needs.
745 #ifndef DEFAULT_MMAP_THRESHOLD
746 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
747 #endif
751 M_MMAP_THRESHOLD is the request size threshold for using mmap()
752 to service a request. Requests of at least this size that cannot
753 be allocated using already-existing space will be serviced via mmap.
754 (If enough normal freed space already exists it is used instead.)
756 Using mmap segregates relatively large chunks of memory so that
757 they can be individually obtained and released from the host
758 system. A request serviced through mmap is never reused by any
759 other request (at least not directly; the system may just so
760 happen to remap successive requests to the same locations).
762 Segregating space in this way has the benefit that mmapped space
763 can ALWAYS be individually released back to the system, which
764 helps keep the system level memory demands of a long-lived
765 program low. Mapped memory can never become `locked' between
766 other chunks, as can happen with normally allocated chunks, which
767 means that even trimming via malloc_trim would not release them.
769 However, it has the disadvantages that:
771 1. The space cannot be reclaimed, consolidated, and then
772 used to service later requests, as happens with normal chunks.
773 2. It can lead to more wastage because of mmap page alignment
774 requirements
775 3. It causes malloc performance to be more dependent on host
776 system memory management support routines which may vary in
777 implementation quality and may impose arbitrary
778 limitations. Generally, servicing a request via normal
779 malloc steps is faster than going through a system's mmap.
781 Altogether, these considerations should lead you to use mmap
782 only for relatively large requests.
789 #ifndef DEFAULT_MMAP_MAX
790 #if HAVE_MMAP
791 #define DEFAULT_MMAP_MAX (1024)
792 #else
793 #define DEFAULT_MMAP_MAX (0)
794 #endif
795 #endif
798 M_MMAP_MAX is the maximum number of requests to simultaneously
799 service using mmap. This parameter exists because:
801 1. Some systems have a limited number of internal tables for
802 use by mmap.
803 2. In most systems, overreliance on mmap can degrade overall
804 performance.
805 3. If a program allocates many large regions, it is probably
806 better off using normal sbrk-based allocation routines that
807 can reclaim and reallocate normal heap memory. Using a
808 small value allows transition into this mode after the
809 first few allocations.
811 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
812 the default value is 0, and attempts to set it to non-zero values
813 in mallopt will fail.
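
  As an illustrative sketch (not part of the original text), the tunables
  above can be adjusted from the program with mallopt(), using the M_*
  parameter names from the malloc.h/ptmalloc.h header this file includes:

    static void tune_malloc(void)
    {
      mallopt(M_TRIM_THRESHOLD, 256*1024); // trim only when >256K sits unused on top
      mallopt(M_TOP_PAD,        64*1024);  // ask sbrk for 64K of extra head-room
      mallopt(M_MMAP_THRESHOLD, 256*1024); // mmap() only requests of 256K and more
      mallopt(M_MMAP_MAX,       64);       // at most 64 simultaneous mmapped chunks
    }

  In the GNU libc build the same parameters can also be set without
  recompiling, through the MALLOC_TRIM_THRESHOLD_, MALLOC_TOP_PAD_,
  MALLOC_MMAP_THRESHOLD_ and MALLOC_MMAP_MAX_ environment variables read in
  ptmalloc_init() below.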
818 #ifndef DEFAULT_CHECK_ACTION
819 #define DEFAULT_CHECK_ACTION 1
820 #endif
822 /* What to do if the standard debugging hooks are in place and a
823 corrupt pointer is detected: do nothing (0), print an error message
824 (1), or call abort() (2). */
828 #define HEAP_MIN_SIZE (32*1024)
829 #define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
831 /* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
832 that are dynamically created for multi-threaded programs. The
833 maximum size must be a power of two, for fast determination of
834 which heap belongs to a chunk. It should be much larger than
835 the mmap threshold, so that requests with a size just below that
836 threshold can be fulfilled without creating too many heaps.
841 #ifndef THREAD_STATS
842 #define THREAD_STATS 0
843 #endif
845 /* If THREAD_STATS is non-zero, some statistics on mutex locking are
846 computed. */
849 /* Macro to set errno. */
850 #ifndef __set_errno
851 # define __set_errno(val) errno = (val)
852 #endif
854 /* On some platforms we can compile internal, not exported functions better.
855 Let the environment provide a macro and define it to be empty if it
856 is not available. */
857 #ifndef internal_function
858 # define internal_function
859 #endif
864 Special defines for the Linux/GNU C library.
869 #ifdef _LIBC
871 #if __STD_C
873 Void_t * __default_morecore (ptrdiff_t);
874 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
876 #else
878 Void_t * __default_morecore ();
879 Void_t *(*__morecore)() = __default_morecore;
881 #endif
883 #define MORECORE (*__morecore)
884 #define MORECORE_FAILURE 0
886 #ifndef MORECORE_CLEARS
887 #define MORECORE_CLEARS 1
888 #endif
890 static size_t __libc_pagesize;
892 #define mmap __mmap
893 #define munmap __munmap
894 #define mremap __mremap
895 #define mprotect __mprotect
896 #undef malloc_getpagesize
897 #define malloc_getpagesize __libc_pagesize
899 #else /* _LIBC */
901 #if __STD_C
902 extern Void_t* sbrk(ptrdiff_t);
903 #else
904 extern Void_t* sbrk();
905 #endif
907 #ifndef MORECORE
908 #define MORECORE sbrk
909 #endif
911 #ifndef MORECORE_FAILURE
912 #define MORECORE_FAILURE -1
913 #endif
915 #ifndef MORECORE_CLEARS
916 #define MORECORE_CLEARS 1
917 #endif
919 #endif /* _LIBC */
921 #ifdef _LIBC
923 #define cALLOc __libc_calloc
924 #define fREe __libc_free
925 #define mALLOc __libc_malloc
926 #define mEMALIGn __libc_memalign
927 #define rEALLOc __libc_realloc
928 #define vALLOc __libc_valloc
929 #define pvALLOc __libc_pvalloc
930 #define mALLINFo __libc_mallinfo
931 #define mALLOPt __libc_mallopt
932 #define mALLOC_STATs __malloc_stats
933 #define mALLOC_USABLE_SIZe __malloc_usable_size
934 #define mALLOC_TRIm __malloc_trim
935 #define mALLOC_GET_STATe __malloc_get_state
936 #define mALLOC_SET_STATe __malloc_set_state
938 #else
940 #define cALLOc calloc
941 #define fREe free
942 #define mALLOc malloc
943 #define mEMALIGn memalign
944 #define rEALLOc realloc
945 #define vALLOc valloc
946 #define pvALLOc pvalloc
947 #define mALLINFo mallinfo
948 #define mALLOPt mallopt
949 #define mALLOC_STATs malloc_stats
950 #define mALLOC_USABLE_SIZe malloc_usable_size
951 #define mALLOC_TRIm malloc_trim
952 #define mALLOC_GET_STATe malloc_get_state
953 #define mALLOC_SET_STATe malloc_set_state
955 #endif
957 /* Public routines */
959 #if __STD_C
961 #ifndef _LIBC
962 void ptmalloc_init(void);
963 #endif
964 Void_t* mALLOc(size_t);
965 void fREe(Void_t*);
966 Void_t* rEALLOc(Void_t*, size_t);
967 Void_t* mEMALIGn(size_t, size_t);
968 Void_t* vALLOc(size_t);
969 Void_t* pvALLOc(size_t);
970 Void_t* cALLOc(size_t, size_t);
971 void cfree(Void_t*);
972 int mALLOC_TRIm(size_t);
973 size_t mALLOC_USABLE_SIZe(Void_t*);
974 void mALLOC_STATs(void);
975 int mALLOPt(int, int);
976 struct mallinfo mALLINFo(void);
977 Void_t* mALLOC_GET_STATe(void);
978 int mALLOC_SET_STATe(Void_t*);
980 #else /* !__STD_C */
982 #ifndef _LIBC
983 void ptmalloc_init();
984 #endif
985 Void_t* mALLOc();
986 void fREe();
987 Void_t* rEALLOc();
988 Void_t* mEMALIGn();
989 Void_t* vALLOc();
990 Void_t* pvALLOc();
991 Void_t* cALLOc();
992 void cfree();
993 int mALLOC_TRIm();
994 size_t mALLOC_USABLE_SIZe();
995 void mALLOC_STATs();
996 int mALLOPt();
997 struct mallinfo mALLINFo();
998 Void_t* mALLOC_GET_STATe();
999 int mALLOC_SET_STATe();
1001 #endif /* __STD_C */
1004 #ifdef __cplusplus
1005 } /* end of extern "C" */
1006 #endif
1008 #if !defined(NO_THREADS) && !HAVE_MMAP
1009 "Can't have threads support without mmap"
1010 #endif
1011 #if USE_ARENAS && !HAVE_MMAP
1012 "Can't have multiple arenas without mmap"
1013 #endif
1017 Type declarations
1021 struct malloc_chunk
1023 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1024 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1025 struct malloc_chunk* fd; /* double links -- used only if free. */
1026 struct malloc_chunk* bk;
1029 typedef struct malloc_chunk* mchunkptr;
1033 malloc_chunk details:
1035 (The following includes lightly edited explanations by Colin Plumb.)
1037 Chunks of memory are maintained using a `boundary tag' method as
1038 described in e.g., Knuth or Standish. (See the paper by Paul
1039 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1040 survey of such techniques.) Sizes of free chunks are stored both
1041 in the front of each chunk and at the end. This makes
1042 consolidating fragmented chunks into bigger chunks very fast. The
1043 size fields also hold bits representing whether chunks are free or
1044 in use.
1046 An allocated chunk looks like this:
1049 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1050 | Size of previous chunk, if allocated | |
1051 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1052 | Size of chunk, in bytes |P|
1053 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1054 | User data starts here... .
1056 . (malloc_usable_space() bytes) .
1058 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1059 | Size of chunk |
1060 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1063 Where "chunk" is the front of the chunk for the purpose of most of
1064 the malloc code, but "mem" is the pointer that is returned to the
1065 user. "Nextchunk" is the beginning of the next contiguous chunk.
1067 Chunks always begin on even word boundaries, so the mem portion
1068 (which is returned to the user) is also on an even word boundary, and
1069 thus double-word aligned.
1071 Free chunks are stored in circular doubly-linked lists, and look like this:
1073 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1074 | Size of previous chunk |
1075 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1076 `head:' | Size of chunk, in bytes |P|
1077 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1078 | Forward pointer to next chunk in list |
1079 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1080 | Back pointer to previous chunk in list |
1081 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1082 | Unused space (may be 0 bytes long) .
1085 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1086 `foot:' | Size of chunk, in bytes |
1087 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1089 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1090 chunk size (which is always a multiple of two words), is an in-use
1091 bit for the *previous* chunk. If that bit is *clear*, then the
1092 word before the current chunk size contains the previous chunk
1093 size, and can be used to find the front of the previous chunk.
1094 (The very first chunk allocated always has this bit set,
1095 preventing access to non-existent (or non-owned) memory.)
1097 Note that the `foot' of the current chunk is actually represented
1098 as the prev_size of the NEXT chunk. (This makes it easier to
1099 deal with alignments etc).
1101 The two exceptions to all this are
1103 1. The special chunk `top', which doesn't bother using the
1104 trailing size field since there is no
1105 next contiguous chunk that would have to index off it. (After
1106 initialization, `top' is forced to always exist. If it would
1107 become less than MINSIZE bytes long, it is replenished via
1108 malloc_extend_top.)
1110 2. Chunks allocated via mmap, which have the second-lowest-order
1111 bit (IS_MMAPPED) set in their size fields. Because they are
1112 never merged or traversed from any other chunk, they have no
1113 foot size or inuse information.
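
    As an illustrative sketch (not part of the original text), here is what
    the boundary tags buy in plain pointer arithmetic; the next_chunk,
    prev_chunk, inuse and prev_inuse macros defined below do exactly this:

      mchunkptr next, prev;
      int p_in_use;

      next = (mchunkptr)((char*)p + (p->size & ~0x1UL)); // next_chunk(p)
      p_in_use = (next->size & 0x1UL) != 0;              // inuse(p): PREV_INUSE of next
      if (!(p->size & 0x1UL))                            // PREV_INUSE of p clear, so the
        prev = (mchunkptr)((char*)p - p->prev_size);     // chunk right before p is free

    (This only holds for normal chunks; mmapped chunks, as noted above, carry
    no boundary tags for their neighbours.)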
1115 Available chunks are kept in any of several places (all declared below):
1117 * `av': An array of chunks serving as bin headers for consolidated
1118 chunks. Each bin is doubly linked. The bins are approximately
1119 proportionally (log) spaced. There are a lot of these bins
1120 (128). This may look excessive, but works very well in
1121 practice. All procedures maintain the invariant that no
1122 consolidated chunk physically borders another one. Chunks in
1123 bins are kept in size order, with ties going to the
1124 approximately least recently used chunk.
1126 The chunks in each bin are maintained in decreasing sorted order by
1127 size. This is irrelevant for the small bins, which all contain
1128 the same-sized chunks, but facilitates best-fit allocation for
1129 larger chunks. (These lists are just sequential. Keeping them in
1130 order almost never requires enough traversal to warrant using
1131 fancier ordered data structures.) Chunks of the same size are
1132 linked with the most recently freed at the front, and allocations
1133 are taken from the back. This results in LRU or FIFO allocation
1134 order, which tends to give each chunk an equal opportunity to be
1135 consolidated with adjacent freed chunks, resulting in larger free
1136 chunks and less fragmentation.
1138 * `top': The top-most available chunk (i.e., the one bordering the
1139 end of available memory) is treated specially. It is never
1140 included in any bin, is used only if no other chunk is
1141 available, and is released back to the system if it is very
1142 large (see M_TRIM_THRESHOLD).
1144 * `last_remainder': A bin holding only the remainder of the
1145 most recently split (non-top) chunk. This bin is checked
1146 before other non-fitting chunks, so as to provide better
1147 locality for runs of sequentially allocated chunks.
1149 * Implicitly, through the host system's memory mapping tables.
1150 If supported, requests greater than a threshold are usually
1151 serviced via calls to mmap, and then later released via munmap.
1156 Bins
1158 The bins are an array of pairs of pointers serving as the
1159 heads of (initially empty) doubly-linked lists of chunks, laid out
1160 in a way so that each pair can be treated as if it were in a
1161 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1162 and chunks are the same).
1164 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1165 8 bytes apart. Larger bins are approximately logarithmically
1166 spaced. (See the table below.)
1168 Bin layout:
1170 64 bins of size 8
1171 32 bins of size 64
1172 16 bins of size 512
1173 8 bins of size 4096
1174 4 bins of size 32768
1175 2 bins of size 262144
1176 1 bin of size what's left
1178 There is actually a little bit of slop in the numbers in bin_index
1179 for the sake of speed. This makes no difference elsewhere.
1181 The special chunks `top' and `last_remainder' get their own bins,
1182 (this is implemented via yet more trickery with the av array),
1183 although `top' is never properly linked to its bin since it is
1184 always handled specially.
1188 #define NAV 128 /* number of bins */
1190 typedef struct malloc_chunk* mbinptr;
1192 /* An arena is a configuration of malloc_chunks together with an array
1193 of bins. With multiple threads, it must be locked via a mutex
1194 before changing its data structures. One or more `heaps' are
1195 associated with each arena, except for the main_arena, which is
1196 associated only with the `main heap', i.e. the conventional free
1197 store obtained with calls to MORECORE() (usually sbrk). The `av'
1198 array is never mentioned directly in the code, but instead used via
1199 bin access macros. */
1201 typedef struct _arena {
1202 mbinptr av[2*NAV + 2];
1203 struct _arena *next;
1204 size_t size;
1205 #if THREAD_STATS
1206 long stat_lock_direct, stat_lock_loop, stat_lock_wait;
1207 #endif
1208 mutex_t mutex;
1209 } arena;
1212 /* A heap is a single contiguous memory region holding (coalesceable)
1213 malloc_chunks. It is allocated with mmap() and always starts at an
1214 address aligned to HEAP_MAX_SIZE. Not used unless compiling with
1215 USE_ARENAS. */
1217 typedef struct _heap_info {
1218 arena *ar_ptr; /* Arena for this heap. */
1219 struct _heap_info *prev; /* Previous heap. */
1220 size_t size; /* Current size in bytes. */
1221 size_t pad; /* Make sure the following data is properly aligned. */
1222 } heap_info;
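/*
  Since each heap starts at an address aligned to HEAP_MAX_SIZE, the
  heap_info header (and through it the owning arena) of any chunk living in
  such a heap can be recovered by masking the chunk's address. An
  illustrative sketch of that lookup (not a definitive copy of the macros the
  ptmalloc sources actually use for this):

    heap_info *h  = (heap_info *)((unsigned long)ptr & ~(HEAP_MAX_SIZE - 1));
    arena     *ar = h->ar_ptr;     // arena owning the chunk at ptr

  Chunks in the main arena are obtained via MORECORE() instead and are not
  covered by this trick.
*/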
1226 Static functions (forward declarations)
1229 #if __STD_C
1231 static void chunk_free(arena *ar_ptr, mchunkptr p) internal_function;
1232 static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size)
1233 internal_function;
1234 static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
1235 INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb)
1236 internal_function;
1237 static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
1238 size_t alignment) internal_function;
1239 static int main_trim(size_t pad) internal_function;
1240 #if USE_ARENAS
1241 static int heap_trim(heap_info *heap, size_t pad) internal_function;
1242 #endif
1243 #if defined _LIBC || defined MALLOC_HOOKS
1244 static Void_t* malloc_check(size_t sz, const Void_t *caller);
1245 static void free_check(Void_t* mem, const Void_t *caller);
1246 static Void_t* realloc_check(Void_t* oldmem, size_t bytes,
1247 const Void_t *caller);
1248 static Void_t* memalign_check(size_t alignment, size_t bytes,
1249 const Void_t *caller);
1250 #ifndef NO_THREADS
1251 static Void_t* malloc_starter(size_t sz, const Void_t *caller);
1252 static void free_starter(Void_t* mem, const Void_t *caller);
1253 static Void_t* malloc_atfork(size_t sz, const Void_t *caller);
1254 static void free_atfork(Void_t* mem, const Void_t *caller);
1255 #endif
1256 #endif
1258 #else
1260 static void chunk_free();
1261 static mchunkptr chunk_alloc();
1262 static mchunkptr chunk_realloc();
1263 static mchunkptr chunk_align();
1264 static int main_trim();
1265 #if USE_ARENAS
1266 static int heap_trim();
1267 #endif
1268 #if defined _LIBC || defined MALLOC_HOOKS
1269 static Void_t* malloc_check();
1270 static void free_check();
1271 static Void_t* realloc_check();
1272 static Void_t* memalign_check();
1273 #ifndef NO_THREADS
1274 static Void_t* malloc_starter();
1275 static void free_starter();
1276 static Void_t* malloc_atfork();
1277 static void free_atfork();
1278 #endif
1279 #endif
1281 #endif
1285 /* sizes, alignments */
1287 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1288 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1289 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1290 #define MINSIZE (sizeof(struct malloc_chunk))
1292 /* conversion from malloc headers to user pointers, and back */
1294 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1295 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1297 /* pad request bytes into a usable size, return non-zero on overflow */
1299 #define request2size(req, nb) \
1300 ((nb = (req) + (SIZE_SZ + MALLOC_ALIGN_MASK)),\
1301 ((long)nb <= 0 || nb < (INTERNAL_SIZE_T) (req) \
1302 ? (__set_errno (ENOMEM), 1) \
1303 : ((nb < (MINSIZE + MALLOC_ALIGN_MASK) \
1304 ? (nb = MINSIZE) : (nb &= ~MALLOC_ALIGN_MASK)), 0)))
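/*
  Illustrative note (not part of the original text), assuming 4-byte
  INTERNAL_SIZE_T, i.e. SIZE_SZ == 4, MALLOC_ALIGNMENT == 8, MINSIZE == 16:
  request2size adds the SIZE_SZ overhead word plus MALLOC_ALIGN_MASK, rounds
  down to a multiple of 8, and never goes below MINSIZE:

    req =  0   ->  nb = 16     (even malloc(0) gets a minimum-sized chunk)
    req =  5   ->  nb = 16
    req = 13   ->  nb = 24
    req = 25   ->  nb = 32
    huge req   ->  macro yields 1 and sets ENOMEM instead of wrapping silently

  and this is how the allocation routines further down use it:

    INTERNAL_SIZE_T nb;
    if (request2size(bytes, nb))   // non-zero on overflow, errno already set
      return 0;
    // nb is the internal chunk size that will actually be carved out
*/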
1306 /* Check if m has acceptable alignment */
1308 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1314 Physical chunk operations
1318 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1320 #define PREV_INUSE 0x1UL
1322 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1324 #define IS_MMAPPED 0x2UL
1326 /* Bits to mask off when extracting size */
1328 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1331 /* Ptr to next physical malloc_chunk. */
1333 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1335 /* Ptr to previous physical malloc_chunk */
1337 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1340 /* Treat space at ptr + offset as a chunk */
1342 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1348 Dealing with use bits
1351 /* extract p's inuse bit */
1353 #define inuse(p) \
1354 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1356 /* extract inuse bit of previous chunk */
1358 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1360 /* check for mmap()'ed chunk */
1362 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1364 /* set/clear chunk as in use without otherwise disturbing */
1366 #define set_inuse(p) \
1367 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1369 #define clear_inuse(p) \
1370 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1372 /* check/set/clear inuse bits in known places */
1374 #define inuse_bit_at_offset(p, s)\
1375 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1377 #define set_inuse_bit_at_offset(p, s)\
1378 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1380 #define clear_inuse_bit_at_offset(p, s)\
1381 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1387 Dealing with size fields
1390 /* Get size, ignoring use bits */
1392 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1394 /* Set size at head, without disturbing its use bit */
1396 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1398 /* Set size/use ignoring previous bits in header */
1400 #define set_head(p, s) ((p)->size = (s))
1402 /* Set size at footer (only when chunk is not in use) */
1404 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
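/*
  An illustrative sketch (not taken verbatim from the allocator) of how the
  head/foot macros cooperate when a chunk p of size nb changes state:

    // p becomes free: record the size at both ends and tell the neighbour
    set_head_size(p, nb);              // keep p's own PREV_INUSE bit intact
    set_foot(p, nb);                   // next chunk's prev_size now holds nb
    clear_inuse_bit_at_offset(p, nb);  // next chunk's PREV_INUSE bit -> 0

    // p becomes allocated again: only the neighbour's bit needs to change;
    // the foot turns back into ordinary user data
    set_inuse_bit_at_offset(p, nb);

  The real code additionally consolidates p with free neighbours before
  storing the boundary tags.
*/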
1410 /* access macros */
1412 #define bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ))
1413 #define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i))
1414 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1415 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1418 The first 2 bins are never indexed. The corresponding av cells are instead
1419 used for bookkeeping. This is not to save space, but to simplify
1420 indexing, maintain locality, and avoid some initialization tests.
1423 #define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1424 #define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1425 #define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1428 Because top initially points to its own bin with initial
1429 zero size, thus forcing extension on the first malloc request,
1430 we avoid having any special code in malloc to check whether
1431 it even exists yet. But we still need to do so in malloc_extend_top.
1434 #define initial_top(a) ((mchunkptr)bin_at(a, 0))
1438 /* field-extraction macros */
1440 #define first(b) ((b)->fd)
1441 #define last(b) ((b)->bk)
1444 Indexing into bins
1447 #define bin_index(sz) \
1448 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3):\
1449 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6):\
1450 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9):\
1451 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12):\
1452 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15):\
1453 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
1454 126)
1456 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1457 identically sized chunks. This is exploited in malloc.
1460 #define MAX_SMALLBIN 63
1461 #define MAX_SMALLBIN_SIZE 512
1462 #define SMALLBIN_WIDTH 8
1464 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1467 Requests are `small' if both the corresponding and the next bin are small
1470 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
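/*
  Some concrete values of the indexing macros above (illustrative note, not
  part of the original text):

    bin_index(16)      ->   2      small bin, 16 >> 3
    bin_index(64)      ->   8      small bin
    bin_index(504)     ->  63      last of the 8-byte-spaced small bins
    bin_index(512)     ->  64      56 + (512 >> 6)
    bin_index(2048)    ->  88      56 + (2048 >> 6)
    bin_index(10240)   -> 111      91 + (10240 >> 9)
    bin_index(1 << 20) -> 126      the catch-all last bin

  For chunks below MAX_SMALLBIN_SIZE, smallbin_index(sz) == bin_index(sz),
  and is_small_request(nb) is true for nb < 504, i.e. whenever both nb's bin
  and the next one hold chunks of a single fixed size.
*/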
1475 To help compensate for the large number of bins, a one-level index
1476 structure is used for bin-by-bin searching. `binblocks' is a
1477 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1478 have any (possibly) non-empty bins, so they can be skipped over
1479 all at once during traversals. The bits are NOT always
1480 cleared as soon as all bins in a block are empty, but instead only
1481 when all are noticed to be empty during traversal in malloc.
1484 #define BINBLOCKWIDTH 4 /* bins per block */
1486 /* bin<->block macros */
1488 #define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1489 #define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1490 #define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
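/*
  Illustrative example of the block bookkeeping (not part of the original
  text): with BINBLOCKWIDTH == 4, bin index 37 belongs to block 37/4 == 9, so

    idx2binblock(37)      == (1U << 9) == 0x200
    mark_binblock(a, 37)  ->  binblocks(a) |= 0x200
    clear_binblock(a, 37) ->  binblocks(a) &= ~0x200

  i.e. one bit of binblocks summarizes bins 36..39.
*/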
1495 /* Static bookkeeping data */
1497 /* Helper macro to initialize bins */
1498 #define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i)
1500 static arena main_arena = {
1502 0, 0,
1503 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1504 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1505 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1506 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1507 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1508 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1509 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1510 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1511 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1512 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1513 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1514 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1515 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1516 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1517 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1518 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1520 &main_arena, /* next */
1521 0, /* size */
1522 #if THREAD_STATS
1523 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
1524 #endif
1525 MUTEX_INITIALIZER /* mutex */
1528 #undef IAV
1530 /* Thread specific data */
1532 static tsd_key_t arena_key;
1533 static mutex_t list_lock = MUTEX_INITIALIZER;
1535 #if THREAD_STATS
1536 static int stat_n_heaps = 0;
1537 #define THREAD_STAT(x) x
1538 #else
1539 #define THREAD_STAT(x) do ; while(0)
1540 #endif
1542 /* variables holding tunable values */
1544 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1545 static unsigned long top_pad = DEFAULT_TOP_PAD;
1546 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1547 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1548 static int check_action = DEFAULT_CHECK_ACTION;
1550 /* The first value returned from sbrk */
1551 static char* sbrk_base = (char*)(-1);
1553 /* The maximum memory obtained from system via sbrk */
1554 static unsigned long max_sbrked_mem = 0;
1556 /* The maximum via either sbrk or mmap (too difficult to track with threads) */
1557 #ifdef NO_THREADS
1558 static unsigned long max_total_mem = 0;
1559 #endif
1561 /* The total memory obtained from system via sbrk */
1562 #define sbrked_mem (main_arena.size)
1564 /* Tracking mmaps */
1566 static unsigned int n_mmaps = 0;
1567 static unsigned int max_n_mmaps = 0;
1568 static unsigned long mmapped_mem = 0;
1569 static unsigned long max_mmapped_mem = 0;
1571 /* Mapped memory in non-main arenas (reliable only for NO_THREADS). */
1572 static unsigned long arena_mem = 0;
1576 #ifndef _LIBC
1577 #define weak_variable
1578 #else
1579 /* In GNU libc we want the hook variables to be weak definitions to
1580 avoid a problem with Emacs. */
1581 #define weak_variable weak_function
1582 #endif
1584 /* Already initialized? */
1585 int __malloc_initialized = -1;
1588 #ifndef NO_THREADS
1590 /* The following two functions are registered via thread_atfork() to
1591 make sure that the mutexes remain in a consistent state in the
1592 fork()ed version of a thread. Also adapt the malloc and free hooks
1593 temporarily, because the `atfork' handler mechanism may use
1594 malloc/free internally (e.g. in LinuxThreads). */
1596 #if defined _LIBC || defined MALLOC_HOOKS
1597 static __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size,
1598 const __malloc_ptr_t));
1599 static void (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1600 const __malloc_ptr_t));
1601 static Void_t* save_arena;
1602 #endif
1604 static void
1605 ptmalloc_lock_all __MALLOC_P((void))
1607 arena *ar_ptr;
1609 (void)mutex_lock(&list_lock);
1610 for(ar_ptr = &main_arena;;) {
1611 (void)mutex_lock(&ar_ptr->mutex);
1612 ar_ptr = ar_ptr->next;
1613 if(ar_ptr == &main_arena) break;
1615 #if defined _LIBC || defined MALLOC_HOOKS
1616 save_malloc_hook = __malloc_hook;
1617 save_free_hook = __free_hook;
1618 __malloc_hook = malloc_atfork;
1619 __free_hook = free_atfork;
1620 /* Only the current thread may perform malloc/free calls now. */
1621 tsd_getspecific(arena_key, save_arena);
1622 tsd_setspecific(arena_key, (Void_t*)0);
1623 #endif
1626 static void
1627 ptmalloc_unlock_all __MALLOC_P((void))
1629 arena *ar_ptr;
1631 #if defined _LIBC || defined MALLOC_HOOKS
1632 tsd_setspecific(arena_key, save_arena);
1633 __malloc_hook = save_malloc_hook;
1634 __free_hook = save_free_hook;
1635 #endif
1636 for(ar_ptr = &main_arena;;) {
1637 (void)mutex_unlock(&ar_ptr->mutex);
1638 ar_ptr = ar_ptr->next;
1639 if(ar_ptr == &main_arena) break;
1641 (void)mutex_unlock(&list_lock);
1644 static void
1645 ptmalloc_init_all __MALLOC_P((void))
1647 arena *ar_ptr;
1649 #if defined _LIBC || defined MALLOC_HOOKS
1650 tsd_setspecific(arena_key, save_arena);
1651 __malloc_hook = save_malloc_hook;
1652 __free_hook = save_free_hook;
1653 #endif
1654 for(ar_ptr = &main_arena;;) {
1655 (void)mutex_init(&ar_ptr->mutex);
1656 ar_ptr = ar_ptr->next;
1657 if(ar_ptr == &main_arena) break;
1659 (void)mutex_init(&list_lock);
1662 #endif /* !defined NO_THREADS */
1664 /* Initialization routine. */
1665 #if defined(_LIBC)
1666 #if 0
1667 static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
1668 #endif
1670 static void
1671 ptmalloc_init __MALLOC_P((void))
1672 #else
1673 void
1674 ptmalloc_init __MALLOC_P((void))
1675 #endif
1677 #if defined _LIBC || defined MALLOC_HOOKS
1678 # if __STD_C
1679 const char* s;
1680 # else
1681 char* s;
1682 # endif
1683 #endif
1685 if(__malloc_initialized >= 0) return;
1686 __malloc_initialized = 0;
1687 #ifdef _LIBC
1688 __libc_pagesize = __getpagesize();
1689 #endif
1690 #ifndef NO_THREADS
1691 #if defined _LIBC || defined MALLOC_HOOKS
1692 /* With some threads implementations, creating thread-specific data
1693 or initializing a mutex may call malloc() itself. Provide a
1694 simple starter version (realloc() won't work). */
1695 save_malloc_hook = __malloc_hook;
1696 save_free_hook = __free_hook;
1697 __malloc_hook = malloc_starter;
1698 __free_hook = free_starter;
1699 #endif
1700 #ifdef _LIBC
1701 /* Initialize the pthreads interface. */
1702 if (__pthread_initialize != NULL)
1703 __pthread_initialize();
1704 #endif
1705 #endif /* !defined NO_THREADS */
1706 mutex_init(&main_arena.mutex);
1707 mutex_init(&list_lock);
1708 tsd_key_create(&arena_key, NULL);
1709 tsd_setspecific(arena_key, (Void_t *)&main_arena);
1710 thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_init_all);
1711 #if defined _LIBC || defined MALLOC_HOOKS
1712 if((s = __secure_getenv("MALLOC_TRIM_THRESHOLD_")))
1713 mALLOPt(M_TRIM_THRESHOLD, atoi(s));
1714 if((s = __secure_getenv("MALLOC_TOP_PAD_")))
1715 mALLOPt(M_TOP_PAD, atoi(s));
1716 if((s = __secure_getenv("MALLOC_MMAP_THRESHOLD_")))
1717 mALLOPt(M_MMAP_THRESHOLD, atoi(s));
1718 if((s = __secure_getenv("MALLOC_MMAP_MAX_")))
1719 mALLOPt(M_MMAP_MAX, atoi(s));
1720 s = getenv("MALLOC_CHECK_");
1721 #ifndef NO_THREADS
1722 __malloc_hook = save_malloc_hook;
1723 __free_hook = save_free_hook;
1724 #endif
1725 if(s && (! __libc_enable_secure || access ("/etc/suid-debug", F_OK) == 0)) {
1726 if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
1727 __malloc_check_init();
1729 if(__malloc_initialize_hook != NULL)
1730 (*__malloc_initialize_hook)();
1731 #endif
1732 __malloc_initialized = 1;
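/* Most of the MALLOC_*_ environment variables read above correspond to
   mallopt() parameters that a program can also set directly at run
   time.  A minimal sketch (the values are arbitrary examples): */
#if 0
#include <malloc.h>

static void
tune_allocator (void)
{
  mallopt (M_TRIM_THRESHOLD, 128 * 1024);  /* MALLOC_TRIM_THRESHOLD_ */
  mallopt (M_TOP_PAD,         64 * 1024);  /* MALLOC_TOP_PAD_        */
  mallopt (M_MMAP_THRESHOLD, 256 * 1024);  /* MALLOC_MMAP_THRESHOLD_ */
  mallopt (M_MMAP_MAX,               64);  /* MALLOC_MMAP_MAX_       */
}
#endif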
1735 /* There are platforms (e.g. Hurd) with a link-time hook mechanism. */
1736 #ifdef thread_atfork_static
1737 thread_atfork_static(ptmalloc_lock_all, ptmalloc_unlock_all, \
1738 ptmalloc_init_all)
1739 #endif
1741 #if defined _LIBC || defined MALLOC_HOOKS
1743 /* Hooks for debugging versions. The initial hooks just call the
1744 initialization routine, then do the normal work. */
1746 static Void_t*
1747 #if __STD_C
1748 malloc_hook_ini(size_t sz, const __malloc_ptr_t caller)
1749 #else
1750 malloc_hook_ini(sz, caller)
1751 size_t sz; const __malloc_ptr_t caller;
1752 #endif
1754 __malloc_hook = NULL;
1755 ptmalloc_init();
1756 return mALLOc(sz);
1759 static Void_t*
1760 #if __STD_C
1761 realloc_hook_ini(Void_t* ptr, size_t sz, const __malloc_ptr_t caller)
1762 #else
1763 realloc_hook_ini(ptr, sz, caller)
1764 Void_t* ptr; size_t sz; const __malloc_ptr_t caller;
1765 #endif
1767 __malloc_hook = NULL;
1768 __realloc_hook = NULL;
1769 ptmalloc_init();
1770 return rEALLOc(ptr, sz);
1773 static Void_t*
1774 #if __STD_C
1775 memalign_hook_ini(size_t sz, size_t alignment, const __malloc_ptr_t caller)
1776 #else
1777 memalign_hook_ini(sz, alignment, caller)
1778 size_t sz; size_t alignment; const __malloc_ptr_t caller;
1779 #endif
1781 __memalign_hook = NULL;
1782 ptmalloc_init();
1783 return mEMALIGn(sz, alignment);
1786 void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
1787 void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1788 const __malloc_ptr_t)) = NULL;
1789 __malloc_ptr_t weak_variable (*__malloc_hook)
1790 __MALLOC_P ((size_t __size, const __malloc_ptr_t)) = malloc_hook_ini;
1791 __malloc_ptr_t weak_variable (*__realloc_hook)
1792 __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size, const __malloc_ptr_t))
1793 = realloc_hook_ini;
1794 __malloc_ptr_t weak_variable (*__memalign_hook)
1795 __MALLOC_P ((size_t __size, size_t __alignment, const __malloc_ptr_t))
1796 = memalign_hook_ini;
1797 void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
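/* How an application typically uses these hook variables: install the
   replacement from __malloc_initialize_hook and temporarily restore
   the saved hook inside the wrapper to avoid recursing into itself.
   A minimal, illustrative sketch (error handling omitted): */
#if 0
#include <stdio.h>
#include <malloc.h>

static __malloc_ptr_t (*old_malloc_hook) __MALLOC_P ((size_t,
                                                      const __malloc_ptr_t));

static __malloc_ptr_t
my_malloc_hook (size_t size, const __malloc_ptr_t caller)
{
  __malloc_ptr_t result;
  __malloc_hook = old_malloc_hook;          /* avoid recursion */
  result = malloc (size);
  old_malloc_hook = __malloc_hook;
  fprintf (stderr, "malloc(%lu) from %p returns %p\n",
           (unsigned long) size, caller, result);
  __malloc_hook = my_malloc_hook;
  return result;
}

static void
my_init_hook (void)
{
  old_malloc_hook = __malloc_hook;
  __malloc_hook = my_malloc_hook;
}

void (*__malloc_initialize_hook) __MALLOC_P ((void)) = my_init_hook;
#endif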
1799 /* Whether we are using malloc checking. */
1800 static int using_malloc_checking;
1802 /* A flag that is set by malloc_set_state, to signal that malloc checking
1803 must not be enabled on the request from the user (via the MALLOC_CHECK_
1804 environment variable). It is reset by __malloc_check_init to tell
1805 malloc_set_state that the user has requested malloc checking.
1807 The purpose of this flag is to make sure that malloc checking is not
1808 enabled when the heap to be restored was constructed without malloc
1809 checking, and thus does not contain the required magic bytes.
1810 Otherwise the heap would be corrupted by calls to free and realloc. If
1811 it turns out that the heap was created with malloc checking and the
1812 user has requested it, malloc_set_state just calls __malloc_check_init
1813 again to enable it. On the other hand, reusing such a heap without
1814 further malloc checking is safe. */
1815 static int disallow_malloc_check;
1817 /* Activate a standard set of debugging hooks. */
1818 void
1819 __malloc_check_init()
1821 if (disallow_malloc_check) {
1822 disallow_malloc_check = 0;
1823 return;
1825 using_malloc_checking = 1;
1826 __malloc_hook = malloc_check;
1827 __free_hook = free_check;
1828 __realloc_hook = realloc_check;
1829 __memalign_hook = memalign_check;
1830 if(check_action & 1)
1831 fprintf(stderr, "malloc: using debugging hooks\n");
1834 #endif
1840 /* Routines dealing with mmap(). */
1842 #if HAVE_MMAP
1844 #ifndef MAP_ANONYMOUS
1846 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1848 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1849 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1850 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1851 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1853 #else
1855 #define MMAP(addr, size, prot, flags) \
1856 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1858 #endif
1860 #if defined __GNUC__ && __GNUC__ >= 2
1861 /* This function is only called from one place, inline it. */
1862 __inline__
1863 #endif
1864 static mchunkptr
1865 internal_function
1866 #if __STD_C
1867 mmap_chunk(size_t size)
1868 #else
1869 mmap_chunk(size) size_t size;
1870 #endif
1872 size_t page_mask = malloc_getpagesize - 1;
1873 mchunkptr p;
1875 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1876 * there is no following chunk whose prev_size field could be used.
1878 size = (size + SIZE_SZ + page_mask) & ~page_mask;
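  /* Example (assuming 4096-byte pages and SIZE_SZ == 4): a request of
     size == 10000 is rounded to (10000 + 4 + 4095) & ~4095 == 12288,
     i.e. the chunk occupies three whole pages. */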
1880 p = (mchunkptr)MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE);
1881 if(p == (mchunkptr) MAP_FAILED) return 0;
1883 n_mmaps++;
1884 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1886 /* We demand that eight bytes into a page must be 8-byte aligned. */
1887 assert(aligned_OK(chunk2mem(p)));
1889 /* The offset to the start of the mmapped region is stored
1890 * in the prev_size field of the chunk; normally it is zero,
1891 * but that can be changed in memalign().
1893 p->prev_size = 0;
1894 set_head(p, size|IS_MMAPPED);
1896 mmapped_mem += size;
1897 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1898 max_mmapped_mem = mmapped_mem;
1899 #ifdef NO_THREADS
1900 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
1901 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
1902 #endif
1903 return p;
1906 static void
1907 internal_function
1908 #if __STD_C
1909 munmap_chunk(mchunkptr p)
1910 #else
1911 munmap_chunk(p) mchunkptr p;
1912 #endif
1914 INTERNAL_SIZE_T size = chunksize(p);
1915 int ret;
1917 assert (chunk_is_mmapped(p));
1918 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1919 assert((n_mmaps > 0));
1920 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1922 n_mmaps--;
1923 mmapped_mem -= (size + p->prev_size);
1925 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1927 /* munmap returns non-zero on failure */
1928 assert(ret == 0);
1931 #if HAVE_MREMAP
1933 static mchunkptr
1934 internal_function
1935 #if __STD_C
1936 mremap_chunk(mchunkptr p, size_t new_size)
1937 #else
1938 mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1939 #endif
1941 size_t page_mask = malloc_getpagesize - 1;
1942 INTERNAL_SIZE_T offset = p->prev_size;
1943 INTERNAL_SIZE_T size = chunksize(p);
1944 char *cp;
1946 assert (chunk_is_mmapped(p));
1947 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1948 assert((n_mmaps > 0));
1949 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1951 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1952 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1954 cp = (char *)mremap((char *)p - offset, size + offset, new_size,
1955 MREMAP_MAYMOVE);
1957 if (cp == MAP_FAILED) return 0;
1959 p = (mchunkptr)(cp + offset);
1961 assert(aligned_OK(chunk2mem(p)));
1963 assert((p->prev_size == offset));
1964 set_head(p, (new_size - offset)|IS_MMAPPED);
1966 mmapped_mem -= size + offset;
1967 mmapped_mem += new_size;
1968 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1969 max_mmapped_mem = mmapped_mem;
1970 #ifdef NO_THREADS
1971 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
1972 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
1973 #endif
1974 return p;
1977 #endif /* HAVE_MREMAP */
1979 #endif /* HAVE_MMAP */
1983 /* Managing heaps and arenas (for concurrent threads) */
1985 #if USE_ARENAS
1987 /* Create a new heap. size is automatically rounded up to a multiple
1988 of the page size. */
1990 static heap_info *
1991 internal_function
1992 #if __STD_C
1993 new_heap(size_t size)
1994 #else
1995 new_heap(size) size_t size;
1996 #endif
1998 size_t page_mask = malloc_getpagesize - 1;
1999 char *p1, *p2;
2000 unsigned long ul;
2001 heap_info *h;
2003 if(size+top_pad < HEAP_MIN_SIZE)
2004 size = HEAP_MIN_SIZE;
2005 else if(size+top_pad <= HEAP_MAX_SIZE)
2006 size += top_pad;
2007 else if(size > HEAP_MAX_SIZE)
2008 return 0;
2009 else
2010 size = HEAP_MAX_SIZE;
2011 size = (size + page_mask) & ~page_mask;
2013 /* A memory region aligned to a multiple of HEAP_MAX_SIZE is needed.
2014 No swap space needs to be reserved for the following large
2015 mapping (on Linux, this is the case for all non-writable mappings
2016 anyway). */
2017 p1 = (char *)MMAP(0, HEAP_MAX_SIZE<<1, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE);
2018 if(p1 == MAP_FAILED)
2019 return 0;
2020 p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
2021 ul = p2 - p1;
2022 munmap(p1, ul);
2023 munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
2024 if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
2025 munmap(p2, HEAP_MAX_SIZE);
2026 return 0;
2028 h = (heap_info *)p2;
2029 h->size = size;
2030 THREAD_STAT(stat_n_heaps++);
2031 return h;
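/* The alignment trick used by new_heap() in isolation: reserve twice
   the required alignment with PROT_NONE, trim the misaligned head and
   the surplus tail, then enable access to the part actually needed.
   A standalone sketch assuming MAP_ANONYMOUS is available (the MMAP
   macro above falls back to /dev/zero otherwise); the function name is
   made up for the example. */
#if 0
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

static void *
aligned_reserve (size_t align, size_t commit) /* align: power of two, commit <= align */
{
  char *p1, *p2;
  size_t head;

  p1 = (char *) mmap (0, align << 1, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p1 == (char *) MAP_FAILED)
    return 0;
  p2 = (char *) (((uintptr_t) p1 + align) & ~(uintptr_t) (align - 1));
  head = p2 - p1;
  if (head > 0)
    munmap (p1, head);                  /* unused space below the boundary */
  munmap (p2 + align, align - head);    /* unused space above the reservation */
  if (mprotect (p2, commit, PROT_READ | PROT_WRITE) != 0)
    {
      munmap (p2, align);
      return 0;
    }
  return p2;
}
#endif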
2034 /* Grow or shrink a heap. size is automatically rounded up to a
2035 multiple of the page size if it is positive. */
2037 static int
2038 #if __STD_C
2039 grow_heap(heap_info *h, long diff)
2040 #else
2041 grow_heap(h, diff) heap_info *h; long diff;
2042 #endif
2044 size_t page_mask = malloc_getpagesize - 1;
2045 long new_size;
2047 if(diff >= 0) {
2048 diff = (diff + page_mask) & ~page_mask;
2049 new_size = (long)h->size + diff;
2050 if(new_size > HEAP_MAX_SIZE)
2051 return -1;
2052 if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
2053 return -2;
2054 } else {
2055 new_size = (long)h->size + diff;
2056 if(new_size < (long)sizeof(*h))
2057 return -1;
2058 /* Try to re-map the extra heap space freshly to save memory, and
2059 make it inaccessible. */
2060 if((char *)MMAP((char *)h + new_size, -diff, PROT_NONE,
2061 MAP_PRIVATE|MAP_FIXED) == (char *) MAP_FAILED)
2062 return -2;
2064 h->size = new_size;
2065 return 0;
2068 /* Delete a heap. */
2070 #define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
2072 /* arena_get() acquires an arena and locks the corresponding mutex.
2073 First, try the one last locked successfully by this thread. (This
2074 is the common case and handled with a macro for speed.) Then, loop
2075 once over the circularly linked list of arenas. If no arena is
2076 readily available, create a new one. In this latter case, `size'
2077 is just a hint as to how much memory will be required immediately
2078 in the new arena. */
2080 #define arena_get(ptr, size) do { \
2081 Void_t *vptr = NULL; \
2082 ptr = (arena *)tsd_getspecific(arena_key, vptr); \
2083 if(ptr && !mutex_trylock(&ptr->mutex)) { \
2084 THREAD_STAT(++(ptr->stat_lock_direct)); \
2085 } else \
2086 ptr = arena_get2(ptr, (size)); \
2087 } while(0)
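/* The calling convention in a nutshell: arena_get() returns with the
   arena mutex held (or with ptr == 0 on failure), so every caller must
   drop the lock itself.  This is the pattern used by the public entry
   points below, e.g. mALLOc(); the function name here is made up for
   the example. */
#if 0
static Void_t *
example_alloc (INTERNAL_SIZE_T nb)
{
  arena *ar_ptr;
  mchunkptr victim;

  arena_get (ar_ptr, nb);          /* locks ar_ptr->mutex on success */
  if (!ar_ptr)
    return 0;
  victim = chunk_alloc (ar_ptr, nb);
  (void) mutex_unlock (&ar_ptr->mutex);
  return victim ? chunk2mem (victim) : 0;
}
#endif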
2089 static arena *
2090 internal_function
2091 #if __STD_C
2092 arena_get2(arena *a_tsd, size_t size)
2093 #else
2094 arena_get2(a_tsd, size) arena *a_tsd; size_t size;
2095 #endif
2097 arena *a;
2098 heap_info *h;
2099 char *ptr;
2100 int i;
2101 unsigned long misalign;
2103 if(!a_tsd)
2104 a = a_tsd = &main_arena;
2105 else {
2106 a = a_tsd->next;
2107 if(!a) {
2108 /* This can only happen while initializing the new arena. */
2109 (void)mutex_lock(&main_arena.mutex);
2110 THREAD_STAT(++(main_arena.stat_lock_wait));
2111 return &main_arena;
2115 /* Check the global, circularly linked list for available arenas. */
2116 repeat:
2117 do {
2118 if(!mutex_trylock(&a->mutex)) {
2119 THREAD_STAT(++(a->stat_lock_loop));
2120 tsd_setspecific(arena_key, (Void_t *)a);
2121 return a;
2123 a = a->next;
2124 } while(a != a_tsd);
2126 /* If not even the list_lock can be obtained, try again. This can
2127 happen during `atfork', or for example on systems where thread
2128 creation makes it temporarily impossible to obtain _any_
2129 locks. */
2130 if(mutex_trylock(&list_lock)) {
2131 a = a_tsd;
2132 goto repeat;
2134 (void)mutex_unlock(&list_lock);
2136 /* Nothing immediately available, so generate a new arena. */
2137 h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
2138 if(!h) {
2139 /* Maybe size is too large to fit in a single heap. So, just try
2140 to create a minimally-sized arena and let chunk_alloc() attempt
2141 to deal with the large request via mmap_chunk(). */
2142 h = new_heap(sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT);
2143 if(!h)
2144 return 0;
2146 a = h->ar_ptr = (arena *)(h+1);
2147 for(i=0; i<NAV; i++)
2148 init_bin(a, i);
2149 a->next = NULL;
2150 a->size = h->size;
2151 arena_mem += h->size;
2152 #ifdef NO_THREADS
2153 if((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2154 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2155 #endif
2156 tsd_setspecific(arena_key, (Void_t *)a);
2157 mutex_init(&a->mutex);
2158 i = mutex_lock(&a->mutex); /* remember result */
2160 /* Set up the top chunk, with proper alignment. */
2161 ptr = (char *)(a + 1);
2162 misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
2163 if (misalign > 0)
2164 ptr += MALLOC_ALIGNMENT - misalign;
2165 top(a) = (mchunkptr)ptr;
2166 set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
2168 /* Add the new arena to the list. */
2169 (void)mutex_lock(&list_lock);
2170 a->next = main_arena.next;
2171 main_arena.next = a;
2172 (void)mutex_unlock(&list_lock);
2174 if(i) /* locking failed; keep arena for further attempts later */
2175 return 0;
2177 THREAD_STAT(++(a->stat_lock_loop));
2178 return a;
2181 /* find the heap and corresponding arena for a given ptr */
2183 #define heap_for_ptr(ptr) \
2184 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
2185 #define arena_for_ptr(ptr) \
2186 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
2187 &main_arena : heap_for_ptr(ptr)->ar_ptr)
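/* Example: with HEAP_MAX_SIZE == 1 MB (0x100000), a chunk pointer of
   0x40234567 is masked down to the heap base 0x40200000.  This only
   works because new_heap() above always places non-main heaps on a
   HEAP_MAX_SIZE boundary. */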
2189 #else /* !USE_ARENAS */
2191 /* There is only one arena, main_arena. */
2193 #define arena_get(ptr, sz) (ptr = &main_arena)
2194 #define arena_for_ptr(ptr) (&main_arena)
2196 #endif /* USE_ARENAS */
2201 Debugging support
2204 #if MALLOC_DEBUG
2208 These routines make a number of assertions about the states
2209 of data structures that should be true at all times. If any
2210 are not true, it's very likely that a user program has somehow
2211 trashed memory. (It's also possible that there is a coding error
2212 in malloc. In which case, please report it!)
2215 #if __STD_C
2216 static void do_check_chunk(arena *ar_ptr, mchunkptr p)
2217 #else
2218 static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2219 #endif
2221 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2223 /* No checkable chunk is mmapped */
2224 assert(!chunk_is_mmapped(p));
2226 #if USE_ARENAS
2227 if(ar_ptr != &main_arena) {
2228 heap_info *heap = heap_for_ptr(p);
2229 assert(heap->ar_ptr == ar_ptr);
2230 if(p != top(ar_ptr))
2231 assert((char *)p + sz <= (char *)heap + heap->size);
2232 else
2233 assert((char *)p + sz == (char *)heap + heap->size);
2234 return;
2236 #endif
2238 /* Check for legal address ... */
2239 assert((char*)p >= sbrk_base);
2240 if (p != top(ar_ptr))
2241 assert((char*)p + sz <= (char*)top(ar_ptr));
2242 else
2243 assert((char*)p + sz <= sbrk_base + sbrked_mem);
2248 #if __STD_C
2249 static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
2250 #else
2251 static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2252 #endif
2254 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2255 mchunkptr next = chunk_at_offset(p, sz);
2257 do_check_chunk(ar_ptr, p);
2259 /* Check whether it claims to be free ... */
2260 assert(!inuse(p));
2262 /* Must have OK size and fields */
2263 assert((long)sz >= (long)MINSIZE);
2264 assert((sz & MALLOC_ALIGN_MASK) == 0);
2265 assert(aligned_OK(chunk2mem(p)));
2266 /* ... matching footer field */
2267 assert(next->prev_size == sz);
2268 /* ... and is fully consolidated */
2269 assert(prev_inuse(p));
2270 assert (next == top(ar_ptr) || inuse(next));
2272 /* ... and has minimally sane links */
2273 assert(p->fd->bk == p);
2274 assert(p->bk->fd == p);
2277 #if __STD_C
2278 static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
2279 #else
2280 static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2281 #endif
2283 mchunkptr next = next_chunk(p);
2284 do_check_chunk(ar_ptr, p);
2286 /* Check whether it claims to be in use ... */
2287 assert(inuse(p));
2289 /* ... whether its size is OK (it might be a fencepost) ... */
2290 assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
2292 /* ... and is surrounded by OK chunks.
2293 Since more things can be checked with free chunks than inuse ones,
2294 if an inuse chunk borders them and debug is on, it's worth doing them.
2296 if (!prev_inuse(p))
2298 mchunkptr prv = prev_chunk(p);
2299 assert(next_chunk(prv) == p);
2300 do_check_free_chunk(ar_ptr, prv);
2302 if (next == top(ar_ptr))
2304 assert(prev_inuse(next));
2305 assert(chunksize(next) >= MINSIZE);
2307 else if (!inuse(next))
2308 do_check_free_chunk(ar_ptr, next);
2312 #if __STD_C
2313 static void do_check_malloced_chunk(arena *ar_ptr,
2314 mchunkptr p, INTERNAL_SIZE_T s)
2315 #else
2316 static void do_check_malloced_chunk(ar_ptr, p, s)
2317 arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
2318 #endif
2320 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2321 long room = sz - s;
2323 do_check_inuse_chunk(ar_ptr, p);
2325 /* Legal size ... */
2326 assert((long)sz >= (long)MINSIZE);
2327 assert((sz & MALLOC_ALIGN_MASK) == 0);
2328 assert(room >= 0);
2329 assert(room < (long)MINSIZE);
2331 /* ... and alignment */
2332 assert(aligned_OK(chunk2mem(p)));
2335 /* ... and was allocated at front of an available chunk */
2336 assert(prev_inuse(p));
2341 #define check_free_chunk(A,P) do_check_free_chunk(A,P)
2342 #define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
2343 #define check_chunk(A,P) do_check_chunk(A,P)
2344 #define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
2345 #else
2346 #define check_free_chunk(A,P)
2347 #define check_inuse_chunk(A,P)
2348 #define check_chunk(A,P)
2349 #define check_malloced_chunk(A,P,N)
2350 #endif
2355 Macro-based internal utilities
2360 Linking chunks in bin lists.
2361 Call these only with variables, not arbitrary expressions, as arguments.
2365 Place chunk p of size s in its bin, in size order,
2366 putting it ahead of others of same size.
2370 #define frontlink(A, P, S, IDX, BK, FD) \
2372 if (S < MAX_SMALLBIN_SIZE) \
2374 IDX = smallbin_index(S); \
2375 mark_binblock(A, IDX); \
2376 BK = bin_at(A, IDX); \
2377 FD = BK->fd; \
2378 P->bk = BK; \
2379 P->fd = FD; \
2380 FD->bk = BK->fd = P; \
2382 else \
2384 IDX = bin_index(S); \
2385 BK = bin_at(A, IDX); \
2386 FD = BK->fd; \
2387 if (FD == BK) mark_binblock(A, IDX); \
2388 else \
2390 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2391 BK = FD->bk; \
2393 P->bk = BK; \
2394 P->fd = FD; \
2395 FD->bk = BK->fd = P; \
2400 /* take a chunk off a list */
2402 #define unlink(P, BK, FD) \
2404 BK = P->bk; \
2405 FD = P->fd; \
2406 FD->bk = BK; \
2407 BK->fd = FD; \
2410 /* Place p as the last remainder */
2412 #define link_last_remainder(A, P) \
2414 last_remainder(A)->fd = last_remainder(A)->bk = P; \
2415 P->fd = P->bk = last_remainder(A); \
2418 /* Clear the last_remainder bin */
2420 #define clear_last_remainder(A) \
2421 (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
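/* Stripped of the bin/block bookkeeping, frontlink() and unlink() are
   the usual operations on a circular doubly-linked list whose head is
   the bin itself.  An illustrative sketch (the type and function names
   are made up for the example): */
#if 0
struct link { struct link *fd, *bk; };

/* Insert p right after the list head, i.e. at the front. */
static void
list_insert_front (struct link *bin, struct link *p)
{
  struct link *fd = bin->fd;
  p->bk = bin;
  p->fd = fd;
  fd->bk = p;
  bin->fd = p;
}

/* Remove p, wherever it is in the list. */
static void
list_unlink (struct link *p)
{
  struct link *bk = p->bk;
  struct link *fd = p->fd;
  fd->bk = bk;
  bk->fd = fd;
}
#endif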
2428 Extend the top-most chunk by obtaining memory from system.
2429 Main interface to sbrk (but see also malloc_trim).
2432 #if defined __GNUC__ && __GNUC__ >= 2
2433 /* This function is called only from one place, inline it. */
2434 __inline__
2435 #endif
2436 static void
2437 internal_function
2438 #if __STD_C
2439 malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
2440 #else
2441 malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2442 #endif
2444 unsigned long pagesz = malloc_getpagesize;
2445 mchunkptr old_top = top(ar_ptr); /* Record state of old top */
2446 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2447 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2449 #if USE_ARENAS
2450 if(ar_ptr == &main_arena) {
2451 #endif
2453 char* brk; /* return value from sbrk */
2454 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2455 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2456 char* new_brk; /* return of 2nd sbrk call */
2457 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2459 /* Pad request with top_pad plus minimal overhead */
2460 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2462 /* If not the first time through, round to preserve page boundary */
2463 /* Otherwise, we need to correct to a page size below anyway. */
2464 /* (We also correct below if there was an intervening foreign sbrk call.) */
2466 if (sbrk_base != (char*)(-1))
2467 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2469 brk = (char*)(MORECORE (sbrk_size));
2471 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2472 if (brk == (char*)(MORECORE_FAILURE) ||
2473 (brk < old_end && old_top != initial_top(&main_arena)))
2474 return;
2476 #if defined _LIBC || defined MALLOC_HOOKS
2477 /* Call the `morecore' hook if necessary. */
2478 if (__after_morecore_hook)
2479 (*__after_morecore_hook) ();
2480 #endif
2482 sbrked_mem += sbrk_size;
2484 if (brk == old_end) { /* can just add bytes to current top */
2485 top_size = sbrk_size + old_top_size;
2486 set_head(old_top, top_size | PREV_INUSE);
2487 old_top = 0; /* don't free below */
2488 } else {
2489 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2490 sbrk_base = brk;
2491 else
2492 /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2493 sbrked_mem += brk - (char*)old_end;
2495 /* Guarantee alignment of first new chunk made from this space */
2496 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2497 if (front_misalign > 0) {
2498 correction = (MALLOC_ALIGNMENT) - front_misalign;
2499 brk += correction;
2500 } else
2501 correction = 0;
2503 /* Guarantee the next brk will be at a page boundary */
2504 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
2506 /* Allocate correction */
2507 new_brk = (char*)(MORECORE (correction));
2508 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2510 #if defined _LIBC || defined MALLOC_HOOKS
2511 /* Call the `morecore' hook if necessary. */
2512 if (__after_morecore_hook)
2513 (*__after_morecore_hook) ();
2514 #endif
2516 sbrked_mem += correction;
2518 top(&main_arena) = (mchunkptr)brk;
2519 top_size = new_brk - brk + correction;
2520 set_head(top(&main_arena), top_size | PREV_INUSE);
2522 if (old_top == initial_top(&main_arena))
2523 old_top = 0; /* don't free below */
2526 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2527 max_sbrked_mem = sbrked_mem;
2528 #ifdef NO_THREADS
2529 if ((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2530 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2531 #endif
2533 #if USE_ARENAS
2534 } else { /* ar_ptr != &main_arena */
2535 heap_info *old_heap, *heap;
2536 size_t old_heap_size;
2538 if(old_top_size < MINSIZE) /* this should never happen */
2539 return;
2541 /* First try to extend the current heap. */
2542 if(MINSIZE + nb <= old_top_size)
2543 return;
2544 old_heap = heap_for_ptr(old_top);
2545 old_heap_size = old_heap->size;
2546 if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
2547 ar_ptr->size += old_heap->size - old_heap_size;
2548 arena_mem += old_heap->size - old_heap_size;
2549 #ifdef NO_THREADS
2550 if(mmapped_mem + arena_mem + sbrked_mem > max_total_mem)
2551 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2552 #endif
2553 top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
2554 set_head(old_top, top_size | PREV_INUSE);
2555 return;
2558 /* A new heap must be created. */
2559 heap = new_heap(nb + (MINSIZE + sizeof(*heap)));
2560 if(!heap)
2561 return;
2562 heap->ar_ptr = ar_ptr;
2563 heap->prev = old_heap;
2564 ar_ptr->size += heap->size;
2565 arena_mem += heap->size;
2566 #ifdef NO_THREADS
2567 if((unsigned long)(mmapped_mem + arena_mem + sbrked_mem) > max_total_mem)
2568 max_total_mem = mmapped_mem + arena_mem + sbrked_mem;
2569 #endif
2571 /* Set up the new top, so we can safely use chunk_free() below. */
2572 top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
2573 top_size = heap->size - sizeof(*heap);
2574 set_head(top(ar_ptr), top_size | PREV_INUSE);
2576 #endif /* USE_ARENAS */
2578 /* We always land on a page boundary */
2579 assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
2581 /* Setup fencepost and free the old top chunk. */
2582 if(old_top) {
2583 /* The fencepost takes at least MINSIZE bytes, because it might
2584 become the top chunk again later. Note that a footer is set
2585 up, too, although the chunk is marked in use. */
2586 old_top_size -= MINSIZE;
2587 set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
2588 if(old_top_size >= MINSIZE) {
2589 set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
2590 set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
2591 set_head_size(old_top, old_top_size);
2592 chunk_free(ar_ptr, old_top);
2593 } else {
2594 set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
2595 set_foot(old_top, (old_top_size + 2*SIZE_SZ));
2603 /* Main public routines */
2607 Malloc Algorithm:
2609 The requested size is first converted into a usable form, `nb'.
2610 This currently means to add 4 bytes overhead plus possibly more to
2611 obtain 8-byte alignment and/or to obtain a size of at least
2612 MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
2613 size. (All fits are considered `exact' if they are within MINSIZE
2614 bytes.)
2616 From there, the first of the following steps that succeeds is taken:
2618 1. The bin corresponding to the request size is scanned, and if
2619 a chunk of exactly the right size is found, it is taken.
2621 2. The most recently remaindered chunk is used if it is big
2622 enough. This is a form of (roving) first fit, used only in
2623 the absence of exact fits. Runs of consecutive requests use
2624 the remainder of the chunk used for the previous such request
2625 whenever possible. This limited use of a first-fit style
2626 allocation strategy tends to give contiguous chunks
2627 coextensive lifetimes, which improves locality and can reduce
2628 fragmentation in the long run.
2630 3. Other bins are scanned in increasing size order, using a
2631 chunk big enough to fulfill the request, and splitting off
2632 any remainder. This search is strictly by best-fit; i.e.,
2633 the smallest (with ties going to approximately the least
2634 recently used) chunk that fits is selected.
2636 4. If large enough, the chunk bordering the end of memory
2637 (`top') is split off. (This use of `top' is in accord with
2638 the best-fit search rule. In effect, `top' is treated as
2639 larger (and thus less well fitting) than any other available
2640 chunk since it can be extended to be as large as necessary
2641 (up to system limitations).
2643 5. If the request size meets the mmap threshold and the
2644 system supports mmap, and there are few enough currently
2645 allocated mmapped regions, and a call to mmap succeeds,
2646 the request is allocated via direct memory mapping.
2648 6. Otherwise, the top of memory is extended by
2649 obtaining more space from the system (normally using sbrk,
2650 but definable to anything else via the MORECORE macro).
2651 Memory is gathered from the system (in system page-sized
2652 units) in a way that allows chunks obtained across different
2653 sbrk calls to be consolidated, but does not require
2654 contiguous memory. Thus, it should be safe to intersperse
2655 mallocs with other sbrk calls.
2658 All allocations are made from the `lowest' part of any found
2659 chunk. (The implementation invariant is that prev_inuse is
2660 always true of any allocated chunk; i.e., that each allocated
2661 chunk borders either a previously allocated and still in-use chunk,
2662 or the base of its memory arena.)
2666 #if __STD_C
2667 Void_t* mALLOc(size_t bytes)
2668 #else
2669 Void_t* mALLOc(bytes) size_t bytes;
2670 #endif
2672 arena *ar_ptr;
2673 INTERNAL_SIZE_T nb; /* padded request size */
2674 mchunkptr victim;
2676 #if defined _LIBC || defined MALLOC_HOOKS
2677 if (__malloc_hook != NULL) {
2678 Void_t* result;
2680 #if defined __GNUC__ && __GNUC__ >= 2
2681 result = (*__malloc_hook)(bytes, __builtin_return_address (0));
2682 #else
2683 result = (*__malloc_hook)(bytes, NULL);
2684 #endif
2685 return result;
2687 #endif
2689 if(request2size(bytes, nb))
2690 return 0;
2691 arena_get(ar_ptr, nb);
2692 if(!ar_ptr)
2693 return 0;
2694 victim = chunk_alloc(ar_ptr, nb);
2695 if(!victim) {
2696 /* Maybe the failure is due to running out of mmapped areas. */
2697 if(ar_ptr != &main_arena) {
2698 (void)mutex_unlock(&ar_ptr->mutex);
2699 (void)mutex_lock(&main_arena.mutex);
2700 victim = chunk_alloc(&main_arena, nb);
2701 (void)mutex_unlock(&main_arena.mutex);
2702 } else {
2703 #if USE_ARENAS
2704 /* ... or sbrk() has failed and there is still a chance to mmap() */
2705 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
2706 (void)mutex_unlock(&main_arena.mutex);
2707 if(ar_ptr) {
2708 victim = chunk_alloc(ar_ptr, nb);
2709 (void)mutex_unlock(&ar_ptr->mutex);
2711 #endif
2713 if(!victim) return 0;
2714 } else
2715 (void)mutex_unlock(&ar_ptr->mutex);
2716 return chunk2mem(victim);
2719 static mchunkptr
2720 internal_function
2721 #if __STD_C
2722 chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
2723 #else
2724 chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2725 #endif
2727 mchunkptr victim; /* inspected/selected chunk */
2728 INTERNAL_SIZE_T victim_size; /* its size */
2729 int idx; /* index for bin traversal */
2730 mbinptr bin; /* associated bin */
2731 mchunkptr remainder; /* remainder from a split */
2732 long remainder_size; /* its size */
2733 int remainder_index; /* its bin index */
2734 unsigned long block; /* block traverser bit */
2735 int startidx; /* first bin of a traversed block */
2736 mchunkptr fwd; /* misc temp for linking */
2737 mchunkptr bck; /* misc temp for linking */
2738 mbinptr q; /* misc temp */
2741 /* Check for exact match in a bin */
2743 if (is_small_request(nb)) /* Faster version for small requests */
2745 idx = smallbin_index(nb);
2747 /* No traversal or size check necessary for small bins. */
2749 q = bin_at(ar_ptr, idx);
2750 victim = last(q);
2752 /* Also scan the next one, since it would have a remainder < MINSIZE */
2753 if (victim == q)
2755 q = next_bin(q);
2756 victim = last(q);
2758 if (victim != q)
2760 victim_size = chunksize(victim);
2761 unlink(victim, bck, fwd);
2762 set_inuse_bit_at_offset(victim, victim_size);
2763 check_malloced_chunk(ar_ptr, victim, nb);
2764 return victim;
2767 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2770 else
2772 idx = bin_index(nb);
2773 bin = bin_at(ar_ptr, idx);
2775 for (victim = last(bin); victim != bin; victim = victim->bk)
2777 victim_size = chunksize(victim);
2778 remainder_size = victim_size - nb;
2780 if (remainder_size >= (long)MINSIZE) /* too big */
2782 --idx; /* adjust to rescan below after checking last remainder */
2783 break;
2786 else if (remainder_size >= 0) /* exact fit */
2788 unlink(victim, bck, fwd);
2789 set_inuse_bit_at_offset(victim, victim_size);
2790 check_malloced_chunk(ar_ptr, victim, nb);
2791 return victim;
2795 ++idx;
2799 /* Try to use the last split-off remainder */
2801 if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
2803 victim_size = chunksize(victim);
2804 remainder_size = victim_size - nb;
2806 if (remainder_size >= (long)MINSIZE) /* re-split */
2808 remainder = chunk_at_offset(victim, nb);
2809 set_head(victim, nb | PREV_INUSE);
2810 link_last_remainder(ar_ptr, remainder);
2811 set_head(remainder, remainder_size | PREV_INUSE);
2812 set_foot(remainder, remainder_size);
2813 check_malloced_chunk(ar_ptr, victim, nb);
2814 return victim;
2817 clear_last_remainder(ar_ptr);
2819 if (remainder_size >= 0) /* exhaust */
2821 set_inuse_bit_at_offset(victim, victim_size);
2822 check_malloced_chunk(ar_ptr, victim, nb);
2823 return victim;
2826 /* Else place in bin */
2828 frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
2832 If there are any possibly nonempty big-enough blocks,
2833 search for best fitting chunk by scanning bins in blockwidth units.
2836 if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
2839 /* Get to the first marked block */
2841 if ( (block & binblocks(ar_ptr)) == 0)
2843 /* force to an even block boundary */
2844 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2845 block <<= 1;
2846 while ((block & binblocks(ar_ptr)) == 0)
2848 idx += BINBLOCKWIDTH;
2849 block <<= 1;
2853 /* For each possibly nonempty block ... */
2854 for (;;)
2856 startidx = idx; /* (track incomplete blocks) */
2857 q = bin = bin_at(ar_ptr, idx);
2859 /* For each bin in this block ... */
2862 /* Find and use first big enough chunk ... */
2864 for (victim = last(bin); victim != bin; victim = victim->bk)
2866 victim_size = chunksize(victim);
2867 remainder_size = victim_size - nb;
2869 if (remainder_size >= (long)MINSIZE) /* split */
2871 remainder = chunk_at_offset(victim, nb);
2872 set_head(victim, nb | PREV_INUSE);
2873 unlink(victim, bck, fwd);
2874 link_last_remainder(ar_ptr, remainder);
2875 set_head(remainder, remainder_size | PREV_INUSE);
2876 set_foot(remainder, remainder_size);
2877 check_malloced_chunk(ar_ptr, victim, nb);
2878 return victim;
2881 else if (remainder_size >= 0) /* take */
2883 set_inuse_bit_at_offset(victim, victim_size);
2884 unlink(victim, bck, fwd);
2885 check_malloced_chunk(ar_ptr, victim, nb);
2886 return victim;
2891 bin = next_bin(bin);
2893 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2895 /* Clear out the block bit. */
2897 do /* Possibly backtrack to try to clear a partial block */
2899 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2901 binblocks(ar_ptr) &= ~block;
2902 break;
2904 --startidx;
2905 q = prev_bin(q);
2906 } while (first(q) == q);
2908 /* Get to the next possibly nonempty block */
2910 if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
2912 while ((block & binblocks(ar_ptr)) == 0)
2914 idx += BINBLOCKWIDTH;
2915 block <<= 1;
2918 else
2919 break;
2924 /* Try to use top chunk */
2926 /* Require that there be a remainder, ensuring top always exists */
2927 if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2930 #if HAVE_MMAP
2931 /* If the request is big and there are not yet too many regions,
2932 and we would otherwise need to extend, try to use mmap instead. */
2933 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2934 n_mmaps < n_mmaps_max &&
2935 (victim = mmap_chunk(nb)) != 0)
2936 return victim;
2937 #endif
2939 /* Try to extend */
2940 malloc_extend_top(ar_ptr, nb);
2941 if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2943 #if HAVE_MMAP
2944 /* A last attempt: when we are out of address space in the arena,
2945 try mmap anyway, disregarding n_mmaps_max. */
2946 if((victim = mmap_chunk(nb)) != 0)
2947 return victim;
2948 #endif
2949 return 0; /* propagate failure */
2953 victim = top(ar_ptr);
2954 set_head(victim, nb | PREV_INUSE);
2955 top(ar_ptr) = chunk_at_offset(victim, nb);
2956 set_head(top(ar_ptr), remainder_size | PREV_INUSE);
2957 check_malloced_chunk(ar_ptr, victim, nb);
2958 return victim;
2967 free() algorithm :
2969 cases:
2971 1. free(0) has no effect.
2973 2. If the chunk was allocated via mmap, it is released via munmap().
2975 3. If a returned chunk borders the current high end of memory,
2976 it is consolidated into the top, and if the total unused
2977 topmost memory exceeds the trim threshold, malloc_trim is
2978 called.
2980 4. Other chunks are consolidated as they arrive, and
2981 placed in corresponding bins. (This includes the case of
2982 consolidating with the current `last_remainder').
2987 #if __STD_C
2988 void fREe(Void_t* mem)
2989 #else
2990 void fREe(mem) Void_t* mem;
2991 #endif
2993 arena *ar_ptr;
2994 mchunkptr p; /* chunk corresponding to mem */
2996 #if defined _LIBC || defined MALLOC_HOOKS
2997 if (__free_hook != NULL) {
2998 #if defined __GNUC__ && __GNUC__ >= 2
2999 (*__free_hook)(mem, __builtin_return_address (0));
3000 #else
3001 (*__free_hook)(mem, NULL);
3002 #endif
3003 return;
3005 #endif
3007 if (mem == 0) /* free(0) has no effect */
3008 return;
3010 p = mem2chunk(mem);
3012 #if HAVE_MMAP
3013 if (chunk_is_mmapped(p)) /* release mmapped memory. */
3015 munmap_chunk(p);
3016 return;
3018 #endif
3020 ar_ptr = arena_for_ptr(p);
3021 #if THREAD_STATS
3022 if(!mutex_trylock(&ar_ptr->mutex))
3023 ++(ar_ptr->stat_lock_direct);
3024 else {
3025 (void)mutex_lock(&ar_ptr->mutex);
3026 ++(ar_ptr->stat_lock_wait);
3028 #else
3029 (void)mutex_lock(&ar_ptr->mutex);
3030 #endif
3031 chunk_free(ar_ptr, p);
3032 (void)mutex_unlock(&ar_ptr->mutex);
3035 static void
3036 internal_function
3037 #if __STD_C
3038 chunk_free(arena *ar_ptr, mchunkptr p)
3039 #else
3040 chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
3041 #endif
3043 INTERNAL_SIZE_T hd = p->size; /* its head field */
3044 INTERNAL_SIZE_T sz; /* its size */
3045 int idx; /* its bin index */
3046 mchunkptr next; /* next contiguous chunk */
3047 INTERNAL_SIZE_T nextsz; /* its size */
3048 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
3049 mchunkptr bck; /* misc temp for linking */
3050 mchunkptr fwd; /* misc temp for linking */
3051 int islr; /* track whether merging with last_remainder */
3053 check_inuse_chunk(ar_ptr, p);
3055 sz = hd & ~PREV_INUSE;
3056 next = chunk_at_offset(p, sz);
3057 nextsz = chunksize(next);
3059 if (next == top(ar_ptr)) /* merge with top */
3061 sz += nextsz;
3063 if (!(hd & PREV_INUSE)) /* consolidate backward */
3065 prevsz = p->prev_size;
3066 p = chunk_at_offset(p, -(long)prevsz);
3067 sz += prevsz;
3068 unlink(p, bck, fwd);
3071 set_head(p, sz | PREV_INUSE);
3072 top(ar_ptr) = p;
3074 #if USE_ARENAS
3075 if(ar_ptr == &main_arena) {
3076 #endif
3077 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
3078 main_trim(top_pad);
3079 #if USE_ARENAS
3080 } else {
3081 heap_info *heap = heap_for_ptr(p);
3083 assert(heap->ar_ptr == ar_ptr);
3085 /* Try to get rid of completely empty heaps, if possible. */
3086 if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
3087 p == chunk_at_offset(heap, sizeof(*heap)))
3088 heap_trim(heap, top_pad);
3090 #endif
3091 return;
3094 islr = 0;
3096 if (!(hd & PREV_INUSE)) /* consolidate backward */
3098 prevsz = p->prev_size;
3099 p = chunk_at_offset(p, -(long)prevsz);
3100 sz += prevsz;
3102 if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */
3103 islr = 1;
3104 else
3105 unlink(p, bck, fwd);
3108 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
3110 sz += nextsz;
3112 if (!islr && next->fd == last_remainder(ar_ptr))
3113 /* re-insert last_remainder */
3115 islr = 1;
3116 link_last_remainder(ar_ptr, p);
3118 else
3119 unlink(next, bck, fwd);
3121 next = chunk_at_offset(p, sz);
3123 else
3124 set_head(next, nextsz); /* clear inuse bit */
3126 set_head(p, sz | PREV_INUSE);
3127 next->prev_size = sz;
3128 if (!islr)
3129 frontlink(ar_ptr, p, sz, idx, bck, fwd);
3131 #if USE_ARENAS
3132 /* Check whether the heap containing top can go away now. */
3133 if(next->size < MINSIZE &&
3134 (unsigned long)sz > trim_threshold &&
3135 ar_ptr != &main_arena) { /* fencepost */
3136 heap_info *heap = heap_for_ptr(top(ar_ptr));
3138 if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) &&
3139 heap->prev == heap_for_ptr(p))
3140 heap_trim(heap, top_pad);
3142 #endif
3151 Realloc algorithm:
3153 Chunks that were obtained via mmap cannot be extended or shrunk
3154 unless HAVE_MREMAP is defined, in which case mremap is used.
3155 Otherwise, if their reallocation is for additional space, they are
3156 copied. If for less, they are just left alone.
3158 Otherwise, if the reallocation is for additional space, and the
3159 chunk can be extended, it is, else a malloc-copy-free sequence is
3160 taken. There are several different ways that a chunk could be
3161 extended. All are tried:
3163 * Extending forward into following adjacent free chunk.
3164 * Shifting backwards, joining preceding adjacent space
3165 * Both shifting backwards and extending forward.
3166 * Extending into newly sbrked space
3168 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
3169 size argument of zero (re)allocates a minimum-sized chunk.
3171 If the reallocation is for less space, and the new request is for
3172 a `small' (<512 bytes) size, then the newly unused space is lopped
3173 off and freed.
3175 The old unix realloc convention of allowing the last-free'd chunk
3176 to be used as an argument to realloc is no longer supported.
3177 I don't know of any programs still relying on this feature,
3178 and allowing it would also allow too many other incorrect
3179 usages of realloc to be sensible.
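/* A common calling pattern that relies on the behaviour described
   above: on failure realloc() returns 0 and leaves the old block
   untouched, so the result must not be assigned over the only copy of
   the old pointer.  Illustrative sketch (the helper name is made up
   for the example): */
#if 0
#include <stdlib.h>

static int
grow_buffer (void **buf, size_t new_size)
{
  void *tmp = realloc (*buf, new_size);
  if (tmp == 0 && new_size != 0)
    return -1;          /* *buf is still valid and unchanged */
  *buf = tmp;
  return 0;
}
#endif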
3185 #if __STD_C
3186 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3187 #else
3188 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3189 #endif
3191 arena *ar_ptr;
3192 INTERNAL_SIZE_T nb; /* padded request size */
3194 mchunkptr oldp; /* chunk corresponding to oldmem */
3195 INTERNAL_SIZE_T oldsize; /* its size */
3197 mchunkptr newp; /* chunk to return */
3199 #if defined _LIBC || defined MALLOC_HOOKS
3200 if (__realloc_hook != NULL) {
3201 Void_t* result;
3203 #if defined __GNUC__ && __GNUC__ >= 2
3204 result = (*__realloc_hook)(oldmem, bytes, __builtin_return_address (0));
3205 #else
3206 result = (*__realloc_hook)(oldmem, bytes, NULL);
3207 #endif
3208 return result;
3210 #endif
3212 #ifdef REALLOC_ZERO_BYTES_FREES
3213 if (bytes == 0 && oldmem != NULL) { fREe(oldmem); return 0; }
3214 #endif
3216 /* realloc of null is supposed to be same as malloc */
3217 if (oldmem == 0) return mALLOc(bytes);
3219 oldp = mem2chunk(oldmem);
3220 oldsize = chunksize(oldp);
3222 if(request2size(bytes, nb))
3223 return 0;
3225 #if HAVE_MMAP
3226 if (chunk_is_mmapped(oldp))
3228 Void_t* newmem;
3230 #if HAVE_MREMAP
3231 newp = mremap_chunk(oldp, nb);
3232 if(newp) return chunk2mem(newp);
3233 #endif
3234 /* Note the extra SIZE_SZ overhead. */
3235 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
3236 /* Must alloc, copy, free. */
3237 newmem = mALLOc(bytes);
3238 if (newmem == 0) return 0; /* propagate failure */
3239 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
3240 munmap_chunk(oldp);
3241 return newmem;
3243 #endif
3245 ar_ptr = arena_for_ptr(oldp);
3246 #if THREAD_STATS
3247 if(!mutex_trylock(&ar_ptr->mutex))
3248 ++(ar_ptr->stat_lock_direct);
3249 else {
3250 (void)mutex_lock(&ar_ptr->mutex);
3251 ++(ar_ptr->stat_lock_wait);
3253 #else
3254 (void)mutex_lock(&ar_ptr->mutex);
3255 #endif
3257 #ifndef NO_THREADS
3258 /* As in malloc(), remember this arena for the next allocation. */
3259 tsd_setspecific(arena_key, (Void_t *)ar_ptr);
3260 #endif
3262 newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);
3264 (void)mutex_unlock(&ar_ptr->mutex);
3265 return newp ? chunk2mem(newp) : NULL;
3268 static mchunkptr
3269 internal_function
3270 #if __STD_C
3271 chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
3272 INTERNAL_SIZE_T nb)
3273 #else
3274 chunk_realloc(ar_ptr, oldp, oldsize, nb)
3275 arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
3276 #endif
3278 mchunkptr newp = oldp; /* chunk to return */
3279 INTERNAL_SIZE_T newsize = oldsize; /* its size */
3281 mchunkptr next; /* next contiguous chunk after oldp */
3282 INTERNAL_SIZE_T nextsize; /* its size */
3284 mchunkptr prev; /* previous contiguous chunk before oldp */
3285 INTERNAL_SIZE_T prevsize; /* its size */
3287 mchunkptr remainder; /* holds split off extra space from newp */
3288 INTERNAL_SIZE_T remainder_size; /* its size */
3290 mchunkptr bck; /* misc temp for linking */
3291 mchunkptr fwd; /* misc temp for linking */
3293 check_inuse_chunk(ar_ptr, oldp);
3295 if ((long)(oldsize) < (long)(nb))
3298 /* Try expanding forward */
3300 next = chunk_at_offset(oldp, oldsize);
3301 if (next == top(ar_ptr) || !inuse(next))
3303 nextsize = chunksize(next);
3305 /* Forward into top only if a remainder */
3306 if (next == top(ar_ptr))
3308 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
3310 newsize += nextsize;
3311 top(ar_ptr) = chunk_at_offset(oldp, nb);
3312 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3313 set_head_size(oldp, nb);
3314 return oldp;
3318 /* Forward into next chunk */
3319 else if (((long)(nextsize + newsize) >= (long)(nb)))
3321 unlink(next, bck, fwd);
3322 newsize += nextsize;
3323 goto split;
3326 else
3328 next = 0;
3329 nextsize = 0;
3332 /* Try shifting backwards. */
3334 if (!prev_inuse(oldp))
3336 prev = prev_chunk(oldp);
3337 prevsize = chunksize(prev);
3339 /* try forward + backward first to save a later consolidation */
3341 if (next != 0)
3343 /* into top */
3344 if (next == top(ar_ptr))
3346 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
3348 unlink(prev, bck, fwd);
3349 newp = prev;
3350 newsize += prevsize + nextsize;
3351 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3352 top(ar_ptr) = chunk_at_offset(newp, nb);
3353 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3354 set_head_size(newp, nb);
3355 return newp;
3359 /* into next chunk */
3360 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
3362 unlink(next, bck, fwd);
3363 unlink(prev, bck, fwd);
3364 newp = prev;
3365 newsize += nextsize + prevsize;
3366 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3367 goto split;
3371 /* backward only */
3372 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
3374 unlink(prev, bck, fwd);
3375 newp = prev;
3376 newsize += prevsize;
3377 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3378 goto split;
3382 /* Must allocate */
3384 newp = chunk_alloc (ar_ptr, nb);
3386 if (newp == 0) {
3387 /* Maybe the failure is due to running out of mmapped areas. */
3388 if (ar_ptr != &main_arena) {
3389 (void)mutex_lock(&main_arena.mutex);
3390 newp = chunk_alloc(&main_arena, nb);
3391 (void)mutex_unlock(&main_arena.mutex);
3392 } else {
3393 #if USE_ARENAS
3394 /* ... or sbrk() has failed and there is still a chance to mmap() */
3395 arena* ar_ptr2 = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
3396 if(ar_ptr2) {
3397 newp = chunk_alloc(ar_ptr2, nb);
3398 (void)mutex_unlock(&ar_ptr2->mutex);
3400 #endif
3402 if (newp == 0) /* propagate failure */
3403 return 0;
3406 /* Avoid copy if newp is next chunk after oldp. */
3407 /* (This can only happen when new chunk is sbrk'ed.) */
3409 if ( newp == next_chunk(oldp))
3411 newsize += chunksize(newp);
3412 newp = oldp;
3413 goto split;
3416 /* Otherwise copy, free, and exit */
3417 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
3418 chunk_free(ar_ptr, oldp);
3419 return newp;
3423 split: /* split off extra room in old or expanded chunk */
3425 if (newsize - nb >= MINSIZE) /* split off remainder */
3427 remainder = chunk_at_offset(newp, nb);
3428 remainder_size = newsize - nb;
3429 set_head_size(newp, nb);
3430 set_head(remainder, remainder_size | PREV_INUSE);
3431 set_inuse_bit_at_offset(remainder, remainder_size);
3432 chunk_free(ar_ptr, remainder);
3434 else
3436 set_head_size(newp, newsize);
3437 set_inuse_bit_at_offset(newp, newsize);
3440 check_inuse_chunk(ar_ptr, newp);
3441 return newp;
3449 memalign algorithm:
3451 memalign requests more than enough space from malloc, finds a spot
3452 within that chunk that meets the alignment request, and then
3453 possibly frees the leading and trailing space.
3455 The alignment argument must be a power of two. This property is not
3456 checked by memalign, so misuse may result in random runtime errors.
3458 8-byte alignment is guaranteed by normal malloc calls, so don't
3459 bother calling memalign with an argument of 8 or less.
3461 Overreliance on memalign is a sure way to fragment space.
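/* Typical use: request storage whose address is a multiple of some
   power of two larger than MALLOC_ALIGNMENT, e.g. a 64-byte cache
   line; the block is released with free() as usual.  Illustrative
   sketch: */
#if 0
#include <stdlib.h>
#include <malloc.h>

static void *
alloc_cache_aligned (size_t bytes)
{
  void *p = memalign (64, bytes);   /* 64 must be a power of two */
  /* ((unsigned long) p % 64) == 0 holds whenever p != 0 */
  return p;
}
#endif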
3466 #if __STD_C
3467 Void_t* mEMALIGn(size_t alignment, size_t bytes)
3468 #else
3469 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
3470 #endif
3472 arena *ar_ptr;
3473 INTERNAL_SIZE_T nb; /* padded request size */
3474 mchunkptr p;
3476 #if defined _LIBC || defined MALLOC_HOOKS
3477 if (__memalign_hook != NULL) {
3478 Void_t* result;
3480 #if defined __GNUC__ && __GNUC__ >= 2
3481 result = (*__memalign_hook)(alignment, bytes,
3482 __builtin_return_address (0));
3483 #else
3484 result = (*__memalign_hook)(alignment, bytes, NULL);
3485 #endif
3486 return result;
3488 #endif
3490 /* If need less alignment than we give anyway, just relay to malloc */
3492 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
3494 /* Otherwise, ensure that it is at least a minimum chunk size */
3496 if (alignment < MINSIZE) alignment = MINSIZE;
3498 if(request2size(bytes, nb))
3499 return 0;
3500 arena_get(ar_ptr, nb + alignment + MINSIZE);
3501 if(!ar_ptr)
3502 return 0;
3503 p = chunk_align(ar_ptr, nb, alignment);
3504 (void)mutex_unlock(&ar_ptr->mutex);
3505 if(!p) {
3506 /* Maybe the failure is due to running out of mmapped areas. */
3507 if(ar_ptr != &main_arena) {
3508 (void)mutex_lock(&main_arena.mutex);
3509 p = chunk_align(&main_arena, nb, alignment);
3510 (void)mutex_unlock(&main_arena.mutex);
3511 } else {
3512 #if USE_ARENAS
3513 /* ... or sbrk() has failed and there is still a chance to mmap() */
3514 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, nb);
3515 if(ar_ptr) {
3516 p = chunk_align(ar_ptr, nb, alignment);
3517 (void)mutex_unlock(&ar_ptr->mutex);
3519 #endif
3521 if(!p) return 0;
3523 return chunk2mem(p);
3526 static mchunkptr
3527 internal_function
3528 #if __STD_C
3529 chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
3530 #else
3531 chunk_align(ar_ptr, nb, alignment)
3532 arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
3533 #endif
3535 char* m; /* memory returned by malloc call */
3536 mchunkptr p; /* corresponding chunk */
3537 char* brk; /* alignment point within p */
3538 mchunkptr newp; /* chunk to return */
3539 INTERNAL_SIZE_T newsize; /* its size */
3540 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3541 mchunkptr remainder; /* spare room at end to split off */
3542 long remainder_size; /* its size */
3544 /* Call chunk_alloc with worst case padding to hit alignment. */
3545 p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
3546 if (p == 0)
3547 return 0; /* propagate failure */
3549 m = (char*)chunk2mem(p);
3551 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3553 #if HAVE_MMAP
3554 if(chunk_is_mmapped(p)) {
3555 return p; /* nothing more to do */
3557 #endif
3559 else /* misaligned */
3562 Find an aligned spot inside chunk.
3563 Since we need to give back leading space in a chunk of at
3564 least MINSIZE, if the first calculation places us at
3565 a spot with less than MINSIZE leader, we can move to the
3566 next aligned spot -- we've allocated enough total room so that
3567 this is always possible.
3570 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) &
3571 -(long)alignment);
3572 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;
3574 newp = (mchunkptr)brk;
3575 leadsize = brk - (char*)(p);
3576 newsize = chunksize(p) - leadsize;
3578 #if HAVE_MMAP
3579 if(chunk_is_mmapped(p))
3581 newp->prev_size = p->prev_size + leadsize;
3582 set_head(newp, newsize|IS_MMAPPED);
3583 return newp;
3585 #endif
3587 /* give back leader, use the rest */
3589 set_head(newp, newsize | PREV_INUSE);
3590 set_inuse_bit_at_offset(newp, newsize);
3591 set_head_size(p, leadsize);
3592 chunk_free(ar_ptr, p);
3593 p = newp;
3595 assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3598 /* Also give back spare room at the end */
3600 remainder_size = chunksize(p) - nb;
3602 if (remainder_size >= (long)MINSIZE)
3604 remainder = chunk_at_offset(p, nb);
3605 set_head(remainder, remainder_size | PREV_INUSE);
3606 set_head_size(p, nb);
3607 chunk_free(ar_ptr, remainder);
3610 check_inuse_chunk(ar_ptr, p);
3611 return p;
3618 valloc just invokes memalign with alignment argument equal
3619 to the page size of the system (or as near to this as can
3620 be figured out from all the includes/defines above.)
3623 #if __STD_C
3624 Void_t* vALLOc(size_t bytes)
3625 #else
3626 Void_t* vALLOc(bytes) size_t bytes;
3627 #endif
3629 if(__malloc_initialized < 0)
3630 ptmalloc_init ();
3631 return mEMALIGn (malloc_getpagesize, bytes);
3635 pvalloc just invokes valloc for the nearest pagesize
3636 that will accommodate the request
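/* Example: with a 4096-byte page size, pvalloc(5000) behaves like
   memalign(4096, 8192): the request is rounded up to the next multiple
   of the page size before being passed on. */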
3640 #if __STD_C
3641 Void_t* pvALLOc(size_t bytes)
3642 #else
3643 Void_t* pvALLOc(bytes) size_t bytes;
3644 #endif
3646 size_t pagesize;
3647 if(__malloc_initialized < 0)
3648 ptmalloc_init ();
3649 pagesize = malloc_getpagesize;
3650 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3655 calloc calls chunk_alloc, then zeroes out the allocated chunk.
3659 #if __STD_C
3660 Void_t* cALLOc(size_t n, size_t elem_size)
3661 #else
3662 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3663 #endif
3665 arena *ar_ptr;
3666 mchunkptr p, oldtop;
3667 INTERNAL_SIZE_T sz, csz, oldtopsize;
3668 Void_t* mem;
3670 #if defined _LIBC || defined MALLOC_HOOKS
3671 if (__malloc_hook != NULL) {
3672 sz = n * elem_size;
3673 #if defined __GNUC__ && __GNUC__ >= 2
3674 mem = (*__malloc_hook)(sz, __builtin_return_address (0));
3675 #else
3676 mem = (*__malloc_hook)(sz, NULL);
3677 #endif
3678 if(mem == 0)
3679 return 0;
3680 #ifdef HAVE_MEMSET
3681 return memset(mem, 0, sz);
3682 #else
3683 while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
3684 return mem;
3685 #endif
3687 #endif
3689 if(request2size(n * elem_size, sz))
3690 return 0;
3691 arena_get(ar_ptr, sz);
3692 if(!ar_ptr)
3693 return 0;
3695 /* Check if malloc_extend_top was called, in which case there may be
3696 no need to clear. */
3697 #if MORECORE_CLEARS
3698 oldtop = top(ar_ptr);
3699 oldtopsize = chunksize(top(ar_ptr));
3700 #if MORECORE_CLEARS < 2
3701 /* Only newly allocated memory is guaranteed to be cleared. */
3702 if (ar_ptr == &main_arena &&
3703 oldtopsize < sbrk_base + max_sbrked_mem - (char *)oldtop)
3704 oldtopsize = (sbrk_base + max_sbrked_mem - (char *)oldtop);
3705 #endif
3706 #endif
3707 p = chunk_alloc (ar_ptr, sz);
3709 /* Only clearing follows, so we can unlock early. */
3710 (void)mutex_unlock(&ar_ptr->mutex);
3712 if (p == 0) {
3713 /* Maybe the failure is due to running out of mmapped areas. */
3714 if(ar_ptr != &main_arena) {
3715 (void)mutex_lock(&main_arena.mutex);
3716 p = chunk_alloc(&main_arena, sz);
3717 (void)mutex_unlock(&main_arena.mutex);
3718 } else {
3719 #if USE_ARENAS
3720 /* ... or sbrk() has failed and there is still a chance to mmap() */
3721 (void)mutex_lock(&main_arena.mutex);
3722 ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, sz);
3723 (void)mutex_unlock(&main_arena.mutex);
3724 if(ar_ptr) {
3725 p = chunk_alloc(ar_ptr, sz);
3726 (void)mutex_unlock(&ar_ptr->mutex);
3728 #endif
3730 if (p == 0) return 0;
3732 mem = chunk2mem(p);
3734 /* Two optional cases in which clearing is not necessary */
3736 #if HAVE_MMAP
3737 if (chunk_is_mmapped(p)) return mem;
3738 #endif
3740 csz = chunksize(p);
3742 #if MORECORE_CLEARS
3743 if (p == oldtop && csz > oldtopsize) {
3744 /* clear only the bytes from non-freshly-sbrked memory */
3745 csz = oldtopsize;
3747 #endif
3749 MALLOC_ZERO(mem, csz - SIZE_SZ);
3750 return mem;
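/* Illustrative usage sketch (not part of this file's build): cALLOc() above
   behaves like malloc followed by clearing the memory, except that the
   clearing is skipped when the chunk is known to be fresh (mmap()ed, or
   newly sbrk()ed memory when MORECORE_CLEARS is set). */
#if 0
#include <assert.h>
#include <stdlib.h>
#include <string.h>

int
main (void)
{
  size_t n = 16, elem = sizeof (int);
  int *a = calloc (n, elem);   /* zero-filled on return */
  int *b = malloc (n * elem);  /* contents indeterminate until cleared */

  if (a == NULL || b == NULL)
    return 1;
  memset (b, 0, n * elem);
  assert (memcmp (a, b, n * elem) == 0);
  free (a);
  free (b);
  return 0;
}
#endif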
3755 cfree just calls free. It is needed/defined on some systems
3756 that pair it with calloc, presumably for odd historical reasons.
3760 #if !defined(_LIBC)
3761 #if __STD_C
3762 void cfree(Void_t *mem)
3763 #else
3764 void cfree(mem) Void_t *mem;
3765 #endif
3767 fREe(mem);
3769 #endif
3775 Malloc_trim gives memory back to the system (via negative
3776 arguments to sbrk) if there is unused memory at the `high' end of
3777 the malloc pool. You can call this after freeing large blocks of
3778 memory to potentially reduce the system-level memory requirements
3779 of a program. However, it cannot guarantee to reduce memory usage. Under
3780 some allocation patterns, some large free blocks of memory will be
3781 locked between two used chunks, so they cannot be given back to
3782 the system.
3784 The `pad' argument to malloc_trim represents the amount of free
3785 trailing space to leave untrimmed. If this argument is zero,
3786 only the minimum amount of memory to maintain internal data
3787 structures will be left (one page or less). Non-zero arguments
3788 can be supplied to maintain enough trailing space to service
3789 future expected allocations without having to re-obtain memory
3790 from the system.
3792 Malloc_trim returns 1 if it actually released any memory, else 0.
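/* Illustrative calling pattern (not part of this file's build): after freeing
   large blocks, a program may call malloc_trim() to return unused trailing
   memory to the system, keeping `pad' bytes of slack for future requests.
   The prototype is typically available via <malloc.h>. */
#if 0
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  char *big = malloc (8 * 1024 * 1024);
  if (big == NULL)
    return 1;
  /* ... use the buffer ... */
  free (big);

  /* Keep 64kB of trailing slack; the return value is 1 only if memory
     was actually given back to the system (via a negative sbrk). */
  if (malloc_trim (64 * 1024))
    fputs ("trimmed\n", stderr);
  return 0;
}
#endif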
3796 #if __STD_C
3797 int mALLOC_TRIm(size_t pad)
3798 #else
3799 int mALLOC_TRIm(pad) size_t pad;
3800 #endif
3802 int res;
3804 (void)mutex_lock(&main_arena.mutex);
3805 res = main_trim(pad);
3806 (void)mutex_unlock(&main_arena.mutex);
3807 return res;
3810 /* Trim the main arena. */
3812 static int
3813 internal_function
3814 #if __STD_C
3815 main_trim(size_t pad)
3816 #else
3817 main_trim(pad) size_t pad;
3818 #endif
3820 mchunkptr top_chunk; /* The current top chunk */
3821 long top_size; /* Amount of top-most memory */
3822 long extra; /* Amount to release */
3823 char* current_brk; /* address returned by pre-check sbrk call */
3824 char* new_brk; /* address returned by negative sbrk call */
3826 unsigned long pagesz = malloc_getpagesize;
3828 top_chunk = top(&main_arena);
3829 top_size = chunksize(top_chunk);
3830 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3832 if (extra < (long)pagesz) /* Not enough memory to release */
3833 return 0;
3835 /* Test to make sure no one else called sbrk */
3836 current_brk = (char*)(MORECORE (0));
3837 if (current_brk != (char*)(top_chunk) + top_size)
3838 return 0; /* Apparently we don't own memory; must fail */
3840 new_brk = (char*)(MORECORE (-extra));
3842 #if defined _LIBC || defined MALLOC_HOOKS
3843 /* Call the `morecore' hook if necessary. */
3844 if (__after_morecore_hook)
3845 (*__after_morecore_hook) ();
3846 #endif
3848 if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
3849 /* Try to figure out what we have */
3850 current_brk = (char*)(MORECORE (0));
3851 top_size = current_brk - (char*)top_chunk;
3852 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3854 sbrked_mem = current_brk - sbrk_base;
3855 set_head(top_chunk, top_size | PREV_INUSE);
3857 check_chunk(&main_arena, top_chunk);
3858 return 0;
3860 sbrked_mem -= extra;
3862 /* Success. Adjust top accordingly. */
3863 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3864 check_chunk(&main_arena, top_chunk);
3865 return 1;
3868 #if USE_ARENAS
3870 static int
3871 internal_function
3872 #if __STD_C
3873 heap_trim(heap_info *heap, size_t pad)
3874 #else
3875 heap_trim(heap, pad) heap_info *heap; size_t pad;
3876 #endif
3878 unsigned long pagesz = malloc_getpagesize;
3879 arena *ar_ptr = heap->ar_ptr;
3880 mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
3881 heap_info *prev_heap;
3882 long new_size, top_size, extra;
3884 /* Can this heap go away completely? */
3885 while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
3886 prev_heap = heap->prev;
3887 p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
3888 assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
3889 p = prev_chunk(p);
3890 new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
3891 assert(new_size>0 && new_size<(long)(2*MINSIZE));
3892 if(!prev_inuse(p))
3893 new_size += p->prev_size;
3894 assert(new_size>0 && new_size<HEAP_MAX_SIZE);
3895 if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
3896 break;
3897 ar_ptr->size -= heap->size;
3898 arena_mem -= heap->size;
3899 delete_heap(heap);
3900 heap = prev_heap;
3901 if(!prev_inuse(p)) { /* consolidate backward */
3902 p = prev_chunk(p);
3903 unlink(p, bck, fwd);
3905 assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
3906 assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
3907 top(ar_ptr) = top_chunk = p;
3908 set_head(top_chunk, new_size | PREV_INUSE);
3909 check_chunk(ar_ptr, top_chunk);
3911 top_size = chunksize(top_chunk);
3912 extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
3913 if(extra < (long)pagesz)
3914 return 0;
3915 /* Try to shrink. */
3916 if(grow_heap(heap, -extra) != 0)
3917 return 0;
3918 ar_ptr->size -= extra;
3919 arena_mem -= extra;
3921 /* Success. Adjust top accordingly. */
3922 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3923 check_chunk(ar_ptr, top_chunk);
3924 return 1;
3927 #endif /* USE_ARENAS */
3932 malloc_usable_size:
3934 This routine tells you how many bytes you can actually use in an
3935 allocated chunk, which may be more than you requested (although
3936 often not). You can use this many bytes without worrying about
3937 overwriting other allocated objects. Not a particularly great
3938 programming practice, but still sometimes useful.
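/* Illustrative usage sketch (not part of this file's build): the reported
   usable size is at least the requested size and may be larger because of
   alignment and the minimum chunk size; the slack may be used safely.
   malloc_usable_size() is typically declared in <malloc.h>. */
#if 0
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  char *p = malloc (5);
  if (p == NULL)
    return 1;
  printf ("requested 5, usable %lu\n",
          (unsigned long) malloc_usable_size (p));
  free (p);
  return 0;
}
#endif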
3942 #if __STD_C
3943 size_t mALLOC_USABLE_SIZe(Void_t* mem)
3944 #else
3945 size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
3946 #endif
3948 mchunkptr p;
3950 if (mem == 0)
3951 return 0;
3952 else
3954 p = mem2chunk(mem);
3955 if(!chunk_is_mmapped(p))
3957 if (!inuse(p)) return 0;
3958 check_inuse_chunk(arena_for_ptr(mem), p);
3959 return chunksize(p) - SIZE_SZ;
3961 return chunksize(p) - 2*SIZE_SZ;
3968 /* Utility to update mallinfo for malloc_stats() and mallinfo() */
3970 static void
3971 #if __STD_C
3972 malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
3973 #else
3974 malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
3975 #endif
3977 int i, navail;
3978 mbinptr b;
3979 mchunkptr p;
3980 #if MALLOC_DEBUG
3981 mchunkptr q;
3982 #endif
3983 INTERNAL_SIZE_T avail;
3985 (void)mutex_lock(&ar_ptr->mutex);
3986 avail = chunksize(top(ar_ptr));
3987 navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3989 for (i = 1; i < NAV; ++i)
3991 b = bin_at(ar_ptr, i);
3992 for (p = last(b); p != b; p = p->bk)
3994 #if MALLOC_DEBUG
3995 check_free_chunk(ar_ptr, p);
3996 for (q = next_chunk(p);
3997 q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
3998 q = next_chunk(q))
3999 check_inuse_chunk(ar_ptr, q);
4000 #endif
4001 avail += chunksize(p);
4002 navail++;
4006 mi->arena = ar_ptr->size;
4007 mi->ordblks = navail;
4008 mi->smblks = mi->usmblks = mi->fsmblks = 0; /* clear unused fields */
4009 mi->uordblks = ar_ptr->size - avail;
4010 mi->fordblks = avail;
4011 mi->hblks = n_mmaps;
4012 mi->hblkhd = mmapped_mem;
4013 mi->keepcost = chunksize(top(ar_ptr));
4015 (void)mutex_unlock(&ar_ptr->mutex);
4018 #if USE_ARENAS && MALLOC_DEBUG > 1
4020 /* Print the complete contents of a single heap to stderr. */
4022 static void
4023 #if __STD_C
4024 dump_heap(heap_info *heap)
4025 #else
4026 dump_heap(heap) heap_info *heap;
4027 #endif
4029 char *ptr;
4030 mchunkptr p;
4032 fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
4033 ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
4034 (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
4035 p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
4036 ~MALLOC_ALIGN_MASK);
4037 for(;;) {
4038 fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
4039 if(p == top(heap->ar_ptr)) {
4040 fprintf(stderr, " (top)\n");
4041 break;
4042 } else if(p->size == (0|PREV_INUSE)) {
4043 fprintf(stderr, " (fence)\n");
4044 break;
4046 fprintf(stderr, "\n");
4047 p = next_chunk(p);
4051 #endif
4057 malloc_stats:
4059 For all arenas separately and in total, prints on stderr the
4060 amount of space obtained from the system, and the current number
4061 of bytes allocated via malloc (or realloc, etc) but not yet
4062 freed. (Note that this is the number of bytes allocated, not the
4063 number requested. It will be larger than the number requested
4064 because of alignment and bookkeeping overhead.) When not compiled
4065 for multiple threads, the maximum amount of allocated memory
4066 (which may be more than current if malloc_trim and/or munmap got
4067 called) is also reported. When using mmap(), prints the maximum
4068 number of simultaneous mmap regions used, too.
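/* Illustrative usage sketch (not part of this file's build): malloc_stats()
   takes no arguments and writes the per-arena and total figures described
   above to stderr.  The prototype is typically available via <malloc.h>. */
#if 0
#include <malloc.h>
#include <stdlib.h>

int
main (void)
{
  void *p = malloc (1000);
  void *q = calloc (10, 100);
  free (p);
  malloc_stats ();   /* "system bytes" / "in use bytes" per arena and in total */
  free (q);
  return 0;
}
#endif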
4072 void mALLOC_STATs()
4074 int i;
4075 arena *ar_ptr;
4076 struct mallinfo mi;
4077 unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
4078 #if THREAD_STATS
4079 long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
4080 #endif
4082 for(i=0, ar_ptr = &main_arena;; i++) {
4083 malloc_update_mallinfo(ar_ptr, &mi);
4084 fprintf(stderr, "Arena %d:\n", i);
4085 fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena);
4086 fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks);
4087 system_b += mi.arena;
4088 in_use_b += mi.uordblks;
4089 #if THREAD_STATS
4090 stat_lock_direct += ar_ptr->stat_lock_direct;
4091 stat_lock_loop += ar_ptr->stat_lock_loop;
4092 stat_lock_wait += ar_ptr->stat_lock_wait;
4093 #endif
4094 #if USE_ARENAS && MALLOC_DEBUG > 1
4095 if(ar_ptr != &main_arena) {
4096 heap_info *heap;
4097 (void)mutex_lock(&ar_ptr->mutex);
4098 heap = heap_for_ptr(top(ar_ptr));
4099 while(heap) { dump_heap(heap); heap = heap->prev; }
4100 (void)mutex_unlock(&ar_ptr->mutex);
4102 #endif
4103 ar_ptr = ar_ptr->next;
4104 if(ar_ptr == &main_arena) break;
4106 #if HAVE_MMAP
4107 fprintf(stderr, "Total (incl. mmap):\n");
4108 #else
4109 fprintf(stderr, "Total:\n");
4110 #endif
4111 fprintf(stderr, "system bytes = %10u\n", system_b);
4112 fprintf(stderr, "in use bytes = %10u\n", in_use_b);
4113 #ifdef NO_THREADS
4114 fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
4115 #endif
4116 #if HAVE_MMAP
4117 fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
4118 fprintf(stderr, "max mmap bytes = %10lu\n", max_mmapped_mem);
4119 #endif
4120 #if THREAD_STATS
4121 fprintf(stderr, "heaps created = %10d\n", stat_n_heaps);
4122 fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct);
4123 fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop);
4124 fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait);
4125 fprintf(stderr, "locked total = %10ld\n",
4126 stat_lock_direct + stat_lock_loop + stat_lock_wait);
4127 #endif
4131 mallinfo returns a copy of updated current mallinfo.
4132 The information reported is for the arena last used by the thread.
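/* Illustrative usage sketch (not part of this file's build): the struct
   mallinfo fields filled in by malloc_update_mallinfo() above can be read
   directly; all fields are plain ints.  Declared in <malloc.h>. */
#if 0
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  void *p = malloc (4096);
  struct mallinfo mi = mallinfo ();

  /* arena:    bytes obtained from the system for this arena
     uordblks: bytes currently allocated and not yet freed
     fordblks: bytes in free chunks
     keepcost: size of the top chunk, i.e. roughly what malloc_trim
               could give back to the system */
  printf ("arena=%d uordblks=%d fordblks=%d keepcost=%d\n",
          mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
  free (p);
  return 0;
}
#endif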
4135 struct mallinfo mALLINFo()
4137 struct mallinfo mi;
4138 Void_t *vptr = NULL;
4140 #ifndef NO_THREADS
4141 tsd_getspecific(arena_key, vptr);
4142 #endif
4143 malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
4144 return mi;
4151 mallopt:
4153 mallopt is the general SVID/XPG interface to tunable parameters.
4154 The format is to provide a (parameter-number, parameter-value) pair.
4155 mallopt then sets the corresponding parameter to the argument
4156 value if it can (i.e., so long as the value is meaningful),
4157 and returns 1 if successful else 0.
4159 See descriptions of tunable parameters above.
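/* Illustrative usage sketch (not part of this file's build): each mallopt()
   call sets one tunable and returns 1 on success, 0 otherwise.  The M_*
   parameter numbers are typically defined in <malloc.h>. */
#if 0
#include <malloc.h>
#include <stdio.h>

int
main (void)
{
  if (!mallopt (M_TRIM_THRESHOLD, 256 * 1024))  /* trim when top free space exceeds 256kB */
    fputs ("M_TRIM_THRESHOLD rejected\n", stderr);
  if (!mallopt (M_MMAP_THRESHOLD, 512 * 1024))  /* serve requests >= 512kB via mmap() */
    fputs ("M_MMAP_THRESHOLD rejected\n", stderr);
  if (!mallopt (M_MMAP_MAX, 0))                 /* disable the use of mmap() entirely */
    fputs ("M_MMAP_MAX rejected\n", stderr);
  return 0;
}
#endif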
4163 #if __STD_C
4164 int mALLOPt(int param_number, int value)
4165 #else
4166 int mALLOPt(param_number, value) int param_number; int value;
4167 #endif
4169 switch(param_number)
4171 case M_TRIM_THRESHOLD:
4172 trim_threshold = value; return 1;
4173 case M_TOP_PAD:
4174 top_pad = value; return 1;
4175 case M_MMAP_THRESHOLD:
4176 #if USE_ARENAS
4177 /* Forbid setting the threshold too high. */
4178 if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
4179 #endif
4180 mmap_threshold = value; return 1;
4181 case M_MMAP_MAX:
4182 #if HAVE_MMAP
4183 n_mmaps_max = value; return 1;
4184 #else
4185 if (value != 0) return 0; else n_mmaps_max = value; return 1;
4186 #endif
4187 case M_CHECK_ACTION:
4188 check_action = value; return 1;
4190 default:
4191 return 0;
4197 /* Get/set state: malloc_get_state() records the current state of all
4198 malloc variables (_except_ for the actual heap contents and `hook'
4199 function pointers) in a system dependent, opaque data structure.
4200 This data structure is dynamically allocated and can be free()d
4201 after use. malloc_set_state() restores the state of all malloc
4202 variables to the previously obtained state. This is especially
4203 useful when using this malloc as part of a shared library, and when
4204 the heap contents are saved/restored via some other method. The
4205 primary example for this is GNU Emacs with its `dumping' procedure.
4206 `Hook' function pointers are never saved or restored by these
4207 functions, with two exceptions: If malloc checking was in use when
4208 malloc_get_state() was called, then malloc_set_state() calls
4209 __malloc_check_init() if possible; if malloc checking was not in
4210 use in the recorded state but the user requested malloc checking,
4211 then the hooks are reset to 0. */
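/* Illustrative save/restore sketch (not part of this file's build), along the
   lines of the Emacs dumping use case described above.  The helper names are
   hypothetical; malloc_get_state()/malloc_set_state() are typically declared
   in <malloc.h>.  A real dumper would also preserve the heap contents and
   restore them at the same addresses. */
#if 0
#include <malloc.h>
#include <stdlib.h>

static void *saved_state;

static void
save_allocator_state (void)
{
  /* Opaque, malloc()ed snapshot of bins and tunables (not heap contents). */
  saved_state = malloc_get_state ();
}

static int
restore_allocator_state (void)
{
  /* Returns 0 on success, negative on a magic/version mismatch. */
  int rc = malloc_set_state (saved_state);
  free (saved_state);
  return rc;
}
#endif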
4213 #define MALLOC_STATE_MAGIC 0x444c4541l
4214 #define MALLOC_STATE_VERSION (0*0x100l + 1l) /* major*0x100 + minor */
4216 struct malloc_state {
4217 long magic;
4218 long version;
4219 mbinptr av[NAV * 2 + 2];
4220 char* sbrk_base;
4221 int sbrked_mem_bytes;
4222 unsigned long trim_threshold;
4223 unsigned long top_pad;
4224 unsigned int n_mmaps_max;
4225 unsigned long mmap_threshold;
4226 int check_action;
4227 unsigned long max_sbrked_mem;
4228 unsigned long max_total_mem;
4229 unsigned int n_mmaps;
4230 unsigned int max_n_mmaps;
4231 unsigned long mmapped_mem;
4232 unsigned long max_mmapped_mem;
4233 int using_malloc_checking;
4236 Void_t*
4237 mALLOC_GET_STATe()
4239 struct malloc_state* ms;
4240 int i;
4241 mbinptr b;
4243 ms = (struct malloc_state*)mALLOc(sizeof(*ms));
4244 if (!ms)
4245 return 0;
4246 (void)mutex_lock(&main_arena.mutex);
4247 ms->magic = MALLOC_STATE_MAGIC;
4248 ms->version = MALLOC_STATE_VERSION;
4249 ms->av[0] = main_arena.av[0];
4250 ms->av[1] = main_arena.av[1];
4251 for(i=0; i<NAV; i++) {
4252 b = bin_at(&main_arena, i);
4253 if(first(b) == b)
4254 ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */
4255 else {
4256 ms->av[2*i+2] = first(b);
4257 ms->av[2*i+3] = last(b);
4260 ms->sbrk_base = sbrk_base;
4261 ms->sbrked_mem_bytes = sbrked_mem;
4262 ms->trim_threshold = trim_threshold;
4263 ms->top_pad = top_pad;
4264 ms->n_mmaps_max = n_mmaps_max;
4265 ms->mmap_threshold = mmap_threshold;
4266 ms->check_action = check_action;
4267 ms->max_sbrked_mem = max_sbrked_mem;
4268 #ifdef NO_THREADS
4269 ms->max_total_mem = max_total_mem;
4270 #else
4271 ms->max_total_mem = 0;
4272 #endif
4273 ms->n_mmaps = n_mmaps;
4274 ms->max_n_mmaps = max_n_mmaps;
4275 ms->mmapped_mem = mmapped_mem;
4276 ms->max_mmapped_mem = max_mmapped_mem;
4277 #if defined _LIBC || defined MALLOC_HOOKS
4278 ms->using_malloc_checking = using_malloc_checking;
4279 #else
4280 ms->using_malloc_checking = 0;
4281 #endif
4282 (void)mutex_unlock(&main_arena.mutex);
4283 return (Void_t*)ms;
4287 #if __STD_C
4288 mALLOC_SET_STATe(Void_t* msptr)
4289 #else
4290 mALLOC_SET_STATe(msptr) Void_t* msptr;
4291 #endif
4293 struct malloc_state* ms = (struct malloc_state*)msptr;
4294 int i;
4295 mbinptr b;
4297 #if defined _LIBC || defined MALLOC_HOOKS
4298 disallow_malloc_check = 1;
4299 #endif
4300 ptmalloc_init();
4301 if(ms->magic != MALLOC_STATE_MAGIC) return -1;
4302 /* Must fail if the major version is too high. */
4303 if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2;
4304 (void)mutex_lock(&main_arena.mutex);
4305 main_arena.av[0] = ms->av[0];
4306 main_arena.av[1] = ms->av[1];
4307 for(i=0; i<NAV; i++) {
4308 b = bin_at(&main_arena, i);
4309 if(ms->av[2*i+2] == 0)
4310 first(b) = last(b) = b;
4311 else {
4312 first(b) = ms->av[2*i+2];
4313 last(b) = ms->av[2*i+3];
4314 if(i > 0) {
4315 /* Make sure the links to the `av'-bins in the heap are correct. */
4316 first(b)->bk = b;
4317 last(b)->fd = b;
4321 sbrk_base = ms->sbrk_base;
4322 sbrked_mem = ms->sbrked_mem_bytes;
4323 trim_threshold = ms->trim_threshold;
4324 top_pad = ms->top_pad;
4325 n_mmaps_max = ms->n_mmaps_max;
4326 mmap_threshold = ms->mmap_threshold;
4327 check_action = ms->check_action;
4328 max_sbrked_mem = ms->max_sbrked_mem;
4329 #ifdef NO_THREADS
4330 max_total_mem = ms->max_total_mem;
4331 #endif
4332 n_mmaps = ms->n_mmaps;
4333 max_n_mmaps = ms->max_n_mmaps;
4334 mmapped_mem = ms->mmapped_mem;
4335 max_mmapped_mem = ms->max_mmapped_mem;
4336 /* add version-dependent code here */
4337 if (ms->version >= 1) {
4338 #if defined _LIBC || defined MALLOC_HOOKS
4339 /* Check whether it is safe to enable malloc checking, or whether
4340 it is necessary to disable it. */
4341 if (ms->using_malloc_checking && !using_malloc_checking &&
4342 !disallow_malloc_check)
4343 __malloc_check_init ();
4344 else if (!ms->using_malloc_checking && using_malloc_checking) {
4345 __malloc_hook = 0;
4346 __free_hook = 0;
4347 __realloc_hook = 0;
4348 __memalign_hook = 0;
4349 using_malloc_checking = 0;
4351 #endif
4354 (void)mutex_unlock(&main_arena.mutex);
4355 return 0;
4360 #if defined _LIBC || defined MALLOC_HOOKS
4362 /* A simple, standard set of debugging hooks. Overhead is `only' one
4363 byte per chunk; still this will catch most cases of double frees or
4364 overruns. The goal here is to avoid obscure crashes due to invalid
4365 usage, unlike in the MALLOC_DEBUG code. */
4367 #define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )
4369 /* Instrument a chunk with overrun detector byte(s) and convert it
4370 into a user pointer with requested size sz. */
4372 static Void_t*
4373 internal_function
4374 #if __STD_C
4375 chunk2mem_check(mchunkptr p, size_t sz)
4376 #else
4377 chunk2mem_check(p, sz) mchunkptr p; size_t sz;
4378 #endif
4380 unsigned char* m_ptr = (unsigned char*)chunk2mem(p);
4381 size_t i;
4383 for(i = chunksize(p) - (chunk_is_mmapped(p) ? 2*SIZE_SZ+1 : SIZE_SZ+1);
4384 i > sz;
4385 i -= 0xFF) {
4386 if(i-sz < 0x100) {
4387 m_ptr[i] = (unsigned char)(i-sz);
4388 break;
4390 m_ptr[i] = 0xFF;
4392 m_ptr[sz] = MAGICBYTE(p);
4393 return (Void_t*)m_ptr;
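/* In outline, chunk2mem_check() above lays out the check bytes for a chunk p
   with requested size sz as follows (offsets relative to chunk2mem(p)):
   offset sz holds MAGICBYTE(p), and a sparse chain of link bytes is written
   from the top of the usable area down towards sz -- each 0xFF link points
   0xFF bytes lower, and the lowest link holds the exact remaining distance
   to offset sz.  mem2chunk_check() below starts at the topmost usable byte,
   repeatedly subtracts the link byte's value, and accepts the pointer only
   if it lands exactly on MAGICBYTE(p); it then flips that byte (^= 0xFF) so
   that a second free() of the same pointer is detected.  An overrun past sz
   clobbers the magic or link bytes and makes the scan fail. */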
4396 /* Convert a pointer to be free()d or realloc()ed to a valid chunk
4397 pointer. If the provided pointer is not valid, return NULL. */
4399 static mchunkptr
4400 internal_function
4401 #if __STD_C
4402 mem2chunk_check(Void_t* mem)
4403 #else
4404 mem2chunk_check(mem) Void_t* mem;
4405 #endif
4407 mchunkptr p;
4408 INTERNAL_SIZE_T sz, c;
4409 unsigned char magic;
4411 p = mem2chunk(mem);
4412 if(!aligned_OK(p)) return NULL;
4413 if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
4414 /* Must be a chunk in conventional heap memory. */
4415 if(chunk_is_mmapped(p) ||
4416 ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
4417 sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
4418 ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
4419 (long)prev_chunk(p)<(long)sbrk_base ||
4420 next_chunk(prev_chunk(p))!=p) ))
4421 return NULL;
4422 magic = MAGICBYTE(p);
4423 for(sz += SIZE_SZ-1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4424 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4426 ((unsigned char*)p)[sz] ^= 0xFF;
4427 } else {
4428 unsigned long offset, page_mask = malloc_getpagesize-1;
4430 /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
4431 alignment relative to the beginning of a page. Check this
4432 first. */
4433 offset = (unsigned long)mem & page_mask;
4434 if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
4435 offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
4436 offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 &&
4437 offset<0x2000) ||
4438 !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
4439 ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
4440 ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
4441 return NULL;
4442 magic = MAGICBYTE(p);
4443 for(sz -= 1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4444 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4446 ((unsigned char*)p)[sz] ^= 0xFF;
4448 return p;
4451 /* Check for corruption of the top chunk, and try to recover if
4452 necessary. */
4454 static int
4455 internal_function
4456 #if __STD_C
4457 top_check(void)
4458 #else
4459 top_check()
4460 #endif
4462 mchunkptr t = top(&main_arena);
4463 char* brk, * new_brk;
4464 INTERNAL_SIZE_T front_misalign, sbrk_size;
4465 unsigned long pagesz = malloc_getpagesize;
4467 if((char*)t + chunksize(t) == sbrk_base + sbrked_mem ||
4468 t == initial_top(&main_arena)) return 0;
4470 if(check_action & 1)
4471 fprintf(stderr, "malloc: top chunk is corrupt\n");
4472 if(check_action & 2)
4473 abort();
4475 /* Try to set up a new top chunk. */
4476 brk = MORECORE(0);
4477 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
4478 if (front_misalign > 0)
4479 front_misalign = MALLOC_ALIGNMENT - front_misalign;
4480 sbrk_size = front_misalign + top_pad + MINSIZE;
4481 sbrk_size += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
4482 new_brk = (char*)(MORECORE (sbrk_size));
4483 if (new_brk == (char*)(MORECORE_FAILURE)) return -1;
4484 sbrked_mem = (new_brk - sbrk_base) + sbrk_size;
4486 top(&main_arena) = (mchunkptr)(brk + front_misalign);
4487 set_head(top(&main_arena), (sbrk_size - front_misalign) | PREV_INUSE);
4489 return 0;
4492 static Void_t*
4493 #if __STD_C
4494 malloc_check(size_t sz, const Void_t *caller)
4495 #else
4496 malloc_check(sz, caller) size_t sz; const Void_t *caller;
4497 #endif
4499 mchunkptr victim;
4500 INTERNAL_SIZE_T nb;
4502 if(request2size(sz+1, nb))
4503 return 0;
4504 (void)mutex_lock(&main_arena.mutex);
4505 victim = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4506 (void)mutex_unlock(&main_arena.mutex);
4507 if(!victim) return NULL;
4508 return chunk2mem_check(victim, sz);
4511 static void
4512 #if __STD_C
4513 free_check(Void_t* mem, const Void_t *caller)
4514 #else
4515 free_check(mem, caller) Void_t* mem; const Void_t *caller;
4516 #endif
4518 mchunkptr p;
4520 if(!mem) return;
4521 (void)mutex_lock(&main_arena.mutex);
4522 p = mem2chunk_check(mem);
4523 if(!p) {
4524 (void)mutex_unlock(&main_arena.mutex);
4525 if(check_action & 1)
4526 fprintf(stderr, "free(): invalid pointer %p!\n", mem);
4527 if(check_action & 2)
4528 abort();
4529 return;
4531 #if HAVE_MMAP
4532 if (chunk_is_mmapped(p)) {
4533 (void)mutex_unlock(&main_arena.mutex);
4534 munmap_chunk(p);
4535 return;
4537 #endif
4538 #if 0 /* Erase freed memory. */
4539 memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
4540 #endif
4541 chunk_free(&main_arena, p);
4542 (void)mutex_unlock(&main_arena.mutex);
4545 static Void_t*
4546 #if __STD_C
4547 realloc_check(Void_t* oldmem, size_t bytes, const Void_t *caller)
4548 #else
4549 realloc_check(oldmem, bytes, caller)
4550 Void_t* oldmem; size_t bytes; const Void_t *caller;
4551 #endif
4553 mchunkptr oldp, newp;
4554 INTERNAL_SIZE_T nb, oldsize;
4556 if (oldmem == 0) return malloc_check(bytes, NULL);
4557 (void)mutex_lock(&main_arena.mutex);
4558 oldp = mem2chunk_check(oldmem);
4559 if(!oldp) {
4560 (void)mutex_unlock(&main_arena.mutex);
4561 if(check_action & 1)
4562 fprintf(stderr, "realloc(): invalid pointer %p!\n", oldmem);
4563 if(check_action & 2)
4564 abort();
4565 return malloc_check(bytes, NULL);
4567 oldsize = chunksize(oldp);
4569 if(request2size(bytes+1, nb)) {
4570 (void)mutex_unlock(&main_arena.mutex);
4571 return 0;
4574 #if HAVE_MMAP
4575 if (chunk_is_mmapped(oldp)) {
4576 #if HAVE_MREMAP
4577 newp = mremap_chunk(oldp, nb);
4578 if(!newp) {
4579 #endif
4580 /* Note the extra SIZE_SZ overhead. */
4581 if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
4582 else {
4583 /* Must alloc, copy, free. */
4584 newp = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
4585 if (newp) {
4586 MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - 2*SIZE_SZ);
4587 munmap_chunk(oldp);
4590 #if HAVE_MREMAP
4592 #endif
4593 } else {
4594 #endif /* HAVE_MMAP */
4595 newp = (top_check() >= 0) ?
4596 chunk_realloc(&main_arena, oldp, oldsize, nb) : NULL;
4597 #if 0 /* Erase freed memory. */
4598 nb = chunksize(newp);
4599 if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
4600 memset((char*)oldmem + 2*sizeof(mbinptr), 0,
4601 oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
4602 } else if(nb > oldsize+SIZE_SZ) {
4603 memset((char*)chunk2mem(newp) + oldsize, 0, nb - (oldsize+SIZE_SZ));
4605 #endif
4606 #if HAVE_MMAP
4608 #endif
4609 (void)mutex_unlock(&main_arena.mutex);
4611 if(!newp) return NULL;
4612 return chunk2mem_check(newp, bytes);
4615 static Void_t*
4616 #if __STD_C
4617 memalign_check(size_t alignment, size_t bytes, const Void_t *caller)
4618 #else
4619 memalign_check(alignment, bytes, caller)
4620 size_t alignment; size_t bytes; const Void_t *caller;
4621 #endif
4623 INTERNAL_SIZE_T nb;
4624 mchunkptr p;
4626 if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes, NULL);
4627 if (alignment < MINSIZE) alignment = MINSIZE;
4629 if(request2size(bytes+1, nb))
4630 return 0;
4631 (void)mutex_lock(&main_arena.mutex);
4632 p = (top_check() >= 0) ? chunk_align(&main_arena, nb, alignment) : NULL;
4633 (void)mutex_unlock(&main_arena.mutex);
4634 if(!p) return NULL;
4635 return chunk2mem_check(p, bytes);
4638 #ifndef NO_THREADS
4640 /* The following hooks are used when the global initialization in
4641 ptmalloc_init() hasn't completed yet. */
4643 static Void_t*
4644 #if __STD_C
4645 malloc_starter(size_t sz, const Void_t *caller)
4646 #else
4647 malloc_starter(sz, caller) size_t sz; const Void_t *caller;
4648 #endif
4650 INTERNAL_SIZE_T nb;
4651 mchunkptr victim;
4653 if(request2size(sz, nb))
4654 return 0;
4655 victim = chunk_alloc(&main_arena, nb);
4657 return victim ? chunk2mem(victim) : 0;
4660 static void
4661 #if __STD_C
4662 free_starter(Void_t* mem, const Void_t *caller)
4663 #else
4664 free_starter(mem, caller) Void_t* mem; const Void_t *caller;
4665 #endif
4667 mchunkptr p;
4669 if(!mem) return;
4670 p = mem2chunk(mem);
4671 #if HAVE_MMAP
4672 if (chunk_is_mmapped(p)) {
4673 munmap_chunk(p);
4674 return;
4676 #endif
4677 chunk_free(&main_arena, p);
4680 /* The following hooks are used while the `atfork' handling mechanism
4681 is active. */
4683 static Void_t*
4684 #if __STD_C
4685 malloc_atfork (size_t sz, const Void_t *caller)
4686 #else
4687 malloc_atfork(sz, caller) size_t sz; const Void_t *caller;
4688 #endif
4690 Void_t *vptr = NULL;
4691 INTERNAL_SIZE_T nb;
4692 mchunkptr victim;
4694 tsd_getspecific(arena_key, vptr);
4695 if(!vptr) {
4696 if(save_malloc_hook != malloc_check) {
4697 if(request2size(sz, nb))
4698 return 0;
4699 victim = chunk_alloc(&main_arena, nb);
4700 return victim ? chunk2mem(victim) : 0;
4701 } else {
4702 if(top_check()<0 || request2size(sz+1, nb))
4703 return 0;
4704 victim = chunk_alloc(&main_arena, nb);
4705 return victim ? chunk2mem_check(victim, sz) : 0;
4707 } else {
4708 /* Suspend the thread until the `atfork' handlers have completed.
4709 By that time, the hooks will have been reset as well, so that
4710 mALLOc() can be used again. */
4711 (void)mutex_lock(&list_lock);
4712 (void)mutex_unlock(&list_lock);
4713 return mALLOc(sz);
4717 static void
4718 #if __STD_C
4719 free_atfork(Void_t* mem, const Void_t *caller)
4720 #else
4721 free_atfork(mem, caller) Void_t* mem; const Void_t *caller;
4722 #endif
4724 Void_t *vptr = NULL;
4725 arena *ar_ptr;
4726 mchunkptr p; /* chunk corresponding to mem */
4728 if (mem == 0) /* free(0) has no effect */
4729 return;
4731 p = mem2chunk(mem); /* do not bother to replicate free_check here */
4733 #if HAVE_MMAP
4734 if (chunk_is_mmapped(p)) /* release mmapped memory. */
4736 munmap_chunk(p);
4737 return;
4739 #endif
4741 ar_ptr = arena_for_ptr(p);
4742 tsd_getspecific(arena_key, vptr);
4743 if(vptr)
4744 (void)mutex_lock(&ar_ptr->mutex);
4745 chunk_free(ar_ptr, p);
4746 if(vptr)
4747 (void)mutex_unlock(&ar_ptr->mutex);
4750 #endif /* !defined NO_THREADS */
4752 #endif /* defined _LIBC || defined MALLOC_HOOKS */
4756 #ifdef _LIBC
4757 /* We need a wrapper function for one of the additions of POSIX. */
4759 __posix_memalign (void **memptr, size_t alignment, size_t size)
4761 void *mem;
4763 /* Test whether the ALIGNMENT argument is valid. It must be a power of
4764 two multiple of sizeof (void *). */
4765 if (alignment % sizeof (void *) != 0 || (alignment & (alignment - 1)) != 0)
4766 return EINVAL;
4768 mem = __libc_memalign (alignment, size);
4770 if (mem != NULL)
4772 *memptr = mem;
4773 return 0;
4776 return ENOMEM;
4778 weak_alias (__posix_memalign, posix_memalign)
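/* Illustrative usage sketch (not part of this file's build) for the POSIX
   wrapper above: posix_memalign() stores the allocation through memptr and
   returns 0 on success, or an errno value without setting errno. */
#if 0
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  void *buf = NULL;
  /* The alignment must be a power-of-two multiple of sizeof (void *). */
  int rc = posix_memalign (&buf, 64, 1024);
  if (rc != 0)
    {
      fprintf (stderr, "posix_memalign: %s\n",
               rc == EINVAL ? "invalid alignment" : "out of memory");
      return 1;
    }
  free (buf);
  return 0;
}
#endif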
4780 weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
4781 weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
4782 weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
4783 weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
4784 weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
4785 weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
4786 weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
4787 weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
4788 weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
4789 weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
4791 weak_alias (__malloc_stats, malloc_stats)
4792 weak_alias (__malloc_usable_size, malloc_usable_size)
4793 weak_alias (__malloc_trim, malloc_trim)
4794 weak_alias (__malloc_get_state, malloc_get_state)
4795 weak_alias (__malloc_set_state, malloc_set_state)
4796 #endif
4800 History:
4802 V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4803 * Added malloc_get/set_state() (mainly for use in GNU emacs),
4804 using interface from Marcus Daniels
4805 * All parameters are now adjustable via environment variables
4807 V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4808 * Added debugging hooks
4809 * Fixed possible deadlock in realloc() when out of memory
4810 * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
4812 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4813 * Very minor updates from the released 2.6.4 version.
4814 * Trimmed include file down to exported data structures.
4815 * Changes from H.J. Lu for glibc-2.0.
4817 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4818 * Many changes for multiple threads
4819 * Introduced arenas and heaps
4821 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
4822 * Added pvalloc, as recommended by H.J. Liu
4823 * Added 64bit pointer support mainly from Wolfram Gloger
4824 * Added anonymously donated WIN32 sbrk emulation
4825 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4826 * malloc_extend_top: fix mask error that caused wastage after
4827 foreign sbrks
4828 * Add linux mremap support code from HJ Liu
4830 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4831 * Integrated most documentation with the code.
4832 * Add support for mmap, with help from
4833 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4834 * Use last_remainder in more cases.
4835 * Pack bins using idea from colin@nyx10.cs.du.edu
4836 * Use ordered bins instead of best-fit threshold
4837 * Eliminate block-local decls to simplify tracing and debugging.
4838 * Support another case of realloc via move into top
4839 * Fix error occurring when initial sbrk_base not word-aligned.
4840 * Rely on page size for units instead of SBRK_UNIT to
4841 avoid surprises about sbrk alignment conventions.
4842 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4843 (raymond@es.ele.tue.nl) for the suggestion.
4844 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4845 * More precautions for cases where other routines call sbrk,
4846 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4847 * Added macros etc., allowing use in linux libc from
4848 H.J. Lu (hjl@gnu.ai.mit.edu)
4849 * Inverted this history list
4851 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
4852 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4853 * Removed all preallocation code since under current scheme
4854 the work required to undo bad preallocations exceeds
4855 the work saved in good cases for most test programs.
4856 * No longer use return list or unconsolidated bins since
4857 no scheme using them consistently outperforms those that don't
4858 given above changes.
4859 * Use best fit for very large chunks to prevent some worst-cases.
4860 * Added some support for debugging
4862 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
4863 * Removed footers when chunks are in use. Thanks to
4864 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4866 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
4867 * Added malloc_trim, with help from Wolfram Gloger
4868 (wmglo@Dent.MED.Uni-Muenchen.DE).
4870 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
4872 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
4873 * realloc: try to expand in both directions
4874 * malloc: swap order of clean-bin strategy;
4875 * realloc: only conditionally expand backwards
4876 * Try not to scavenge used bins
4877 * Use bin counts as a guide to preallocation
4878 * Occasionally bin return list chunks in first scan
4879 * Add a few optimizations from colin@nyx10.cs.du.edu
4881 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
4882 * faster bin computation & slightly different binning
4883 * merged all consolidations to one part of malloc proper
4884 (eliminating old malloc_find_space & malloc_clean_bin)
4885 * Scan 2 returns chunks (not just 1)
4886 * Propagate failure in realloc if malloc returns 0
4887 * Add stuff to allow compilation on non-ANSI compilers
4888 from kpv@research.att.com
4890 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
4891 * removed potential for odd address access in prev_chunk
4892 * removed dependency on getpagesize.h
4893 * misc cosmetics and a bit more internal documentation
4894 * anticosmetics: mangled names in macros to evade debugger strangeness
4895 * tested on sparc, hp-700, dec-mips, rs6000
4896 with gcc & native cc (hp, dec only) allowing
4897 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4899 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
4900 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4901 structure of old version, but most details differ.)