1 /*
2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain. Use, modify, and
4 redistribute this code without permission or acknowledgment in any
5 way you wish. Send questions, comments, complaints, performance
6 data, etc to dl@cs.oswego.edu
8 * VERSION 2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
10 Note: There may be an updated version of this malloc obtainable at
11 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
12 Check before installing!
14 * Quickstart
16 This library is all in one file to simplify the most common usage:
17 ftp it, compile it (-O), and link it into another program. All
18 of the compile-time options default to reasonable values for use on
19 most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
20 You might later want to step through various compile-time and dynamic
21 tuning options.
23 For convenience, an include file for code using this malloc is at:
24 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.1.h
25 You don't really need this .h file unless you call functions not
26 defined in your system include files. The .h file contains only the
27 excerpts from this file needed for using this malloc on ANSI C/C++
28 systems, so long as you haven't changed compile-time options about
29 naming and tuning parameters. If you do, then you can create your
30 own malloc.h that does include all settings by cutting at the point
31 indicated below.
33 * Why use this malloc?
35 This is not the fastest, most space-conserving, most portable, or
36 most tunable malloc ever written. However it is among the fastest
37 while also being among the most space-conserving, portable and tunable.
38 Consistent balance across these factors results in a good general-purpose
39 allocator for malloc-intensive programs.
41 The main properties of the algorithms are:
42 * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
43 with ties normally decided via FIFO (i.e. least recently used).
44 * For small (<= 64 bytes by default) requests, it is a caching
45 allocator, that maintains pools of quickly recycled chunks.
46 * In between, and for combinations of large and small requests, it does
47 the best it can trying to meet both goals at once.
48 * For very large requests (>= 128KB by default), it relies on system
49 memory mapping facilities, if supported.
51 For a longer but slightly out of date high-level description, see
52 http://gee.cs.oswego.edu/dl/html/malloc.html
54 Your default C library may already contain a malloc based on some
55 version of this malloc (for example, in linux). You might still
56 want to use the one in this file in order to
57 customize settings or to avoid overheads associated with library
58 versions.
60 * Contents, described in more detail in "description of public routines" below.
62 Standard (ANSI/SVID/...) functions:
63 malloc(size_t n);
64 calloc(size_t n_elements, size_t element_size);
65 free(Void_t* p);
66 realloc(Void_t* p, size_t n);
67 memalign(size_t alignment, size_t n);
68 valloc(size_t n);
69 mallinfo()
70 mallopt(int parameter_number, int parameter_value)
72 Additional functions:
73 independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
74 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
75 pvalloc(size_t n);
76 cfree(Void_t* p);
77 malloc_trim(size_t pad);
78 malloc_usable_size(Void_t* p);
79 malloc_stats();
81 * Vital statistics:
83 Supported pointer representation: 4 or 8 bytes
84 Supported size_t representation: 4 or 8 bytes
85 Note that size_t is allowed to be 4 bytes even if pointers are 8.
86 You can adjust this by defining INTERNAL_SIZE_T
88 Alignment: 2 * sizeof (size_t) (default)
89 (i.e., 8 byte alignment with 4byte size_t). This suffices for
90 nearly all current machines and C compilers. However, you can
91 define MALLOC_ALIGNMENT to be wider than this if necessary.
93 Minimum overhead per allocated chunk: 4 or 8 bytes
94 Each malloced chunk has a hidden word of overhead holding size
95 and status information.
97 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
98 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
100 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
101 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
102 needed; 4 (8) for a trailing size field and 8 (16) bytes for
103 free list pointers. Thus, the minimum allocatable size is
104 16/24/32 bytes.
106 Even a request for zero bytes (i.e., malloc(0)) returns a
107 pointer to something of the minimum allocatable size.
109 The maximum overhead wastage (i.e., the number of extra bytes
110 allocated beyond what was requested in malloc) is less than or equal
111 to the minimum size, except for requests >= mmap_threshold that
112 are serviced via mmap(), where the worst case wastage is 2 *
113 sizeof (size_t) bytes plus the remainder from a system page (the
114 minimal mmap unit); typically 4096 or 8192 bytes.
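
As a rough worked example of the above (assuming 4-byte pointers and
4-byte size_t, i.e. 8-byte alignment, 4 bytes of per-chunk overhead,
and a 16-byte minimum chunk):

  malloc(1)   1 + 4 overhead  -> rounded up to the 16-byte minimum
  malloc(20)  20 + 4 overhead -> 24 bytes consumed
  malloc(0)   still returns a pointer to a 16-byte minimum-sized chunk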
116 Maximum allocated size: 4-byte size_t: 2^32 minus about two pages
117 8-byte size_t: 2^64 minus about two pages
119 It is assumed that (possibly signed) size_t values suffice to
120 represent chunk sizes. `Possibly signed' is due to the fact
121 that `size_t' may be defined on a system as either a signed or
122 an unsigned type. The ISO C standard says that it must be
123 unsigned, but a few systems are known not to adhere to this.
124 Additionally, even when size_t is unsigned, sbrk (which is by
125 default used to obtain memory from system) accepts signed
126 arguments, and may not be able to handle size_t-wide arguments
127 with negative sign bit. Generally, values that would
128 appear as negative after accounting for overhead and alignment
129 are supported only via mmap(), which does not have this
130 limitation.
132 Requests for sizes outside the allowed range will perform an optional
133 failure action and then return null. (Requests may also
134 fail because a system is out of memory.)
136 Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined
138 When USE_MALLOC_LOCK is defined, wrappers are created to
139 surround every public call with either a pthread mutex or
140 a win32 spinlock (depending on WIN32). This is not
141 especially fast, and can be a major bottleneck.
142 It is designed only to provide minimal protection
143 in concurrent environments, and to provide a basis for
144 extensions. If you are using malloc in a concurrent program,
145 you would be far better off obtaining ptmalloc, which is
146 derived from a version of this malloc, and is well-tuned for
147 concurrent programs. (See http://www.malloc.de) Note that
148 even when USE_MALLOC_LOCK is defined, you can guarantee
149 full thread-safety only if no threads acquire memory through
150 direct calls to MORECORE or other system-level allocators.
152 Compliance: I believe it is compliant with the 1997 Single Unix Specification
153 (See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably
154 others as well.
156 * Synopsis of compile-time options:
158 People have reported using previous versions of this malloc on all
159 versions of Unix, sometimes by tweaking some of the defines
160 below. It has been tested most extensively on Solaris and
161 Linux. It is also reported to work on WIN32 platforms.
162 People also report using it in stand-alone embedded systems.
164 The implementation is in straight, hand-tuned ANSI C. It is not
165 at all modular. (Sorry!) It uses a lot of macros. To be at all
166 usable, this code should be compiled using an optimizing compiler
167 (for example gcc -O3) that can simplify expressions and control
168 paths. (FAQ: some macros import variables as arguments rather than
169 declare locals because people reported that some debuggers
170 otherwise get confused.)
172 OPTION DEFAULT VALUE
174 Compilation Environment options:
176 __STD_C derived from C compiler defines
177 WIN32 NOT defined
178 HAVE_MEMCPY defined
179 USE_MEMCPY 1 if HAVE_MEMCPY is defined
180 HAVE_MMAP defined as 1
181 MMAP_CLEARS 1
182 HAVE_MREMAP 0 unless linux defined
183 malloc_getpagesize derived from system #includes, or 4096 if not
184 HAVE_USR_INCLUDE_MALLOC_H NOT defined
185 LACKS_UNISTD_H NOT defined unless WIN32
186 LACKS_SYS_PARAM_H NOT defined unless WIN32
187 LACKS_SYS_MMAN_H NOT defined unless WIN32
188 LACKS_FCNTL_H NOT defined
190 Changing default word sizes:
192 INTERNAL_SIZE_T size_t
193 MALLOC_ALIGNMENT 2 * sizeof (INTERNAL_SIZE_T)
194 PTR_UINT unsigned long
195 CHUNK_SIZE_T unsigned long
197 Configuration and functionality options:
199 USE_DL_PREFIX NOT defined
200 USE_PUBLIC_MALLOC_WRAPPERS NOT defined
201 USE_MALLOC_LOCK NOT defined
202 DEBUG NOT defined
203 REALLOC_ZERO_BYTES_FREES NOT defined
204 MALLOC_FAILURE_ACTION errno = ENOMEM, if __STD_C defined, else no-op
205 TRIM_FASTBINS 0
206 FIRST_SORTED_BIN_SIZE 512
208 Options for customizing MORECORE:
210 MORECORE sbrk
211 MORECORE_CONTIGUOUS 1
212 MORECORE_CANNOT_TRIM NOT defined
213 MMAP_AS_MORECORE_SIZE (1024 * 1024)
215 Tuning options that are also dynamically changeable via mallopt:
217 DEFAULT_MXFAST 64
218 DEFAULT_TRIM_THRESHOLD 256 * 1024
219 DEFAULT_TOP_PAD 0
220 DEFAULT_MMAP_THRESHOLD 256 * 1024
221 DEFAULT_MMAP_MAX 65536
223 There are several other #defined constants and macros that you
224 probably don't want to touch unless you are extending or adapting malloc.
228 WIN32 sets up defaults for MS environment and compilers.
229 Otherwise defaults are for unix.
232 /* #define WIN32 */
234 #ifdef WIN32
236 # define WIN32_LEAN_AND_MEAN
237 # include <windows.h>
239 /* Win32 doesn't supply or need the following headers */
240 # define LACKS_UNISTD_H
241 # define LACKS_SYS_PARAM_H
242 # define LACKS_SYS_MMAN_H
244 /* Use the supplied emulation of sbrk */
245 # define MORECORE sbrk
246 # define MORECORE_CONTIGUOUS 1
247 # define MORECORE_FAILURE ((void*)(-1))
249 /* Use the supplied emulation of mmap and munmap */
250 # define HAVE_MMAP 1
251 # define MUNMAP_FAILURE (-1)
252 # define MMAP_CLEARS 1
254 /* These values don't really matter in windows mmap emulation */
255 # define MAP_PRIVATE 1
256 # define MAP_ANONYMOUS 2
257 # define PROT_READ 1
258 # define PROT_WRITE 2
260 /* Emulation functions defined at the end of this file */
262 /* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
263 # ifdef USE_MALLOC_LOCK
264 static int slwait(int *sl);
265 static int slrelease(int *sl);
266 # endif
268 static long getpagesize(void);
269 static long getregionsize(void);
270 static void *sbrk(long size);
271 static void *mmap(void *ptr, long size, long prot, long type,
272 long handle, long arg);
273 static long munmap(void *ptr, long size);
275 static void vminfo (unsigned long*free, unsigned long*reserved,
276 unsigned long*committed);
277 static int cpuinfo (int whole, unsigned long*kernel, unsigned long*user);
279 #endif
282 __STD_C should be nonzero if using ANSI-standard C compiler, a C++
283 compiler, or a C compiler sufficiently close to ANSI to get away
284 with it.
287 #ifndef __STD_C
288 # if defined(__STDC__) || defined(__cplusplus)
289 # define __STD_C 1
290 # else
291 # define __STD_C 0
292 # endif
293 #endif /*__STD_C*/
297 Void_t* is the pointer type that malloc should say it returns
300 #ifndef Void_t
301 # if (__STD_C || defined(WIN32))
302 # define Void_t void
303 # else
304 # define Void_t char
305 # endif
306 #endif /*Void_t*/
308 #if __STD_C
309 # include <stddef.h> /* for size_t */
310 #else
311 # include <sys/types.h>
312 #endif
314 #ifdef __cplusplus
315 extern "C" {
316 #endif
318 /* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */
320 /* #define LACKS_UNISTD_H */
322 #ifndef LACKS_UNISTD_H
323 # include <unistd.h>
324 #endif
326 /* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */
328 /* #define LACKS_SYS_PARAM_H */
331 #include <stdio.h> /* needed for malloc_stats */
332 #include <errno.h> /* needed for optional MALLOC_FAILURE_ACTION */
336 Debugging:
338 Because freed chunks may be overwritten with bookkeeping fields, this
339 malloc will often die when freed memory is overwritten by user
340 programs. This can be very effective (albeit in an annoying way)
341 in helping track down dangling pointers.
343 If you compile with -DDEBUG, a number of assertion checks are
344 enabled that will catch more memory errors. You probably won't be
345 able to make much sense of the actual assertion errors, but they
346 should help you locate incorrectly overwritten memory. The
347 checking is fairly extensive, and will slow down execution
348 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
349 attempt to check every non-mmapped allocated and free chunk in the
350 course of computing the summaries. (By nature, mmapped regions
351 cannot be checked very much automatically.)
353 Setting DEBUG may also be helpful if you are trying to modify
354 this code. The assertions in the check routines spell out in more
355 detail the assumptions and invariants underlying the algorithms.
357 Setting DEBUG does NOT provide an automated mechanism for checking
358 that all accesses to malloced memory stay within their
359 bounds. However, there are several add-ons and adaptations of this
360 or other mallocs available that do this.
363 #if DEBUG
364 # include <assert.h>
365 #else
366 # define assert(x) ((void)0)
367 #endif
370 The unsigned integer type used for comparing any two chunk sizes.
371 This should be at least as wide as size_t, but should not be signed.
374 #ifndef CHUNK_SIZE_T
375 # define CHUNK_SIZE_T unsigned long
376 #endif
379 The unsigned integer type used to hold addresses when they are
380 manipulated as integers. intptr_t would suffice, except that it
381 is not defined on all systems.
383 #ifndef PTR_UINT
384 # define PTR_UINT unsigned long
385 #endif
389 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
390 of chunk sizes.
392 The default version is the same as size_t.
394 While not strictly necessary, it is best to define this as an
395 unsigned type, even if size_t is a signed type. This may avoid some
396 artificial size limitations on some systems.
398 On a 64-bit machine, you may be able to reduce malloc overhead by
399 defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
400 expense of not being able to handle more than 2^32 of malloced
401 space. If this limitation is acceptable, you are encouraged to set
402 this unless you are on a platform requiring 16byte alignments. In
403 this case the alignment requirements turn out to negate any
404 potential advantages of decreasing size_t word size.
406 Implementors: Beware of the possible combinations of:
407 - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
408 and might be the same width as int or as long
409 - size_t might differ in width and signedness from INTERNAL_SIZE_T
410 - int and long might be 32 or 64 bits, and might be the same width
411 To deal with this, most comparisons and difference computations
412 among INTERNAL_SIZE_Ts should cast them to CHUNK_SIZE_T, being
413 aware of the fact that casting an unsigned int to a wider long does
414 not sign-extend. (This also makes checking for negative numbers
415 awkward.) Some of these casts result in harmless compiler warnings
416 on some systems.
419 #ifndef INTERNAL_SIZE_T
420 # define INTERNAL_SIZE_T size_t
421 #endif
423 /* The corresponding word size */
424 #define SIZE_SZ (sizeof (INTERNAL_SIZE_T))
429 MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
430 It must be a power of two at least 2 * SIZE_SZ, even on machines
431 for which smaller alignments would suffice. It may be defined as
432 larger than this though. Note however that code and data structures
433 are optimized for the case of 8-byte alignment.
437 #ifndef MALLOC_ALIGNMENT
438 # define MALLOC_ALIGNMENT (2 * SIZE_SZ)
439 #endif
441 /* The corresponding bit mask value */
442 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
447 REALLOC_ZERO_BYTES_FREES should be set if a call to
448 realloc with zero bytes should be the same as a call to free.
449 Some people think it should. Otherwise, since this malloc
450 returns a unique pointer for malloc(0), so does realloc(p, 0).
453 /* #define REALLOC_ZERO_BYTES_FREES */
456 TRIM_FASTBINS controls whether free() of a very small chunk can
457 immediately lead to trimming. Setting to true (1) can reduce memory
458 footprint, but will almost always slow down programs that use a lot
459 of small chunks.
461 Define this only if you are willing to give up some speed to more
462 aggressively reduce system-level memory footprint when releasing
463 memory in programs that use many small chunks. You can get
464 essentially the same effect by setting MXFAST to 0, but this can
465 lead to even greater slowdowns in programs using many small chunks.
466 TRIM_FASTBINS is an in-between compile-time option that disables
467 only those chunks bordering topmost memory from being placed in
468 fastbins.
471 #ifndef TRIM_FASTBINS
472 # define TRIM_FASTBINS 0
473 #endif
477 USE_DL_PREFIX will prefix all public routines with the string 'dl'.
478 This is necessary when you only want to use this malloc in one part
479 of a program, using your regular system malloc elsewhere.
482 /* #define USE_DL_PREFIX */
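/*
  For example, with USE_DL_PREFIX defined, the dl-prefixed names declared
  below can coexist with the regular system allocator (illustrative sketch):

    void* p = dlmalloc(100);   // served by this file
    void* q = malloc(100);     // served by the system C library
    dlfree(p);
    free(q);
*/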
486 USE_MALLOC_LOCK causes wrapper functions to surround each
487 callable routine with pthread mutex lock/unlock.
489 USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
493 /* #define USE_MALLOC_LOCK */
497 If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
498 actually a wrapper function that first calls MALLOC_PREACTION, then
499 calls the internal routine, and follows it with
500 MALLOC_POSTACTION. This is needed for locking, but you can also use
501 this, without USE_MALLOC_LOCK, for purposes of interception,
502 instrumentation, etc. It is a sad fact that using wrappers often
503 noticeably degrades performance of malloc-intensive programs.
506 #ifdef USE_MALLOC_LOCK
507 # define USE_PUBLIC_MALLOC_WRAPPERS
508 #else
509 /* #define USE_PUBLIC_MALLOC_WRAPPERS */
510 #endif
514 Two-phase name translation.
515 All of the actual routines are given mangled names.
516 When wrappers are used, they become the public callable versions.
517 When USE_DL_PREFIX is used, the callable names are prefixed.
520 #ifndef USE_PUBLIC_MALLOC_WRAPPERS
521 # define cALLOc public_cALLOc
522 # define fREe public_fREe
523 # define cFREe public_cFREe
524 # define mALLOc public_mALLOc
525 # define mEMALIGn public_mEMALIGn
526 # define rEALLOc public_rEALLOc
527 # define vALLOc public_vALLOc
528 # define pVALLOc public_pVALLOc
529 # define mALLINFo public_mALLINFo
530 # define mALLOPt public_mALLOPt
531 # define mTRIm public_mTRIm
532 # define mSTATs public_mSTATs
533 # define mUSABLe public_mUSABLe
534 # define iCALLOc public_iCALLOc
535 # define iCOMALLOc public_iCOMALLOc
536 #endif
538 #ifdef USE_DL_PREFIX
539 # define public_cALLOc dlcalloc
540 # define public_fREe dlfree
541 # define public_cFREe dlcfree
542 # define public_mALLOc dlmalloc
543 # define public_mEMALIGn dlmemalign
544 # define public_rEALLOc dlrealloc
545 # define public_vALLOc dlvalloc
546 # define public_pVALLOc dlpvalloc
547 # define public_mALLINFo dlmallinfo
548 # define public_mALLOPt dlmallopt
549 # define public_mTRIm dlmalloc_trim
550 # define public_mSTATs dlmalloc_stats
551 # define public_mUSABLe dlmalloc_usable_size
552 # define public_iCALLOc dlindependent_calloc
553 # define public_iCOMALLOc dlindependent_comalloc
554 #else /* USE_DL_PREFIX */
555 # define public_cALLOc calloc
556 # define public_fREe free
557 # define public_cFREe cfree
558 # define public_mALLOc malloc
559 # define public_mEMALIGn memalign
560 # define public_rEALLOc realloc
561 # define public_vALLOc valloc
562 # define public_pVALLOc pvalloc
563 # define public_mALLINFo mallinfo
564 # define public_mALLOPt mallopt
565 # define public_mTRIm malloc_trim
566 # define public_mSTATs malloc_stats
567 # define public_mUSABLe malloc_usable_size
568 # define public_iCALLOc independent_calloc
569 # define public_iCOMALLOc independent_comalloc
570 #endif /* USE_DL_PREFIX */
574 HAVE_MEMCPY should be defined if you are not otherwise using
575 ANSI STD C, but still have memcpy and memset in your C library
576 and want to use them in calloc and realloc. Otherwise simple
577 macro versions are defined below.
579 USE_MEMCPY should be defined as 1 if you actually want to
580 have memset and memcpy called. People report that the macro
581 versions are faster than libc versions on some systems.
583 Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
584 (of <= 36 bytes) are manually unrolled in realloc and calloc.
587 #define HAVE_MEMCPY
589 #ifndef USE_MEMCPY
590 # ifdef HAVE_MEMCPY
591 # define USE_MEMCPY 1
592 # else
593 # define USE_MEMCPY 0
594 # endif
595 #endif
598 #if (__STD_C || defined(HAVE_MEMCPY))
600 # ifdef WIN32
601 /* On Win32 memset and memcpy are already declared in windows.h */
602 # else
603 # if __STD_C
604 void* memset(void*, int, size_t);
605 void* memcpy(void*, const void*, size_t);
606 # else
607 Void_t* memset();
608 Void_t* memcpy();
609 # endif
610 # endif
611 #endif
614 MALLOC_FAILURE_ACTION is the action to take before "return 0" when
615 malloc fails to be able to return memory, either because memory is
616 exhausted or because of illegal arguments.
618 By default, sets errno if running on STD_C platform, else does nothing.
621 #ifndef MALLOC_FAILURE_ACTION
622 # if __STD_C
623 # define MALLOC_FAILURE_ACTION \
624 errno = ENOMEM;
626 # else
627 # define MALLOC_FAILURE_ACTION
628 # endif
629 #endif
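/*
  For example, a program that would rather abort than check for null
  returns could override the action when compiling this file (sketch
  only; abort() is declared in <stdlib.h>):

    #define MALLOC_FAILURE_ACTION  abort();
*/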
632 MORECORE-related declarations. By default, rely on sbrk
636 #ifdef LACKS_UNISTD_H
637 # if !defined(__FreeBSD__) && !defined(__OpenBSD__) && \
638 !defined(__NetBSD__) && !defined(__GNUC__)
639 # if __STD_C
640 extern Void_t* sbrk(ptrdiff_t);
641 # else
642 extern Void_t* sbrk();
643 # endif
644 # endif
645 #endif
648 MORECORE is the name of the routine to call to obtain more memory
649 from the system. See below for general guidance on writing
650 alternative MORECORE functions, as well as a version for WIN32 and a
651 sample version for pre-OSX macos.
654 #ifndef MORECORE
655 # define MORECORE sbrk
656 #endif
659 MORECORE_FAILURE is the value returned upon failure of MORECORE
660 as well as mmap. Since it cannot be an otherwise valid memory address,
661 and must reflect values of standard sys calls, you probably ought not
662 try to redefine it.
665 #ifndef MORECORE_FAILURE
666 # define MORECORE_FAILURE (-1)
667 #endif
670 If MORECORE_CONTIGUOUS is true, take advantage of fact that
671 consecutive calls to MORECORE with positive arguments always return
672 contiguous increasing addresses. This is true of unix sbrk. Even
673 if not defined, when regions happen to be contiguous, malloc will
674 permit allocations spanning regions obtained from different
675 calls. But defining this when applicable enables some stronger
676 consistency checks and space efficiencies.
679 #ifndef MORECORE_CONTIGUOUS
680 # define MORECORE_CONTIGUOUS 1
681 #endif
684 Define MORECORE_CANNOT_TRIM if your version of MORECORE
685 cannot release space back to the system when given negative
686 arguments. This is generally necessary only if you are using
687 a hand-crafted MORECORE function that cannot handle negative arguments.
690 /* #define MORECORE_CANNOT_TRIM */
694 Define HAVE_MMAP as true to optionally make malloc() use mmap() to
695 allocate very large blocks. These will be returned to the
696 operating system immediately after a free(). Also, if mmap
697 is available, it is used as a backup strategy in cases where
698 MORECORE fails to provide space from system.
700 This malloc is best tuned to work with mmap for large requests.
701 If you do not have mmap, operations involving very large chunks (1MB
702 or so) may be slower than you'd like.
705 #ifndef HAVE_MMAP
706 # define HAVE_MMAP 1
707 #endif
709 #if HAVE_MMAP
711 Standard unix mmap using /dev/zero clears memory so calloc doesn't
712 need to.
715 # ifndef MMAP_CLEARS
716 # define MMAP_CLEARS 1
717 # endif
719 #else /* no mmap */
720 # ifndef MMAP_CLEARS
721 # define MMAP_CLEARS 0
722 # endif
723 #endif
727 MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
728 sbrk fails, and mmap is used as a backup (which is done only if
729 HAVE_MMAP). The value must be a multiple of page size. This
730 backup strategy generally applies only when systems have "holes" in
731 address space, so sbrk cannot perform contiguous expansion, but
732 there is still space available on system. On systems for which
733 this is known to be useful (i.e. most linux kernels), this occurs
734 only when programs allocate huge amounts of memory. Between this,
735 and the fact that mmap regions tend to be limited, the size should
736 be large, to avoid too many mmap calls and thus avoid running out
737 of kernel resources.
740 #ifndef MMAP_AS_MORECORE_SIZE
741 # define MMAP_AS_MORECORE_SIZE (1024 * 1024)
742 #endif
745 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
746 large blocks. This is currently only possible on Linux with
747 kernel versions newer than 1.3.77.
750 #ifndef HAVE_MREMAP
751 # ifdef linux
752 # define HAVE_MREMAP 1
753 # else
754 # define HAVE_MREMAP 0
755 # endif
757 #endif /* HAVE_MREMAP */
761 The system page size. To the extent possible, this malloc manages
762 memory from the system in page-size units. Note that this value is
763 cached during initialization into a field of malloc_state. So even
764 if malloc_getpagesize is a function, it is only called once.
766 The following mechanics for getpagesize were adapted from bsd/gnu
767 getpagesize.h. If none of the system-probes here apply, a value of
768 4096 is used, which should be OK: If they don't apply, then using
769 the actual value probably doesn't impact performance.
773 #ifndef malloc_getpagesize
775 # ifndef LACKS_UNISTD_H
776 # include <unistd.h>
777 # endif
779 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
780 # ifndef _SC_PAGE_SIZE
781 # define _SC_PAGE_SIZE _SC_PAGESIZE
782 # endif
783 # endif
785 # ifdef _SC_PAGE_SIZE
786 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
787 # else
788 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
789 extern size_t getpagesize();
790 # define malloc_getpagesize getpagesize()
791 # else
792 # ifdef WIN32 /* use supplied emulation of getpagesize */
793 # define malloc_getpagesize getpagesize()
794 # else
795 # ifndef LACKS_SYS_PARAM_H
796 # include <sys/param.h>
797 # endif
798 # ifdef EXEC_PAGESIZE
799 # define malloc_getpagesize EXEC_PAGESIZE
800 # else
801 # ifdef NBPG
802 # ifndef CLSIZE
803 # define malloc_getpagesize NBPG
804 # else
805 # define malloc_getpagesize (NBPG * CLSIZE)
806 # endif
807 # else
808 # ifdef NBPC
809 # define malloc_getpagesize NBPC
810 # else
811 # ifdef PAGESIZE
812 # define malloc_getpagesize PAGESIZE
813 # else /* just guess */
814 # define malloc_getpagesize (4096)
815 # endif
816 # endif
817 # endif
818 # endif
819 # endif
820 # endif
821 # endif
822 #endif
825 This version of malloc supports the standard SVID/XPG mallinfo
826 routine that returns a struct containing usage properties and
827 statistics. It should work on any SVID/XPG compliant system that has
828 a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
829 install such a thing yourself, cut out the preliminary declarations
830 as described above and below and save them in a malloc.h file. But
831 there's no compelling reason to bother to do this.)
833 The main declaration needed is the mallinfo struct that is returned
834 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
835 bunch of fields that are not even meaningful in this version of
836 malloc. These fields are instead filled by mallinfo() with
837 other numbers that might be of interest.
839 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
840 /usr/include/malloc.h file that includes a declaration of struct
841 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
842 version is declared below. These must be precisely the same for
843 mallinfo() to work. The original SVID version of this struct,
844 defined on most systems with mallinfo, declares all fields as
845 ints. But some others define them as unsigned long. If your system
846 defines the fields using a type of different width than listed here,
847 you must #include your system version and #define
848 HAVE_USR_INCLUDE_MALLOC_H.
851 /* #define HAVE_USR_INCLUDE_MALLOC_H */
853 #ifdef HAVE_USR_INCLUDE_MALLOC_H
854 # include "/usr/include/malloc.h"
855 #else
857 /* SVID2/XPG mallinfo structure */
859 struct mallinfo {
860 int arena; /* non-mmapped space allocated from system */
861 int ordblks; /* number of free chunks */
862 int smblks; /* number of fastbin blocks */
863 int hblks; /* number of mmapped regions */
864 int hblkhd; /* space in mmapped regions */
865 int usmblks; /* maximum total allocated space */
866 int fsmblks; /* space available in freed fastbin blocks */
867 int uordblks; /* total allocated space */
868 int fordblks; /* total free space */
869 int keepcost; /* top-most, releasable (via malloc_trim) space */
873 SVID/XPG defines four standard parameter numbers for mallopt,
874 normally defined in malloc.h. Only one of these (M_MXFAST) is used
875 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
876 so setting them has no effect. But this malloc also supports other
877 options in mallopt described below.
879 #endif
882 /* ---------- description of public routines ------------ */
885 malloc(size_t n)
886 Returns a pointer to a newly allocated chunk of at least n bytes, or null
887 if no space is available. Additionally, on failure, errno is
888 set to ENOMEM on ANSI C systems.
890 If n is zero, malloc returns a minimum-sized chunk. (The minimum
891 size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
892 systems.) On most systems, size_t is an unsigned type, so calls
893 with negative arguments are interpreted as requests for huge amounts
894 of space, which will often fail. The maximum supported value of n
895 differs across systems, but is in all cases less than the maximum
896 representable value of a size_t.
898 #if __STD_C
899 Void_t* public_mALLOc(size_t);
900 #else
901 Void_t* public_mALLOc();
902 #endif
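/*
  A minimal illustrative caller honoring the contract above (the 1024-byte
  request size is arbitrary):

    void* buf = malloc(1024);
    if (buf == 0) {
      // no space: errno has been set to ENOMEM on ANSI C systems
    } else {
      // ... use buf ...
      free(buf);
    }
*/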
905 free(Void_t* p)
906 Releases the chunk of memory pointed to by p, that had been previously
907 allocated using malloc or a related routine such as realloc.
908 It has no effect if p is null. It can have arbitrary (i.e., bad!)
909 effects if p has already been freed.
911 Unless disabled (using mallopt), freeing very large spaces will
912 when possible, automatically trigger operations that give
913 back unused memory to the system, thus reducing program footprint.
915 #if __STD_C
916 void public_fREe(Void_t*);
917 #else
918 void public_fREe();
919 #endif
922 calloc(size_t n_elements, size_t element_size);
923 Returns a pointer to n_elements * element_size bytes, with all locations
924 set to zero.
926 #if __STD_C
927 Void_t* public_cALLOc(size_t, size_t);
928 #else
929 Void_t* public_cALLOc();
930 #endif
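/*
  Illustrative use (element count and type chosen arbitrarily):

    int* counts = (int*) calloc(100, sizeof (int));  // 100 ints, all zero
    if (counts == 0) {
      // handle failure exactly as for malloc
    }
*/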
933 realloc(Void_t* p, size_t n)
934 Returns a pointer to a chunk of size n that contains the same data
935 as does chunk p up to the minimum of (n, p's size) bytes, or null
936 if no space is available.
938 The returned pointer may or may not be the same as p. The algorithm
939 prefers extending p when possible, otherwise it employs the
940 equivalent of a malloc-copy-free sequence.
942 If p is null, realloc is equivalent to malloc.
944 If space is not available, realloc returns null, errno is set (if on
945 ANSI) and p is NOT freed.
947 If n is for fewer bytes than already held by p, the newly unused
948 space is lopped off and freed if possible. Unless the #define
949 REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
950 zero (re)allocates a minimum-sized chunk.
952 Large chunks that were internally obtained via mmap will always
953 be reallocated using malloc-copy-free sequences unless
954 the system supports MREMAP (currently only linux).
956 The old unix realloc convention of allowing the last-free'd chunk
957 to be used as an argument to realloc is not supported.
959 #if __STD_C
960 Void_t* public_rEALLOc(Void_t*, size_t);
961 #else
962 Void_t* public_rEALLOc();
963 #endif
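/*
  Because p is NOT freed on failure, a common pattern is to keep the old
  pointer until the call succeeds (illustrative sketch; p and n are the
  caller's existing pointer and size):

    void* bigger = realloc(p, 2 * n);
    if (bigger == 0) {
      // p is still valid and still owned by the caller
    } else {
      p = bigger;            // p may or may not have moved
    }
*/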
966 memalign(size_t alignment, size_t n);
967 Returns a pointer to a newly allocated chunk of n bytes, aligned
968 in accord with the alignment argument.
970 The alignment argument should be a power of two. If the argument is
971 not a power of two, the nearest greater power is used.
972 8-byte alignment is guaranteed by normal malloc calls, so don't
973 bother calling memalign with an argument of 8 or less.
975 Overreliance on memalign is a sure way to fragment space.
977 #if __STD_C
978 Void_t* public_mEMALIGn(size_t, size_t);
979 #else
980 Void_t* public_mEMALIGn();
981 #endif
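/*
  Illustrative use (the 64-byte alignment is arbitrary; it should be a
  power of two greater than 8 to be worthwhile):

    void* block = memalign(64, 1000);   // 1000 bytes on a 64-byte boundary
*/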
984 valloc(size_t n);
985 Equivalent to memalign(pagesize, n), where pagesize is the page
986 size of the system. If the pagesize is unknown, 4096 is used.
988 #if __STD_C
989 Void_t* public_vALLOc(size_t);
990 #else
991 Void_t* public_vALLOc();
992 #endif
997 mallopt(int parameter_number, int parameter_value)
998 Sets tunable parameters. The format is to provide a
999 (parameter-number, parameter-value) pair. mallopt then sets the
1000 corresponding parameter to the argument value if it can (i.e., so
1001 long as the value is meaningful), and returns 1 if successful else
1002 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
1003 normally defined in malloc.h. Only one of these (M_MXFAST) is used
1004 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
1005 so setting them has no effect. But this malloc also supports four
1006 other options in mallopt. See below for details. Briefly, supported
1007 parameters are as follows (listed defaults are for "typical"
1008 configurations).
1010 Symbol param # default allowed param values
1011 M_MXFAST 1 64 0-80 (0 disables fastbins)
1012 M_TRIM_THRESHOLD -1 256*1024 any (-1U disables trimming)
1013 M_TOP_PAD -2 0 any
1014 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
1015 M_MMAP_MAX -4 65536 any (0 disables use of mmap)
1017 #if __STD_C
1018 int public_mALLOPt(int, int);
1019 #else
1020 int public_mALLOPt();
1021 #endif
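/*
  Illustrative call (the value 32 is arbitrary; see the table above for
  allowed ranges):

    if (mallopt(M_MXFAST, 32) == 0) {
      // value rejected; the parameter is left unchanged
    }
*/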
1025 mallinfo()
1026 Returns (by copy) a struct containing various summary statistics:
1028 arena: current total non-mmapped bytes allocated from system
1029 ordblks: the number of free chunks
1030 smblks: the number of fastbin blocks (i.e., small chunks that
1031 have been freed but not yet reused or consolidated)
1032 hblks: current number of mmapped regions
1033 hblkhd: total bytes held in mmapped regions
1034 usmblks: the maximum total allocated space. This will be greater
1035 than current total if trimming has occurred.
1036 fsmblks: total bytes held in fastbin blocks
1037 uordblks: current total allocated space (normal or mmapped)
1038 fordblks: total free space
1039 keepcost: the maximum number of bytes that could ideally be released
1040 back to system via malloc_trim. ("ideally" means that
1041 it ignores page restrictions etc.)
1043 Because these fields are ints, but internal bookkeeping may
1044 be kept as longs, the reported values may wrap around zero and
1045 thus be inaccurate.
1047 #if __STD_C
1048 struct mallinfo public_mALLINFo(void);
1049 #else
1050 struct mallinfo public_mALLINFo();
1051 #endif
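/*
  Illustrative use, printing two of the fields described above:

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "allocated: %d bytes, free: %d bytes\n",
            mi.uordblks, mi.fordblks);
*/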
1054 independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);
1056 independent_calloc is similar to calloc, but instead of returning a
1057 single cleared space, it returns an array of pointers to n_elements
1058 independent elements that can hold contents of size elem_size, each
1059 of which starts out cleared, and can be independently freed,
1060 realloc'ed etc. The elements are guaranteed to be adjacently
1061 allocated (this is not guaranteed to occur with multiple callocs or
1062 mallocs), which may also improve cache locality in some
1063 applications.
1065 The "chunks" argument is optional (i.e., may be null, which is
1066 probably the most typical usage). If it is null, the returned array
1067 is itself dynamically allocated and should also be freed when it is
1068 no longer needed. Otherwise, the chunks array must be of at least
1069 n_elements in length. It is filled in with the pointers to the
1070 chunks.
1072 In either case, independent_calloc returns this pointer array, or
1073 null if the allocation failed. If n_elements is zero and "chunks"
1074 is null, it returns a chunk representing an array with zero elements
1075 (which should be freed if not wanted).
1077 Each element must be individually freed when it is no longer
1078 needed. If you'd like to instead be able to free all at once, you
1079 should instead use regular calloc and assign pointers into this
1080 space to represent elements. (In this case though, you cannot
1081 independently free elements.)
1083 independent_calloc simplifies and speeds up implementations of many
1084 kinds of pools. It may also be useful when constructing large data
1085 structures that initially have a fixed number of fixed-sized nodes,
1086 but the number is not known at compile time, and some of the nodes
1087 may later need to be freed. For example:
1089 struct Node { int item; struct Node* next; };
1091 struct Node* build_list() {
1092 struct Node** pool;
1093 int i, n = read_number_of_nodes_needed();
1094 if (n <= 0) return 0;
1095 pool = (struct Node**) independent_calloc(n, sizeof (struct Node), 0);
1096 if (pool == 0) die();
1097 // organize into a linked list...
1098 struct Node* first = pool[0];
1099 for (i = 0; i < n-1; ++i)
1100 pool[i]->next = pool[i+1];
1101 free(pool); // Can now free the array (or not, if it is needed later)
1102 return first;
1103 }
1105 #if __STD_C
1106 Void_t** public_iCALLOc(size_t, size_t, Void_t**);
1107 #else
1108 Void_t** public_iCALLOc();
1109 #endif
1112 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
1114 independent_comalloc allocates, all at once, a set of n_elements
1115 chunks with sizes indicated in the "sizes" array. It returns
1116 an array of pointers to these elements, each of which can be
1117 independently freed, realloc'ed etc. The elements are guaranteed to
1118 be adjacently allocated (this is not guaranteed to occur with
1119 multiple callocs or mallocs), which may also improve cache locality
1120 in some applications.
1122 The "chunks" argument is optional (i.e., may be null). If it is null
1123 the returned array is itself dynamically allocated and should also
1124 be freed when it is no longer needed. Otherwise, the chunks array
1125 must be of at least n_elements in length. It is filled in with the
1126 pointers to the chunks.
1128 In either case, independent_comalloc returns this pointer array, or
1129 null if the allocation failed. If n_elements is zero and chunks is
1130 null, it returns a chunk representing an array with zero elements
1131 (which should be freed if not wanted).
1133 Each element must be individually freed when it is no longer
1134 needed. If you'd like to instead be able to free all at once, you
1135 should instead use a single regular malloc, and assign pointers at
1136 particular offsets in the aggregate space. (In this case though, you
1137 cannot independently free elements.)
1139 independent_comalloc differs from independent_calloc in that each
1140 element may have a different size, and also that it does not
1141 automatically clear elements.
1143 independent_comalloc can be used to speed up allocation in cases
1144 where several structs or objects must always be allocated at the
1145 same time. For example:
1147 struct Head { ... };
1148 struct Foot { ... };
1150 void send_message(char* msg) {
1151 int msglen = strlen(msg);
1152 size_t sizes[3] = { sizeof (struct Head), msglen, sizeof (struct Foot) };
1153 void* chunks[3];
1154 if (independent_comalloc(3, sizes, chunks) == 0)
1155 die();
1156 struct Head* head = (struct Head*)(chunks[0]);
1157 char* body = (char*)(chunks[1]);
1158 struct Foot* foot = (struct Foot*)(chunks[2]);
1159 // ...
1160 }
1162 In general though, independent_comalloc is worth using only for
1163 larger values of n_elements. For small values, you probably won't
1164 detect enough difference from series of malloc calls to bother.
1166 Overuse of independent_comalloc can increase overall memory usage,
1167 since it cannot reuse existing noncontiguous small chunks that
1168 might be available for some of the elements.
1170 #if __STD_C
1171 Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
1172 #else
1173 Void_t** public_iCOMALLOc();
1174 #endif
1178 pvalloc(size_t n);
1179 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1180 round up n to nearest pagesize.
1182 #if __STD_C
1183 Void_t* public_pVALLOc(size_t);
1184 #else
1185 Void_t* public_pVALLOc();
1186 #endif
1189 cfree(Void_t* p);
1190 Equivalent to free(p).
1192 cfree is needed/defined on some systems that pair it with calloc,
1193 for odd historical reasons (such as: cfree is used in example
1194 code in the first edition of K&R).
1196 #if __STD_C
1197 void public_cFREe(Void_t*);
1198 #else
1199 void public_cFREe();
1200 #endif
1203 malloc_trim(size_t pad);
1205 If possible, gives memory back to the system (via negative
1206 arguments to sbrk) if there is unused memory at the `high' end of
1207 the malloc pool. You can call this after freeing large blocks of
1208 memory to potentially reduce the system-level memory requirements
1209 of a program. However, it cannot guarantee to reduce memory. Under
1210 some allocation patterns, some large free blocks of memory will be
1211 locked between two used chunks, so they cannot be given back to
1212 the system.
1214 The `pad' argument to malloc_trim represents the amount of free
1215 trailing space to leave untrimmed. If this argument is zero,
1216 only the minimum amount of memory to maintain internal data
1217 structures will be left (one page or less). Non-zero arguments
1218 can be supplied to maintain enough trailing space to service
1219 future expected allocations without having to re-obtain memory
1220 from the system.
1222 Malloc_trim returns 1 if it actually released any memory, else 0.
1223 On systems that do not support "negative sbrks", it will always
1224 return 0.
1226 #if __STD_C
1227 int public_mTRIm(size_t);
1228 #else
1229 int public_mTRIm();
1230 #endif
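/*
  Illustrative use after freeing large blocks (the 64K pad is arbitrary):

    if (malloc_trim(64 * 1024) == 0) {
      // nothing could be released (or negative sbrk is unsupported)
    }
*/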
1233 malloc_usable_size(Void_t* p);
1235 Returns the number of bytes you can actually use in
1236 an allocated chunk, which may be more than you requested (although
1237 often not) due to alignment and minimum size constraints.
1238 You can use this many bytes without worrying about
1239 overwriting other allocated objects. This is not a particularly great
1240 programming practice. malloc_usable_size can be more useful in
1241 debugging and assertions, for example:
1243 p = malloc(n);
1244 assert(malloc_usable_size(p) >= 256);
1247 #if __STD_C
1248 size_t public_mUSABLe(Void_t*);
1249 #else
1250 size_t public_mUSABLe();
1251 #endif
1254 malloc_stats();
1255 Prints on stderr the amount of space obtained from the system (both
1256 via sbrk and mmap), the maximum amount (which may be more than
1257 current if malloc_trim and/or munmap got called), and the current
1258 number of bytes allocated via malloc (or realloc, etc) but not yet
1259 freed. Note that this is the number of bytes allocated, not the
1260 number requested. It will be larger than the number requested
1261 because of alignment and bookkeeping overhead. Because it includes
1262 alignment wastage as being in use, this figure may be greater than
1263 zero even when no user-level chunks are allocated.
1265 The reported current and maximum system memory can be inaccurate if
1266 a program makes other calls to system memory allocation functions
1267 (normally sbrk) outside of malloc.
1269 malloc_stats prints only the most commonly interesting statistics.
1270 More information can be obtained by calling mallinfo.
1273 #if __STD_C
1274 void public_mSTATs();
1275 #else
1276 void public_mSTATs();
1277 #endif
1279 /* mallopt tuning options */
1282 M_MXFAST is the maximum request size used for "fastbins", special bins
1283 that hold returned chunks without consolidating their spaces. This
1284 enables future requests for chunks of the same size to be handled
1285 very quickly, but can increase fragmentation, and thus increase the
1286 overall memory footprint of a program.
1288 This malloc manages fastbins very conservatively yet still
1289 efficiently, so fragmentation is rarely a problem for values less
1290 than or equal to the default. The maximum supported value of MXFAST
1291 is 80. You wouldn't want it any higher than this anyway. Fastbins
1292 are designed especially for use with many small structs, objects or
1293 strings -- the default handles structs/objects/arrays with sizes up
1294 to 16 4byte fields, or small strings representing words, tokens,
1295 etc. Using fastbins for larger objects normally worsens
1296 fragmentation without improving speed.
1298 M_MXFAST is set in REQUEST size units. It is internally used in
1299 chunksize units, which adds padding and alignment. You can reduce
1300 M_MXFAST to 0 to disable all use of fastbins. This causes the malloc
1301 algorithm to be a closer approximation of fifo-best-fit in all cases,
1302 not just for larger requests, but will generally cause it to be
1303 slower.
1307 /* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
1308 #ifndef M_MXFAST
1309 # define M_MXFAST 1
1310 #endif
1312 #ifndef DEFAULT_MXFAST
1313 # define DEFAULT_MXFAST 64
1314 #endif
1318 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
1319 to keep before releasing via malloc_trim in free().
1321 Automatic trimming is mainly useful in long-lived programs.
1322 Because trimming via sbrk can be slow on some systems, and can
1323 sometimes be wasteful (in cases where programs immediately
1324 afterward allocate more large chunks) the value should be high
1325 enough so that your overall system performance would improve by
1326 releasing this much memory.
1328 The trim threshold and the mmap control parameters (see below)
1329 can be traded off with one another. Trimming and mmapping are
1330 two different ways of releasing unused memory back to the
1331 system. Between these two, it is often possible to keep
1332 system-level demands of a long-lived program down to a bare
1333 minimum. For example, in one test suite of sessions measuring
1334 the XF86 X server on Linux, using a trim threshold of 128K and a
1335 mmap threshold of 192K led to near-minimal long term resource
1336 consumption.
1338 If you are using this malloc in a long-lived program, it should
1339 pay to experiment with these values. As a rough guide, you
1340 might set to a value close to the average size of a process
1341 (program) running on your system. Releasing this much memory
1342 would allow such a process to run in memory. Generally, it's
1343 worth it to tune for trimming rather than memory mapping when a
1344 program undergoes phases where several large chunks are
1345 allocated and released in ways that can reuse each other's
1346 storage, perhaps mixed with phases where there are no such
1347 chunks at all. And in well-behaved long-lived programs,
1348 controlling release of large blocks via trimming versus mapping
1349 is usually faster.
1351 However, in most programs, these parameters serve mainly as
1352 protection against the system-level effects of carrying around
1353 massive amounts of unneeded memory. Since frequent calls to
1354 sbrk, mmap, and munmap otherwise degrade performance, the default
1355 parameters are set to relatively high values that serve only as
1356 safeguards.
1358 The trim value must be greater than page size to have any useful
1359 effect. To disable trimming completely, you can set it to
1360 (unsigned long)(-1)
1362 Trim settings interact with fastbin (MXFAST) settings: Unless
1363 TRIM_FASTBINS is defined, automatic trimming never takes place upon
1364 freeing a chunk with size less than or equal to MXFAST. Trimming is
1365 instead delayed until subsequent freeing of larger chunks. However,
1366 you can still force an attempted trim by calling malloc_trim.
1368 Also, trimming is not generally possible in cases where
1369 the main arena is obtained via mmap.
1371 Note that the trick some people use of mallocing a huge space and
1372 then freeing it at program startup, in an attempt to reserve system
1373 memory, doesn't have the intended effect under automatic trimming,
1374 since that memory will immediately be returned to the system.
1377 #define M_TRIM_THRESHOLD -1
1379 #ifndef DEFAULT_TRIM_THRESHOLD
1380 # define DEFAULT_TRIM_THRESHOLD (256 * 1024)
1381 #endif
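/*
  For example, trimming can be disabled at run time using the sentinel
  value mentioned above:

    mallopt(M_TRIM_THRESHOLD, -1);   // taken as (unsigned long)(-1): never trim
*/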
1384 M_TOP_PAD is the amount of extra `padding' space to allocate or
1385 retain whenever sbrk is called. It is used in two ways internally:
1387 * When sbrk is called to extend the top of the arena to satisfy
1388 a new malloc request, this much padding is added to the sbrk
1389 request.
1391 * When malloc_trim is called automatically from free(),
1392 it is used as the `pad' argument.
1394 In both cases, the actual amount of padding is rounded
1395 so that the end of the arena is always a system page boundary.
1397 The main reason for using padding is to avoid calling sbrk so
1398 often. Having even a small pad greatly reduces the likelihood
1399 that nearly every malloc request during program start-up (or
1400 after trimming) will invoke sbrk, which needlessly wastes
1401 time.
1403 Automatic rounding-up to page-size units is normally sufficient
1404 to avoid measurable overhead, so the default is 0. However, in
1405 systems where sbrk is relatively slow, it can pay to increase
1406 this value, at the expense of carrying around more memory than
1407 the program needs.
1410 #define M_TOP_PAD -2
1412 #ifndef DEFAULT_TOP_PAD
1413 # define DEFAULT_TOP_PAD (0)
1414 #endif
1417 M_MMAP_THRESHOLD is the request size threshold for using mmap()
1418 to service a request. Requests of at least this size that cannot
1419 be allocated using already-existing space will be serviced via mmap.
1420 (If enough normal freed space already exists it is used instead.)
1422 Using mmap segregates relatively large chunks of memory so that
1423 they can be individually obtained and released from the host
1424 system. A request serviced through mmap is never reused by any
1425 other request (at least not directly; the system may just so
1426 happen to remap successive requests to the same locations).
1428 Segregating space in this way has the benefits that:
1430 1. Mmapped space can ALWAYS be individually released back
1431 to the system, which helps keep the system level memory
1432 demands of a long-lived program low.
1433 2. Mapped memory can never become `locked' between
1434 other chunks, as can happen with normally allocated chunks, which
1435 means that even trimming via malloc_trim would not release them.
1436 3. On some systems with "holes" in address spaces, mmap can obtain
1437 memory that sbrk cannot.
1439 However, it has the disadvantages that:
1441 1. The space cannot be reclaimed, consolidated, and then
1442 used to service later requests, as happens with normal chunks.
1443 2. It can lead to more wastage because of mmap page alignment
1444 requirements
1445 3. It causes malloc performance to be more dependent on host
1446 system memory management support routines which may vary in
1447 implementation quality and may impose arbitrary
1448 limitations. Generally, servicing a request via normal
1449 malloc steps is faster than going through a system's mmap.
1451 The advantages of mmap nearly always outweigh disadvantages for
1452 "large" chunks, but the value of "large" varies across systems. The
1453 default is an empirically derived value that works well in most
1454 systems.
1457 #define M_MMAP_THRESHOLD -3
1459 #ifndef DEFAULT_MMAP_THRESHOLD
1460 # define DEFAULT_MMAP_THRESHOLD (256 * 1024)
1461 #endif
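/*
  For example, to route only very large requests through mmap (the 1MB
  value is arbitrary):

    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);
*/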
1464 M_MMAP_MAX is the maximum number of requests to simultaneously
1465 service using mmap. This parameter exists because
1466 . Some systems have a limited number of internal tables for
1467 use by mmap, and using more than a few of them may degrade
1468 performance.
1470 The default is set to a value that serves only as a safeguard.
1471 Setting to 0 disables use of mmap for servicing large requests. If
1472 HAVE_MMAP is not set, the default value is 0, and attempts to set it
1473 to non-zero values in mallopt will fail.
1476 #define M_MMAP_MAX -4
1478 #ifndef DEFAULT_MMAP_MAX
1479 # if HAVE_MMAP
1480 # define DEFAULT_MMAP_MAX (65536)
1481 # else
1482 # define DEFAULT_MMAP_MAX (0)
1483 # endif
1484 #endif
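/*
  For example, as noted above, mmap use can be turned off entirely:

    mallopt(M_MMAP_MAX, 0);   // service all requests via MORECORE/sbrk
*/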
1486 #ifdef __cplusplus
1487 }; /* end of extern "C" */
1488 #endif
1491 ========================================================================
1492 To make a fully customizable malloc.h header file, cut everything
1493 above this line, put into file malloc.h, edit to suit, and #include it
1494 on the next line, as well as in programs that use this malloc.
1495 ========================================================================
1498 /* #include "malloc.h" */
1500 /* --------------------- public wrappers ---------------------- */
1502 #ifdef USE_PUBLIC_MALLOC_WRAPPERS
1504 /* Declare all routines as internal */
1505 # if __STD_C
1506 static Void_t* mALLOc(size_t);
1507 static void fREe(Void_t*);
1508 static Void_t* rEALLOc(Void_t*, size_t);
1509 static Void_t* mEMALIGn(size_t, size_t);
1510 static Void_t* vALLOc(size_t);
1511 static Void_t* pVALLOc(size_t);
1512 static Void_t* cALLOc(size_t, size_t);
1513 static Void_t** iCALLOc(size_t, size_t, Void_t**);
1514 static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
1515 static void cFREe(Void_t*);
1516 static int mTRIm(size_t);
1517 static size_t mUSABLe(Void_t*);
1518 static void mSTATs();
1519 static int mALLOPt(int, int);
1520 static struct mallinfo mALLINFo(void);
1521 # else
1522 static Void_t* mALLOc();
1523 static void fREe();
1524 static Void_t* rEALLOc();
1525 static Void_t* mEMALIGn();
1526 static Void_t* vALLOc();
1527 static Void_t* pVALLOc();
1528 static Void_t* cALLOc();
1529 static Void_t** iCALLOc();
1530 static Void_t** iCOMALLOc();
1531 static void cFREe();
1532 static int mTRIm();
1533 static size_t mUSABLe();
1534 static void mSTATs();
1535 static int mALLOPt();
1536 static struct mallinfo mALLINFo();
1537 # endif
1540 MALLOC_PREACTION and MALLOC_POSTACTION should be
1541 defined to return 0 on success, and nonzero on failure.
1542 The return value of MALLOC_POSTACTION is currently ignored
1543 in wrapper functions since there is no reasonable default
1544 action to take on failure.
1548 # ifdef USE_MALLOC_LOCK
1550 # ifdef WIN32
1552 static int mALLOC_MUTEx;
1553 # define MALLOC_PREACTION slwait(&mALLOC_MUTEx)
1554 # define MALLOC_POSTACTION slrelease(&mALLOC_MUTEx)
1556 # else
1558 # include <pthread.h>
1560 static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;
1562 # define MALLOC_PREACTION pthread_mutex_lock(&mALLOC_MUTEx)
1563 # define MALLOC_POSTACTION pthread_mutex_unlock(&mALLOC_MUTEx)
1565 # endif /* WIN32 */
1567 # else
1569 /* Substitute anything you like for these */
1571 # define MALLOC_PREACTION (0)
1572 # define MALLOC_POSTACTION (0)
1574 # endif
1576 Void_t* public_mALLOc(size_t bytes) {
1577 Void_t* m;
1578 if (MALLOC_PREACTION != 0) {
1579 return 0;
1581 m = mALLOc(bytes);
1582 if (MALLOC_POSTACTION != 0) {
1584 return m;
1587 void public_fREe(Void_t* m) {
1588 if (MALLOC_PREACTION != 0) {
1589 return;
1591 fREe(m);
1592 if (MALLOC_POSTACTION != 0) {
1596 Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
1597 if (MALLOC_PREACTION != 0) {
1598 return 0;
1600 m = rEALLOc(m, bytes);
1601 if (MALLOC_POSTACTION != 0) {
1603 return m;
1606 Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
1607 Void_t* m;
1608 if (MALLOC_PREACTION != 0) {
1609 return 0;
1611 m = mEMALIGn(alignment, bytes);
1612 if (MALLOC_POSTACTION != 0) {
1614 return m;
1617 Void_t* public_vALLOc(size_t bytes) {
1618 Void_t* m;
1619 if (MALLOC_PREACTION != 0) {
1620 return 0;
1622 m = vALLOc(bytes);
1623 if (MALLOC_POSTACTION != 0) {
1625 return m;
1628 Void_t* public_pVALLOc(size_t bytes) {
1629 Void_t* m;
1630 if (MALLOC_PREACTION != 0) {
1631 return 0;
1633 m = pVALLOc(bytes);
1634 if (MALLOC_POSTACTION != 0) {
1636 return m;
1639 Void_t* public_cALLOc(size_t n, size_t elem_size) {
1640 Void_t* m;
1641 if (MALLOC_PREACTION != 0) {
1642 return 0;
1644 m = cALLOc(n, elem_size);
1645 if (MALLOC_POSTACTION != 0) {
1647 return m;
1651 Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
1652 Void_t** m;
1653 if (MALLOC_PREACTION != 0) {
1654 return 0;
1656 m = iCALLOc(n, elem_size, chunks);
1657 if (MALLOC_POSTACTION != 0) {
1659 return m;
1662 Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
1663 Void_t** m;
1664 if (MALLOC_PREACTION != 0) {
1665 return 0;
1667 m = iCOMALLOc(n, sizes, chunks);
1668 if (MALLOC_POSTACTION != 0) {
1670 return m;
1673 void public_cFREe(Void_t* m) {
1674 if (MALLOC_PREACTION != 0) {
1675 return;
1677 cFREe(m);
1678 if (MALLOC_POSTACTION != 0) {
1682 int public_mTRIm(size_t s) {
1683 int result;
1684 if (MALLOC_PREACTION != 0) {
1685 return 0;
1687 result = mTRIm(s);
1688 if (MALLOC_POSTACTION != 0) {
1690 return result;
1693 size_t public_mUSABLe(Void_t* m) {
1694 size_t result;
1695 if (MALLOC_PREACTION != 0) {
1696 return 0;
1698 result = mUSABLe(m);
1699 if (MALLOC_POSTACTION != 0) {
1701 return result;
1704 void public_mSTATs() {
1705 if (MALLOC_PREACTION != 0) {
1706 return;
1708 mSTATs();
1709 if (MALLOC_POSTACTION != 0) {
1713 struct mallinfo public_mALLINFo() {
1714 struct mallinfo m;
1715 if (MALLOC_PREACTION != 0) {
1716 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1717 return nm;
1719 m = mALLINFo();
1720 if (MALLOC_POSTACTION != 0) {
1722 return m;
1725 int public_mALLOPt(int p, int v) {
1726 int result;
1727 if (MALLOC_PREACTION != 0) {
1728 return 0;
1730 result = mALLOPt(p, v);
1731 if (MALLOC_POSTACTION != 0) {
1733 return result;
1736 #endif
1740 /* ------------- Optional versions of memcopy ---------------- */
1743 #if USE_MEMCPY
1746 Note: memcpy is ONLY invoked with non-overlapping regions,
1747 so the (usually slower) memmove is not needed.
1750 # define MALLOC_COPY(dest, src, nbytes) memcpy(dest, src, nbytes)
1751 # define MALLOC_ZERO(dest, nbytes) memset(dest, 0, nbytes)
1753 #else /* !USE_MEMCPY */
1755 /* Use Duff's device for good zeroing/copying performance. */
1757 # define MALLOC_ZERO(charp, nbytes) \
1758 do { \
1759 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
1760 CHUNK_SIZE_T mctmp = (nbytes)/sizeof (INTERNAL_SIZE_T); \
1761 long mcn; \
1762 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
1763 switch (mctmp) { \
1764 case 0: for (;;) { *mzp++ = 0; \
1765 case 7: *mzp++ = 0; \
1766 case 6: *mzp++ = 0; \
1767 case 5: *mzp++ = 0; \
1768 case 4: *mzp++ = 0; \
1769 case 3: *mzp++ = 0; \
1770 case 2: *mzp++ = 0; \
1771 case 1: *mzp++ = 0; if (mcn <= 0) break; mcn--; } \
1773 } while (0)
1775 # define MALLOC_COPY(dest,src,nbytes) \
1776 do { \
1777 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
1778 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
1779 CHUNK_SIZE_T mctmp = (nbytes)/sizeof (INTERNAL_SIZE_T); \
1780 long mcn; \
1781 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
1782 switch (mctmp) { \
1783 case 0: for (;;) { *mcdst++ = *mcsrc++; \
1784 case 7: *mcdst++ = *mcsrc++; \
1785 case 6: *mcdst++ = *mcsrc++; \
1786 case 5: *mcdst++ = *mcsrc++; \
1787 case 4: *mcdst++ = *mcsrc++; \
1788 case 3: *mcdst++ = *mcsrc++; \
1789 case 2: *mcdst++ = *mcsrc++; \
1790 case 1: *mcdst++ = *mcsrc++; if (mcn <= 0) break; mcn--; } \
1792 } while (0)
1794 #endif
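/*
  Illustrative sketch (editorial, not part of dlmalloc): the same
  Duff's-device idea as MALLOC_ZERO above, written as a standalone
  word-zeroing routine to make the macro easier to follow. The switch
  jumps into the middle of the unrolled loop to handle the
  (nwords % 8) leftover stores; the for (;;) body then clears the
  remaining words eight at a time. Like the macro, it assumes
  nwords > 0.
*/
#if 0
static void duff_zero_words(size_t* wp, size_t nwords) {
  size_t leftover = nwords % 8;                  /* stores done first  */
  long   rounds   = (nwords < 8) ? 0 : (long)((nwords - 1) / 8);
  switch (leftover) {
    case 0: for (;;) { *wp++ = 0;
    case 7:            *wp++ = 0;
    case 6:            *wp++ = 0;
    case 5:            *wp++ = 0;
    case 4:            *wp++ = 0;
    case 3:            *wp++ = 0;
    case 2:            *wp++ = 0;
    case 1:            *wp++ = 0; if (rounds <= 0) break; rounds--; }
  }
}
#endif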
1796 /* ------------------ MMAP support ------------------ */
1799 #if HAVE_MMAP
1801 # ifndef LACKS_FCNTL_H
1802 # include <fcntl.h>
1803 # endif
1805 # ifndef LACKS_SYS_MMAN_H
1806 # include <sys/mman.h>
1807 # endif
1809 # if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1810 # define MAP_ANONYMOUS MAP_ANON
1811 # endif
1814 Nearly all versions of mmap support MAP_ANONYMOUS,
1815 so the following is unlikely to be needed, but is
1816 supplied just in case.
1819 # ifndef MAP_ANONYMOUS
1821 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1823 # define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1824 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1825 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1826 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1828 # else
1830 # define MMAP(addr, size, prot, flags) \
1831 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1833 # endif
1836 #endif /* HAVE_MMAP */
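/*
  Usage sketch (editorial, not part of dlmalloc): further down, in
  sYSMALLOc, the MMAP macro above is invoked essentially as follows --
  request an anonymous, private, read/write mapping of whole pages and
  compare the result against MORECORE_FAILURE. The 16-page size is an
  arbitrary example value; as noted later in this file, mmapped chunks
  are eventually given back to the system with munmap() in free() and
  realloc().
*/
#if 0
static void mmap_usage_example(size_t pagesize) {
  size_t size = 16 * pagesize;                     /* example request */
  char* mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
  if (mm != (char*)(MORECORE_FAILURE)) {
    /* ... carve an IS_MMAPPED chunk out of [mm, mm+size) ... */
    munmap(mm, size);
  }
}
#endif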
1840 ----------------------- Chunk representations -----------------------
1845 This struct declaration is misleading (but accurate and necessary).
1846 It declares a "view" into memory allowing access to necessary
1847 fields at known offsets from a given base. See explanation below.
1850 struct malloc_chunk {
1852 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1853 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1855 struct malloc_chunk* fd; /* double links -- used only if free. */
1856 struct malloc_chunk* bk;
1860 typedef struct malloc_chunk* mchunkptr;
1863 malloc_chunk details:
1865 (The following includes lightly edited explanations by Colin Plumb.)
1867 Chunks of memory are maintained using a `boundary tag' method as
1868 described in e.g., Knuth or Standish. (See the paper by Paul
1869 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1870 survey of such techniques.) Sizes of free chunks are stored both
1871 in the front of each chunk and at the end. This makes
1872 consolidating fragmented chunks into bigger chunks very fast. The
1873 size fields also hold bits representing whether chunks are free or
1874 in use.
1876 An allocated chunk looks like this:
1879 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1880 | Size of previous chunk, if allocated | |
1881 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1882 | Size of chunk, in bytes |P|
1883 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1884 | User data starts here... .
1886 . (malloc_usable_space() bytes) .
1888 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1889 | Size of chunk |
1890 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1893 Where "chunk" is the front of the chunk for the purpose of most of
1894 the malloc code, but "mem" is the pointer that is returned to the
1895 user. "Nextchunk" is the beginning of the next contiguous chunk.
1897 Chunks always begin on even word boundaries, so the mem portion
1898 (which is returned to the user) is also on an even word boundary, and
1899 thus at least double-word aligned.
1901 Free chunks are stored in circular doubly-linked lists, and look like this:
1903 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1904 | Size of previous chunk |
1905 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1906 `head:' | Size of chunk, in bytes |P|
1907 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1908 | Forward pointer to next chunk in list |
1909 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1910 | Back pointer to previous chunk in list |
1911 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1912 | Unused space (may be 0 bytes long) .
1915 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1916 `foot:' | Size of chunk, in bytes |
1917 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1919 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1920 chunk size (which is always a multiple of two words), is an in-use
1921 bit for the *previous* chunk. If that bit is *clear*, then the
1922 word before the current chunk size contains the previous chunk
1923 size, and can be used to find the front of the previous chunk.
1924 The very first chunk allocated always has this bit set,
1925 preventing access to non-existent (or non-owned) memory. If
1926 prev_inuse is set for any given chunk, then you CANNOT determine
1927 the size of the previous chunk, and might even get a memory
1928 addressing fault when trying to do so.
1930 Note that the `foot' of the current chunk is actually represented
1931 as the prev_size of the NEXT chunk. This makes it easier to
1932 deal with alignments etc but can be very confusing when trying
1933 to extend or adapt this code.
1935 The two exceptions to all this are
1937 1. The special chunk `top' doesn't bother using the
1938 trailing size field since there is no next contiguous chunk
1939 that would have to index off it. After initialization, `top'
1940 is forced to always exist. If it would become less than
1941 MINSIZE bytes long, it is replenished.
1943 2. Chunks allocated via mmap, which have the second-lowest-order
1944 bit (IS_MMAPPED) set in their size fields. Because they are
1945 allocated one-by-one, each must contain its own trailing size field.
1950 ---------- Size and alignment checks and conversions ----------
1953 /* conversion from malloc headers to user pointers, and back */
1955 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1956 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
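/*
  Editorial note: these two macros are exact inverses. The pointer
  handed to the user sits 2*SIZE_SZ bytes past the start of the chunk,
  i.e. just beyond the prev_size and size fields.
*/
#if 0
static void chunk2mem_roundtrip_example(mchunkptr p) {
  Void_t* mem = chunk2mem(p);      /* 2*SIZE_SZ bytes past p */
  assert(mem2chunk(mem) == p);     /* exact inverse          */
}
#endif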
1958 /* The smallest possible chunk */
1959 #define MIN_CHUNK_SIZE (sizeof (struct malloc_chunk))
1961 /* The smallest size we can malloc is an aligned minimal chunk */
1963 #define MINSIZE \
1964 (CHUNK_SIZE_T)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
1966 /* Check if m has acceptable alignment */
1968 #define aligned_OK(m) (((PTR_UINT)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1972 Check if a request is so large that it would wrap around zero when
1973 padded and aligned. To simplify some other code, the bound is made
1974 low enough so that adding MINSIZE will also not wrap around zero.
1977 #define REQUEST_OUT_OF_RANGE(req) \
1978 ((CHUNK_SIZE_T)(req) >= \
1979 (CHUNK_SIZE_T)(INTERNAL_SIZE_T)(-2 * MINSIZE))
1981 /* pad request bytes into a usable size -- internal version */
1983 #define request2size(req) \
1984 (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \
1985 MINSIZE : \
1986 ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
1988 /* Same, except also perform argument check */
1990 #define checked_request2size(req, sz) \
1991 if (REQUEST_OUT_OF_RANGE(req)) { \
1992 MALLOC_FAILURE_ACTION; \
1993 return 0; \
1995 (sz) = request2size(req);
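/*
  Worked example (editorial), assuming the common 32-bit configuration
  SIZE_SZ == 4 and MALLOC_ALIGNMENT == 8, so MALLOC_ALIGN_MASK == 7 and
  MINSIZE == 16: padding adds SIZE_SZ bytes of header overhead and
  rounds up to the alignment, with a floor of MINSIZE.
*/
#if 0
static void request2size_examples(void) {
  assert(request2size(0)  == 16);  /* 0+4+7 < MINSIZE, bumped to 16 */
  assert(request2size(12) == 16);  /* (12+4+7) & ~7 == 16           */
  assert(request2size(20) == 24);  /* (20+4+7) & ~7 == 24           */
  assert(request2size(25) == 32);  /* (25+4+7) & ~7 == 32           */
}
#endif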
1998 --------------- Physical chunk operations ---------------
2002 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
2003 #define PREV_INUSE 0x1
2005 /* extract inuse bit of previous chunk */
2006 #define prev_inuse(p) ((p)->size & PREV_INUSE)
2009 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
2010 #define IS_MMAPPED 0x2
2012 /* check for mmap()'ed chunk */
2013 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
2016 Bits to mask off when extracting size
2018 Note: IS_MMAPPED is intentionally not masked off from size field in
2019 macros for which mmapped chunks should never be seen. This should
2020 cause helpful core dumps to occur if it is tried by accident by
2021 people extending or adapting this malloc.
2023 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
2025 /* Get size, ignoring use bits */
2026 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
2029 /* Ptr to next physical malloc_chunk. */
2030 #define next_chunk(p) ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))
2032 /* Ptr to previous physical malloc_chunk */
2033 #define prev_chunk(p) ((mchunkptr)(((char*)(p)) - ((p)->prev_size)))
2035 /* Treat space at ptr + offset as a chunk */
2036 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2038 /* extract p's inuse bit */
2039 #define inuse(p)\
2040 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
2042 /* set/clear chunk as being inuse without otherwise disturbing */
2043 #define set_inuse(p)\
2044 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
2046 #define clear_inuse(p)\
2047 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
2050 /* check/set/clear inuse bits in known places */
2051 #define inuse_bit_at_offset(p, s)\
2052 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
2054 #define set_inuse_bit_at_offset(p, s)\
2055 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
2057 #define clear_inuse_bit_at_offset(p, s)\
2058 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
2061 /* Set size at head, without disturbing its use bit */
2062 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
2064 /* Set size/use field */
2065 #define set_head(p, s) ((p)->size = (s))
2067 /* Set size at footer (only when chunk is not in use) */
2068 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
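/*
  Illustrative sketch (editorial, not part of dlmalloc): the split
  idiom that the allocation routines below build from these macros.
  Given a chunk p of `size' bytes and a needed size nb (both already
  aligned, with size >= nb + MINSIZE), the front nb bytes become the
  returned chunk and the tail becomes a free remainder carrying its
  boundary tags.
*/
#if 0
static mchunkptr split_chunk_example(mchunkptr p, size_t size, size_t nb) {
  mchunkptr remainder = chunk_at_offset(p, nb);
  set_head(p, nb | PREV_INUSE);                  /* front part: in use */
  set_head(remainder, (size - nb) | PREV_INUSE); /* prev (p) is in use */
  set_foot(remainder, size - nb);                /* trailing size tag  */
  return remainder;                  /* caller bins or reuses the tail */
}
#endif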
2072 -------------------- Internal data structures --------------------
2074 All internal state is held in an instance of malloc_state defined
2075 below. There are no other static variables, except in two optional
2076 cases:
2077 * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above.
2078 * If HAVE_MMAP is true, but mmap doesn't support
2079 MAP_ANONYMOUS, a dummy file descriptor for mmap.
2081 Beware of lots of tricks that minimize the total bookkeeping space
2082 requirements. The result is a little over 1K bytes (for 4-byte
2083 pointers and size_t).
2087 Bins
2089 An array of bin headers for free chunks. Each bin is doubly
2090 linked. The bins are approximately proportionally (log) spaced.
2091 There are a lot of these bins (128). This may look excessive, but
2092 works very well in practice. Most bins hold sizes that are
2093 unusual as malloc request sizes, but are more usual for fragments
2094 and consolidated sets of chunks, which is what these bins hold, so
2095 they can be found quickly. All procedures maintain the invariant
2096 that no consolidated chunk physically borders another one, so each
2097 chunk in a list is known to be preceded and followed by either
2098 inuse chunks or the ends of memory.
2100 Chunks in bins are kept in size order, with ties going to the
2101 approximately least recently used chunk. Ordering isn't needed
2102 for the small bins, which all contain the same-sized chunks, but
2103 facilitates best-fit allocation for larger chunks. These lists
2104 are just sequential. Keeping them in order almost never requires
2105 enough traversal to warrant using fancier ordered data
2106 structures.
2108 Chunks of the same size are linked with the most
2109 recently freed at the front, and allocations are taken from the
2110 back. This results in LRU (FIFO) allocation order, which tends
2111 to give each chunk an equal opportunity to be consolidated with
2112 adjacent freed chunks, resulting in larger free chunks and less
2113 fragmentation.
2115 To simplify use in double-linked lists, each bin header acts
2116 as a malloc_chunk. This avoids special-casing for headers.
2117 But to conserve space and improve locality, we allocate
2118 only the fd/bk pointers of bins, and then use repositioning tricks
2119 to treat these as the fields of a malloc_chunk*.
2122 typedef struct malloc_chunk* mbinptr;
2124 /* addressing -- note that bin_at(0) does not exist */
2125 #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))
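/*
  Editorial note on the repositioning trick: bin_at(m, i) points
  2*SIZE_SZ bytes *before* bins[2*i], so that -- assuming the usual
  layout of struct malloc_chunk with no internal padding -- the fd and
  bk fields of the resulting fake chunk land exactly on bins[2*i] and
  bins[2*i + 1]. Its prev_size/size fields overlap the previous bin's
  pointers and are never accessed through it.
*/
#if 0
static void bin_at_layout_example(mchunkptr* bins, int i) {
  /* what bin_at(m, i) computes, written out on a raw bins[] array */
  mbinptr b = (mbinptr)((char*)&bins[(i)<<1] - (SIZE_SZ<<1));
  assert((char*)&(b->fd) == (char*)&bins[(i<<1)]);
  assert((char*)&(b->bk) == (char*)&bins[(i<<1) + 1]);
}
#endif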
2127 /* analog of ++bin */
2128 #define next_bin(b) ((mbinptr)((char*)(b) + (sizeof (mchunkptr)<<1)))
2130 /* Reminders about list directionality within bins */
2131 #define first(b) ((b)->fd)
2132 #define last(b) ((b)->bk)
2134 /* Take a chunk off a bin list */
2135 #define unlink(P, BK, FD) { \
2136 FD = P->fd; \
2137 BK = P->bk; \
2138 FD->bk = BK; \
2139 BK->fd = FD; \
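/*
  Usage sketch (editorial): unlink() splices a chunk out of whichever
  doubly-linked bin list it is on, using two caller-supplied
  temporaries, exactly as the allocation code later in this file does.
*/
#if 0
static void unlink_example(mchunkptr victim) {
  mchunkptr bck;               /* receives victim->bk */
  mchunkptr fwd;               /* receives victim->fd */
  unlink(victim, bck, fwd);
  /* victim is now on no list and can be split, returned, or rebinned */
}
#endif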
2143 Indexing
2145 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
2146 8 bytes apart. Larger bins are approximately logarithmically spaced:
2148 64 bins of size 8
2149 32 bins of size 64
2150 16 bins of size 512
2151 8 bins of size 4096
2152 4 bins of size 32768
2153 2 bins of size 262144
2154 1 bin of size what's left
2156 The bins top out around 1MB because we expect to service large
2157 requests via mmap.
2160 #define NBINS 96
2161 #define NSMALLBINS 32
2162 #define SMALLBIN_WIDTH 8
2163 #define MIN_LARGE_SIZE 256
2165 #define in_smallbin_range(sz) \
2166 ((CHUNK_SIZE_T)(sz) < (CHUNK_SIZE_T)MIN_LARGE_SIZE)
2168 #define smallbin_index(sz) (((unsigned)(sz)) >> 3)
2171 Compute index for size. We expect this to be inlined when
2172 compiled with optimization; if it is not inlined, that works out well too.
2174 static int largebin_index(unsigned int sz) {
2175 unsigned int x = sz >> SMALLBIN_WIDTH;
2176 unsigned int m; /* bit position of highest set bit of x */
2178 if (x >= 0x10000) return NBINS-1;
2180 /* On intel, use BSRL instruction to find highest bit */
2181 #if defined(__GNUC__) && defined(i386)
2183 __asm__("bsrl %1,%0\n\t"
2184 : "=r" (m)
2185 : "g" (x));
2187 #else
2190 Based on branch-free nlz algorithm in chapter 5 of Henry
2191 S. Warren Jr's book "Hacker's Delight".
2194 unsigned int n = ((x - 0x100) >> 16) & 8;
2195 x <<= n;
2196 m = ((x - 0x1000) >> 16) & 4;
2197 n += m;
2198 x <<= m;
2199 m = ((x - 0x4000) >> 16) & 2;
2200 n += m;
2201 x = (x << m) >> 14;
2202 m = 13 - n + (x & ~(x>>1));
2204 #endif
2206 /* Use next 2 bits to create finer-granularity bins */
2207 return NSMALLBINS + (m << 2) + ((sz >> (m + 6)) & 3);
2210 #define bin_index(sz) \
2211 ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
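/*
  Worked example (editorial): with NSMALLBINS == 32, SMALLBIN_WIDTH == 8
  and MIN_LARGE_SIZE == 256, a 64-byte chunk is "small" and indexes bin
  64>>3 == 8. A 1000-byte chunk is large: x = 1000>>8 == 3, whose
  highest set bit is at position m == 1, so largebin_index returns
  32 + (1<<2) + ((1000>>7) & 3) == 32 + 4 + 3 == 39.
*/
#if 0
static void bin_index_examples(void) {
  assert(bin_index(64)   == 8);   /* smallbin: 64 >> 3           */
  assert(bin_index(1000) == 39);  /* largebin, as computed above */
}
#endif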
2214 FIRST_SORTED_BIN_SIZE is the chunk size corresponding to the
2215 first bin that is maintained in sorted order. This must
2216 be the smallest size corresponding to a given bin.
2218 Normally, this should be MIN_LARGE_SIZE. But you can weaken
2219 best fit guarantees to sometimes speed up malloc by increasing the value.
2220 Doing this means that malloc may choose a chunk that is
2221 non-best-fitting by up to the width of the bin.
2223 Some useful cutoff values:
2224 512 - all bins sorted
2225 2560 - leaves bins <= 64 bytes wide unsorted
2226 12288 - leaves bins <= 512 bytes wide unsorted
2227 65536 - leaves bins <= 4096 bytes wide unsorted
2228 262144 - leaves bins <= 32768 bytes wide unsorted
2229 -1 - no bins sorted (not recommended!)
2232 #define FIRST_SORTED_BIN_SIZE MIN_LARGE_SIZE
2233 /* #define FIRST_SORTED_BIN_SIZE 65536 */
2236 Unsorted chunks
2238 All remainders from chunk splits, as well as all returned chunks,
2239 are first placed in the "unsorted" bin. They are then placed
2240 in regular bins after malloc gives them ONE chance to be used before
2241 binning. So, basically, the unsorted_chunks list acts as a queue,
2242 with chunks being placed on it in free (and malloc_consolidate),
2243 and taken off (to be either used or placed in bins) in malloc.
2246 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
2247 #define unsorted_chunks(M) (bin_at(M, 1))
2252 The top-most available chunk (i.e., the one bordering the end of
2253 available memory) is treated specially. It is never included in
2254 any bin, is used only if no other chunk is available, and is
2255 released back to the system if it is very large (see
2256 M_TRIM_THRESHOLD). Because top initially
2257 points to its own bin with initial zero size, thus forcing
2258 extension on the first malloc request, we avoid having any special
2259 code in malloc to check whether it even exists yet. But we still
2260 need to do so when getting memory from system, so we make
2261 initial_top treat the bin as a legal but unusable chunk during the
2262 interval between initialization and the first call to
2263 sYSMALLOc. (This is somewhat delicate, since it relies on
2264 the 2 preceding words to be zero during this interval as well.)
2267 /* Conveniently, the unsorted bin can be used as dummy top on first call */
2268 #define initial_top(M) (unsorted_chunks(M))
2271 Binmap
2273 To help compensate for the large number of bins, a one-level index
2274 structure is used for bin-by-bin searching. `binmap' is a
2275 bitvector recording whether bins are definitely empty so they can
2276 be skipped over during traversals. The bits are NOT always
2277 cleared as soon as bins are empty, but instead only
2278 when they are noticed to be empty during traversal in malloc.
2281 /* Conservatively use 32 bits per map word, even if on 64bit system */
2282 #define BINMAPSHIFT 5
2283 #define BITSPERMAP (1U << BINMAPSHIFT)
2284 #define BINMAPSIZE (NBINS / BITSPERMAP)
2286 #define idx2block(i) ((i) >> BINMAPSHIFT)
2287 #define idx2bit(i) ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))
2289 #define mark_bin(m,i) ((m)->binmap[idx2block(i)] |= idx2bit(i))
2290 #define unmark_bin(m,i) ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
2291 #define get_binmap(m,i) ((m)->binmap[idx2block(i)] & idx2bit(i))
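/*
  Worked example (editorial): bin 39 lives in map word
  idx2block(39) == 39>>5 == 1, at bit idx2bit(39) == 1 << (39 & 31)
  == 0x80. mark_bin()/unmark_bin() set and clear that bit, and
  get_binmap() tests it so the bin-by-bin search in malloc can skip
  blocks of definitely-empty bins.
*/
#if 0
static void binmap_examples(void) {
  assert(idx2block(39) == 1);      /* 39 >> 5        */
  assert(idx2bit(39)   == 0x80U);  /* 1 << (39 & 31) */
}
#endif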
2294 Fastbins
2296 An array of lists holding recently freed small chunks. Fastbins
2297 are not doubly linked. It is faster to single-link them, and
2298 since chunks are never removed from the middles of these lists,
2299 double linking is not necessary. Also, unlike regular bins, they
2300 are not even processed in FIFO order (they use faster LIFO) since
2301 ordering doesn't much matter in the transient contexts in which
2302 fastbins are normally used.
2304 Chunks in fastbins keep their inuse bit set, so they cannot
2305 be consolidated with other free chunks. malloc_consolidate
2306 releases all chunks in fastbins and consolidates them with
2307 other free chunks.
2310 typedef struct malloc_chunk* mfastbinptr;
2312 /* offset 2 to use otherwise unindexable first 2 bins */
2313 #define fastbin_index(sz) ((((unsigned int)(sz)) >> 3) - 2)
2315 /* The maximum fastbin request size we support */
2316 #define MAX_FAST_SIZE 80
2318 #define NFASTBINS (fastbin_index(request2size(MAX_FAST_SIZE))+1)
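/*
  Worked example (editorial), again assuming SIZE_SZ == 4 and 8-byte
  alignment: the smallest chunk (MINSIZE == 16) indexes fastbin
  (16>>3) - 2 == 0, and request2size(MAX_FAST_SIZE) == 88 indexes
  fastbin 9, so NFASTBINS evaluates to 10.
*/
#if 0
static void fastbin_index_examples(void) {
  assert(fastbin_index(16) == 0);               /* MINSIZE chunk  */
  assert(fastbin_index(request2size(80)) == 9); /* 88-byte chunk  */
  assert(NFASTBINS == 10);
}
#endif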
2321 FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
2322 that triggers automatic consolidation of possibly-surrounding
2323 fastbin chunks. This is a heuristic, so the exact value should not
2324 matter too much. It is defined at half the default trim threshold as a
2325 compromise heuristic to only attempt consolidation if it is likely
2326 to lead to trimming. However, it is not dynamically tunable, since
2327 consolidation reduces fragmentation surrounding large chunks even
2328 if trimming is not used.
2331 #define FASTBIN_CONSOLIDATION_THRESHOLD \
2332 ((unsigned long)(DEFAULT_TRIM_THRESHOLD) >> 1)
2335 Since the lowest 2 bits in max_fast don't matter in size comparisons,
2336 they are used as flags.
2340 ANYCHUNKS_BIT held in max_fast indicates that there may be any
2341 freed chunks at all. It is set true when entering a chunk into any
2342 bin.
2345 #define ANYCHUNKS_BIT (1U)
2347 #define have_anychunks(M) (((M)->max_fast & ANYCHUNKS_BIT))
2348 #define set_anychunks(M) ((M)->max_fast |= ANYCHUNKS_BIT)
2349 #define clear_anychunks(M) ((M)->max_fast &= ~ANYCHUNKS_BIT)
2352 FASTCHUNKS_BIT held in max_fast indicates that there are probably
2353 some fastbin chunks. It is set true on entering a chunk into any
2354 fastbin, and cleared only in malloc_consolidate.
2357 #define FASTCHUNKS_BIT (2U)
2359 #define have_fastchunks(M) (((M)->max_fast & FASTCHUNKS_BIT))
2360 #define set_fastchunks(M) ((M)->max_fast |= (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
2361 #define clear_fastchunks(M) ((M)->max_fast &= ~(FASTCHUNKS_BIT))
2364 Set value of max_fast.
2365 Use impossibly small value if 0.
2368 #define set_max_fast(M, s) \
2369 (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
2370 ((M)->max_fast & (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
2372 #define get_max_fast(M) \
2373 ((M)->max_fast & ~(FASTCHUNKS_BIT | ANYCHUNKS_BIT))
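/*
  Illustrative sketch (editorial): because chunk sizes are always at
  least double-word aligned (see above), the two low bits of max_fast
  never matter in size comparisons and are free to carry the ANYCHUNKS
  and FASTCHUNKS flags; set_max_fast() rewrites only the size part and
  get_max_fast() masks the flags back off. (mstate is only typedef'd
  further below; this fragment is purely illustrative.)
*/
#if 0
static void max_fast_flags_example(mstate av) {
  set_fastchunks(av);              /* sets both flag bits             */
  set_max_fast(av, 64);            /* change the threshold ...        */
  assert(have_fastchunks(av));     /* ... without losing the flags    */
  assert(get_max_fast(av) == request2size(64));
}
#endif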
2377 morecore_properties is a status word holding dynamically discovered
2378 or controlled properties of the morecore function
2381 #define MORECORE_CONTIGUOUS_BIT (1U)
2383 #define contiguous(M) \
2384 (((M)->morecore_properties & MORECORE_CONTIGUOUS_BIT))
2385 #define noncontiguous(M) \
2386 (((M)->morecore_properties & MORECORE_CONTIGUOUS_BIT) == 0)
2387 #define set_contiguous(M) \
2388 ((M)->morecore_properties |= MORECORE_CONTIGUOUS_BIT)
2389 #define set_noncontiguous(M) \
2390 ((M)->morecore_properties &= ~MORECORE_CONTIGUOUS_BIT)
2394 ----------- Internal state representation and initialization -----------
2397 struct malloc_state {
2399 /* The maximum chunk size to be eligible for fastbin */
2400 INTERNAL_SIZE_T max_fast; /* low 2 bits used as flags */
2402 /* Fastbins */
2403 mfastbinptr fastbins[NFASTBINS];
2405 /* Base of the topmost chunk -- not otherwise kept in a bin */
2406 mchunkptr top;
2408 /* The remainder from the most recent split of a small request */
2409 mchunkptr last_remainder;
2411 /* Normal bins packed as described above */
2412 mchunkptr bins[NBINS * 2];
2414 /* Bitmap of bins. Trailing zero map handles cases of largest binned size */
2415 unsigned int binmap[BINMAPSIZE+1];
2417 /* Tunable parameters */
2418 CHUNK_SIZE_T trim_threshold;
2419 INTERNAL_SIZE_T top_pad;
2420 INTERNAL_SIZE_T mmap_threshold;
2422 /* Memory map support */
2423 int n_mmaps;
2424 int n_mmaps_max;
2425 int max_n_mmaps;
2427 /* Cache malloc_getpagesize */
2428 unsigned int pagesize;
2430 /* Track properties of MORECORE */
2431 unsigned int morecore_properties;
2433 /* Statistics */
2434 INTERNAL_SIZE_T mmapped_mem;
2435 INTERNAL_SIZE_T sbrked_mem;
2436 INTERNAL_SIZE_T max_sbrked_mem;
2437 INTERNAL_SIZE_T max_mmapped_mem;
2438 INTERNAL_SIZE_T max_total_mem;
2441 typedef struct malloc_state *mstate;
2444 There is exactly one instance of this struct in this malloc.
2445 If you are adapting this malloc in a way that does NOT use a static
2446 malloc_state, you MUST explicitly zero-fill it before using. This
2447 malloc relies on the property that malloc_state is initialized to
2448 all zeroes (as is true of C statics).
2451 static struct malloc_state av_; /* never directly referenced */
2454 All uses of av_ are via get_malloc_state().
2455 At most one "call" to get_malloc_state is made per invocation of
2456 the public versions of malloc and free, but other routines
2457 that in turn invoke malloc and/or free may call it more than once.
2458 Also, it is called in check* routines if DEBUG is set.
2461 #define get_malloc_state() (&(av_))
2464 Initialize a malloc_state struct.
2466 This is called only from within malloc_consolidate, which needs to
2467 be called in the same contexts anyway. It is never called directly
2468 outside of malloc_consolidate because some optimizing compilers try
2469 to inline it at all call points, which turns out not to be an
2470 optimization at all. (Inlining it in malloc_consolidate is fine though.)
2473 #if __STD_C
2474 static void malloc_init_state(mstate av)
2475 #else
2476 static void malloc_init_state(av) mstate av;
2477 #endif
2479 int i;
2480 mbinptr bin;
2482 /* Establish circular links for normal bins */
2483 for (i = 1; i < NBINS; ++i) {
2484 bin = bin_at(av,i);
2485 bin->fd = bin->bk = bin;
2488 av->top_pad = DEFAULT_TOP_PAD;
2489 av->n_mmaps_max = DEFAULT_MMAP_MAX;
2490 av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2491 av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
2493 #if MORECORE_CONTIGUOUS
2494 set_contiguous(av);
2495 #else
2496 set_noncontiguous(av);
2497 #endif
2500 set_max_fast(av, DEFAULT_MXFAST);
2502 av->top = initial_top(av);
2503 av->pagesize = malloc_getpagesize;
2507 Other internal utilities operating on mstates
2510 #if __STD_C
2511 static Void_t* sYSMALLOc(INTERNAL_SIZE_T, mstate);
2512 static int sYSTRIm(size_t, mstate);
2513 static void malloc_consolidate(mstate);
2514 static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
2515 #else
2516 static Void_t* sYSMALLOc();
2517 static int sYSTRIm();
2518 static void malloc_consolidate();
2519 static Void_t** iALLOc();
2520 #endif
2523 Debugging support
2525 These routines make a number of assertions about the states
2526 of data structures that should be true at all times. If any
2527 are not true, it's very likely that a user program has somehow
2528 trashed memory. (It's also possible that there is a coding error
2529 in malloc. In which case, please report it!)
2532 #if ! DEBUG
2534 # define check_chunk(P)
2535 # define check_free_chunk(P)
2536 # define check_inuse_chunk(P)
2537 # define check_remalloced_chunk(P,N)
2538 # define check_malloced_chunk(P,N)
2539 # define check_malloc_state()
2541 #else
2542 # define check_chunk(P) do_check_chunk(P)
2543 # define check_free_chunk(P) do_check_free_chunk(P)
2544 # define check_inuse_chunk(P) do_check_inuse_chunk(P)
2545 # define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
2546 # define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
2547 # define check_malloc_state() do_check_malloc_state()
2550 Properties of all chunks
2553 # if __STD_C
2554 static void do_check_chunk(mchunkptr p)
2555 # else
2556 static void do_check_chunk(p) mchunkptr p;
2557 # endif
2559 mstate av = get_malloc_state();
2560 CHUNK_SIZE_T sz = chunksize(p);
2561 /* min and max possible addresses assuming contiguous allocation */
2562 char* max_address = (char*)(av->top) + chunksize(av->top);
2563 char* min_address = max_address - av->sbrked_mem;
2565 if (!chunk_is_mmapped(p)) {
2567 /* Has legal address ... */
2568 if (p != av->top) {
2569 if (contiguous(av)) {
2570 assert(((char*)p) >= min_address);
2571 assert(((char*)p + sz) <= ((char*)(av->top)));
2574 else {
2575 /* top size is always at least MINSIZE */
2576 assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
2577 /* top predecessor always marked inuse */
2578 assert(prev_inuse(p));
2582 else {
2583 # if HAVE_MMAP
2584 /* address is outside main heap */
2585 if (contiguous(av) && av->top != initial_top(av)) {
2586 assert(((char*)p) < min_address || ((char*)p) > max_address);
2588 /* chunk is page-aligned */
2589 assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
2590 /* mem is aligned */
2591 assert(aligned_OK(chunk2mem(p)));
2592 # else
2593 /* force an appropriate assert violation if debug set */
2594 assert(!chunk_is_mmapped(p));
2595 # endif
2600 Properties of free chunks
2603 # if __STD_C
2604 static void do_check_free_chunk(mchunkptr p)
2605 # else
2606 static void do_check_free_chunk(p) mchunkptr p;
2607 # endif
2609 mstate av = get_malloc_state();
2611 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2612 mchunkptr next = chunk_at_offset(p, sz);
2614 do_check_chunk(p);
2616 /* Chunk must claim to be free ... */
2617 assert(!inuse(p));
2618 assert (!chunk_is_mmapped(p));
2620 /* Unless a special marker, must have OK fields */
2621 if ((CHUNK_SIZE_T)(sz) >= MINSIZE)
2623 assert((sz & MALLOC_ALIGN_MASK) == 0);
2624 assert(aligned_OK(chunk2mem(p)));
2625 /* ... matching footer field */
2626 assert(next->prev_size == sz);
2627 /* ... and is fully consolidated */
2628 assert(prev_inuse(p));
2629 assert (next == av->top || inuse(next));
2631 /* ... and has minimally sane links */
2632 assert(p->fd->bk == p);
2633 assert(p->bk->fd == p);
2635 else /* markers are always of size SIZE_SZ */
2636 assert(sz == SIZE_SZ);
2640 Properties of inuse chunks
2643 # if __STD_C
2644 static void do_check_inuse_chunk(mchunkptr p)
2645 # else
2646 static void do_check_inuse_chunk(p) mchunkptr p;
2647 # endif
2649 mstate av = get_malloc_state();
2650 mchunkptr next;
2651 do_check_chunk(p);
2653 if (chunk_is_mmapped(p))
2654 return; /* mmapped chunks have no next/prev */
2656 /* Check whether it claims to be in use ... */
2657 assert(inuse(p));
2659 next = next_chunk(p);
2661 /* ... and is surrounded by OK chunks.
2662 Since more things can be checked with free chunks than inuse ones,
2663 if an inuse chunk borders them and debug is on, it's worth doing them.
2665 if (!prev_inuse(p)) {
2666 /* Note that we cannot even look at prev unless it is not inuse */
2667 mchunkptr prv = prev_chunk(p);
2668 assert(next_chunk(prv) == p);
2669 do_check_free_chunk(prv);
2672 if (next == av->top) {
2673 assert(prev_inuse(next));
2674 assert(chunksize(next) >= MINSIZE);
2676 else if (!inuse(next))
2677 do_check_free_chunk(next);
2681 Properties of chunks recycled from fastbins
2684 # if __STD_C
2685 static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2686 # else
2687 static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2688 # endif
2690 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2692 do_check_inuse_chunk(p);
2694 /* Legal size ... */
2695 assert((sz & MALLOC_ALIGN_MASK) == 0);
2696 assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
2697 /* ... and alignment */
2698 assert(aligned_OK(chunk2mem(p)));
2699 /* chunk is less than MINSIZE more than request */
2700 assert((long)(sz) - (long)(s) >= 0);
2701 assert((long)(sz) - (long)(s + MINSIZE) < 0);
2705 Properties of nonrecycled chunks at the point they are malloced
2708 # if __STD_C
2709 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2710 # else
2711 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2712 # endif
2714 /* same as recycled case ... */
2715 do_check_remalloced_chunk(p, s);
2718 ... plus, must obey implementation invariant that prev_inuse is
2719 always true of any allocated chunk; i.e., that each allocated
2720 chunk borders either a previously allocated and still in-use
2721 chunk, or the base of its memory arena. This is ensured
2722 by making all allocations from the `lowest' part of any found
2723 chunk. This does not necessarily hold however for chunks
2724 recycled via fastbins.
2727 assert(prev_inuse(p));
2732 Properties of malloc_state.
2734 This may be useful for debugging malloc, as well as detecting user
2735 programmer errors that somehow write into malloc_state.
2737 If you are extending or experimenting with this malloc, you can
2738 probably figure out how to hack this routine to print out or
2739 display chunk addresses, sizes, bins, and other instrumentation.
2742 static void do_check_malloc_state()
2744 mstate av = get_malloc_state();
2745 int i;
2746 mchunkptr p;
2747 mchunkptr q;
2748 mbinptr b;
2749 unsigned int binbit;
2750 int empty;
2751 unsigned int idx;
2752 INTERNAL_SIZE_T size;
2753 CHUNK_SIZE_T total = 0;
2754 int max_fast_bin;
2756 /* internal size_t must be no wider than pointer type */
2757 assert(sizeof (INTERNAL_SIZE_T) <= sizeof (char*));
2759 /* alignment is a power of 2 */
2760 assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);
2762 /* cannot run remaining checks until fully initialized */
2763 if (av->top == 0 || av->top == initial_top(av))
2764 return;
2766 /* pagesize is a power of 2 */
2767 assert((av->pagesize & (av->pagesize-1)) == 0);
2769 /* properties of fastbins */
2771 /* max_fast is in allowed range */
2772 assert(get_max_fast(av) <= request2size(MAX_FAST_SIZE));
2774 max_fast_bin = fastbin_index(av->max_fast);
2776 for (i = 0; i < NFASTBINS; ++i) {
2777 p = av->fastbins[i];
2779 /* all bins past max_fast are empty */
2780 if (i > max_fast_bin)
2781 assert(p == 0);
2783 while (p != 0) {
2784 /* each chunk claims to be inuse */
2785 do_check_inuse_chunk(p);
2786 total += chunksize(p);
2787 /* chunk belongs in this bin */
2788 assert(fastbin_index(chunksize(p)) == i);
2789 p = p->fd;
2793 if (total != 0)
2794 assert(have_fastchunks(av));
2795 else if (!have_fastchunks(av))
2796 assert(total == 0);
2798 /* check normal bins */
2799 for (i = 1; i < NBINS; ++i) {
2800 b = bin_at(av,i);
2802 /* binmap is accurate (except for bin 1 == unsorted_chunks) */
2803 if (i >= 2) {
2804 binbit = get_binmap(av,i);
2805 empty = last(b) == b;
2806 if (!binbit)
2807 assert(empty);
2808 else if (!empty)
2809 assert(binbit);
2812 for (p = last(b); p != b; p = p->bk) {
2813 /* each chunk claims to be free */
2814 do_check_free_chunk(p);
2815 size = chunksize(p);
2816 total += size;
2817 if (i >= 2) {
2818 /* chunk belongs in bin */
2819 idx = bin_index(size);
2820 assert(idx == i);
2821 /* lists are sorted */
2822 if ((CHUNK_SIZE_T) size >= (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
2823 assert(p->bk == b ||
2824 (CHUNK_SIZE_T)chunksize(p->bk) >=
2825 (CHUNK_SIZE_T)chunksize(p));
2828 /* chunk is followed by a legal chain of inuse chunks */
2829 for (q = next_chunk(p);
2830 (q != av->top && inuse(q) &&
2831 (CHUNK_SIZE_T)(chunksize(q)) >= MINSIZE);
2832 q = next_chunk(q))
2833 do_check_inuse_chunk(q);
2837 /* top chunk is OK */
2838 check_chunk(av->top);
2840 /* sanity checks for statistics */
2842 assert(total <= (CHUNK_SIZE_T)(av->max_total_mem));
2843 assert(av->n_mmaps >= 0);
2844 assert(av->n_mmaps <= av->max_n_mmaps);
2846 assert((CHUNK_SIZE_T)(av->sbrked_mem) <=
2847 (CHUNK_SIZE_T)(av->max_sbrked_mem));
2849 assert((CHUNK_SIZE_T)(av->mmapped_mem) <=
2850 (CHUNK_SIZE_T)(av->max_mmapped_mem));
2852 assert((CHUNK_SIZE_T)(av->max_total_mem) >=
2853 (CHUNK_SIZE_T)(av->mmapped_mem) + (CHUNK_SIZE_T)(av->sbrked_mem));
2855 #endif
2858 /* ----------- Routines dealing with system allocation -------------- */
2861 sysmalloc handles malloc cases requiring more memory from the system.
2862 On entry, it is assumed that av->top does not have enough
2863 space to service request for nb bytes, thus requiring that av->top
2864 be extended or replaced.
2867 #if __STD_C
2868 static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
2869 #else
2870 static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
2871 #endif
2873 mchunkptr old_top; /* incoming value of av->top */
2874 INTERNAL_SIZE_T old_size; /* its size */
2875 char* old_end; /* its end address */
2877 long size; /* arg to first MORECORE or mmap call */
2878 char* brk; /* return value from MORECORE */
2880 long correction; /* arg to 2nd MORECORE call */
2881 char* snd_brk; /* 2nd return val */
2883 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
2884 INTERNAL_SIZE_T end_misalign; /* partial page left at end of new space */
2885 char* aligned_brk; /* aligned offset into brk */
2887 mchunkptr p; /* the allocated/returned chunk */
2888 mchunkptr remainder; /* remainder from allocation */
2889 CHUNK_SIZE_T remainder_size; /* its size */
2891 CHUNK_SIZE_T sum; /* for updating stats */
2893 size_t pagemask = av->pagesize - 1;
2896 If there is space available in fastbins, consolidate and retry
2897 malloc from scratch rather than getting memory from system. This
2898 can occur only if nb is in smallbin range so we didn't consolidate
2899 upon entry to malloc. It is much easier to handle this case here
2900 than in malloc proper.
2903 if (have_fastchunks(av)) {
2904 assert(in_smallbin_range(nb));
2905 malloc_consolidate(av);
2906 return mALLOc(nb - MALLOC_ALIGN_MASK);
2910 #if HAVE_MMAP
2913 If have mmap, and the request size meets the mmap threshold, and
2914 the system supports mmap, and there are few enough currently
2915 allocated mmapped regions, try to directly map this request
2916 rather than expanding top.
2919 if ((CHUNK_SIZE_T)(nb) >= (CHUNK_SIZE_T)(av->mmap_threshold) &&
2920 (av->n_mmaps < av->n_mmaps_max)) {
2922 char* mm; /* return value from mmap call*/
2925 Round up size to nearest page. For mmapped chunks, the overhead
2926 is one SIZE_SZ unit larger than for normal chunks, because there
2927 is no following chunk whose prev_size field could be used.
2929 size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
2931 /* Don't try if size wraps around 0 */
2932 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
2934 mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
2936 if (mm != (char*)(MORECORE_FAILURE)) {
2939 The offset to the start of the mmapped region is stored
2940 in the prev_size field of the chunk. This allows us to adjust
2941 returned start address to meet alignment requirements here
2942 and in memalign(), and still be able to compute proper
2943 address argument for later munmap in free() and realloc().
2946 front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
2947 if (front_misalign > 0) {
2948 correction = MALLOC_ALIGNMENT - front_misalign;
2949 p = (mchunkptr)(mm + correction);
2950 p->prev_size = correction;
2951 set_head(p, (size - correction) |IS_MMAPPED);
2953 else {
2954 p = (mchunkptr)mm;
2955 p->prev_size = 0;
2956 set_head(p, size|IS_MMAPPED);
2959 /* update statistics */
2961 if (++av->n_mmaps > av->max_n_mmaps)
2962 av->max_n_mmaps = av->n_mmaps;
2964 sum = av->mmapped_mem += size;
2965 if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
2966 av->max_mmapped_mem = sum;
2967 sum += av->sbrked_mem;
2968 if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
2969 av->max_total_mem = sum;
2971 check_chunk(p);
2973 return chunk2mem(p);
2977 #endif
2979 /* Record incoming configuration of top */
2981 old_top = av->top;
2982 old_size = chunksize(old_top);
2983 old_end = (char*)(chunk_at_offset(old_top, old_size));
2985 brk = snd_brk = (char*)(MORECORE_FAILURE);
2988 If not the first time through, we require old_size to be
2989 at least MINSIZE and to have prev_inuse set.
2992 assert((old_top == initial_top(av) && old_size == 0) ||
2993 ((CHUNK_SIZE_T) (old_size) >= MINSIZE &&
2994 prev_inuse(old_top)));
2996 /* Precondition: not enough current space to satisfy nb request */
2997 assert((CHUNK_SIZE_T)(old_size) < (CHUNK_SIZE_T)(nb + MINSIZE));
2999 /* Precondition: all fastbins are consolidated */
3000 assert(!have_fastchunks(av));
3003 /* Request enough space for nb + pad + overhead */
3005 size = nb + av->top_pad + MINSIZE;
3008 If contiguous, we can subtract out existing space that we hope to
3009 combine with new space. We add it back later only if
3010 we don't actually get contiguous space.
3013 if (contiguous(av))
3014 size -= old_size;
3017 Round to a multiple of page size.
3018 If MORECORE is not contiguous, this ensures that we only call it
3019 with whole-page arguments. And if MORECORE is contiguous and
3020 this is not first time through, this preserves page-alignment of
3021 previous calls. Otherwise, we correct to page-align below.
3024 size = (size + pagemask) & ~pagemask;
3027 Don't try to call MORECORE if argument is so big as to appear
3028 negative. Note that since mmap takes a size_t arg, it may succeed
3029 below even if we cannot call MORECORE.
3032 if (size > 0)
3033 brk = (char*)(MORECORE(size));
3036 If have mmap, try using it as a backup when MORECORE fails or
3037 cannot be used. This is worth doing on systems that have "holes" in
3038 address space, so sbrk cannot extend to give contiguous space, but
3039 space is available elsewhere. Note that we ignore mmap max count
3040 and threshold limits, since the space will not be used as a
3041 segregated mmap region.
3044 #if HAVE_MMAP
3045 if (brk == (char*)(MORECORE_FAILURE)) {
3047 /* Cannot merge with old top, so add its size back in */
3048 if (contiguous(av))
3049 size = (size + old_size + pagemask) & ~pagemask;
3051 /* If we are relying on mmap as backup, then use larger units */
3052 if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(MMAP_AS_MORECORE_SIZE))
3053 size = MMAP_AS_MORECORE_SIZE;
3055 /* Don't try if size wraps around 0 */
3056 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
3058 brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
3060 if (brk != (char*)(MORECORE_FAILURE)) {
3062 /* We do not need, and cannot use, another sbrk call to find end */
3063 snd_brk = brk + size;
3066 Record that we no longer have a contiguous sbrk region.
3067 After the first time mmap is used as backup, we do not
3068 ever rely on contiguous space since this could incorrectly
3069 bridge regions.
3071 set_noncontiguous(av);
3075 #endif
3077 if (brk != (char*)(MORECORE_FAILURE)) {
3078 av->sbrked_mem += size;
3081 If MORECORE extends previous space, we can likewise extend top size.
3084 if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
3085 set_head(old_top, (size + old_size) | PREV_INUSE);
3089 Otherwise, make adjustments:
3091 * If the first time through or noncontiguous, we need to call sbrk
3092 just to find out where the end of memory lies.
3094 * We need to ensure that all returned chunks from malloc will meet
3095 MALLOC_ALIGNMENT
3097 * If there was an intervening foreign sbrk, we need to adjust sbrk
3098 request size to account for fact that we will not be able to
3099 combine new space with existing space in old_top.
3101 * Almost all systems internally allocate whole pages at a time, in
3102 which case we might as well use the whole last page of request.
3103 So we allocate enough more memory to hit a page boundary now,
3104 which in turn causes future contiguous calls to page-align.
3107 else {
3108 front_misalign = 0;
3109 end_misalign = 0;
3110 correction = 0;
3111 aligned_brk = brk;
3114 If MORECORE returns an address lower than we have seen before,
3115 we know it isn't really contiguous. This and some subsequent
3116 checks help cope with non-conforming MORECORE functions and
3117 the presence of "foreign" calls to MORECORE from outside of
3118 malloc or by other threads. We cannot guarantee to detect
3119 these in all cases, but cope with the ones we do detect.
3121 if (contiguous(av) && old_size != 0 && brk < old_end) {
3122 set_noncontiguous(av);
3125 /* handle contiguous cases */
3126 if (contiguous(av)) {
3129 We can tolerate forward non-contiguities here (usually due
3130 to foreign calls) but treat them as part of our space for
3131 stats reporting.
3133 if (old_size != 0)
3134 av->sbrked_mem += brk - old_end;
3136 /* Guarantee alignment of first new chunk made from this space */
3138 front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
3139 if (front_misalign > 0) {
3142 Skip over some bytes to arrive at an aligned position.
3143 We don't need to specially mark these wasted front bytes.
3144 They will never be accessed anyway because
3145 prev_inuse of av->top (and any chunk created from its start)
3146 is always true after initialization.
3149 correction = MALLOC_ALIGNMENT - front_misalign;
3150 aligned_brk += correction;
3154 If this isn't adjacent to existing space, then we will not
3155 be able to merge with old_top space, so must add to 2nd request.
3158 correction += old_size;
3160 /* Extend the end address to hit a page boundary */
3161 end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
3162 correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
3164 assert(correction >= 0);
3165 snd_brk = (char*)(MORECORE(correction));
3167 if (snd_brk == (char*)(MORECORE_FAILURE)) {
3169 If can't allocate correction, try to at least find out current
3170 brk. It might be enough to proceed without failing.
3172 correction = 0;
3173 snd_brk = (char*)(MORECORE(0));
3175 else if (snd_brk < brk) {
3177 If the second call gives noncontiguous space even though
3178 it says it won't, the only course of action is to ignore
3179 results of second call, and conservatively estimate where
3180 the first call left us. Also set noncontiguous, so this
3181 won't happen again, leaving at most one hole.
3183 Note that this check is intrinsically incomplete. Because
3184 MORECORE is allowed to give more space than we ask for,
3185 there is no reliable way to detect a noncontiguity
3186 producing a forward gap for the second call.
3188 snd_brk = brk + size;
3189 correction = 0;
3190 set_noncontiguous(av);
3194 /* handle non-contiguous cases */
3195 else {
3196 /* MORECORE/mmap must correctly align */
3197 assert(aligned_OK(chunk2mem(brk)));
3199 /* Find out current end of memory */
3200 if (snd_brk == (char*)(MORECORE_FAILURE)) {
3201 snd_brk = (char*)(MORECORE(0));
3202 av->sbrked_mem += snd_brk - brk - size;
3206 /* Adjust top based on results of second sbrk */
3207 if (snd_brk != (char*)(MORECORE_FAILURE)) {
3208 av->top = (mchunkptr)aligned_brk;
3209 set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
3210 av->sbrked_mem += correction;
3213 If not the first time through, we either have a
3214 gap due to foreign sbrk or a non-contiguous region. Insert a
3215 double fencepost at old_top to prevent consolidation with space
3216 we don't own. These fenceposts are artificial chunks that are
3217 marked as inuse and are in any case too small to use. We need
3218 two to make sizes and alignments work out.
3221 if (old_size != 0) {
3223 Shrink old_top to insert fenceposts, keeping size a
3224 multiple of MALLOC_ALIGNMENT. We know there is at least
3225 enough space in old_top to do this.
3227 old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
3228 set_head(old_top, old_size | PREV_INUSE);
3231 Note that the following assignments completely overwrite
3232 old_top when old_size was previously MINSIZE. This is
3233 intentional. We need the fencepost, even if old_top otherwise gets
3234 lost.
3236 chunk_at_offset(old_top, old_size)->size =
3237 SIZE_SZ|PREV_INUSE;
3239 chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
3240 SIZE_SZ|PREV_INUSE;
3243 If possible, release the rest, suppressing trimming.
3245 if (old_size >= MINSIZE) {
3246 INTERNAL_SIZE_T tt = av->trim_threshold;
3247 av->trim_threshold = (INTERNAL_SIZE_T)(-1);
3248 fREe(chunk2mem(old_top));
3249 av->trim_threshold = tt;
3255 /* Update statistics */
3256 sum = av->sbrked_mem;
3257 if (sum > (CHUNK_SIZE_T)(av->max_sbrked_mem))
3258 av->max_sbrked_mem = sum;
3260 sum += av->mmapped_mem;
3261 if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
3262 av->max_total_mem = sum;
3264 check_malloc_state();
3266 /* finally, do the allocation */
3268 p = av->top;
3269 size = chunksize(p);
3271 /* check that one of the above allocation paths succeeded */
3272 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
3273 remainder_size = size - nb;
3274 remainder = chunk_at_offset(p, nb);
3275 av->top = remainder;
3276 set_head(p, nb | PREV_INUSE);
3277 set_head(remainder, remainder_size | PREV_INUSE);
3278 check_malloced_chunk(p, nb);
3279 return chunk2mem(p);
3284 /* catch all failure paths */
3285 MALLOC_FAILURE_ACTION;
3286 return 0;
3293 sYSTRIm is an inverse of sorts to sYSMALLOc. It gives memory back
3294 to the system (via negative arguments to sbrk) if there is unused
3295 memory at the `high' end of the malloc pool. It is called
3296 automatically by free() when top space exceeds the trim
3297 threshold. It is also called by the public malloc_trim routine. It
3298 returns 1 if it actually released any memory, else 0.
3301 #if __STD_C
3302 static int sYSTRIm(size_t pad, mstate av)
3303 #else
3304 static int sYSTRIm(pad, av) size_t pad; mstate av;
3305 #endif
3307 long top_size; /* Amount of top-most memory */
3308 long extra; /* Amount to release */
3309 long released; /* Amount actually released */
3310 char* current_brk; /* address returned by pre-check sbrk call */
3311 char* new_brk; /* address returned by post-check sbrk call */
3312 size_t pagesz;
3314 pagesz = av->pagesize;
3315 top_size = chunksize(av->top);
3317 /* Release in pagesize units, keeping at least one page */
3318 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3320 if (extra > 0) {
3323 Only proceed if end of memory is where we last set it.
3324 This avoids problems if there were foreign sbrk calls.
3326 current_brk = (char*)(MORECORE(0));
3327 if (current_brk == (char*)(av->top) + top_size) {
3330 Attempt to release memory. We ignore MORECORE return value,
3331 and instead call again to find out where new end of memory is.
3332 This avoids problems if the first call releases less than we asked,
3333 or if failure somehow altered the brk value. (We could still
3334 encounter problems if it altered brk in some very bad way,
3335 but the only thing we can do is adjust anyway, which will cause
3336 some downstream failure.)
3339 MORECORE(-extra);
3340 new_brk = (char*)(MORECORE(0));
3342 if (new_brk != (char*)MORECORE_FAILURE) {
3343 released = (long)(current_brk - new_brk);
3345 if (released != 0) {
3346 /* Success. Adjust top. */
3347 av->sbrked_mem -= released;
3348 set_head(av->top, (top_size - released) | PREV_INUSE);
3349 check_malloc_state();
3350 return 1;
3355 return 0;
3359 ------------------------------ malloc ------------------------------
3363 #if __STD_C
3364 Void_t* mALLOc(size_t bytes)
3365 #else
3366 Void_t* mALLOc(bytes) size_t bytes;
3367 #endif
3369 mstate av = get_malloc_state();
3371 INTERNAL_SIZE_T nb; /* normalized request size */
3372 unsigned int idx; /* associated bin index */
3373 mbinptr bin; /* associated bin */
3374 mfastbinptr* fb; /* associated fastbin */
3376 mchunkptr victim; /* inspected/selected chunk */
3377 INTERNAL_SIZE_T size; /* its size */
3378 int victim_index; /* its bin index */
3380 mchunkptr remainder; /* remainder from a split */
3381 CHUNK_SIZE_T remainder_size; /* its size */
3383 unsigned int block; /* bit map traverser */
3384 unsigned int bit; /* bit map traverser */
3385 unsigned int map; /* current word of binmap */
3387 mchunkptr fwd; /* misc temp for linking */
3388 mchunkptr bck; /* misc temp for linking */
3391 Convert request size to internal form by adding SIZE_SZ bytes
3392 overhead plus possibly more to obtain necessary alignment and/or
3393 to obtain a size of at least MINSIZE, the smallest allocatable
3394 size. Also, checked_request2size traps (returning 0) request sizes
3395 that are so large that they wrap around zero when padded and
3396 aligned.
3399 checked_request2size(bytes, nb);
3402 Bypass search if no frees yet
3404 if (!have_anychunks(av)) {
3405 if (av->max_fast == 0) /* initialization check */
3406 malloc_consolidate(av);
3407 goto use_top;
3411 If the size qualifies as a fastbin, first check corresponding bin.
3414 if ((CHUNK_SIZE_T)(nb) <= (CHUNK_SIZE_T)(av->max_fast)) {
3415 fb = &(av->fastbins[(fastbin_index(nb))]);
3416 if ((victim = *fb) != 0) {
3417 *fb = victim->fd;
3418 check_remalloced_chunk(victim, nb);
3419 return chunk2mem(victim);
3424 If a small request, check regular bin. Since these "smallbins"
3425 hold one size each, no searching within bins is necessary.
3426 (For a large request, we need to wait until unsorted chunks are
3427 processed to find best fit. But for small ones, fits are exact
3428 anyway, so we can check now, which is faster.)
3431 if (in_smallbin_range(nb)) {
3432 idx = smallbin_index(nb);
3433 bin = bin_at(av,idx);
3435 if ((victim = last(bin)) != bin) {
3436 bck = victim->bk;
3437 set_inuse_bit_at_offset(victim, nb);
3438 bin->bk = bck;
3439 bck->fd = bin;
3441 check_malloced_chunk(victim, nb);
3442 return chunk2mem(victim);
3447 If this is a large request, consolidate fastbins before continuing.
3448 While it might look excessive to kill all fastbins before
3449 even seeing if there is space available, this avoids
3450 fragmentation problems normally associated with fastbins.
3451 Also, in practice, programs tend to have runs of either small or
3452 large requests, but less often mixtures, so consolidation is not
3453 invoked all that often in most programs. And the programs in which
3454 it is invoked frequently would otherwise tend to fragment.
3457 else {
3458 idx = largebin_index(nb);
3459 if (have_fastchunks(av))
3460 malloc_consolidate(av);
3464 Process recently freed or remaindered chunks, taking one only if
3465 it is an exact fit, or, if this is a small request, the chunk is the remainder from
3466 the most recent non-exact fit. Place other traversed chunks in
3467 bins. Note that this step is the only place in any routine where
3468 chunks are placed in bins.
3471 while ((victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
3472 bck = victim->bk;
3473 size = chunksize(victim);
3476 If a small request, try to use last remainder if it is the
3477 only chunk in unsorted bin. This helps promote locality for
3478 runs of consecutive small requests. This is the only
3479 exception to best-fit, and applies only when there is
3480 no exact fit for a small chunk.
3483 if (in_smallbin_range(nb) &&
3484 bck == unsorted_chunks(av) &&
3485 victim == av->last_remainder &&
3486 (CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
3488 /* split and reattach remainder */
3489 remainder_size = size - nb;
3490 remainder = chunk_at_offset(victim, nb);
3491 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3492 av->last_remainder = remainder;
3493 remainder->bk = remainder->fd = unsorted_chunks(av);
3495 set_head(victim, nb | PREV_INUSE);
3496 set_head(remainder, remainder_size | PREV_INUSE);
3497 set_foot(remainder, remainder_size);
3499 check_malloced_chunk(victim, nb);
3500 return chunk2mem(victim);
3503 /* remove from unsorted list */
3504 unsorted_chunks(av)->bk = bck;
3505 bck->fd = unsorted_chunks(av);
3507 /* Take now instead of binning if exact fit */
3509 if (size == nb) {
3510 set_inuse_bit_at_offset(victim, size);
3511 check_malloced_chunk(victim, nb);
3512 return chunk2mem(victim);
3515 /* place chunk in bin */
3517 if (in_smallbin_range(size)) {
3518 victim_index = smallbin_index(size);
3519 bck = bin_at(av, victim_index);
3520 fwd = bck->fd;
3522 else {
3523 victim_index = largebin_index(size);
3524 bck = bin_at(av, victim_index);
3525 fwd = bck->fd;
3527 if (fwd != bck) {
3528 /* if smaller than smallest, place first */
3529 if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(bck->bk->size)) {
3530 fwd = bck;
3531 bck = bck->bk;
3533 else if ((CHUNK_SIZE_T)(size) >=
3534 (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
3536 /* maintain large bins in sorted order */
3537 size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
3538 while ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(fwd->size))
3539 fwd = fwd->fd;
3540 bck = fwd->bk;
3545 mark_bin(av, victim_index);
3546 victim->bk = bck;
3547 victim->fd = fwd;
3548 fwd->bk = victim;
3549 bck->fd = victim;
3553 If a large request, scan through the chunks of current bin to
3554 find one that fits. (This will be the smallest that fits unless
3555 FIRST_SORTED_BIN_SIZE has been changed from default.) This is
3556 the only step where an unbounded number of chunks might be
3557 scanned without doing anything useful with them. However the
3558 lists tend to be short.
3561 if (!in_smallbin_range(nb)) {
3562 bin = bin_at(av, idx);
3564 for (victim = last(bin); victim != bin; victim = victim->bk) {
3565 size = chunksize(victim);
3567 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)) {
3568 remainder_size = size - nb;
3569 unlink(victim, bck, fwd);
3571 /* Exhaust */
3572 if (remainder_size < MINSIZE) {
3573 set_inuse_bit_at_offset(victim, size);
3574 check_malloced_chunk(victim, nb);
3575 return chunk2mem(victim);
3577 /* Split */
3578 else {
3579 remainder = chunk_at_offset(victim, nb);
3580 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3581 remainder->bk = remainder->fd = unsorted_chunks(av);
3582 set_head(victim, nb | PREV_INUSE);
3583 set_head(remainder, remainder_size | PREV_INUSE);
3584 set_foot(remainder, remainder_size);
3585 check_malloced_chunk(victim, nb);
3586 return chunk2mem(victim);
3593 Search for a chunk by scanning bins, starting with next largest
3594 bin. This search is strictly by best-fit; i.e., the smallest
3595 (with ties going to approximately the least recently used) chunk
3596 that fits is selected.
3598 The bitmap avoids needing to check that most blocks are nonempty.
3601 ++idx;
3602 bin = bin_at(av,idx);
3603 block = idx2block(idx);
3604 map = av->binmap[block];
3605 bit = idx2bit(idx);
3607 for (;;) {
3609 /* Skip rest of block if there are no more set bits in this block. */
3610 if (bit > map || bit == 0) {
3611 do {
3612 if (++block >= BINMAPSIZE) /* out of bins */
3613 goto use_top;
3614 } while ((map = av->binmap[block]) == 0);
3616 bin = bin_at(av, (block << BINMAPSHIFT));
3617 bit = 1;
3620 /* Advance to bin with set bit. There must be one. */
3621 while ((bit & map) == 0) {
3622 bin = next_bin(bin);
3623 bit <<= 1;
3624 assert(bit != 0);
3627 /* Inspect the bin. It is likely to be non-empty */
3628 victim = last(bin);
3630 /* If a false alarm (empty bin), clear the bit. */
3631 if (victim == bin) {
3632 av->binmap[block] = map &= ~bit; /* Write through */
3633 bin = next_bin(bin);
3634 bit <<= 1;
3637 else {
3638 size = chunksize(victim);
3640 /* We know the first chunk in this bin is big enough to use. */
3641 assert((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb));
3643 remainder_size = size - nb;
3645 /* unlink */
3646 bck = victim->bk;
3647 bin->bk = bck;
3648 bck->fd = bin;
3650 /* Exhaust */
3651 if (remainder_size < MINSIZE) {
3652 set_inuse_bit_at_offset(victim, size);
3653 check_malloced_chunk(victim, nb);
3654 return chunk2mem(victim);
3657 /* Split */
3658 else {
3659 remainder = chunk_at_offset(victim, nb);
3661 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3662 remainder->bk = remainder->fd = unsorted_chunks(av);
3663 /* advertise as last remainder */
3664 if (in_smallbin_range(nb))
3665 av->last_remainder = remainder;
3667 set_head(victim, nb | PREV_INUSE);
3668 set_head(remainder, remainder_size | PREV_INUSE);
3669 set_foot(remainder, remainder_size);
3670 check_malloced_chunk(victim, nb);
3671 return chunk2mem(victim);
3676 use_top:
3678 If large enough, split off the chunk bordering the end of memory
3679 (held in av->top). Note that this is in accord with the best-fit
3680 search rule. In effect, av->top is treated as larger (and thus
3681 less well fitting) than any other available chunk since it can
3682 be extended to be as large as necessary (up to system
3683 limitations).
3685 We require that av->top always exists (i.e., has size >=
3686 MINSIZE) after initialization, so if it would otherwise be
3687 exhausted by the current request, it is replenished. (The main
3688 reason for ensuring it exists is that we may need MINSIZE space
3689 to put in fenceposts in sysmalloc.)
3692 victim = av->top;
3693 size = chunksize(victim);
3695 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
3696 remainder_size = size - nb;
3697 remainder = chunk_at_offset(victim, nb);
3698 av->top = remainder;
3699 set_head(victim, nb | PREV_INUSE);
3700 set_head(remainder, remainder_size | PREV_INUSE);
3702 check_malloced_chunk(victim, nb);
3703 return chunk2mem(victim);
3707 If no space in top, relay to handle system-dependent cases
3709 return sYSMALLOc(nb, av);
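  /*
    Caller-level sketch (not part of the allocator) of the contract the routine
    above provides, assuming the default public wrapper names malloc/free map
    onto mALLOc/fREe: a successful request returns memory aligned to
    MALLOC_ALIGNMENT, and failure is reported by returning 0 after
    MALLOC_FAILURE_ACTION.

      #include <assert.h>
      #include <stddef.h>
      #include <stdlib.h>

      void example_malloc_contract(void)
      {
        void *p = malloc(200);
        if (p != 0) {
          // alignment guarantee relied on throughout this file
          assert(((size_t) p % MALLOC_ALIGNMENT) == 0);
          free(p);
        }
        // else: 0 is the only failure indication visible to the caller
      }
  */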
3713 ------------------------------ free ------------------------------
3716 #if __STD_C
3717 void fREe(Void_t* mem)
3718 #else
3719 void fREe(mem) Void_t* mem;
3720 #endif
3722 mstate av = get_malloc_state();
3724 mchunkptr p; /* chunk corresponding to mem */
3725 INTERNAL_SIZE_T size; /* its size */
3726 mfastbinptr* fb; /* associated fastbin */
3727 mchunkptr nextchunk; /* next contiguous chunk */
3728 INTERNAL_SIZE_T nextsize; /* its size */
3729 int nextinuse; /* true if nextchunk is used */
3730 INTERNAL_SIZE_T prevsize; /* size of previous contiguous chunk */
3731 mchunkptr bck; /* misc temp for linking */
3732 mchunkptr fwd; /* misc temp for linking */
3734 /* free(0) has no effect */
3735 if (mem != 0) {
3736 p = mem2chunk(mem);
3737 size = chunksize(p);
3739 check_inuse_chunk(p);
3742 If eligible, place chunk on a fastbin so it can be found
3743 and used quickly in malloc.
3746 if ((CHUNK_SIZE_T)(size) <= (CHUNK_SIZE_T)(av->max_fast)
3748 #if TRIM_FASTBINS
3750 If TRIM_FASTBINS set, don't place chunks
3751 bordering top into fastbins
3753 && (chunk_at_offset(p, size) != av->top)
3754 #endif
3757 set_fastchunks(av);
3758 fb = &(av->fastbins[fastbin_index(size)]);
3759 p->fd = *fb;
3760 *fb = p;
3764 Consolidate other non-mmapped chunks as they arrive.
3767 else if (!chunk_is_mmapped(p)) {
3768 set_anychunks(av);
3770 nextchunk = chunk_at_offset(p, size);
3771 nextsize = chunksize(nextchunk);
3773 /* consolidate backward */
3774 if (!prev_inuse(p)) {
3775 prevsize = p->prev_size;
3776 size += prevsize;
3777 p = chunk_at_offset(p, -((long) prevsize));
3778 unlink(p, bck, fwd);
3781 if (nextchunk != av->top) {
3782 /* get and clear inuse bit */
3783 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3784 set_head(nextchunk, nextsize);
3786 /* consolidate forward */
3787 if (!nextinuse) {
3788 unlink(nextchunk, bck, fwd);
3789 size += nextsize;
3793 Place the chunk in unsorted chunk list. Chunks are
3794 not placed into regular bins until after they have
3795 been given one chance to be used in malloc.
3798 bck = unsorted_chunks(av);
3799 fwd = bck->fd;
3800 p->bk = bck;
3801 p->fd = fwd;
3802 bck->fd = p;
3803 fwd->bk = p;
3805 set_head(p, size | PREV_INUSE);
3806 set_foot(p, size);
3808 check_free_chunk(p);
3812 If the chunk borders the current high end of memory,
3813 consolidate into top
3816 else {
3817 size += nextsize;
3818 set_head(p, size | PREV_INUSE);
3819 av->top = p;
3820 check_chunk(p);
3824 If freeing a large space, consolidate possibly-surrounding
3825 chunks. Then, if the total unused topmost memory exceeds trim
3826 threshold, ask malloc_trim to reduce top.
3828 Unless max_fast is 0, we don't know if there are fastbins
3829 bordering top, so we cannot tell for sure whether threshold
3830 has been reached unless fastbins are consolidated. But we
3831 don't want to consolidate on each free. As a compromise,
3832 consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
3833 is reached.
3836 if ((CHUNK_SIZE_T)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
3837 if (have_fastchunks(av))
3838 malloc_consolidate(av);
3840 #ifndef MORECORE_CANNOT_TRIM
3841 if ((CHUNK_SIZE_T)(chunksize(av->top)) >=
3842 (CHUNK_SIZE_T)(av->trim_threshold))
3843 sYSTRIm(av->top_pad, av);
3844 #endif
3849 If the chunk was allocated via mmap, release via munmap()
3850 Note that if HAVE_MMAP is false but chunk_is_mmapped is
3851 true, then user must have overwritten memory. There's nothing
3852 we can do to catch this error unless DEBUG is set, in which case
3853     check_inuse_chunk (above) will have triggered an error.
3856 else {
3857 #if HAVE_MMAP
3858 int ret;
3859 INTERNAL_SIZE_T offset = p->prev_size;
3860 av->n_mmaps--;
3861 av->mmapped_mem -= (size + offset);
3862 ret = munmap((char*)p - offset, size + offset);
3863 /* munmap returns non-zero on failure */
3864 assert(ret == 0);
3865 #endif
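  /*
    Caller-level sketch of the behavior documented above (default public names
    assumed): free(0) is explicitly a no-op, and any other argument must be a
    pointer previously obtained from this malloc family and not yet freed;
    violations are only detected when DEBUG checking is enabled.

      #include <stdlib.h>

      void example_free_contract(void)
      {
        char *s = malloc(32);
        free(s);     // small chunk: typically parked on a fastbin for reuse
        free(0);     // allowed, has no effect
      }
  */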
3871 ------------------------- malloc_consolidate -------------------------
3873 malloc_consolidate is a specialized version of free() that tears
3874 down chunks held in fastbins. Free itself cannot be used for this
3875 purpose since, among other things, it might place chunks back onto
3876 fastbins. So, instead, we need to use a minor variant of the same
3877 code.
3879 Also, because this routine needs to be called the first time through
3880 malloc anyway, it turns out to be the perfect place to trigger
3881 initialization code.
3884 #if __STD_C
3885 static void malloc_consolidate(mstate av)
3886 #else
3887 static void malloc_consolidate(av) mstate av;
3888 #endif
3890 mfastbinptr* fb; /* current fastbin being consolidated */
3891 mfastbinptr* maxfb; /* last fastbin (for loop control) */
3892 mchunkptr p; /* current chunk being consolidated */
3893 mchunkptr nextp; /* next chunk to consolidate */
3894 mchunkptr unsorted_bin; /* bin header */
3895 mchunkptr first_unsorted; /* chunk to link to */
3897 /* These have same use as in free() */
3898 mchunkptr nextchunk;
3899 INTERNAL_SIZE_T size;
3900 INTERNAL_SIZE_T nextsize;
3901 INTERNAL_SIZE_T prevsize;
3902 int nextinuse;
3903 mchunkptr bck;
3904 mchunkptr fwd;
3907 If max_fast is 0, we know that av hasn't
3908 yet been initialized, in which case do so below
3911 if (av->max_fast != 0) {
3912 clear_fastchunks(av);
3914 unsorted_bin = unsorted_chunks(av);
3917     Remove each chunk from fast bin and consolidate it, then place
3918     it in unsorted bin. Among other reasons for doing this,
3919 placing in unsorted bin avoids needing to calculate actual bins
3920 until malloc is sure that chunks aren't immediately going to be
3921 reused anyway.
3924 maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
3925 fb = &(av->fastbins[0]);
3926 do {
3927 if ((p = *fb) != 0) {
3928 *fb = 0;
3930 do {
3931 check_inuse_chunk(p);
3932 nextp = p->fd;
3934 /* Slightly streamlined version of consolidation code in free() */
3935 size = p->size & ~PREV_INUSE;
3936 nextchunk = chunk_at_offset(p, size);
3937 nextsize = chunksize(nextchunk);
3939 if (!prev_inuse(p)) {
3940 prevsize = p->prev_size;
3941 size += prevsize;
3942 p = chunk_at_offset(p, -((long) prevsize));
3943 unlink(p, bck, fwd);
3946 if (nextchunk != av->top) {
3947 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3948 set_head(nextchunk, nextsize);
3950 if (!nextinuse) {
3951 size += nextsize;
3952 unlink(nextchunk, bck, fwd);
3955 first_unsorted = unsorted_bin->fd;
3956 unsorted_bin->fd = p;
3957 first_unsorted->bk = p;
3959 set_head(p, size | PREV_INUSE);
3960 p->bk = unsorted_bin;
3961 p->fd = first_unsorted;
3962 set_foot(p, size);
3965 else {
3966 size += nextsize;
3967 set_head(p, size | PREV_INUSE);
3968 av->top = p;
3971 } while ((p = nextp) != 0);
3974 } while (fb++ != maxfb);
3976 else {
3977 malloc_init_state(av);
3978 check_malloc_state();
3983 ------------------------------ realloc ------------------------------
3987 #if __STD_C
3988 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3989 #else
3990 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3991 #endif
3993 mstate av = get_malloc_state();
3995 INTERNAL_SIZE_T nb; /* padded request size */
3997 mchunkptr oldp; /* chunk corresponding to oldmem */
3998 INTERNAL_SIZE_T oldsize; /* its size */
4000 mchunkptr newp; /* chunk to return */
4001 INTERNAL_SIZE_T newsize; /* its size */
4002 Void_t* newmem; /* corresponding user mem */
4004 mchunkptr next; /* next contiguous chunk after oldp */
4006 mchunkptr remainder; /* extra space at end of newp */
4007 CHUNK_SIZE_T remainder_size; /* its size */
4009 mchunkptr bck; /* misc temp for linking */
4010 mchunkptr fwd; /* misc temp for linking */
4012 CHUNK_SIZE_T copysize; /* bytes to copy */
4013 unsigned int ncopies; /* INTERNAL_SIZE_T words to copy */
4014 INTERNAL_SIZE_T* s; /* copy source */
4015 INTERNAL_SIZE_T* d; /* copy destination */
4018 #ifdef REALLOC_ZERO_BYTES_FREES
4019 if (bytes == 0) {
4020 fREe(oldmem);
4021 return 0;
4023 #endif
4025 /* realloc of null is supposed to be same as malloc */
4026 if (oldmem == 0)
4027 return mALLOc(bytes);
4029 checked_request2size(bytes, nb);
4031 oldp = mem2chunk(oldmem);
4032 oldsize = chunksize(oldp);
4034 check_inuse_chunk(oldp);
4036 if (!chunk_is_mmapped(oldp)) {
4038 if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb)) {
4039 /* already big enough; split below */
4040 newp = oldp;
4041 newsize = oldsize;
4044 else {
4045 next = chunk_at_offset(oldp, oldsize);
4047 /* Try to expand forward into top */
4048 if (next == av->top &&
4049 (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
4050 (CHUNK_SIZE_T)(nb + MINSIZE)) {
4051 set_head_size(oldp, nb);
4052 av->top = chunk_at_offset(oldp, nb);
4053 set_head(av->top, (newsize - nb) | PREV_INUSE);
4054 return chunk2mem(oldp);
4057 /* Try to expand forward into next chunk; split off remainder below */
4058 else if (next != av->top &&
4059 !inuse(next) &&
4060 (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
4061 (CHUNK_SIZE_T)(nb)) {
4062 newp = oldp;
4063 unlink(next, bck, fwd);
4066 /* allocate, copy, free */
4067 else {
4068 newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
4069 if (newmem == 0)
4070 return 0; /* propagate failure */
4072 newp = mem2chunk(newmem);
4073 newsize = chunksize(newp);
4076 Avoid copy if newp is next chunk after oldp.
4078 if (newp == next) {
4079 newsize += oldsize;
4080 newp = oldp;
4082 else {
4084 Unroll copy of <= 36 bytes (72 if 8byte sizes)
4085 We know that contents have an odd number of
4086 INTERNAL_SIZE_T-sized words; minimally 3.
4089 copysize = oldsize - SIZE_SZ;
4090 s = (INTERNAL_SIZE_T*)(oldmem);
4091 d = (INTERNAL_SIZE_T*)(newmem);
4092 ncopies = copysize / sizeof (INTERNAL_SIZE_T);
4093 assert(ncopies >= 3);
4095 if (ncopies > 9)
4096 MALLOC_COPY(d, s, copysize);
4098 else {
4099 *(d+0) = *(s+0);
4100 *(d+1) = *(s+1);
4101 *(d+2) = *(s+2);
4102 if (ncopies > 4) {
4103 *(d+3) = *(s+3);
4104 *(d+4) = *(s+4);
4105 if (ncopies > 6) {
4106 *(d+5) = *(s+5);
4107 *(d+6) = *(s+6);
4108 if (ncopies > 8) {
4109 *(d+7) = *(s+7);
4110 *(d+8) = *(s+8);
4116 fREe(oldmem);
4117 check_inuse_chunk(newp);
4118 return chunk2mem(newp);
4123 /* If possible, free extra space in old or extended chunk */
4125 assert((CHUNK_SIZE_T)(newsize) >= (CHUNK_SIZE_T)(nb));
4127 remainder_size = newsize - nb;
4129 if (remainder_size < MINSIZE) { /* not enough extra to split off */
4130 set_head_size(newp, newsize);
4131 set_inuse_bit_at_offset(newp, newsize);
4133 else { /* split remainder */
4134 remainder = chunk_at_offset(newp, nb);
4135 set_head_size(newp, nb);
4136 set_head(remainder, remainder_size | PREV_INUSE);
4137 /* Mark remainder as inuse so free() won't complain */
4138 set_inuse_bit_at_offset(remainder, remainder_size);
4139 fREe(chunk2mem(remainder));
4142 check_inuse_chunk(newp);
4143 return chunk2mem(newp);
4147 Handle mmap cases
4150 else {
4151 #if HAVE_MMAP
4153 # if HAVE_MREMAP
4154 INTERNAL_SIZE_T offset = oldp->prev_size;
4155 size_t pagemask = av->pagesize - 1;
4156 char *cp;
4157 CHUNK_SIZE_T sum;
4159 /* Note the extra SIZE_SZ overhead */
4160 newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;
4162 /* don't need to remap if still within same page */
4163 if (oldsize == newsize - offset)
4164 return oldmem;
4166 cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);
4168 if (cp != (char*)MORECORE_FAILURE) {
4170 newp = (mchunkptr)(cp + offset);
4171 set_head(newp, (newsize - offset)|IS_MMAPPED);
4173 assert(aligned_OK(chunk2mem(newp)));
4174 assert((newp->prev_size == offset));
4176 /* update statistics */
4177 sum = av->mmapped_mem += newsize - oldsize;
4178 if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
4179 av->max_mmapped_mem = sum;
4180 sum += av->sbrked_mem;
4181 if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
4182 av->max_total_mem = sum;
4184 return chunk2mem(newp);
4186 # endif
4188 /* Note the extra SIZE_SZ overhead. */
4189 if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb + SIZE_SZ))
4190 newmem = oldmem; /* do nothing */
4191 else {
4192 /* Must alloc, copy, free. */
4193 newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
4194 if (newmem != 0) {
4195 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
4196 fREe(oldmem);
4199 return newmem;
4201 #else
4202 /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
4203 check_malloc_state();
4204 MALLOC_FAILURE_ACTION;
4205 return 0;
4206 #endif
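  /*
    Caller-level sketch of the realloc cases handled above (default public
    names assumed): a null old pointer degenerates to malloc, a zero size
    frees only when REALLOC_ZERO_BYTES_FREES was defined at compile time, and
    otherwise the old contents are preserved up to the smaller of the old and
    new sizes, whether the chunk is extended in place or moved and copied.

      #include <stdlib.h>
      #include <string.h>

      void example_realloc(void)
      {
        char *p = realloc(0, 16);          // same as malloc(16)
        char *q;
        if (p == 0)
          return;
        memset(p, 'x', 16);
        q = realloc(p, 4096);              // may grow in place or move+copy
        if (q == 0) {
          free(p);                         // p is still valid on failure
          return;
        }
        // the first 16 bytes of q still hold 'x'
        free(q);
      }
  */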
4211 ------------------------------ memalign ------------------------------
4214 #if __STD_C
4215 Void_t* mEMALIGn(size_t alignment, size_t bytes)
4216 #else
4217 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
4218 #endif
4220 INTERNAL_SIZE_T nb; /* padded request size */
4221 char* m; /* memory returned by malloc call */
4222 mchunkptr p; /* corresponding chunk */
4223 char* brk; /* alignment point within p */
4224 mchunkptr newp; /* chunk to return */
4225 INTERNAL_SIZE_T newsize; /* its size */
4226 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
4227 mchunkptr remainder; /* spare room at end to split off */
4228 CHUNK_SIZE_T remainder_size; /* its size */
4229 INTERNAL_SIZE_T size;
4231 /* If need less alignment than we give anyway, just relay to malloc */
4233 if (alignment <= MALLOC_ALIGNMENT)
4234 return mALLOc(bytes);
4236 /* Otherwise, ensure that it is at least a minimum chunk size */
4238 if (alignment < MINSIZE)
4239 alignment = MINSIZE;
4241 /* Make sure alignment is power of 2 (in case MINSIZE is not). */
4242 if ((alignment & (alignment - 1)) != 0) {
4243 size_t a = MALLOC_ALIGNMENT * 2;
4244 while ((CHUNK_SIZE_T)a < (CHUNK_SIZE_T)alignment)
4245 a <<= 1;
4246 alignment = a;
4249 checked_request2size(bytes, nb);
4252 Strategy: find a spot within that chunk that meets the alignment
4253 request, and then possibly free the leading and trailing space.
4257 /* Call malloc with worst case padding to hit alignment. */
4259 m = (char*)(mALLOc(nb + alignment + MINSIZE));
4261 if (m == 0)
4262 return 0; /* propagate failure */
4264 p = mem2chunk(m);
4266 if ((((PTR_UINT)(m)) % alignment) != 0) { /* misaligned */
4269 Find an aligned spot inside chunk. Since we need to give back
4270 leading space in a chunk of at least MINSIZE, if the first
4271 calculation places us at a spot with less than MINSIZE leader,
4272 we can move to the next aligned spot -- we've allocated enough
4273 total room so that this is always possible.
4276 brk = (char*)mem2chunk((PTR_UINT)(((PTR_UINT)(m + alignment - 1)) &
4277 -((signed long) alignment)));
4278 if ((CHUNK_SIZE_T)(brk - (char*)(p)) < MINSIZE)
4279 brk += alignment;
4281 newp = (mchunkptr)brk;
4282 leadsize = brk - (char*)(p);
4283 newsize = chunksize(p) - leadsize;
4285 /* For mmapped chunks, just adjust offset */
4286 if (chunk_is_mmapped(p)) {
4287 newp->prev_size = p->prev_size + leadsize;
4288 set_head(newp, newsize|IS_MMAPPED);
4289 return chunk2mem(newp);
4292 /* Otherwise, give back leader, use the rest */
4293 set_head(newp, newsize | PREV_INUSE);
4294 set_inuse_bit_at_offset(newp, newsize);
4295 set_head_size(p, leadsize);
4296 fREe(chunk2mem(p));
4297 p = newp;
4299 assert (newsize >= nb &&
4300 (((PTR_UINT)(chunk2mem(p))) % alignment) == 0);
4303 /* Also give back spare room at the end */
4304 if (!chunk_is_mmapped(p)) {
4305 size = chunksize(p);
4306 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
4307 remainder_size = size - nb;
4308 remainder = chunk_at_offset(p, nb);
4309 set_head(remainder, remainder_size | PREV_INUSE);
4310 set_head_size(p, nb);
4311 fREe(chunk2mem(remainder));
4315 check_inuse_chunk(p);
4316 return chunk2mem(p);
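  /*
    Usage sketch for the alignment handling above (default public name
    memalign assumed): alignments no larger than MALLOC_ALIGNMENT simply relay
    to malloc, and a non-power-of-two alignment is silently rounded up to the
    next power of two, so a request for 48-byte alignment is served with (at
    least) 64-byte alignment given the usual MALLOC_ALIGNMENT of 8.

      #include <assert.h>
      #include <stddef.h>

      void example_memalign(void)
      {
        void *p = memalign(48, 100);   // alignment rounded up to 64
        if (p != 0) {
          assert(((size_t) p % 64) == 0);
          free(p);
        }
      }
  */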
4320 ------------------------------ calloc ------------------------------
4323 #if __STD_C
4324 Void_t* cALLOc(size_t n_elements, size_t elem_size)
4325 #else
4326 Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
4327 #endif
4329 mchunkptr p;
4330 CHUNK_SIZE_T clearsize;
4331 CHUNK_SIZE_T nclears;
4332 INTERNAL_SIZE_T* d;
4334 Void_t* mem = mALLOc(n_elements * elem_size);
4336 if (mem != 0) {
4337 p = mem2chunk(mem);
4339 if (!chunk_is_mmapped(p))
4342 Unroll clear of <= 36 bytes (72 if 8byte sizes)
4343 We know that contents have an odd number of
4344 INTERNAL_SIZE_T-sized words; minimally 3.
4347 d = (INTERNAL_SIZE_T*)mem;
4348 clearsize = chunksize(p) - SIZE_SZ;
4349 nclears = clearsize / sizeof (INTERNAL_SIZE_T);
4350 assert(nclears >= 3);
4352 if (nclears > 9)
4353 MALLOC_ZERO(d, clearsize);
4355 else {
4356 *(d+0) = 0;
4357 *(d+1) = 0;
4358 *(d+2) = 0;
4359 if (nclears > 4) {
4360 *(d+3) = 0;
4361 *(d+4) = 0;
4362 if (nclears > 6) {
4363 *(d+5) = 0;
4364 *(d+6) = 0;
4365 if (nclears > 8) {
4366 *(d+7) = 0;
4367 *(d+8) = 0;
4373 #if ! MMAP_CLEARS
4374 else
4376 d = (INTERNAL_SIZE_T*)mem;
4378 Note the additional SIZE_SZ
4380 clearsize = chunksize(p) - 2*SIZE_SZ;
4381 MALLOC_ZERO(d, clearsize);
4383 #endif
4385 return mem;
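  /*
    Usage sketch for calloc (default public name assumed): the returned block
    arrives fully zeroed, either via the unrolled loop above or, for mmapped
    chunks, via MMAP_CLEARS.  Note that n_elements * elem_size is handed to
    malloc unchecked, so guarding against size_t overflow of that product is
    the caller's responsibility.

      #include <assert.h>
      #include <stdlib.h>

      void example_calloc(void)
      {
        int *v = calloc(100, sizeof (int));
        if (v != 0) {
          assert(v[0] == 0 && v[99] == 0);   // contents are zeroed
          free(v);
        }
      }
  */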
4389 ------------------------------ cfree ------------------------------
4392 #if __STD_C
4393 void cFREe(Void_t *mem)
4394 #else
4395 void cFREe(mem) Void_t *mem;
4396 #endif
4398 fREe(mem);
4402 ------------------------- independent_calloc -------------------------
4405 #if __STD_C
4406 Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
4407 #else
4408 Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements;
4409 size_t elem_size;
4410 Void_t* chunks[];
4411 #endif
4413 size_t sz = elem_size; /* serves as 1-element array */
4414 /* opts arg of 3 means all elements are same size, and should be cleared */
4415 return iALLOc(n_elements, &sz, 3, chunks);
4419 ------------------------- independent_comalloc -------------------------
4422 #if __STD_C
4423 Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
4424 #else
4425 Void_t** iCOMALLOc(n_elements, sizes, chunks)
4426 size_t n_elements;
4427 size_t sizes[];
4428 Void_t* chunks[];
4429 #endif
4431 return iALLOc(n_elements, sizes, 0, chunks);
4436 ------------------------------ ialloc ------------------------------
4437 ialloc provides common support for independent_X routines, handling all of
4438 the combinations that can result.
4440 The opts arg has:
4441 bit 0 set if all elements are same size (using sizes[0])
4442 bit 1 set if elements should be zeroed
4446 #if __STD_C
4447 static Void_t** iALLOc(size_t n_elements,
4448 size_t* sizes,
4449 int opts,
4450 Void_t* chunks[])
4451 #else
4452 static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements;
4453 size_t* sizes;
4454 int opts;
4455 Void_t* chunks[];
4456 #endif
4458 mstate av = get_malloc_state();
4459 INTERNAL_SIZE_T element_size; /* chunksize of each element, if all same */
4460 INTERNAL_SIZE_T contents_size; /* total size of elements */
4461 INTERNAL_SIZE_T array_size; /* request size of pointer array */
4462 Void_t* mem; /* malloced aggregate space */
4463 mchunkptr p; /* corresponding chunk */
4464 INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
4465 Void_t** marray; /* either "chunks" or malloced ptr array */
4466 mchunkptr array_chunk; /* chunk for malloced ptr array */
4467 int mmx; /* to disable mmap */
4468 INTERNAL_SIZE_T size;
4469 size_t i;
4471 /* Ensure initialization */
4472 if (av->max_fast == 0)
4473 malloc_consolidate(av);
4475 /* compute array length, if needed */
4476 if (chunks != 0) {
4477 if (n_elements == 0)
4478 return chunks; /* nothing to do */
4479 marray = chunks;
4480 array_size = 0;
4482 else {
4483 /* if empty req, must still return chunk representing empty array */
4484 if (n_elements == 0)
4485 return (Void_t**) mALLOc(0);
4486 marray = 0;
4487 array_size = request2size(n_elements * (sizeof (Void_t*)));
4490 /* compute total element size */
4491 if (opts & 0x1) { /* all-same-size */
4492 element_size = request2size(*sizes);
4493 contents_size = n_elements * element_size;
4495 else { /* add up all the sizes */
4496 element_size = 0;
4497 contents_size = 0;
4498 for (i = 0; i != n_elements; ++i)
4499 contents_size += request2size(sizes[i]);
4502 /* subtract out alignment bytes from total to minimize overallocation */
4503 size = contents_size + array_size - MALLOC_ALIGN_MASK;
4506 Allocate the aggregate chunk.
4507 But first disable mmap so malloc won't use it, since
4508 we would not be able to later free/realloc space internal
4509 to a segregated mmap region.
4511 mmx = av->n_mmaps_max; /* disable mmap */
4512 av->n_mmaps_max = 0;
4513 mem = mALLOc(size);
4514 av->n_mmaps_max = mmx; /* reset mmap */
4515 if (mem == 0)
4516 return 0;
4518 p = mem2chunk(mem);
4519 assert(!chunk_is_mmapped(p));
4520 remainder_size = chunksize(p);
4522 if (opts & 0x2) { /* optionally clear the elements */
4523 MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
4526 /* If not provided, allocate the pointer array as final part of chunk */
4527 if (marray == 0) {
4528 array_chunk = chunk_at_offset(p, contents_size);
4529 marray = (Void_t**) (chunk2mem(array_chunk));
4530 set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
4531 remainder_size = contents_size;
4534 /* split out elements */
4535 for (i = 0; ; ++i) {
4536 marray[i] = chunk2mem(p);
4537 if (i != n_elements-1) {
4538 if (element_size != 0)
4539 size = element_size;
4540 else
4541 size = request2size(sizes[i]);
4542 remainder_size -= size;
4543 set_head(p, size | PREV_INUSE);
4544 p = chunk_at_offset(p, size);
4546 else { /* the final element absorbs any overallocation slop */
4547 set_head(p, remainder_size | PREV_INUSE);
4548 break;
4552 #if DEBUG
4553 if (marray != chunks) {
4554 /* final element must have exactly exhausted chunk */
4555 if (element_size != 0)
4556 assert(remainder_size == element_size);
4557 else
4558 assert(remainder_size == request2size(sizes[i]));
4559 check_inuse_chunk(mem2chunk(marray));
4562 for (i = 0; i != n_elements; ++i)
4563 check_inuse_chunk(mem2chunk(marray[i]));
4564 #endif
4566 return marray;
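  /*
    Usage sketch for the independent_X entry points serviced here (default
    public names independent_calloc / independent_comalloc assumed): one
    underlying chunk is carved into n elements, each with a proper header, and
    when no caller-supplied array is given the pointer array itself is carved
    from the tail of that chunk (see above), so it is freed like any other
    block.

      #include <stddef.h>
      #include <stdlib.h>

      void example_independent_calloc(size_t n)
      {
        void **bufs = independent_calloc(n, 64, 0);  // n zeroed 64-byte blocks
        size_t i;
        if (bufs == 0)
          return;
        for (i = 0; i < n; ++i)
          free(bufs[i]);     // each element is an ordinary chunk
        free(bufs);          // the pointer array carved at the tail
      }
  */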
4571 ------------------------------ valloc ------------------------------
4574 #if __STD_C
4575 Void_t* vALLOc(size_t bytes)
4576 #else
4577 Void_t* vALLOc(bytes) size_t bytes;
4578 #endif
4580 /* Ensure initialization */
4581 mstate av = get_malloc_state();
4582 if (av->max_fast == 0)
4583 malloc_consolidate(av);
4584 return mEMALIGn(av->pagesize, bytes);
4588 ------------------------------ pvalloc ------------------------------
4592 #if __STD_C
4593 Void_t* pVALLOc(size_t bytes)
4594 #else
4595 Void_t* pVALLOc(bytes) size_t bytes;
4596 #endif
4598 mstate av = get_malloc_state();
4599 size_t pagesz;
4601 /* Ensure initialization */
4602 if (av->max_fast == 0)
4603 malloc_consolidate(av);
4604 pagesz = av->pagesize;
4605 return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
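  /*
    Worked example of the rounding on the line above: with a 4096-byte page,
    pvalloc(5000) becomes memalign(4096, 8192), i.e. the size is rounded up to
    a whole number of pages and the result is page-aligned.  (valloc, just
    above, page-aligns without rounding the size up.)

      (5000 + 4096 - 1) & ~(4096 - 1)  ==  9095 & ~4095  ==  8192
  */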
4610 ------------------------------ malloc_trim ------------------------------
4613 #if __STD_C
4614 int mTRIm(size_t pad)
4615 #else
4616 int mTRIm(pad) size_t pad;
4617 #endif
4619 mstate av = get_malloc_state();
4620 /* Ensure initialization/consolidation */
4621 malloc_consolidate(av);
4623 #ifndef MORECORE_CANNOT_TRIM
4624 return sYSTRIm(pad, av);
4625 #else
4626 return 0;
4627 #endif
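  /*
    Usage sketch (default public name malloc_trim assumed): after a burst of
    frees, a caller can ask for unused memory at the top of the heap to be
    returned to the system; pad gives the amount of slack to keep.  With
    MORECORE_CANNOT_TRIM defined this is a no-op returning 0.

      #include <stdlib.h>

      void example_trim(void)
      {
        char *big = malloc(8 * 1024 * 1024);
        free(big);
        malloc_trim(0);   // 0 pad: release as much top space as possible
      }
  */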
4632 ------------------------- malloc_usable_size -------------------------
4635 #if __STD_C
4636 size_t mUSABLe(Void_t* mem)
4637 #else
4638 size_t mUSABLe(mem) Void_t* mem;
4639 #endif
4641 mchunkptr p;
4642 if (mem != 0) {
4643 p = mem2chunk(mem);
4644 if (chunk_is_mmapped(p))
4645 return chunksize(p) - 2*SIZE_SZ;
4646 else if (inuse(p))
4647 return chunksize(p) - SIZE_SZ;
4649 return 0;
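  /*
    Usage sketch (default public name malloc_usable_size assumed): the value
    reported here is the number of bytes the caller may actually use in the
    chunk, which, because of padding and chunk granularity, can exceed what
    was originally requested.

      #include <assert.h>
      #include <stdlib.h>

      void example_usable_size(void)
      {
        void *p = malloc(100);
        if (p != 0) {
          assert(malloc_usable_size(p) >= 100);
          free(p);
        }
      }
  */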
4653 ------------------------------ mallinfo ------------------------------
4656 struct mallinfo mALLINFo()
4658 mstate av = get_malloc_state();
4659 struct mallinfo mi;
4660 int i;
4661 mbinptr b;
4662 mchunkptr p;
4663 INTERNAL_SIZE_T avail;
4664 INTERNAL_SIZE_T fastavail;
4665 int nblocks;
4666 int nfastblocks;
4668 /* Ensure initialization */
4669 if (av->top == 0)
4670 malloc_consolidate(av);
4672 check_malloc_state();
4674 /* Account for top */
4675 avail = chunksize(av->top);
4676 nblocks = 1; /* top always exists */
4678 /* traverse fastbins */
4679 nfastblocks = 0;
4680 fastavail = 0;
4682 for (i = 0; i < NFASTBINS; ++i) {
4683 for (p = av->fastbins[i]; p != 0; p = p->fd) {
4684 ++nfastblocks;
4685 fastavail += chunksize(p);
4689 avail += fastavail;
4691 /* traverse regular bins */
4692 for (i = 1; i < NBINS; ++i) {
4693 b = bin_at(av, i);
4694 for (p = last(b); p != b; p = p->bk) {
4695 ++nblocks;
4696 avail += chunksize(p);
4700 mi.smblks = nfastblocks;
4701 mi.ordblks = nblocks;
4702 mi.fordblks = avail;
4703 mi.uordblks = av->sbrked_mem - avail;
4704 mi.arena = av->sbrked_mem;
4705 mi.hblks = av->n_mmaps;
4706 mi.hblkhd = av->mmapped_mem;
4707 mi.fsmblks = fastavail;
4708 mi.keepcost = chunksize(av->top);
4709 mi.usmblks = av->max_total_mem;
4710 return mi;
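  /*
    Reporting sketch built on the field assignments just above (default public
    name mallinfo assumed): arena is total sbrked space, hblkhd total mmapped
    space, uordblks the sbrked bytes currently in use, fordblks the sbrked
    bytes sitting free in bins/fastbins/top, and keepcost the top space that
    malloc_trim could release.

      #include <stdio.h>

      void example_mallinfo(void)
      {
        struct mallinfo mi = mallinfo();
        printf("sbrked    : %d\n", mi.arena);
        printf("mmapped   : %d\n", mi.hblkhd);
        printf("in use    : %d\n", mi.uordblks);
        printf("free      : %d\n", mi.fordblks);
        printf("trimmable : %d\n", mi.keepcost);
      }
  */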
4714 ------------------------------ malloc_stats ------------------------------
4717 void mSTATs()
4719 struct mallinfo mi = mALLINFo();
4721 #ifdef WIN32
4723 CHUNK_SIZE_T free, reserved, committed;
4724 vminfo (&free, &reserved, &committed);
4725 PIO_eprintf(NULL, "free bytes = %10lu\n",
4726 free);
4727 PIO_eprintf(NULL, "reserved bytes = %10lu\n",
4728 reserved);
4729 PIO_eprintf(NULL, "committed bytes = %10lu\n",
4730 committed);
4732 #endif
4735 PIO_eprintf(NULL, "max system bytes = %10lu\n",
4736 (CHUNK_SIZE_T)(mi.usmblks));
4737 PIO_eprintf(NULL, "system bytes = %10lu\n",
4738 (CHUNK_SIZE_T)(mi.arena + mi.hblkhd));
4739 PIO_eprintf(NULL, "in use bytes = %10lu\n",
4740 (CHUNK_SIZE_T)(mi.uordblks + mi.hblkhd));
4742 #ifdef WIN32
4744 CHUNK_SIZE_T kernel, user;
4745 if (cpuinfo (TRUE, &kernel, &user)) {
4746 PIO_eprintf(NULL, "kernel ms = %10lu\n",
4747 kernel);
4748 PIO_eprintf(NULL, "user ms = %10lu\n",
4749 user);
4752 #endif
4757 ------------------------------ mallopt ------------------------------
4760 #if __STD_C
4761 int mALLOPt(int param_number, int value)
4762 #else
4763 int mALLOPt(param_number, value) int param_number; int value;
4764 #endif
4766 mstate av = get_malloc_state();
4767 /* Ensure initialization/consolidation */
4768 malloc_consolidate(av);
4770 switch (param_number) {
4771 case M_MXFAST:
4772 if (value >= 0 && value <= MAX_FAST_SIZE) {
4773 set_max_fast(av, value);
4774 return 1;
4776 else
4777 return 0;
4779 case M_TRIM_THRESHOLD:
4780 av->trim_threshold = value;
4781 return 1;
4783 case M_TOP_PAD:
4784 av->top_pad = value;
4785 return 1;
4787 case M_MMAP_THRESHOLD:
4788 av->mmap_threshold = value;
4789 return 1;
4791 case M_MMAP_MAX:
4792 #if !HAVE_MMAP
4793 if (value != 0)
4794 return 0;
4795 #endif
4796 av->n_mmaps_max = value;
4797 return 1;
4799 default:
4800 return 0;
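  /*
    Tuning sketch using the parameters accepted above (default public name
    mallopt assumed); each call returns 1 when the value is accepted and 0
    otherwise:

      void example_mallopt(void)
      {
        mallopt(M_MXFAST, 0);                // 0 is meant to disable fastbin caching
        mallopt(M_TRIM_THRESHOLD, 1 << 20);  // trim top when >= 1MB is unused
        mallopt(M_TOP_PAD, 0);               // no extra padding on sbrk requests
        mallopt(M_MMAP_MAX, 0);              // never satisfy requests via mmap
      }
  */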
4806 -------------------- Alternative MORECORE functions --------------------
4811 General Requirements for MORECORE.
4813 The MORECORE function must have the following properties:
4815 If MORECORE_CONTIGUOUS is false:
4817 * MORECORE must allocate in multiples of pagesize. It will
4818 only be called with arguments that are multiples of pagesize.
4820 * MORECORE(0) must return an address that is at least
4821 MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)
4823 else (i.e. If MORECORE_CONTIGUOUS is true):
4825 * Consecutive calls to MORECORE with positive arguments
4826 return increasing addresses, indicating that space has been
4827 contiguously extended.
4829 * MORECORE need not allocate in multiples of pagesize.
4830 Calls to MORECORE need not have args of multiples of pagesize.
4832 * MORECORE need not page-align.
4834 In either case:
4836 * MORECORE may allocate more memory than requested. (Or even less,
4837 but this will generally result in a malloc failure.)
4839 * MORECORE must not allocate memory when given argument zero, but
4840 instead return one past the end address of memory from previous
4841 nonzero call. This malloc does NOT call MORECORE(0)
4842 until at least one call with positive arguments is made, so
4843 the initial value returned is not important.
4845 * Even though consecutive calls to MORECORE need not return contiguous
4846 addresses, it must be OK for malloc'ed chunks to span multiple
4847 regions in those cases where they do happen to be contiguous.
4849 * MORECORE need not handle negative arguments -- it may instead
4850 just return MORECORE_FAILURE when given negative arguments.
4851 Negative arguments are always multiples of pagesize. MORECORE
4852 must not misinterpret negative args as large positive unsigned
4853 args. You can suppress all such calls from even occurring by defining
4854   MORECORE_CANNOT_TRIM.
4856 There is some variation across systems about the type of the
4857 argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
4858 actually be size_t, because sbrk supports negative args, so it is
4859 normally the signed type of the same width as size_t (sometimes
4860 declared as "intptr_t", and sometimes "ptrdiff_t"). It doesn't much
4861 matter though. Internally, we use "long" as arguments, which should
4862 work across all reasonable possibilities.
4864 Additionally, if MORECORE ever returns failure for a positive
4865 request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
4866 system allocator. This is a useful backup strategy for systems with
4867 holes in address spaces -- in this case sbrk cannot contiguously
4868 expand the heap, but mmap may be able to map noncontiguous space.
4870 If you'd like mmap to ALWAYS be used, you can define MORECORE to be
4871 a function that always returns MORECORE_FAILURE.
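  Building on that note, a (hypothetical) pair of definitions such as the
  following makes every MORECORE call fail, so that with HAVE_MMAP all system
  memory comes from mmap rather than sbrk:

  static void *always_fail_morecore(long size)
  {
    (void) size;                       // unused
    return (void *) MORECORE_FAILURE;
  }

  #define MORECORE always_fail_morecore
  #define MORECORE_CONTIGUOUS 0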
4873 Malloc only has limited ability to detect failures of MORECORE
4874 to supply contiguous space when it says it can. In particular,
4875 multithreaded programs that do not use locks may result in
4876   race conditions across calls to MORECORE that result in gaps
4877 that cannot be detected as such, and subsequent corruption.
4879 If you are using this malloc with something other than sbrk (or its
4880 emulation) to supply memory regions, you probably want to set
4881 MORECORE_CONTIGUOUS as false. As an example, here is a custom
4882 allocator kindly contributed for pre-OSX macOS. It uses virtually
4883 but not necessarily physically contiguous non-paged memory (locked
4884 in, present and won't get swapped out). You can use it by
4885 uncommenting this section, adding some #includes, and setting up the
4886 appropriate defines above:
4888 #define MORECORE osMoreCore
4889 #define MORECORE_CONTIGUOUS 0
4891 There is also a shutdown routine that should somehow be called for
4892 cleanup upon program exit.
4894 #define MAX_POOL_ENTRIES 100
4895 #define MINIMUM_MORECORE_SIZE (64 * 1024)
4896 static int next_os_pool;
4897 void *our_os_pools[MAX_POOL_ENTRIES];
4899 void *osMoreCore(int size)
4901 void *ptr = 0;
4902 static void *sbrk_top = 0;
4904 if (size > 0)
4906 if (size < MINIMUM_MORECORE_SIZE)
4907 size = MINIMUM_MORECORE_SIZE;
4908 if (CurrentExecutionLevel() == kTaskLevel)
4909 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4910 if (ptr == 0)
4912 return (void *) MORECORE_FAILURE;
4914       // save ptrs so they can be freed during cleanup
4915 our_os_pools[next_os_pool] = ptr;
4916 next_os_pool++;
4917 ptr = (void *) ((((CHUNK_SIZE_T) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4918 sbrk_top = (char *) ptr + size;
4919 return ptr;
4921 else if (size < 0)
4923       // we don't currently support shrink behavior
4924 return (void *) MORECORE_FAILURE;
4926 else
4928 return sbrk_top;
4932   // cleanup any allocated memory pools
4933   // called as last thing before shutting down driver
4935 void osCleanupMem(void)
4937 void **ptr;
4939 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4940 if (*ptr)
4942 PoolDeallocate(*ptr);
4943 *ptr = 0;
4951 --------------------------------------------------------------
4953 Emulation of sbrk for win32.
4954 Donated by J. Walter <Walter@GeNeSys-e.de>.
4955 For additional information about this code, and malloc on Win32, see
4956 http://www.genesys-e.de/jwalter/
4960 #ifdef WIN32
4962 # ifdef _DEBUG
4963 /* #define TRACE */
4964 # endif
4966 /* Support for USE_MALLOC_LOCK */
4967 # ifdef USE_MALLOC_LOCK
4969 /* Wait for spin lock */
4970 static int slwait (int *sl) {
4971 while (InterlockedCompareExchange ((void **) sl,
4972 (void *) 1,
4973 (void *) 0) != 0)
4974 Sleep (0);
4975 return 0;
4978 /* Release spin lock */
4979 static int slrelease (int *sl) {
4980 InterlockedExchange (sl, 0);
4981 return 0;
4984 # ifdef NEEDED
4985 /* Spin lock for emulation code */
4986 static int g_sl;
4987 # endif
4989 # endif /* USE_MALLOC_LOCK */
4991 /* getpagesize for windows */
4992 static long getpagesize (void) {
4993 static long g_pagesize = 0;
4994 if (! g_pagesize) {
4995 SYSTEM_INFO system_info;
4996 GetSystemInfo (&system_info);
4997 g_pagesize = system_info.dwPageSize;
4999 return g_pagesize;
5001 static long getregionsize (void) {
5002 static long g_regionsize = 0;
5003 if (! g_regionsize) {
5004 SYSTEM_INFO system_info;
5005 GetSystemInfo (&system_info);
5006 g_regionsize = system_info.dwAllocationGranularity;
5008 return g_regionsize;
5011 /* A region list entry */
5012 typedef struct _region_list_entry {
5013 void *top_allocated;
5014 void *top_committed;
5015 void *top_reserved;
5016 long reserve_size;
5017 struct _region_list_entry *previous;
5018 } region_list_entry;
5020 /* Allocate and link a region entry in the region list */
5021 static int region_list_append (region_list_entry **last,
5022 void *base_reserved, long reserve_size) {
5023 region_list_entry *next = HeapAlloc (GetProcessHeap (), 0,
5024 sizeof (region_list_entry));
5025 if (! next)
5026 return FALSE;
5027 next->top_allocated = (char *) base_reserved;
5028 next->top_committed = (char *) base_reserved;
5029 next->top_reserved = (char *) base_reserved + reserve_size;
5030 next->reserve_size = reserve_size;
5031 next->previous = *last;
5032 *last = next;
5033 return TRUE;
5035 /* Free and unlink the last region entry from the region list */
5036 static int region_list_remove (region_list_entry **last) {
5037 region_list_entry *previous = (*last)->previous;
5038 if (! HeapFree (GetProcessHeap (), sizeof (region_list_entry), *last))
5039 return FALSE;
5040 *last = previous;
5041 return TRUE;
5044 # define CEIL(size,to) (((size)+(to)-1)&~((to)-1))
5045 # define FLOOR(size,to) ((size)&~((to)-1))
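/*
  Worked example of the two macros above (both assume "to" is a power of
  two): CEIL(5000, 4096) == (5000 + 4095) & ~4095 == 8192, while
  FLOOR(5000, 4096) == 5000 & ~4095 == 4096.
*/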
5047 # define SBRK_SCALE 0
5048 /* #define SBRK_SCALE 1 */
5049 /* #define SBRK_SCALE 2 */
5050 /* #define SBRK_SCALE 4 */
5052 /* sbrk for windows */
5053 static void *sbrk (long size) {
5054 static long g_pagesize, g_my_pagesize;
5055 static long g_regionsize, g_my_regionsize;
5056 static region_list_entry *g_last;
5057 void *result = (void *) MORECORE_FAILURE;
5058 # ifdef TRACE
5059 PIO_printf (NULL, "sbrk %d\n", size);
5060 # endif
5061 # if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5062 /* Wait for spin lock */
5063 slwait (&g_sl);
5064 # endif
5065 /* First time initialization */
5066 if (! g_pagesize) {
5067 g_pagesize = getpagesize ();
5068 g_my_pagesize = g_pagesize << SBRK_SCALE;
5070 if (! g_regionsize) {
5071 g_regionsize = getregionsize ();
5072 g_my_regionsize = g_regionsize << SBRK_SCALE;
5074 if (! g_last) {
5075 if (! region_list_append (&g_last, 0, 0))
5076 goto sbrk_exit;
5078 /* Assert invariants */
5079 assert (g_last);
5080 assert ((char *) g_last->top_reserved - g_last->reserve_size
5081 <= (char *) g_last->top_allocated &&
5082 g_last->top_allocated <= g_last->top_committed);
5083 assert ((char *) g_last->top_reserved - g_last->reserve_size
5084 <= (char *) g_last->top_committed &&
5085 g_last->top_committed <= g_last->top_reserved &&
5086 (unsigned) g_last->top_committed % g_pagesize == 0);
5087 assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
5088 assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
5089 /* Allocation requested? */
5090 if (size >= 0) {
5091 /* Allocation size is the requested size */
5092 long allocate_size = size;
5093 /* Compute the size to commit */
5094 long to_commit = (char *) g_last->top_allocated + allocate_size -
5095 (char *) g_last->top_committed;
5096 /* Do we reach the commit limit? */
5097 if (to_commit > 0) {
5098 /* Round size to commit */
5099 long commit_size = CEIL (to_commit, g_my_pagesize);
5100 /* Compute the size to reserve */
5101 long to_reserve = (char *) g_last->top_committed + commit_size -
5102 (char *) g_last->top_reserved;
5103 /* Do we reach the reserve limit? */
5104 if (to_reserve > 0) {
5105 /* Compute the remaining size to commit in the current region */
5106 long remaining_commit_size = (char *) g_last->top_reserved -
5107 (char *) g_last->top_committed;
5108 if (remaining_commit_size > 0) {
5109 /* Assert preconditions */
5110 assert ((unsigned) g_last->top_committed % g_pagesize == 0);
5111 assert (0 < remaining_commit_size &&
5112 remaining_commit_size % g_pagesize == 0); {
5113 /* Commit this */
5114 void *base_committed =
5115 VirtualAlloc(g_last->top_committed,
5116 remaining_commit_size,
5117 MEM_COMMIT, PAGE_READWRITE);
5118 /* Check returned pointer for consistency */
5119 if (base_committed != g_last->top_committed)
5120 goto sbrk_exit;
5121 /* Assert postconditions */
5122 assert ((unsigned) base_committed % g_pagesize == 0);
5123 # ifdef TRACE
5124 PIO_printf (NULL, "Commit %p %d\n", base_committed,
5125 remaining_commit_size);
5126 # endif
5127 /* Adjust the regions commit top */
5128 g_last->top_committed = (char *) base_committed +
5129 remaining_commit_size;
5132 /* Now we are going to search and reserve. */
5133 int contiguous = -1;
5134 int found = FALSE;
5135 MEMORY_BASIC_INFORMATION memory_info;
5136 void *base_reserved;
5137 long reserve_size;
5138 do {
5139 /* Assume contiguous memory */
5140 contiguous = TRUE;
5141 /* Round size to reserve */
5142 reserve_size = CEIL (to_reserve, g_my_regionsize);
5143 /* Start with the current region's top */
5144 memory_info.BaseAddress = g_last->top_reserved;
5145 /* Assert preconditions */
5146 assert ((unsigned) memory_info.BaseAddress %
5147 g_pagesize == 0);
5148 assert (0 < reserve_size && reserve_size %
5149 g_regionsize == 0);
5150 while (VirtualQuery (memory_info.BaseAddress,
5151 &memory_info,
5152 sizeof (memory_info))) {
5153 /* Assert postconditions */
5154 assert ((unsigned) memory_info.BaseAddress %
5155 g_pagesize == 0);
5156 # ifdef TRACE
5157 PIO_printf (NULL, "Query %p %d %s\n",
5158 memory_info.BaseAddress,
5159 memory_info.RegionSize,
5160 memory_info.State == MEM_FREE ? "FREE":
5161 (memory_info.State == MEM_RESERVE ?
5162 "RESERVED":
5163 (memory_info.State == MEM_COMMIT ?
5164 "COMMITTED": "?")));
5165 # endif
5166 /* Region is free, well aligned and big
5167 * enough: we are done */
5168 if (memory_info.State == MEM_FREE &&
5169 (unsigned) memory_info.BaseAddress %
5170 g_regionsize == 0
5172 memory_info.RegionSize >= (unsigned)
5173 reserve_size) {
5174 found = TRUE;
5175 break;
5177 /* From now on we can't get contiguous memory! */
5178 contiguous = FALSE;
5179 /* Recompute size to reserve */
5180 reserve_size = CEIL (allocate_size,
5181 g_my_regionsize);
5182 memory_info.BaseAddress =
5183 (char *) memory_info.BaseAddress +
5184 memory_info.RegionSize;
5185 /* Assert preconditions */
5186 assert ((unsigned) memory_info.BaseAddress
5187 % g_pagesize == 0);
5188 assert (0 < reserve_size && reserve_size
5189 % g_regionsize == 0);
5191 /* Search failed? */
5192 if (! found)
5193 goto sbrk_exit;
5194 /* Assert preconditions */
5195 assert ((unsigned) memory_info.BaseAddress
5196 % g_regionsize == 0);
5197 assert (0 < reserve_size && reserve_size
5198 % g_regionsize == 0);
5199 /* Try to reserve this */
5200 base_reserved = VirtualAlloc (memory_info.BaseAddress,
5201 reserve_size,
5202 MEM_RESERVE,
5203 PAGE_NOACCESS);
5204 if (! base_reserved) {
5205 int rc = GetLastError ();
5206 if (rc != ERROR_INVALID_ADDRESS)
5207 goto sbrk_exit;
5209 /* A null pointer signals (hopefully) a race
5210 * condition with another thread. */
5211 /* In this case, we try again. */
5212 } while (! base_reserved);
5213 /* Check returned pointer for consistency */
5214 if (memory_info.BaseAddress &&
5215 base_reserved != memory_info.BaseAddress)
5216 goto sbrk_exit;
5217 /* Assert postconditions */
5218 assert ((unsigned) base_reserved % g_regionsize == 0);
5219 # ifdef TRACE
5220 PIO_printf (NULL, "Reserve %p %d\n", base_reserved,
5221 reserve_size);
5222 # endif
5223 /* Did we get contiguous memory? */
5224 if (contiguous) {
5225 long start_size = (char *) g_last->top_committed -
5226 (char *) g_last->top_allocated;
5227 /* Adjust allocation size */
5228 allocate_size -= start_size;
5229 /* Adjust the regions allocation top */
5230 g_last->top_allocated = g_last->top_committed;
5231 /* Recompute the size to commit */
5232 to_commit = (char *) g_last->top_allocated +
5233 allocate_size - (char *) g_last->top_committed;
5234 /* Round size to commit */
5235 commit_size = CEIL (to_commit, g_my_pagesize);
5237 /* Append the new region to the list */
5238 if (! region_list_append (&g_last, base_reserved,
5239 reserve_size))
5240 goto sbrk_exit;
5241 /* Didn't we get contiguous memory? */
5242 if (! contiguous) {
5243 /* Recompute the size to commit */
5244 to_commit = (char *) g_last->top_allocated +
5245 allocate_size - (char *) g_last->top_committed;
5246 /* Round size to commit */
5247 commit_size = CEIL (to_commit, g_my_pagesize);
5251 /* Assert preconditions */
5252 assert ((unsigned) g_last->top_committed % g_pagesize == 0);
5253 assert (0 < commit_size && commit_size % g_pagesize == 0); {
5254 /* Commit this */
5255 void *base_committed = VirtualAlloc (g_last->top_committed,
5256 commit_size,
5257 MEM_COMMIT,
5258 PAGE_READWRITE);
5259 /* Check returned pointer for consistency */
5260 if (base_committed != g_last->top_committed)
5261 goto sbrk_exit;
5262 /* Assert postconditions */
5263 assert ((unsigned) base_committed % g_pagesize == 0);
5264 # ifdef TRACE
5265 PIO_printf(NULL, "Commit %p %d\n", base_committed, commit_size);
5266 # endif
5267 /* Adjust the regions commit top */
5268 g_last->top_committed = (char *) base_committed + commit_size;
5271 /* Adjust the regions allocation top */
5272 g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
5273 result = (char *) g_last->top_allocated - size;
5274 /* Deallocation requested? */
5276 else if (size < 0) {
5277 long deallocate_size = - size;
5278 /* As long as we have a region to release */
5279 while ((char *) g_last->top_allocated - deallocate_size
5280 < (char *) g_last->top_reserved - g_last->reserve_size) {
5281 /* Get the size to release */
5282 long release_size = g_last->reserve_size;
5283 /* Get the base address */
5284 void *base_reserved = (char *) g_last->top_reserved - release_size;
5285 /* Assert preconditions */
5286 assert ((unsigned) base_reserved % g_regionsize == 0);
5287 assert (0 < release_size && release_size % g_regionsize == 0); {
5288 /* Release this */
5289 int rc = VirtualFree (base_reserved, 0,
5290 MEM_RELEASE);
5291 /* Check returned code for consistency */
5292 if (! rc)
5293 goto sbrk_exit;
5294 # ifdef TRACE
5295 PIO_printf(NULL, "Release %p %d\n", base_reserved,release_size);
5296 # endif
5298 /* Adjust deallocation size */
5299 deallocate_size -=
5300 (char *) g_last->top_allocated - (char *) base_reserved;
5301 /* Remove the old region from the list */
5302 if (! region_list_remove (&g_last))
5303 goto sbrk_exit;
5305 /* Compute the size to decommit */
5306 long to_decommit = (char *) g_last->top_committed -
5307 ((char *) g_last->top_allocated - deallocate_size);
5308 if (to_decommit >= g_my_pagesize) {
5309 /* Compute the size to decommit */
5310 long decommit_size = FLOOR (to_decommit, g_my_pagesize);
5311 /* Compute the base address */
5312 void *base_committed = (char *) g_last->top_committed -
5313 decommit_size;
5314 /* Assert preconditions */
5315 assert ((unsigned) base_committed % g_pagesize == 0);
5316 assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
5317 /* Decommit this */
5318 int rc = VirtualFree((char *) base_committed, decommit_size,
5319 MEM_DECOMMIT);
5320 /* Check returned code for consistency */
5321 if (! rc)
5322 goto sbrk_exit;
5323 # ifdef TRACE
5324 PIO_printf (NULL, "Decommit %p %d\n",
5325 base_committed, decommit_size);
5326 # endif
5328 /* Adjust deallocation size and regions commit and
5329 * allocate top */
5330 deallocate_size -= (char *) g_last->top_allocated -
5331 (char *) base_committed;
5332 g_last->top_committed = base_committed;
5333 g_last->top_allocated = base_committed;
5336 /* Adjust regions allocate top */
5337 g_last->top_allocated = (char *)g_last->top_allocated - deallocate_size;
5338 /* Check for underflow */
5339 if ((char *) g_last->top_reserved - g_last->reserve_size
5340 > (char *) g_last->top_allocated ||
5341 g_last->top_allocated > g_last->top_committed) {
5342 /* Adjust regions allocate top */
5343 g_last->top_allocated = (char *) g_last->top_reserved
5344 - g_last->reserve_size;
5345 goto sbrk_exit;
5347 result = g_last->top_allocated;
5349 /* Assert invariants */
5350 assert (g_last);
5351 assert ((char *) g_last->top_reserved - g_last->reserve_size
5352 <= (char *) g_last->top_allocated &&
5353 g_last->top_allocated <= g_last->top_committed);
5354 assert ((char *) g_last->top_reserved - g_last->reserve_size
5355 <= (char *) g_last->top_committed &&
5356 g_last->top_committed <= g_last->top_reserved &&
5357 (unsigned) g_last->top_committed % g_pagesize == 0);
5358 assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
5359 assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
5361 sbrk_exit:
5362 # if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5363 /* Release spin lock */
5364 slrelease (&g_sl);
5365 # endif
5366 return result;
5369 /* mmap for windows */
5370 static void *mmap (void *ptr, long size, long prot, long type,
5371 long handle, long arg) {
5372 static long g_pagesize;
5373 static long g_regionsize;
5374 # ifdef TRACE
5375 PIO_printf (NULL, "mmap %d\n", size);
5376 # endif
5377 # if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5378 /* Wait for spin lock */
5379 slwait (&g_sl);
5380 # endif
5381 /* First time initialization */
5382 if (! g_pagesize)
5383 g_pagesize = getpagesize ();
5384 if (! g_regionsize)
5385 g_regionsize = getregionsize ();
5386 /* Assert preconditions */
5387 assert ((unsigned) ptr % g_regionsize == 0);
5388 assert (size % g_pagesize == 0);
5389 /* Allocate this */
5390 ptr = VirtualAlloc (ptr, size,
5391 MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
5392 if (! ptr) {
5393 ptr = (void *) MORECORE_FAILURE;
5394 goto mmap_exit;
5396 /* Assert postconditions */
5397 assert ((unsigned) ptr % g_regionsize == 0);
5398 # ifdef TRACE
5399 PIO_printf (NULL, "Commit %p %d\n", ptr, size);
5400 # endif
5401 mmap_exit:
5402 # if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5403 /* Release spin lock */
5404 slrelease (&g_sl);
5405 # endif
5406 return ptr;
5409 /* munmap for windows */
5410 static long munmap (void *ptr, long size) {
5411 static long g_pagesize;
5412 static long g_regionsize;
5413 int rc = MUNMAP_FAILURE;
5414 # ifdef TRACE
5415 PIO_printf (NULL, "munmap %p %d\n", ptr, size);
5416 # endif
5417 # if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5418 /* Wait for spin lock */
5419 slwait (&g_sl);
5420 # endif
5421 /* First time initialization */
5422 if (! g_pagesize)
5423 g_pagesize = getpagesize ();
5424 if (! g_regionsize)
5425 g_regionsize = getregionsize ();
5426 /* Assert preconditions */
5427 assert ((unsigned) ptr % g_regionsize == 0);
5428 assert (size % g_pagesize == 0);
5429 /* Free this */
5430 if (! VirtualFree (ptr, 0,
5431 MEM_RELEASE))
5432 goto munmap_exit;
5433 rc = 0;
5434 # ifdef TRACE
5435 PIO_printf (NULL, "Release %p %d\n", ptr, size);
5436 # endif
5437 munmap_exit:
5438 # if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5439 /* Release spin lock */
5440 slrelease (&g_sl);
5441 # endif
5442 return rc;
5445 static void vminfo (CHUNK_SIZE_T *free, CHUNK_SIZE_T *reserved,
5446 CHUNK_SIZE_T *committed) {
5447 MEMORY_BASIC_INFORMATION memory_info;
5448 memory_info.BaseAddress = 0;
5449 *free = *reserved = *committed = 0;
5450 while (VirtualQuery (memory_info.BaseAddress, &memory_info,
5451 sizeof (memory_info))) {
5452 switch (memory_info.State) {
5453 case MEM_FREE:
5454 *free += memory_info.RegionSize;
5455 break;
5456 case MEM_RESERVE:
5457 *reserved += memory_info.RegionSize;
5458 break;
5459 case MEM_COMMIT:
5460 *committed += memory_info.RegionSize;
5461 break;
5463 memory_info.BaseAddress = (char *) memory_info.BaseAddress +
5464 memory_info.RegionSize;
5468 static int cpuinfo (int whole, CHUNK_SIZE_T *kernel, CHUNK_SIZE_T *user) {
5469 if (whole) {
5470 __int64 creation64, exit64, kernel64, user64;
5471 int rc = GetProcessTimes (GetCurrentProcess (),
5472 (FILETIME *) &creation64,
5473 (FILETIME *) &exit64,
5474 (FILETIME *) &kernel64,
5475 (FILETIME *) &user64);
5476 if (! rc) {
5477 *kernel = 0;
5478 *user = 0;
5479 return FALSE;
5481 *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
5482 *user = (CHUNK_SIZE_T) (user64 / 10000);
5483 return TRUE;
5485 else {
5486 __int64 creation64, exit64, kernel64, user64;
5487 int rc = GetThreadTimes (GetCurrentThread (),
5488 (FILETIME *) &creation64,
5489 (FILETIME *) &exit64,
5490 (FILETIME *) &kernel64,
5491 (FILETIME *) &user64);
5492 if (! rc) {
5493 *kernel = 0;
5494 *user = 0;
5495 return FALSE;
5497 *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
5498 *user = (CHUNK_SIZE_T) (user64 / 10000);
5499 return TRUE;
5503 #endif /* WIN32 */
5505 /* ------------------------------------------------------------
5506 History:
5507 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
5508 * Fix malloc_state bitmap array misdeclaration
5510 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
5511 * Allow tuning of FIRST_SORTED_BIN_SIZE
5512 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5513 * Better detection and support for non-contiguousness of MORECORE.
5514 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5515     * Bypass most of malloc if no frees.  Thanks to Emery Berger.
5516     * Fix freeing of old top non-contiguous chunk in sysmalloc.
5517 * Raised default trim and map thresholds to 256K.
5518 * Fix mmap-related #defines. Thanks to Lubos Lunak.
5519 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5520 * Branch-free bin calculation
5521 * Default trim and mmap thresholds now 256K.
5523 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
5524 * Introduce independent_comalloc and independent_calloc.
5525 Thanks to Michael Pachos for motivation and help.
5526 * Make optional .h file available
5527 * Allow > 2GB requests on 32bit systems.
5528 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5529 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5530 and Anonymous.
5531 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5532 helping test this.)
5533 * memalign: check alignment arg
5534 * realloc: don't try to shift chunks backwards, since this
5535 leads to more fragmentation in some programs and doesn't
5536 seem to help in any others.
5537 * Collect all cases in malloc requiring system memory into sYSMALLOc
5538 * Use mmap as backup to sbrk
5539 * Place all internal state in malloc_state
5540 * Introduce fastbins (although similar to 2.5.1)
5541 * Many minor tunings and cosmetic improvements
5542 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5543 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5544 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5545 * Include errno.h to support default failure action.
5547 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5548 * return null for negative arguments
5549 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5550 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5551 (e.g. WIN32 platforms)
5552 * Cleanup header file inclusion for WIN32 platforms
5553 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5554 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5555 memory allocation routines
5556 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5557 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5558 usage of 'assert' in non-WIN32 code
5559 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5560 avoid infinite loop
5561 * Always call 'fREe()' rather than 'free()'
5563 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5564 * Fixed ordering problem with boundary-stamping
5566 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5567 * Added pvalloc, as recommended by H.J. Liu
5568 * Added 64bit pointer support mainly from Wolfram Gloger
5569 * Added anonymously donated WIN32 sbrk emulation
5570 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5571 * malloc_extend_top: fix mask error that caused wastage after
5572 foreign sbrks
5573 * Add linux mremap support code from HJ Liu
5575 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5576 * Integrated most documentation with the code.
5577 * Add support for mmap, with help from
5578 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5579 * Use last_remainder in more cases.
5580 * Pack bins using idea from colin@nyx10.cs.du.edu
5581 * Use ordered bins instead of best-fit threshold
5582 * Eliminate block-local decls to simplify tracing and debugging.
5583 * Support another case of realloc via move into top
5584 * Fix error occurring when initial sbrk_base not word-aligned.
5585 * Rely on page size for units instead of SBRK_UNIT to
5586 avoid surprises about sbrk alignment conventions.
5587 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5588 (raymond@es.ele.tue.nl) for the suggestion.
5589 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5590 * More precautions for cases where other routines call sbrk,
5591 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5592 * Added macros etc., allowing use in linux libc from
5593 H.J. Lu (hjl@gnu.ai.mit.edu)
5594 * Inverted this history list
5596 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5597 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5598 * Removed all preallocation code since under current scheme
5599 the work required to undo bad preallocations exceeds
5600 the work saved in good cases for most test programs.
5601 * No longer use return list or unconsolidated bins since
5602 no scheme using them consistently outperforms those that don't
5603 given above changes.
5604 * Use best fit for very large chunks to prevent some worst-cases.
5605 * Added some support for debugging
5607 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5608 * Removed footers when chunks are in use. Thanks to
5609 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5611 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5612 * Added malloc_trim, with help from Wolfram Gloger
5613 (wmglo@Dent.MED.Uni-Muenchen.DE).
5615 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5617 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5618 * realloc: try to expand in both directions
5619 * malloc: swap order of clean-bin strategy;
5620 * realloc: only conditionally expand backwards
5621 * Try not to scavenge used bins
5622 * Use bin counts as a guide to preallocation
5623 * Occasionally bin return list chunks in first scan
5624 * Add a few optimizations from colin@nyx10.cs.du.edu
5626 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5627 * faster bin computation & slightly different binning
5628 * merged all consolidations to one part of malloc proper
5629 (eliminating old malloc_find_space & malloc_clean_bin)
5630 * Scan 2 returns chunks (not just 1)
5631 * Propagate failure in realloc if malloc returns 0
5632 * Add stuff to allow compilation on non-ANSI compilers
5633 from kpv@research.att.com
5635 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5636 * removed potential for odd address access in prev_chunk
5637 * removed dependency on getpagesize.h
5638 * misc cosmetics and a bit more internal documentation
5639 * anticosmetics: mangled names in macros to evade debugger strangeness
5640 * tested on sparc, hp-700, dec-mips, rs6000
5641 with gcc & native cc (hp, dec only) allowing
5642 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5644 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5645 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5646 structure of old version, but most details differ.)
5651 * Local variables:
5652 * c-file-style: "parrot"
5653 * End:
5654 * vim: expandtab shiftwidth=4: